Cloudera Developer Training for Spark and Hadoop
Course Time: June 27–30, 2016
Course Location: Berkeley Engineering Innovation Center, Zhangjiang Hi-Tech Park, Pudong New District, Shanghai
Contact us: 400-679-6113
QQ: 1438118790
Certification: CCA-175
Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools.
Audience and Prerequisites
This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.
Course Outline: Developer Training for Spark and Hadoop
- Introduction to Hadoop and the Hadoop Ecosystem
- Hadoop Architecture and HDFS
- Importing Relational Data with Apache Sqoop
- Introduction to Impala and Hive
- Modeling and Managing Data with Impala and Hive
- Data Formats
- Data Partitioning
- Capturing Data with Apache Flume
- Spark Basics
- Working with RDDs in Spark
- Writing and Deploying Spark Applications
- Parallel Programming with Spark
- Spark Caching and Persistence
- Common Patterns in Spark Data Processing
- Preview: Spark SQL
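To give a flavor of the material covered in the "Working with RDDs" and "Data Partitioning" units, here is a minimal sketch in plain Python (no Spark installation required). It expresses the classic word-count exercise in the map/reduce vocabulary Spark uses, and mimics how Spark's HashPartitioner assigns keys to partitions. The sample input lines and the `partition_for` helper are illustrative, not part of the course materials.

```python
# Word count, the introductory RDD exercise, sketched in plain Python.
lines = ["hadoop and spark", "spark and hive"]

# flatMap step: split each line into words
# (in Spark: rdd.flatMap(lambda line: line.split()))
words = [w for line in lines for w in line.split()]

# map to (word, 1) pairs and reduce by key
# (in Spark: .map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b))
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# Data partitioning: Spark's hash partitioner places each key in the
# partition given by the nonnegative hash of the key modulo the
# number of partitions. `partition_for` is a hypothetical helper
# illustrating that idea.
def partition_for(key, num_partitions=4):
    return hash(key) % num_partitions
```

In the course itself the same logic is written against Spark's RDD API in Scala or Python, where the work is distributed across the cluster rather than run in a single process.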
Original article by carmelaweatherly. If reproducing, please credit the source: https://blog.ytso.com/194467.html