Course Details

 Course Code: BD0111EN

 Audience: Anyone

 Course Level: Beginner

 Time to Complete: 5-6 hours

 Language: English

About the Course

Learn the basics of Apache Hadoop, a free, open-source, Java-based programming framework, and find out why it was invented.

Learn about Hadoop's architecture and core components, such as MapReduce and the Hadoop Distributed File System (HDFS).
Learn how to add and remove nodes from Hadoop clusters, how to check available disk space on each node, and how to modify configuration parameters.
Learn about other Apache projects that are part of the Hadoop ecosystem, including Pig, Hive, HBase, ZooKeeper, Oozie, Sqoop, and Flume. BDU provides separate courses on these projects, but we recommend you start here.
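
As a small taste of the administrative topics above, the sketch below uses the Hadoop Java API to override a configuration property and report overall HDFS capacity and usage. It is an illustrative example only, not part of the course materials; the class name and the chosen property are assumptions made for the sketch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class ClusterStatus {
      public static void main(String[] args) throws Exception {
        // Configuration picks up core-site.xml / hdfs-site.xml from the classpath;
        // individual properties can also be overridden programmatically.
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 2); // illustrative override of the HDFS replication factor

        // Ask the file system for its overall capacity, used, and remaining space.
        // (The course covers checking disk space on each node; this reports cluster-wide totals.)
        FileSystem fs = FileSystem.get(conf);
        FsStatus status = fs.getStatus();
        System.out.printf("Capacity: %d bytes, used: %d bytes, remaining: %d bytes%n",
            status.getCapacity(), status.getUsed(), status.getRemaining());
      }
    }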

Course Syllabus

  • Understand what Hadoop is
  • Understand what Big Data is
  • Learn about other open source software related to Hadoop
  • Understand how Big Data solutions can work on the Cloud
  • Understand the main Hadoop components
  • Learn how HDFS works
  • List data access patterns for which HDFS is designed
  • Describe how data is stored in an HDFS cluster
  • Add and remove nodes from a cluster
  • Verify the health of a cluster
  • Start and stop a cluster's components
  • Modify Hadoop configuration parameters
  • Set up a rack topology
  • Describe the MapReduce philosophy (a short word-count sketch follows this list)
  • Explain how Pig and Hive can be used in a Hadoop environment
  • Describe how Flume and Sqoop can be used to move data into Hadoop
  • Describe how Oozie is used to schedule and control Hadoop job execution
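
To make the MapReduce philosophy concrete, here is a minimal sketch of the classic word-count job written against the Hadoop MapReduce Java API. It is illustrative only and not taken from the course; the input and output paths are supplied on the command line and would normally point at HDFS directories.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in its input split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. an HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }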

General Information

  • This course is free.
  • It is self-paced.
  • It can be taken at any time.
  • It can be audited as many times as you wish.

Recommended Existing Skills

  • Knowledge of Big Data concepts

Requirements

  • None.

Course Staff

Warren Pettit

Warren has been with IBM for over 30 years. For the last 16 years, he has worked in Information Management education as both an instructor and a course developer in the Data Warehouse and Big Data curricula. For the nine years before joining IBM, he was an application programmer responsible for developing a training program for newly hired programmers.

Asma Desai

Asma Desai recently joined IBM. She has developed course content for introductory Java and graph theory. Before moving into course development, she worked as a consultant using Big Data to fight fraud.

Daniel Tran

Daniel Tran is an IBM co-op student working as a Technical Curriculum Developer in Toronto, Ontario. He develops courses for customers who want to build their knowledge of the Big Data field, and he has reworked previously developed courses to keep them compatible with the newest software releases, as well as helping to recreate courses on a newly developed cloud environment. He has worked with many components of the Big Data stack, including Hadoop, Pig, Hive, HBase, MapReduce and YARN, Sqoop, Oozie, and Phoenix, and has also worked on separate courses involving Machine Learning. Daniel attends the University of Alberta, where he has completed his third year of the traditional Computer Engineering Co-op program.

Leons Petrazickis

Leons Petrazickis is the Ombud for Hadoop content on IBM Big Data U as well as the Platform Architect for Big Data U Labs. As a senior software developer at IBM, he uses Ruby, Python, and JavaScript to develop microservices and web applications, and he manages containerized infrastructure.