Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

The Ultimate Hands-On Hadoop

Packt via Coursera

Overview

Updated in May 2025. This course now features Coursera Coach, an interactive learning companion that helps you test your knowledge, challenge assumptions, and deepen your understanding as you progress.

Build a strong, hands-on foundation in Hadoop and big data processing with this comprehensive course designed for data engineers, developers, and IT professionals. From installation to advanced analytics, you’ll learn how to work confidently with Hadoop’s ecosystem and design scalable solutions for real-world data challenges.

You’ll begin by installing the Hortonworks Data Platform (HDP) Sandbox on your local machine, giving you an isolated environment to explore Hadoop’s core components. Through guided exercises, you’ll work with the Hadoop Distributed File System (HDFS) and build your understanding of MapReduce, learning how large-scale distributed processing works behind the scenes.

As you progress, you’ll move into advanced Hadoop programming with Pig, Hive, and Spark. You’ll write complex queries, analyze large datasets, and work with real-world data to build scalable data workflows. You’ll also explore machine learning with Spark MLLib, a practical introduction to distributed ML techniques.

In the final modules, you’ll learn how to manage and optimize Hadoop clusters using YARN, ZooKeeper, Oozie, and Kafka. You’ll practice feeding data into your cluster, orchestrating workflows, managing resources, and analyzing streaming data in real time: essential skills for production-grade environments.

By the end of this course, you will have:

  • Installed and configured the Hortonworks Sandbox for Hadoop development.
  • Worked with HDFS, MapReduce, and Hadoop’s core data processing concepts.
  • Written queries and pipelines using Pig, Hive, and Spark.
  • Performed distributed machine learning with Spark MLLib.
  • Integrated relational and non-relational data sources with Hadoop.
  • Managed clusters and streaming workflows with YARN, ZooKeeper, Oozie, and Kafka.
  • Gained the confidence to design and implement Hadoop-based data solutions.

This course is ideal for data engineers, developers, and IT professionals with basic programming or data management experience. Familiarity with Java, SQL, or the Linux command line is helpful but not required.

Syllabus

  • Learning All the Buzzwords and Installing the Hortonworks Data Platform Sandbox
    • In this module, we will dive into the world of Hadoop, starting with its installation and setup using the Hortonworks Data Platform Sandbox. You'll explore the key buzzwords and technologies that make up the Hadoop ecosystem, learn about the historical context and impact of the Hortonworks and Cloudera merger, and begin working with real data to get a feel for Hadoop's capabilities.
  • Using Hadoop's Core: The Hadoop Distributed File System (HDFS) and MapReduce
    • In this module, we will explore the core components of Hadoop: the Hadoop Distributed File System (HDFS) and MapReduce. You'll learn how HDFS reliably stores massive data sets across a cluster and how MapReduce enables distributed data processing. Through hands-on activities, you'll import datasets, set up a MapReduce environment, and write scripts to analyze data, including breaking down movie ratings and ranking movies by popularity.
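The map-shuffle-reduce pattern this module teaches can be sketched in plain Python, with no Hadoop cluster required. The sample rows below imitate the MovieLens `u.data` layout (user, movie, rating, timestamp) but are made up for illustration, not taken from the course dataset.

```python
from collections import defaultdict

# Toy rows in MovieLens u.data style: userID, movieID, rating, timestamp
rows = [
    "196\t242\t3\t881250949",
    "186\t302\t3\t891717742",
    "22\t377\t1\t878887116",
    "244\t51\t2\t880606923",
    "166\t346\t1\t886397596",
]

# Map: emit a (rating, 1) pair for every input row
mapped = [(line.split("\t")[2], 1) for line in rows]

# Shuffle: group the emitted values by key
groups = defaultdict(list)
for rating, count in mapped:
    groups[rating].append(count)

# Reduce: sum the counts for each rating
rating_counts = {rating: sum(counts) for rating, counts in groups.items()}
print(rating_counts)  # {'3': 2, '1': 2, '2': 1}
```

On a real cluster the map and reduce steps run in parallel across many nodes and the shuffle moves data between them; the dataflow, however, is exactly this shape.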
  • Programming Hadoop with Pig
    • In this module, we will delve into Pig, a high-level scripting language that simplifies Hadoop programming. You'll start by exploring the Ambari web-based UI, which makes working with Pig more accessible. The module includes practical examples and activities, such as finding the oldest five-star movies and identifying the most-rated one-star movies using Pig scripts. You'll also learn about the capabilities of Pig Latin and test your skills through challenges and result comparisons.
  • Programming Hadoop with Spark
    • In this module, we will explore the power of Apache Spark, a key technology in the Hadoop ecosystem known for its speed and versatility. You’ll start by understanding why Spark is a game-changer in big data. The module will cover Resilient Distributed Datasets (RDDs) and Datasets, showing you how to use them to analyze movie ratings data. You'll also delve into Spark's machine learning library (MLLib) to create a movie recommendation system. Through hands-on activities, you'll practice writing Spark scripts and refining your data analysis skills.
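To show the shape of the RDD transformations this module covers, here is the classic average-rating-per-movie pipeline written in plain Python, with the equivalent PySpark calls noted in comments. The triples are illustrative sample data, not the course dataset.

```python
from collections import defaultdict

# (userID, movieID, rating) triples -- made-up sample data
ratings = [(1, 50, 5.0), (1, 172, 5.0), (2, 50, 4.0), (2, 133, 1.0), (3, 50, 3.0)]

# In PySpark: rdd.map(lambda r: (r[1], (r[2], 1.0)))
pairs = [(movie, (rating, 1.0)) for _, movie, rating in ratings]

# In PySpark: .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
sums = defaultdict(lambda: (0.0, 0.0))
for movie, (total, count) in pairs:
    s = sums[movie]
    sums[movie] = (s[0] + total, s[1] + count)

# In PySpark: .mapValues(lambda v: v[0] / v[1]) gives the average per movie
averages = {movie: total / count for movie, (total, count) in sums.items()}
print(averages)  # {50: 4.0, 172: 5.0, 133: 1.0}
```

Spark evaluates such a chain lazily and distributes it across the cluster; the plain-Python version only mirrors the logic, not the execution model.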
  • Using Relational Datastores with Hadoop
    • In this module, we will explore the integration of relational datastores with Hadoop, focusing on Apache Hive and MySQL. You'll start by learning how Hive enables SQL queries on data within HDFS, followed by hands-on activities to find popular and highly-rated movies using Hive. The module also covers the installation and integration of MySQL with Hadoop, using Sqoop to seamlessly transfer data between MySQL and Hadoop's HDFS/Hive. Through practical exercises, you'll gain proficiency in managing and querying relational data within the Hadoop ecosystem.
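HiveQL reads much like standard SQL, so the kind of "most highly rated movies" query this module practices can be previewed with Python's built-in sqlite3. The table and column names here are illustrative stand-ins, not the course's actual schema.

```python
import sqlite3

# In-memory stand-in for a Hive table over HDFS data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (user_id INT, movie_id INT, rating INT)")
conn.executemany(
    "INSERT INTO ratings VALUES (?, ?, ?)",
    [(1, 50, 5), (2, 50, 4), (3, 50, 5), (1, 172, 2), (2, 172, 3)],
)

# A HiveQL-style aggregate: average rating per movie, best first
query = """
    SELECT movie_id, AVG(rating) AS avg_rating, COUNT(*) AS num_ratings
    FROM ratings
    GROUP BY movie_id
    ORDER BY avg_rating DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # movie 50 averages ~4.67 over 3 ratings; movie 172 averages 2.5
```

The difference in the course is scale: Hive compiles the same style of query into distributed jobs over data sitting in HDFS, and Sqoop moves tables like this between MySQL and the cluster.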
  • Using Non-Relational Data Stores with Hadoop
    • In this module, we will explore the use of non-relational (NoSQL) data stores within the Hadoop ecosystem. You'll learn why NoSQL databases are crucial for scalability and efficiency, and dive into specific technologies like HBase, Cassandra, and MongoDB. Through a series of activities, you'll practice importing data into HBase, integrating it with Pig, and using Cassandra and MongoDB alongside Spark. The module concludes with exercises to help you choose the most suitable NoSQL database for different scenarios, empowering you to make informed decisions in big data management.
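A feel for why these stores scale comes from their simple data model. HBase, for example, addresses each cell by a row key plus a column-family:qualifier name. The tiny dict-based sketch below illustrates that addressing scheme only; the class and names are invented for illustration and bear no relation to HBase's real API.

```python
# Minimal sketch of a column-family data model, HBase-style:
# cells addressed by (row key, "family:qualifier").
class TinyColumnStore:
    def __init__(self):
        self.rows = {}  # row key -> {"family:qualifier": value}

    def put(self, row_key, column, value):
        self.rows.setdefault(row_key, {})[column] = value

    def get(self, row_key, column):
        return self.rows.get(row_key, {}).get(column)

store = TinyColumnStore()
store.put("user1", "ratings:50", 5)
store.put("user1", "ratings:172", 4)
store.put("user2", "ratings:50", 3)

print(store.get("user1", "ratings:50"))   # 5
print(store.get("user2", "ratings:172"))  # None (no such cell)
```

Because rows are independent and looked up by key, the store can be partitioned across many machines, which is the scalability property the module's comparisons hinge on.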
  • Querying Data Interactively
    • In this module, we will focus on interactive querying tools that allow you to quickly access and analyze big data across multiple sources. You'll explore technologies like Drill, Phoenix, and Presto, learning how each one solves specific challenges in querying large datasets. The module includes hands-on activities where you'll set up these tools, execute queries that span across databases such as MongoDB, Hive, HBase, and Cassandra, and integrate these tools with other Hadoop ecosystem components. By the end of this module, you'll be equipped to perform efficient, real-time data analysis across varied data stores.
  • Managing Your Cluster
    • In this module, we will explore the critical components involved in managing a Hadoop cluster. You'll learn about YARN's resource management capabilities, how Tez optimizes task execution using Directed Acyclic Graphs, and the differences between Mesos and YARN. We'll dive into ZooKeeper for maintaining reliable operations and Oozie for orchestrating complex workflows. Hands-on activities will guide you through setting up and using Zeppelin for interactive data analysis and using Hue for a more user-friendly interface. The module also touches on other noteworthy technologies like Chukwa and Ganglia, providing a comprehensive understanding of cluster management in Hadoop.
  • Feeding Data to Your Cluster
    • In this module, we will explore the essential tools for feeding data into your Hadoop cluster, focusing on Kafka and Flume. You'll learn how Kafka supports scalable and reliable data collection across a cluster and how to set it up to publish and consume data. Additionally, you'll discover how Flume's architecture differs from Kafka and how to use it for real-time data ingestion. Through hands-on activities, you'll configure Kafka to monitor Apache logs and Flume to watch directories, publishing incoming data into HDFS. These skills will help you manage and process streaming data effectively in your Hadoop environment.
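The core idea behind Kafka's publish/consume model is worth seeing in miniature: a topic is an append-only log, and each consumer tracks its own read offset, so multiple consumers read the same stream independently. This toy class is an illustration of that idea only, not Kafka's actual API.

```python
# Toy sketch of Kafka's core abstraction: an append-only topic log
# with per-consumer read offsets.
class TinyTopic:
    def __init__(self):
        self.log = []      # append-only list of messages
        self.offsets = {}  # consumer name -> next offset to read

    def publish(self, message):
        self.log.append(message)

    def consume(self, consumer):
        offset = self.offsets.get(consumer, 0)
        new_messages = self.log[offset:]
        self.offsets[consumer] = len(self.log)
        return new_messages

topic = TinyTopic()
topic.publish("GET /index.html 200")
topic.publish("GET /missing 404")

print(topic.consume("hdfs-sink"))   # both messages
topic.publish("GET /about 200")
print(topic.consume("hdfs-sink"))   # only the new message
print(topic.consume("dashboard"))   # all three: its offset is independent
```

This decoupling is why Kafka can feed the same Apache log stream to HDFS, a dashboard, and an alerting job at once, which is the setup the module's activities build toward.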
  • Analyzing Streams of Data
    • In this module, we will focus on analyzing streams of data using real-time processing frameworks such as Spark Streaming, Apache Storm, and Flink. You’ll start by learning how Spark Streaming processes micro-batches of data in real-time and participate in activities that include analyzing web logs streamed by Flume. The module then introduces Apache Storm and Flink, providing hands-on exercises to implement word count applications with these tools. By the end of this module, you will be able to build continuous applications that efficiently process and analyze streaming data.
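The micro-batch model behind Spark Streaming can be sketched in a few lines: the stream is chopped into small batches, and a running state (here, word counts) is updated as each batch arrives. This is a plain-Python illustration of the concept, not Spark Streaming code.

```python
from collections import Counter

# Each string stands in for one micro-batch of incoming text
stream = ["to be or", "not to be", "that is the question"]

running = Counter()
for batch in stream:
    # The stateful "update counts by key" step applied per micro-batch
    running.update(batch.split())

print(running["to"], running["be"])  # 2 2
```

Storm and Flink, introduced later in the module, instead process events one at a time rather than in batches; the word-count exercises make that contrast concrete.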
  • Designing Real-World Systems
    • In this module, we will focus on designing and implementing real-world systems using a combination of Hadoop ecosystem tools. You'll start by exploring additional technologies like Impala, NiFi, and AWS Kinesis, learning how they fit into broader Hadoop-based solutions. The module then guides you through the process of understanding system requirements and designing applications that consume and analyze large-scale data, such as web server logs or movie recommendations. By the end of this module, you’ll be equipped to design and build complex, efficient, and scalable data systems tailored to specific business needs.
  • Learning More
    • In this final module, we will provide you with a selection of books, online resources, and tools recommended by the author to further your knowledge of Hadoop and related technologies. This module serves as a guide for continued learning, offering you the means to stay updated with the latest developments in the Hadoop ecosystem and expand your skills beyond this course.

Taught by

Packt - Course Instructors

Reviews

