
O.P. Jindal Global University

Big Data Analytics

O.P. Jindal Global University via Coursera

Overview

The Big Data Analytics course offers a deep dive into the technologies, tools, and techniques used to process and analyze large-scale data. Learners will explore the Hadoop and Spark ecosystems, gaining hands-on experience with essential components such as the Hadoop Distributed File System (HDFS), MapReduce, Pig, and Hive. The course also covers both relational (SQL) and nonrelational (NoSQL) databases, helping learners understand the appropriate contexts for each type of data storage.

A significant focus is placed on Apache Spark, known for its high-speed, in-memory data processing capabilities, which is vital for handling big data applications. Learners will also work through real-world exercises, including implementing and deploying a machine learning application that processes streaming data on the cloud.

Designed for professionals with a background in predictive analytics, basic SQL, and Python programming, this course equips learners with the practical skills to manage data characterized by high volume, velocity, and variety. By the end of the course, participants will be able to derive actionable insights from big data and apply them in business contexts, contributing to improved decision-making and competitive advantage in data-driven environments.

Syllabus

  • Introduction to Big Data and Hadoop   
    • Welcome to the Big Data Analytics course! By the end of this course, you will develop an understanding of the various technologies associated with Hadoop and the Spark ecosystem of tools and technologies. You will get hands-on experience working with core Hadoop components like MapReduce and the Hadoop Distributed File System (HDFS). You will learn to write Pig scripts and Hive queries to extract data stored across Hadoop clusters. You will also learn about relational (SQL) and nonrelational (NoSQL) databases and discuss scenarios in which one is preferred over the other for data storage. You will gain insight into the Spark ecosystem, which makes running jobs across clusters very fast and thereby enables many emerging applications. You will also work through a hands-on example of implementing and deploying a machine learning application that handles streaming data on the cloud. This is an advanced-level course, intended for learners with a background using predictive tools and techniques, experience in writing basic Structured Query Language (SQL) queries, and an understanding of Python programming. The knowledge you gain from this course will help you build a career as a business analyst. You will gain skills to draw insights from data that has characteristics of high velocity, volume, and variety; data with such characteristics is called big data and is increasingly used by organizations for competitive advantage and decision-making. In this module, you will learn about Big Data applications and the various components of the Hadoop ecosystem. The module also discusses the MapReduce paradigm, which facilitates distributed processing of data. You will also gain insight into HDFS and use it for storing files. Hands-on examples are provided using the Hortonworks Data Platform Sandbox, which can be installed on a Windows/Mac computer with at least 8 GB of available RAM.
  • Weekly Summative Assessment: Introduction to Big Data and Hadoop
    • This assessment is a graded quiz based on the module covered in this week.
  • Introduction to Data Mining with Hive
    • In this module, you will learn about the Hive scripting language and its usage for mining data from Hadoop clusters. Hive provides an SQL dialect called Hive Query Language (abbreviated HiveQL or just HQL) for querying data stored in a Hadoop cluster. Hive is best suited for data warehouse applications, where relatively static data is analyzed and fast response times are not required. Hive makes it easier for developers to port SQL-based applications to Hadoop, compared with other Hadoop languages and tools. Like all SQL dialects in widespread use, it does not fully conform to any particular revision of the ANSI SQL standard; it is perhaps closest to MySQL’s dialect, but with significant differences. Hive supports several sizes of integer and floating-point types, a boolean type, and character strings of arbitrary length. Lastly, taking a real-world data set, you will load it into the Ambari environment for analysis using HDFS and HQL. You will go through the process of creating tables, loading data, and analyzing it using Hive Query Language.
  • Weekly Summative Assessment: Introduction to Data Mining with Hive 
    • This assessment is a graded quiz based on the modules covered this week. 
  • The Pig Scripting Languages
    • In this module, you will learn about the Pig Latin scripting language and how you can leverage it to query big data on Hadoop clusters. You will also learn about the different data types and commands available in Pig Latin and how they can be used to define and manipulate data in the Hadoop ecosystem. Furthermore, you will work on a practical example, running Pig Latin scripts for data analysis on a publicly available data set.
  • NoSQL Databases and the CAP Theorem 
    • In this module, you will be introduced to the need for NoSQL databases and to HBase, a NoSQL database, and its role in the Hadoop ecosystem. You will learn about the CAP theorem and how it shapes the trade-offs between the different NoSQL database options available on Hadoop. You will examine consistency, availability, and partition tolerance in detail and how they affect the choice of technology for accessing and manipulating data on Hadoop. Lastly, you will gain insight into other emerging cloud-based NoSQL solutions.
  • Weekly Summative Assessment: NoSQL Databases and the CAP Theorem 
    • This assessment is a graded quiz based on the modules covered this week.
  • Introduction to Spark
    • In this module, you will be introduced to the popular Apache Spark platform for Big Data processing. You will explore the key components of Apache Spark that provide significant benefits in distributed computing. You will also be introduced to Resilient Distributed Datasets (RDDs) and Spark DataFrames, as well as Spark SQL and Spark Streaming.
  • Weekly Summative Assessment: Introduction to Spark
    • This assessment is a graded quiz based on the module covered in this week.
  • Introduction to Machine Learning on Spark
    • In this module, you will learn about MLlib, Spark's machine learning library, which is used for making predictions on large datasets that need distributed processing. You will work on regression and classification tasks for large datasets. You will then implement a hands-on exercise with streaming data from the Twitter API: a predictive streaming application that demonstrates an end-to-end big data scenario.
  • Course Wrap-Up Video
    • Course Wrap-Up Video
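The MapReduce paradigm described in the first module can be sketched on a single machine. Below is a minimal, illustrative Python simulation of the map, shuffle, and reduce phases of a word count — not actual Hadoop code, and the sample lines are invented:

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: sum the counts for one word.
    return key, sum(values)

lines = ["big data is big", "data is everywhere"]
mapped = [pair for line in lines for pair in map_phase(line)]
grouped = shuffle_phase(mapped)
counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In real Hadoop, the map and reduce functions run on many nodes in parallel and the shuffle happens over the network; the logic, however, has this same shape.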
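The data-warehouse-style queries covered in the Hive module often boil down to group-by aggregations. As a rough illustration, the plain-Python sketch below mirrors a hypothetical HiveQL GROUP BY — the products table, its columns, and the values are all invented for this example:

```python
from collections import defaultdict

# Hypothetical HiveQL query this sketch mirrors:
#   SELECT category, COUNT(*), AVG(price)
#   FROM products
#   GROUP BY category;
products = [
    {"category": "books", "price": 12.0},
    {"category": "books", "price": 8.0},
    {"category": "music", "price": 10.0},
]

groups = defaultdict(list)
for row in products:
    groups[row["category"]].append(row["price"])

# For each category: (row count, average price).
result = {cat: (len(prices), sum(prices) / len(prices))
          for cat, prices in groups.items()}
print(result)  # {'books': (2, 10.0), 'music': (1, 10.0)}
```

Hive compiles such a query into MapReduce (or Tez/Spark) jobs over files in HDFS, which is why it suits large, relatively static data rather than interactive workloads.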
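Pig Latin, covered in the Pig module, expresses a dataflow as a sequence of named relations. As a non-authoritative sketch, the hypothetical Pig script in the comments below is mirrored step by step in plain Python — the visits data and the 5-page threshold are invented:

```python
# Hypothetical Pig Latin script this Python sketch mirrors line by line:
#   logs   = LOAD 'visits.csv' USING PigStorage(',') AS (user:chararray, pages:int);
#   heavy  = FILTER logs BY pages >= 5;
#   byuser = GROUP heavy BY user;
#   totals = FOREACH byuser GENERATE group, SUM(heavy.pages);
from collections import defaultdict

logs = [("alice", 7), ("bob", 2), ("alice", 5), ("carol", 9)]

heavy = [(user, pages) for user, pages in logs if pages >= 5]   # FILTER
byuser = defaultdict(list)                                       # GROUP BY user
for user, pages in heavy:
    byuser[user].append(pages)
totals = {user: sum(pages) for user, pages in byuser.items()}    # FOREACH ... SUM
print(totals)  # {'alice': 12, 'carol': 9}
```

Each Pig statement names an intermediate relation, which is what makes long transformation pipelines easy to read and debug compared with one deeply nested SQL query.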
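The CAP trade-off discussed in the NoSQL module can be made concrete with a toy two-replica store. This is a deliberately simplified model, not any real database's behavior: the CP variant gives up availability during a network partition, while the AP variant keeps serving but lets replicas diverge:

```python
class Replica:
    def __init__(self):
        self.value = None

class CPStore:
    """Consistency over availability: refuse writes during a partition."""
    def __init__(self):
        self.a, self.b = Replica(), Replica()
        self.partitioned = False
    def write(self, value):
        if self.partitioned:
            raise RuntimeError("unavailable: cannot replicate during partition")
        self.a.value = self.b.value = value
    def read(self):
        return self.a.value

class APStore:
    """Availability over consistency: keep serving, replicas may diverge."""
    def __init__(self):
        self.a, self.b = Replica(), Replica()
        self.partitioned = False
    def write(self, value):
        self.a.value = value          # only the reachable replica is updated
        if not self.partitioned:
            self.b.value = value
    def read_from_b(self):
        return self.b.value           # may be stale during a partition

cp, ap = CPStore(), APStore()
cp.write("v1"); ap.write("v1")
cp.partitioned = ap.partitioned = True
ap.write("v2")                 # AP stays available...
print(ap.read_from_b())        # ...but this read returns the stale 'v1'
try:
    cp.write("v2")             # CP refuses the write, staying consistent
except RuntimeError as err:
    print(err)
```

Real systems make this choice with far more nuance — HBase, for example, leans toward consistency — but the sketch shows the shape of the trade-off the theorem forces.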
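A defining trait of Spark's RDDs, noted in the Spark module, is that transformations such as map and filter are lazy: nothing executes until an action like collect is called. The toy class below imitates that behavior in plain Python; it is a conceptual stand-in, not the PySpark API:

```python
class MiniRDD:
    """Toy stand-in for a Spark RDD: transformations are lazy, actions execute."""
    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []

    def map(self, fn):
        # Transformation: just record the step, returning a new (immutable) RDD.
        return MiniRDD(self._data, self._ops + [("map", fn)])

    def filter(self, pred):
        # Transformation: recorded, not executed.
        return MiniRDD(self._data, self._ops + [("filter", pred)])

    def collect(self):
        # Action: the whole recorded pipeline runs now.
        items = self._data
        for kind, fn in self._ops:
            items = map(fn, items) if kind == "map" else filter(fn, items)
        return list(items)

rdd = MiniRDD(range(1, 6))
squares_of_evens = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(squares_of_evens.collect())  # [4, 16]
```

In real PySpark the equivalent is roughly `sc.parallelize(range(1, 6)).filter(...).map(...).collect()`, with the difference that Spark partitions the data and runs the pipeline across a cluster.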
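The regression tasks in the MLlib module fit models like the one below, but at cluster scale. As a sketch of the underlying idea only, here is closed-form ordinary least squares for a single feature in plain Python — the sample points are invented and chosen to lie near y = 2x:

```python
def fit_simple_linear(xs, ys):
    # Ordinary least squares for y = w*x + b, computed in closed form:
    # w = cov(x, y) / var(x), b = mean(y) - w * mean(x).
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]   # roughly y = 2x
w, b = fit_simple_linear(xs, ys)
print(w, b)  # slope near 2, intercept near 0
```

MLlib's estimators solve the same optimization but over partitioned data, which is what makes them practical when the training set no longer fits on one machine.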

Taught by

Dr. Mohit Bhatnagar
