Learn to build data pipelines on the Databricks Lakehouse Platform — from architecture concepts to hands-on Spark and Delta Lake. This beginner course starts with why the lakehouse pattern replaced separate data warehouses and data lakes, then moves directly into the Databricks workspace where you'll configure compute, write PySpark and SQL queries, and manage data with Unity Catalog's three-level namespace.
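To make the three-level namespace concrete, here is a minimal SQL sketch of how Unity Catalog addresses tables as `catalog.schema.table`. The names (`main`, `sales`, `orders`) are illustrative placeholders, not taken from the course:

```sql
-- Unity Catalog resolves every table through catalog.schema.table.
USE CATALOG main;
CREATE SCHEMA IF NOT EXISTS sales;

CREATE TABLE IF NOT EXISTS main.sales.orders (
  order_id    BIGINT,
  customer_id BIGINT,
  amount      DOUBLE,
  order_date  DATE
);

-- The fully qualified name works from any notebook or SQL warehouse:
SELECT * FROM main.sales.orders LIMIT 10;
```

Because the catalog and schema levels are explicit, the same query resolves to the same table regardless of which workspace or session defaults are in effect.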
Week by week, you'll progress from navigating the platform to transforming DataFrames with select, filter, groupBy, and joins, then to creating Delta Lake tables with ACID transactions, schema enforcement, and time travel. You'll perform real DML operations — INSERT, UPDATE, DELETE, and MERGE — and learn to schedule production pipelines using Databricks Jobs with DAG-based orchestration.
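The course covers these operations in both PySpark and SQL; as a sketch of the SQL side, here is what an aggregation plus the DML and time-travel features look like against a Delta table. Table and column names (`main.sales.orders`, `updates`) are hypothetical:

```sql
-- Aggregation (the SQL counterpart of a DataFrame groupBy):
SELECT customer_id, SUM(amount) AS total
FROM main.sales.orders
GROUP BY customer_id;

-- DML on a Delta table, backed by ACID transactions:
UPDATE main.sales.orders SET amount = amount * 1.1 WHERE order_date = '2024-01-01';
DELETE FROM main.sales.orders WHERE amount < 0;

-- Upsert: update matching rows, insert the rest.
MERGE INTO main.sales.orders AS t
USING updates AS s
ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Time travel: query an earlier version of the table.
SELECT * FROM main.sales.orders VERSION AS OF 3;
```

Each write produces a new table version, which is what makes the `VERSION AS OF` query possible.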
The course runs entirely on Databricks Free Edition, so there's no cloud billing to set up. Six hands-on labs reinforce the modules: you'll explore the workspace, write notebook-based transformations, build Delta tables, and wire up an automated workflow. By the end, you'll have built a complete data engineering pipeline from raw ingestion through Delta Lake to scheduled production jobs.
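The idea behind DAG-based orchestration is that a job runs each task only after its dependencies finish. As a minimal plain-Python sketch of that ordering (the task names are hypothetical and this is not the Databricks Jobs API, just the underlying concept using the standard library):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical pipeline tasks: each key runs only after its listed
# dependencies complete, mirroring how a Databricks Job orders tasks
# from its dependency graph.
tasks = {
    "ingest_raw": set(),
    "clean": {"ingest_raw"},
    "build_delta_table": {"clean"},
    "publish_report": {"build_delta_table"},
}

# static_order() yields tasks in a dependency-respecting sequence.
run_order = list(TopologicalSorter(tasks).static_order())
print(run_order)
```

In a real Databricks Job you declare the same dependencies in the task configuration, and the scheduler performs this ordering for you, running independent tasks in parallel where it can.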