Build production-ready data pipelines using Delta Live Tables and the Medallion Architecture on Databricks. This hands-on course teaches you to design, implement, and monitor ETL workflows that transform raw data into reliable, business-ready datasets through a structured bronze-silver-gold layering pattern.
This course is aimed at aspiring and practicing data engineers, along with analysts and developers looking to build their first lakehouse pipelines.
You will start by mastering DLT fundamentals — declarative pipeline syntax in both SQL and Python, streaming ingestion with Auto Loader, and schema evolution strategies. Next, you will implement each Medallion Architecture layer: bronze for raw ingestion with lineage tracking, silver for data cleaning with expectations-based quality gates, and gold for business aggregations optimized with Z-ordering and partitioning.
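To make the layering concrete, here is a minimal sketch of all three Medallion layers in DLT SQL. The landing path `/data/raw/orders`, the column names, and the `pipelines.autoOptimize.zOrderCols` table property are illustrative assumptions, not course material; the general shape (Auto Loader via `cloud_files`, expectations as quality gates, an aggregated gold table) follows the pattern described above.

```sql
-- Bronze: raw ingestion via Auto Loader (cloud_files), with lineage columns.
-- Path and column names are hypothetical.
CREATE OR REFRESH STREAMING LIVE TABLE bronze_orders
COMMENT "Raw orders as ingested; schema inferred and evolved by Auto Loader"
AS SELECT
  *,
  current_timestamp() AS ingested_at,
  _metadata.file_path AS source_file   -- lineage: which file each row came from
FROM cloud_files("/data/raw/orders", "json");

-- Silver: cleaned rows guarded by expectations-based quality gates.
CREATE OR REFRESH STREAMING LIVE TABLE silver_orders (
  CONSTRAINT valid_order_id  EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW,
  CONSTRAINT positive_amount EXPECT (amount > 0)           ON VIOLATION DROP ROW
)
AS SELECT order_id, customer_id, CAST(amount AS DOUBLE) AS amount, order_ts
FROM STREAM(live.bronze_orders);

-- Gold: business aggregation; Z-order hint via a table property (assumed name).
CREATE OR REFRESH LIVE TABLE gold_daily_revenue
TBLPROPERTIES ("pipelines.autoOptimize.zOrderCols" = "order_date")
AS SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM live.silver_orders
GROUP BY DATE(order_ts);
```

Each table reads from the previous layer through the `live.` schema, which is how DLT infers the pipeline's dependency graph without any explicit orchestration code.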
The course culminates in a capstone project where you build a complete inventory management system using Change Data Capture with `apply_changes()`, multi-source ingestion, and end-to-end pipeline orchestration. Every concept is reinforced through labs on Databricks Community Edition — no paid account required.
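The Python `apply_changes()` API used in the capstone has a direct DLT SQL counterpart, `APPLY CHANGES INTO`. The sketch below shows the general shape; the table names, key column, and CDC metadata columns (`operation`, `event_ts`) are hypothetical placeholders, not the capstone's actual schema.

```sql
-- Declare the target streaming table that APPLY CHANGES will maintain.
CREATE OR REFRESH STREAMING LIVE TABLE silver_inventory;

-- Apply CDC events (inserts, updates, deletes) from the bronze feed.
APPLY CHANGES INTO live.silver_inventory
FROM STREAM(live.bronze_inventory_cdc)
KEYS (item_id)                          -- primary key for matching rows
APPLY AS DELETE WHEN operation = "DELETE"
SEQUENCE BY event_ts                    -- resolves out-of-order events
COLUMNS * EXCEPT (operation, event_ts)  -- drop CDC metadata from the target
STORED AS SCD TYPE 1;                   -- keep only the latest row per key
```

`STORED AS SCD TYPE 2` would instead retain full change history with validity ranges, a trade-off covered by the choice of slowly changing dimension type.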
Whether you are transitioning from batch ETL to streaming or building your first lakehouse pipeline, this course gives you the practical skills employers demand in modern data engineering roles.