PySpark: Python, Spark and Hadoop Coding Framework & Testing

via Udemy

Overview

PyCharm: Big Data Python Spark, PySpark Coding Framework, Logging, Error Handling, Unit Testing, PostgreSQL, Hive

What you'll learn:
  • Industry-standard Python Spark (PySpark) coding practices: logging, error handling, reading configuration, and unit testing
  • Building a data pipeline using Hive, Spark and PostgreSQL
  • Python Spark Hadoop development using PyCharm

This course will bridge the gap between academic learning and real-world applications, preparing you for an entry-level Big Data Python Spark developer role. You will gain hands-on experience and learn industry-standard best practices for developing Python Spark applications. Covering both Windows and Mac environments, this course ensures a smooth learning experience regardless of your operating system.

You will learn Python Spark coding best practices to write clean, efficient, and maintainable code. Logging techniques will help you track application behavior and troubleshoot issues effectively, while error handling strategies will ensure your applications are robust and fault-tolerant. You will also learn how to read configurations from a properties file, making your code more adaptable and scalable. Key modules:


  • Python Spark coding best practices for clean, efficient, and maintainable code using PyCharm

  • Implementing logging to track application behavior and troubleshoot issues

  • Error handling strategies to build robust and fault-tolerant applications

  • Reading configurations from a properties file for flexible and scalable code

  • Developing applications using PyCharm in both Windows and Mac environments

  • Setting up your local machine as a Hadoop Hive environment

  • Reading and writing data to a Postgres database using Spark

  • Working with Python unit testing frameworks to validate your Spark applications

  • Building a complete data pipeline using Hadoop, Spark, and Postgres
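The configuration-reading, logging, and error-handling practices listed above can be sketched in plain Python. This is a minimal illustration only, not the course's code: the file contents, key names, and `load_config` helper are hypothetical, and a properties-style file is read here with the standard-library `configparser`.

```python
import configparser
import logging
import tempfile

# Configure logging once, at application start, so every module can log
# through the same handler and format.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("pipeline")

# A tiny properties-style file; in a real job this would live alongside
# the application code. The keys below are illustrative, not the course's.
PROPERTIES = """
[DEFAULT]
db.host = localhost
db.port = 5432
db.name = warehouse
"""

def load_config(path):
    """Read settings from a properties file, failing loudly if keys are missing."""
    parser = configparser.ConfigParser()
    try:
        with open(path) as fh:
            parser.read_file(fh)
        return {
            "host": parser["DEFAULT"]["db.host"],
            "port": parser["DEFAULT"].getint("db.port"),
            "name": parser["DEFAULT"]["db.name"],
        }
    except (OSError, KeyError, ValueError) as exc:
        # Log the failure with context before re-raising, rather than
        # swallowing it silently.
        logger.error("Could not load configuration: %s", exc)
        raise

# Write the sample properties to a temp file so the sketch is self-contained.
with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as fh:
    fh.write(PROPERTIES)
    path = fh.name

config = load_config(path)
logger.info("Connecting to %s:%s/%s",
            config["host"], config["port"], config["name"])
```

Keeping connection details in a file rather than hard-coding them is what makes the same job portable between a local machine and a cluster.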

Prerequisites:

  • Basic programming skills

  • Basic database knowledge

  • Entry-level understanding of Hadoop

This course uses high-quality AI-generated text-to-speech narration to complement the powerful visuals and enhance your learning experience.

Syllabus

  • Introduction
  • Setting up Hadoop Spark development environment
  • Creating a PySpark coding framework
  • Logging and Error Handling
  • Creating a Data Pipeline with Hadoop Spark and PostgreSQL
  • Reading configuration from properties file
  • Unit testing PySpark application
  • spark-submit
  • Appendix - Big Data Hadoop Hive for beginners
  • Appendix - PySpark on Colab and DataFrame deep dive
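The unit-testing item in the syllabus above boils down to a pattern that works even without a Spark installation: factor the transformation logic into a pure function and test it with Python's built-in `unittest`. A minimal sketch, in which the function name, test data, and field names are all hypothetical:

```python
import unittest

def filter_valid_rows(rows):
    """Keep rows with a positive amount — the kind of pure transformation
    a PySpark job might apply via rdd.filter() or DataFrame.filter()."""
    return [row for row in rows if row.get("amount", 0) > 0]

class FilterValidRowsTest(unittest.TestCase):
    def test_drops_non_positive_amounts(self):
        rows = [{"amount": 10}, {"amount": -5}, {"amount": 0}, {}]
        self.assertEqual(filter_valid_rows(rows), [{"amount": 10}])

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(argv=["filter_valid_rows_test"], exit=False)
```

Because the logic under test takes and returns ordinary Python data, the test runs instantly with no SparkSession; the same function can then be passed to Spark in the actual job.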

Taught by

FutureX Skills

Reviews

4.4 rating at Udemy based on 216 ratings
