Avro, Arrow, Protobuf, Parquet and Why - Data Serialization Formats for Streaming
StreamNative via YouTube
Overview
Explore the landscape of data serialization formats in this 42-minute conference talk, which addresses the overhead JSON imposes on streaming data applications. Dive into the technical advantages and specific use cases of Avro, Apache Arrow, Protocol Buffers (Protobuf), and Parquet, and understand how each serves a distinct role in modern data architectures. Learn practical strategies for schema management and evolution, and discover how proper format selection prevents data conflicts while ensuring backward and forward compatibility across streaming systems. Examine real-world scenarios including SaaS platform optimization, petabyte-scale data processing with Apache Flink, and data lake architectures for long-term storage. Understand the computational efficiency benefits of Apache Arrow's columnar in-memory format for analytical streaming systems, and explore the trade-offs between the different serialization approaches. Gain insights into tooling evolution, data portability considerations, and performance optimization techniques that can turn streaming data operations from potential failure points into seamless, scalable solutions.
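Two of the talk's central ideas can be sketched in plain Python: first, that JSON repeats field names and text-encodes values in every record, while columnar binary layouts (the idea behind Arrow and Parquet) store the schema once and pack each field into a contiguous array; second, that schema-based formats like Avro handle evolution by letting a reader supply defaults for fields an older writer never emitted. This is only an illustrative sketch with hypothetical field names; real pipelines would use the Avro, Arrow, or Protobuf libraries rather than `struct`:

```python
import json
import struct

# Hypothetical streaming records (illustrative only)
records = [{"id": i, "temp": 20.0 + i} for i in range(1000)]

# Row-oriented JSON: field names and text-encoded values repeated per record
json_bytes = json.dumps(records).encode()

# Columnar binary layout: schema implied once, one packed array per field
# (4-byte ints for "id", 8-byte doubles for "temp")
ids = struct.pack(f"{len(records)}i", *(r["id"] for r in records))
temps = struct.pack(f"{len(records)}d", *(r["temp"] for r in records))
columnar_bytes = ids + temps

print(f"JSON: {len(json_bytes)} bytes, columnar: {len(columnar_bytes)} bytes")

# Schema-evolution sketch: a v2 reader supplies a default for a field
# that v1 writers never produced, mimicking Avro's backward-compatibility
# rule (field names are hypothetical)
reader_defaults = {"region": "unknown"}   # field added in schema v2
old_record = {"id": 1, "temp": 21.5}      # written under schema v1
decoded = {**reader_defaults, **old_record}
print(decoded)
```

The columnar buffer is a fixed 12 bytes per record here, while the JSON encoding pays for quotes, braces, and repeated keys on every record; that per-record schema overhead is exactly what the binary formats eliminate.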
Syllabus
Avro, Arrow, Protobuf, Parquet and Why
Taught by
StreamNative