Avro, Arrow, Protobuf, Parquet and Why - Data Serialization Formats for Streaming
StreamNative via YouTube
Overview
Explore the landscape of data serialization formats in this 42-minute conference talk, which addresses the overhead of JSON in streaming data applications. Dive into the technical advantages and specific use cases of Avro, Apache Arrow, Protocol Buffers (Protobuf), and Parquet, understanding how each serves a distinct role in modern data architectures. Learn practical strategies for schema management and evolution, and discover how proper format selection prevents data conflicts while ensuring backward and forward compatibility across streaming systems. Examine real-world scenarios including SaaS platform optimization, petabyte-scale data processing with Apache Flink, and data lake architectures for long-term storage. Understand the computational efficiency benefits of Apache Arrow's columnar in-memory format for analytical streaming systems, and explore the trade-offs between the different serialization approaches. Gain insights into tooling evolution, data portability considerations, and performance optimization techniques that can turn streaming data operations from potential failure points into seamless, scalable solutions.
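The backward/forward compatibility idea the talk covers can be illustrated with a minimal sketch, in plain Python rather than an actual Avro library: a reader schema supplies defaults for fields that an older writer schema never emitted, which is the essence of Avro-style schema resolution. The schema shapes and field names here are hypothetical.

```python
# Illustration only (not the real Avro API): a reader schema with
# defaults can decode records written under an older schema, which is
# what "backward compatibility" means for a schema change.

OLD_SCHEMA = {"fields": {"id": None, "name": None}}            # v1: required fields only
NEW_SCHEMA = {"fields": {"id": None, "name": None,
                         "region": "us-east"}}                 # v2 adds a defaulted field

def resolve(record, reader_schema):
    """Fill in reader-schema defaults for fields missing from the record."""
    out = {}
    for field, default in reader_schema["fields"].items():
        if field in record:
            out[field] = record[field]
        elif default is not None:
            out[field] = default
        else:
            raise ValueError(f"no value or default for required field {field!r}")
    return out

# A record written with the old (v1) schema...
old_record = {"id": 7, "name": "sensor-a"}
# ...is still readable under the new (v2) schema thanks to the default.
print(resolve(old_record, NEW_SCHEMA))
# {'id': 7, 'name': 'sensor-a', 'region': 'us-east'}
```

Adding a field *with* a default keeps old data readable; adding one without a default (or removing a field readers still require) is the kind of breaking change that proper schema management is meant to catch before it reaches production.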
Syllabus
Avro, Arrow, Protobuf, Parquet and Why
Taught by
StreamNative