Engineering Lakehouses with Open Table Formats: Build scalable and efficient lakehouses with Apache Iceberg, Apache Hudi, and Delta Lake

Authors: Dipankar Mazumdar, Vinoth Govindarajan

  • Publisher: Packt Publishing
  • Publication Date: December 26, 2025
  • Language: English
  • Print length: 414 pages
  • ISBN-10: 1836207239
  • ISBN-13: 9781836207238

Jump-start your journey toward mastering open data architectural patterns by learning the fundamentals and applications of open table formats

Key Features

  • Build lakehouses with open table formats using compute engines such as Apache Spark, Flink, Trino, and Python
  • Optimize lakehouses with techniques such as pruning, partitioning, compaction, indexing, and clustering
  • Find out how to enable seamless integration, data management, and interoperability using Apache XTable
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description

Engineering Lakehouses with Open Table Formats provides detailed insights into lakehouse concepts and dives deep into the practical implementation of open table formats such as Apache Iceberg, Apache Hudi, and Delta Lake.

You’ll explore the internals of a table format and learn in detail about the transactional capabilities of lakehouses. You’ll also get hands-on with each table format through exercises that use popular compute engines such as Apache Spark, Flink, and Trino, as well as Python-based tools. The book addresses advanced topics, including performance optimization techniques and interoperability among different formats, equipping you to build production-ready lakehouses. With step-by-step explanations, you’ll get to grips with the key components of lakehouse architecture and learn how to build, maintain, and optimize them.
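
To give a flavor of these hands-on exercises, here is a minimal sketch (not taken from the book) of creating and querying an Apache Iceberg table from PySpark. It assumes Spark 3.x launched with the iceberg-spark-runtime package and a local file-based (Hadoop) catalog; the catalog name local, the warehouse path, and the table local.db.events are illustrative choices.

```python
# Minimal Iceberg-on-Spark sketch. Assumes Spark 3.x with the
# iceberg-spark-runtime package on the classpath; catalog name, warehouse
# path, and table name are illustrative, not taken from the book.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-quickstart")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Create a partitioned Iceberg table and insert a couple of rows.
spark.sql("""
    CREATE TABLE IF NOT EXISTS local.db.events (
        event_id BIGINT,
        event_type STRING,
        event_ts TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(event_ts))
""")
spark.sql("""
    INSERT INTO local.db.events VALUES
        (1, 'click', TIMESTAMP '2025-01-01 10:00:00'),
        (2, 'view',  TIMESTAMP '2025-01-02 11:30:00')
""")

# Query the data, then inspect the table's snapshot history through the
# built-in "snapshots" metadata table.
spark.sql("SELECT * FROM local.db.events").show()
spark.sql("SELECT snapshot_id, operation FROM local.db.events.snapshots").show()
```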

By the end of this book, you’ll be proficient in evaluating and implementing open table formats, optimizing lakehouse performance, and applying these concepts to real-world scenarios, ensuring you make informed decisions in selecting the right architecture for your organization’s data needs.
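
As an illustration of the optimization theme, the sketch below runs two routine Iceberg maintenance operations (file compaction and snapshot expiration) through Spark SQL stored procedures. It assumes the Spark session and local catalog configured in the previous snippet; the table name and timestamp are placeholders, not examples from the book.

```python
# Illustrative maintenance sketch. Assumes the Spark session and "local"
# Iceberg catalog configured in the previous snippet; table name and
# timestamp are placeholders.

# Compact small data files, sorting rows by event_ts during the rewrite,
# which improves file sizes and data skipping on event_ts predicates.
spark.sql("""
    CALL local.system.rewrite_data_files(
        table => 'db.events',
        strategy => 'sort',
        sort_order => 'event_ts'
    )
""")

# Expire old snapshots to keep table metadata and storage in check, part of
# the data lifecycle management topics the book covers.
spark.sql("""
    CALL local.system.expire_snapshots(
        table => 'db.events',
        older_than => TIMESTAMP '2025-01-01 00:00:00',
        retain_last => 5
    )
""")
```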

What you will learn

  • Explore lakehouse fundamentals, such as table formats, file formats, compute engines, and catalogs
  • Gain a complete understanding of data lifecycle management in lakehouses
  • Learn how to systematically evaluate and choose the right lakehouse table format
  • Optimize performance with sorting, clustering, and indexing techniques
  • Use data stored in open table formats with ML tools such as TensorFlow and MLflow
  • Interoperate across different table formats with Apache XTable and UniForm (see the sketch after this list)
  • Secure your lakehouse with access controls and ensure regulatory compliance
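
As a rough sketch of the interoperability idea, the snippet below creates a Delta Lake table with UniForm enabled so that Iceberg-compatible metadata is written alongside the Delta transaction log. It assumes Delta Lake 3.x on Spark (via the io.delta:delta-spark package); the database and table names are illustrative, and Apache XTable offers a separate, format-agnostic sync path that the book covers in its interoperability chapter.

```python
# Assumption-heavy sketch (not the book's code): a Delta Lake table with
# UniForm enabled, so Iceberg-compatible metadata is generated alongside the
# Delta log. Assumes Delta Lake 3.x via the io.delta:delta-spark package;
# database/table names are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-uniform-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS lakehouse")

# The two table properties below enable Delta's UniForm feature, letting
# Iceberg clients read the table through generated Iceberg metadata.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.orders (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DOUBLE,
        order_date  DATE
    ) USING DELTA
    TBLPROPERTIES (
        'delta.enableIcebergCompatV2' = 'true',
        'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```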

Who this book is for

This book is for data engineers, software engineers, and data architects who want to deepen their understanding of open table formats, such as Apache Iceberg, Apache Hudi, and Delta Lake, and see how they are used to build lakehouses. It is also valuable for professionals working with traditional data warehouses, relational databases, and data lakes who wish to transition to an open data architectural pattern. Basic knowledge of databases, Python, Apache Spark, Java, and SQL is recommended for a smooth learning experience.

Table of Contents

  1. Open Data Lakehouse: A New Architectural Paradigm
  2. Transactional Capabilities of the Lakehouse
  3. Apache Iceberg Deep Dive
  4. Apache Hudi Deep Dive
  5. Delta Lake Deep Dive
  6. Catalog and Metadata Management
  7. Interoperability in Lakehouses
  8. Performance Optimization and Tuning in a Lakehouse
  9. Data Governance and Security in Lakehouses
  10. Evaluating and Selecting Open Table Formats
  11. Real-World Applications and Learnings

About the Authors

Dipankar Mazumdar is a Staff Data Engineer Advocate at Onehouse.ai, where he focuses on open source projects such as Apache Hudi and Apache XTable to help engineering teams build and scale robust data analytics platforms. Before this, he worked on critical open source projects such as Apache Iceberg and Apache Arrow at Dremio. For most of his career, he has worked at the intersection of data visualization and machine learning. He has spoken at numerous conferences, including Data+AI, ApacheCon, Scale By the Bay, and Data Day Texas. Dipankar holds a master’s degree in computer science, with research focused on explainable AI techniques.

Vinoth Govindarajan is a seasoned data expert and staff software engineer at Apple Inc., where he spearheads data platforms built on open source technologies such as Iceberg, Spark, Trino, and Flink. Before this, he designed incremental ETL frameworks for real-time data processing at Uber. He is a dedicated open source contributor to projects such as Apache Hudi and dbt-spark, and has shared his expertise through speaking engagements at conferences such as dbt Coalesce and Hudi OSS community meetups. He has published several blogs on building open lakehouses. Vinoth holds a bachelor’s degree in information technology and has authored multiple research papers published in IEEE journals.
