Data Engineering with Apache Spark, Delta Lake, and Lakehouse: Create scalable pipelines that ingest, curate, and aggregate complex data in a timely and secure way


Authors: Manoj Kukreja and Danil Zburivsky
Publisher: Packt Publishing (22 Oct. 2021)
Language: English
Print Length: 480 pages
ISBN-10: 1801077746
ISBN-13: 9781801077743

Book Description


Understand the complexities of modern-day data engineering platforms and explore strategies to deal with them with the help of use case scenarios led by an industry expert in big data
Key Features
Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
Learn how to ingest, process, and analyze data that can be later used for training machine learning models
Understand how to operationalize data models in production using curated data
In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on.
Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure Cloud services effectively for data engineering. You’ll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you’ve explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you’ll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you’ll cover data lake deployment strategies that play an important role in provisioning the cloud resources and deploying the data pipelines in a repeatable and continuous way.
By the end of this data engineering book, you’ll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.
What you will learn
Discover the challenges you may face in the data engineering world
Add ACID transactions to Apache Spark using Delta Lake (see the sketch after this list)
Understand effective design strategies to build enterprise-grade data lakes
Explore architectural and design patterns for building efficient data ingestion pipelines
Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
Automate deployment and monitoring of data pipelines in production
Get to grips with securing, monitoring, and managing data pipelines and models efficiently
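For orientation, here is a minimal PySpark sketch of the kind of Delta Lake usage the book teaches: writing and reading a Delta table, where every write is recorded as an ACID transaction in the table's transaction log. This is an illustrative example, not code from the book; the local path /tmp/delta-table and the application name are assumptions made for the sketch.

    from pyspark.sql import SparkSession
    from delta import configure_spark_with_delta_pip

    # Build a SparkSession with the Delta Lake extensions enabled.
    builder = (
        SparkSession.builder.appName("delta-acid-demo")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )
    spark = configure_spark_with_delta_pip(builder).getOrCreate()

    # Each write to a Delta table is an atomic, ACID-compliant transaction.
    spark.range(0, 5).write.format("delta").mode("overwrite").save("/tmp/delta-table")

    # Readers always see a consistent snapshot taken from the transaction log.
    spark.read.format("delta").load("/tmp/delta-table").show()

Running a sketch like this locally requires the delta-spark package (pip install delta-spark) together with a compatible Apache Spark version.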
