Data Engineering with Scala and Spark: Build streaming and batch pipelines that process massive amounts of data using Scala
Authors: Eric Tome, Rupam Bhattacharjee, David Radford
Publisher: Packt Publishing
Publication Date: 2024-01-31
Language: English
Print Length: 300 pages
ISBN-10: 1804612588
ISBN-13: 9781804612583
Take your data engineering skills to the next level by learning how to utilize Scala and functional programming to create continuous and scheduled pipelines that ingest, transform, and aggregate data
Key Features
- Transform data into a clean and trusted source of information for your organization using Scala
- Build streaming and batch-processing pipelines with step-by-step explanations
- Implement and orchestrate your pipelines by following CI/CD best practices and test-driven development (TDD)
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description
Most data engineers know that performance problems in a distributed computing environment can easily undermine the overall efficiency and effectiveness of data engineering tasks. While Python remains a popular choice for data engineering due to its ease of use, Scala shines in scenarios where the performance of distributed data processing is paramount.
This book will teach you how to leverage the Scala programming language on the Spark framework and use the latest cloud technologies to build continuous and triggered data pipelines. You’ll do this by setting up a data engineering environment for local development and scalable distributed cloud deployments using data engineering best practices, test-driven development, and CI/CD. You’ll also get to grips with the DataFrame, Dataset, and Spark SQL APIs and how to use them. Data profiling and quality in Scala will also be covered, alongside techniques for orchestrating and performance-tuning your end-to-end pipelines to deliver data to your end users.
By the end of this book, you will be able to build streaming and batch data pipelines using Scala while following software engineering best practices.
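To give a flavor of the three Spark APIs mentioned above, here is a minimal, illustrative Scala sketch. It is not taken from the book; the `Ride` case class, column names, and sample data are invented for this example.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical record type used only for this illustration.
case class Ride(city: String, distanceKm: Double)

object SparkApiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-api-sketch")
      .master("local[*]") // local mode for quick experimentation
      .getOrCreate()
    import spark.implicits._

    // Dataset API: strongly typed collection of Ride objects.
    val rides = Seq(Ride("Austin", 12.3), Ride("Austin", 4.1), Ride("Dallas", 7.8)).toDS()

    // DataFrame API: untyped, column-based transformations.
    val byCity = rides.groupBy($"city").avg("distanceKm")

    // Spark SQL API: register a temporary view and query it with SQL.
    rides.createOrReplaceTempView("rides")
    val viaSql = spark.sql("SELECT city, AVG(distanceKm) AS avg_km FROM rides GROUP BY city")

    byCity.show()
    viaSql.show()
    spark.stop()
  }
}
```

All three approaches produce the same aggregation; the book discusses when the type safety of Datasets, the flexibility of DataFrames, or plain SQL is the better fit.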
What you will learn
- Set up your development environment to build pipelines in Scala
- Get to grips with polymorphic functions, type parameterization, and Scala implicits
- Use Spark DataFrames, Datasets, and Spark SQL with Scala
- Read and write data to object stores
- Profile and clean your data using Deequ (a short sketch follows this list)
- Performance tune your data pipelines using Scala
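As a taste of the Deequ-based profiling and quality checks referred to above, here is a minimal, hedged sketch. The `orders` DataFrame, its column names, and the specific checks are invented for illustration; any DataFrame you load would work the same way.

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.SparkSession

object DeequSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("deequ-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Invented sample data with one deliberately missing customer value.
    val orders = Seq(
      ("o-1", "alice", 20.0),
      ("o-2", "bob", 35.5),
      ("o-3", null, 12.0)
    ).toDF("order_id", "customer", "amount")

    // Declare constraints and run them against the DataFrame.
    val result = VerificationSuite()
      .onData(orders)
      .addCheck(
        Check(CheckLevel.Error, "basic quality checks")
          .isComplete("order_id")   // no nulls allowed
          .isUnique("order_id")     // primary-key style constraint
          .isComplete("customer")   // fails because of the null above
          .isNonNegative("amount"))
      .run()

    if (result.status == CheckStatus.Success)
      println("All checks passed")
    else
      println("Some checks failed")

    spark.stop()
  }
}
```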
Who this book is for
This book is for data engineers who have experience working with data and want to understand how to transform raw data into a clean, trusted, and valuable source of information for their organization using Scala and the latest cloud technologies.
Table of Contents
- Scala Essentials for Data Engineers
- Environment Setup
- An Introduction to Apache Spark and Its APIs – DataFrame, Dataset, and Spark SQL
- Working with Databases
- Object Stores and Data Lakes
- Understanding Data Transformation
- Data Profiling and Data Quality
- Test-Driven Development, Code Health, and Maintainability
- CI/CD with GitHub
- Data Pipeline Orchestration
- Performance Tuning
- Building Batch Pipelines Using Spark and Scala
- Building Streaming Pipelines Using Spark and Scala
About the Authors
Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.
David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master’s degree in computer science and works as a senior solutions architect at Databricks.