Generative AI with LangChain: Build production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph
Authors: Ben Auffarth, Leonid Kuligin
Publisher: Packt Publishing
Publication Date: 2025-05-23
Edition: 2nd ed.
Language: English
Print Length: 476 pages
ISBN-10: 1837022011
ISBN-13: 9781837022014
Book Description
Go beyond foundational LangChain documentation with detailed coverage of LangGraph interfaces, design patterns for building AI agents, and scalable architectures used in production—ideal for Python developers building GenAI applications
Key Features
- Bridge the gap between prototype and production with robust LangGraph agent architectures
- Apply enterprise-grade practices for testing, observability, and monitoring
- Build specialized agents for software development and data analysis
- Purchase of the print or Kindle book includes a free PDF eBook
This second edition tackles the biggest challenge facing companies in AI today: moving from prototypes to production. Fully updated to reflect the latest developments in the LangChain ecosystem, it captures how modern AI systems are developed, deployed, and scaled in enterprise environments. This edition places a strong focus on multi-agent architectures, robust LangGraph workflows, and advanced retrieval-augmented generation (RAG) pipelines.
You’ll explore design patterns for building agentic systems, with practical implementations of multi-agent setups for complex tasks. The book guides you through reasoning techniques such as Tree-of-Thoughts, structured generation, and agent handoffs—complete with error-handling examples. Expanded chapters on testing, evaluation, and deployment address the demands of modern LLM applications, showing you how to design secure, compliant AI systems with built-in safeguards and responsible development principles. This edition also expands RAG coverage with guidance on hybrid search, re-ranking, and fact-checking pipelines to enhance output accuracy.
Whether you’re extending existing workflows or architecting multi-agent systems from scratch, this book provides the technical depth and practical instruction needed to design LLM applications ready for success in production environments.
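The LangGraph workflows described above center on one idea: an application is a graph of nodes that each read and update a shared state. As a rough, framework-free sketch of that pattern (plain Python only; the class, node names, and state keys here are invented for illustration and are not the actual LangGraph API):

```python
# Minimal sketch of the state-graph pattern behind LangGraph-style workflows.
# Illustrative only: this mimics the idea of nodes updating shared state,
# not the real LangGraph interfaces.

class StateGraph:
    def __init__(self):
        self.nodes = {}   # node name -> function(state) -> partial state update
        self.edges = {}   # node name -> next node name (missing entry = stop)
        self.entry = None

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def set_entry_point(self, name):
        self.entry = name

    def invoke(self, state):
        node = self.entry
        while node is not None:
            # Each node returns a partial update merged into the shared state.
            state.update(self.nodes[node](state))
            node = self.edges.get(node)
        return state

# Two toy nodes: "draft" produces text, "review" flags it for approval.
graph = StateGraph()
graph.add_node("draft", lambda s: {"draft": f"Answer to: {s['question']}"})
graph.add_node("review", lambda s: {"approved": len(s["draft"]) > 0})
graph.add_edge("draft", "review")
graph.set_entry_point("draft")

result = graph.invoke({"question": "What is LangGraph?"})
```

The same shape scales to the multi-agent setups the book covers: each agent becomes a node, and handoffs become (possibly conditional) edges.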
What you will learn
- Design and implement multi-agent systems using LangGraph
- Implement testing strategies that identify issues before deployment
- Deploy observability and monitoring solutions for production environments
- Build agentic RAG systems with re-ranking capabilities
- Architect scalable, production-ready AI agents using LangGraph and MCP
- Work with the latest LLMs and providers like Google Gemini, Anthropic, Mistral, DeepSeek, and OpenAI’s o3-mini
- Design secure, compliant AI systems aligned with modern ethical practices
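Re-ranking, mentioned in the list above, typically means retrieving a broad candidate set with a cheap, recall-oriented scorer and then re-ordering those candidates with a more precise one. A minimal sketch of that two-stage shape (both scorers are toy keyword-overlap stand-ins for a real vector retriever and cross-encoder re-ranker; all names are hypothetical):

```python
# Toy two-stage retrieve-then-re-rank pipeline. Both scoring functions are
# simplistic stand-ins: a production system would use vector search for the
# first stage and a cross-encoder model for the second.

def retrieve(query, docs, k=3):
    """First stage: cheap recall-oriented scoring over the whole corpus."""
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [d for _, d in scored[:k]]

def rerank(query, candidates):
    """Second stage: a (pretend) more precise scorer over the short list."""
    terms = query.lower().split()

    def score(doc):
        words = doc.lower().split()
        # Reward query terms that appear early in the document.
        return sum(1.0 / (words.index(t) + 1) for t in terms if t in words)

    return sorted(candidates, key=score, reverse=True)

docs = [
    "LangGraph builds agent workflows as graphs",
    "Cats sleep most of the day",
    "Agent workflows need observability in production",
    "Graphs model agent state transitions in LangGraph",
]
query = "LangGraph agent workflows"
top = rerank(query, retrieve(query, docs))
```

The design point is the split itself: the first stage trades precision for coverage, and the second stage spends more compute on far fewer documents.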
Who this book is for
This book is for developers, researchers, and anyone looking to learn more about LangChain and LangGraph. With a strong emphasis on enterprise deployment patterns, it’s especially valuable for teams implementing LLM solutions at scale. While the first edition focused on individual developers, this updated edition expands its reach to support engineering teams and decision-makers working on enterprise-scale LLM strategies. A basic understanding of Python is required, and familiarity with machine learning will help you get the most out of this book.
Table of Contents
- The Rise of Generative AI: From Language Models to Agents
- First Steps with LangChain
- Building Workflows with LangGraph
- Building Intelligent RAG Systems with LangChain
- Building Intelligent Agents
- Advanced Applications and Multi-Agent Systems
- Software Development and Data Analysis Agents
- Evaluation and Testing
- Observability and Production Deployment
- The Future of LLM Applications
About the Authors
Ben Auffarth is a full-stack data scientist with more than 15 years of work experience. With a background and Ph.D. in computational and cognitive neuroscience, he has designed and conducted wet lab experiments on cell cultures, analyzed experiments with terabytes of data, run brain models on IBM supercomputers with up to 64k cores, built production systems processing hundreds of thousands of transactions per day, and trained language models on a large corpus of text documents. He co-founded and is the former president of Data Science Speakers, London.
Leonid Kuligin is a staff AI engineer at Google Cloud, working on generative AI and classical machine learning solutions (such as demand forecasting and optimization problems). Leonid is one of the key maintainers of Google Cloud integrations on LangChain, and a visiting lecturer at CDTM (TUM and LMU). Prior to Google, Leonid gained more than 20 years of experience building B2C and B2B applications based on complex machine learning and data processing solutions such as search, maps, and investment management at German, Russian, and US technology, financial, and retail companies.