
How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus just making one LLM request? And how do you debug, test, and deploy these things? The industry is clearly in exploration mode—we're seeing good ideas implemented badly and expensive mistakes made at scale. But Google needs to get this right more than most companies, because AI is both their biggest opportunity and an existential threat to their search-based business model.
Christina Lin from Google joins us to discuss Agent Development Kit (ADK), Google's open-source Python framework for building agentic pipelines. We dig into the fundamental question of when agent pipelines make sense versus traditional code, exploring concepts like separation of concerns for agents, tool calling versus MCP servers, Google's grounding feature for citation-backed responses, and agent memory management. Christina explains A2A (Agent-to-Agent), Google's protocol for distributed agent communication that could replace both LangChain and MCP. We also cover practical concerns like debugging agent workflows, evaluation strategies, and how to think about deploying agents to production.
If you're trying to figure out when AI belongs in your processing pipeline, how to structure agent systems, or whether frameworks like ADK solve real problems or just add complexity of their own, this episode breaks down Google's approach to making agentic systems practical for production use.
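To give a flavour of the kind of code discussed in the episode, here's a minimal sketch of an ADK agent with a single tool function, loosely following the shape of the ADK quickstart. The exact module path, the model ID, and the lookup_order tool are assumptions for illustration; check the documentation linked below for the current details.

```python
# Minimal ADK-style agent sketch. Names follow the ADK quickstart, but treat
# the module path and model ID as assumptions and verify against the docs.
from google.adk.agents import Agent

def lookup_order(order_id: str) -> dict:
    """Hypothetical tool: fetch the status of an order by its ID.

    ADK exposes plain Python functions like this to the model as callable
    tools, using the type hints and docstring to describe them.
    """
    # In a real pipeline this would call a database or internal service.
    return {"order_id": order_id, "status": "shipped"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any available Gemini model ID
    description="Answers customer questions about orders.",
    instruction="Use the lookup_order tool when the user asks about an order.",
    tools=[lookup_order],
)
```

From there, the ADK command-line tooling (for example, `adk web` for a local development UI) handles running and inspecting the agent, which is where the debugging and evaluation topics from the episode come in.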
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Google Agent Development Kit (ADK): https://cloud.google.com/products/agent-development-kit
ADK Documentation: https://cloud.google.com/agent-development-kit/docs
ADK on GitHub: https://github.com/google/genai-adk
Agent-to-Agent (A2A) Protocol: https://cloud.google.com/agent-development-kit/docs/a2a
Google Gemini: https://ai.google.dev/gemini-api
Google Vertex AI: https://cloud.google.com/vertex-ai
Google AI Studio: https://aistudio.google.com/
Google Grounding with Google Search: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview
Model Context Protocol (MCP): https://modelcontextprotocol.io/
Anthropic MCP Servers: https://github.com/modelcontextprotocol/servers
LangChain: https://www.langchain.com/
Ollama (Local LLM Runtime): https://ollama.com/
Claude (Anthropic): https://www.anthropic.com/claude
Cursor (AI Code Editor): https://cursor.sh/
Python: https://www.python.org/
Jujutsu (Version Control): https://github.com/martinvonz/jj
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/