
Can Google's ADK Replace LangChain and MCP? (with Christina Lin)

Developer Voices

How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus just making one LLM request? And how do you debug, test, and deploy these things? The industry is clearly in exploration mode—we're seeing good ideas implemented badly and expensive mistakes made at scale. But Google needs to get this right more than most companies, because AI is both their biggest opportunity and an existential threat to their search-based business model.

Christina Lin from Google joins us to discuss Agent Development Kit (ADK), Google's open-source Python framework for building agentic pipelines. We dig into the fundamental question of when agent pipelines make sense versus traditional code, exploring concepts like separation of concerns for agents, tool calling versus MCP servers, Google's grounding feature for citation-backed responses, and agent memory management. Christina explains A2A (Agent-to-Agent), Google's protocol for distributed agent communication that could replace both LangChain and MCP. We also cover practical concerns like debugging agent workflows, evaluation strategies, and how to think about deploying agents to production.

If you're trying to figure out when AI belongs in your processing pipeline, how to structure agent systems, or whether frameworks like ADK solve real problems versus creating new complexity, this episode breaks down Google's approach to making agentic systems practical for production use.

--

Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices

Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join

Google Agent Development Kit (ADK): https://cloud.google.com/products/agent-development-kit

ADK Documentation: https://cloud.google.com/agent-development-kit/docs

ADK on GitHub: https://github.com/google/genai-adk

Agent-to-Agent (A2A) Protocol: https://cloud.google.com/agent-development-kit/docs/a2a

Google Gemini: https://ai.google.dev/gemini-api

Google Vertex AI: https://cloud.google.com/vertex-ai

Google AI Studio: https://aistudio.google.com/

Google Grounding with Google Search: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview

Model Context Protocol (MCP): https://modelcontextprotocol.io/

Anthropic MCP Servers: https://github.com/modelcontextprotocol/servers

LangChain: https://www.langchain.com/

Ollama (Local LLM Runtime): https://ollama.com/

Claude (Anthropic): https://www.anthropic.com/claude

Cursor (AI Code Editor): https://cursor.sh/

Python: https://www.python.org/

Jujutsu (Version Control): https://github.com/martinvonz/jj

Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Transcript

Integration of AI in System Design

00:00:00
Speaker
How do you build systems with AI? And I don't mean how do you write code with AI. I mean, how do you write code that will use AI as part of its processing?
00:00:11
Speaker
And is that a good idea? I mean, when is it a good idea? When is it a bad idea? I think the industry is very much in the exploration phase for all of those questions, which means we're going to see a lot of good ideas implemented badly and a lot of bad ideas implemented at great expense.

Google's AI Challenges and Opportunities

00:00:31
Speaker
I don't think anyone really has the answer yet, but I do know one company that really needs to get it right, and that's Google. Because I think for Google, AI is both a massive opportunity and an existential threat.
00:00:46
Speaker
I say it's a threat because I find myself starting a search these days with Claude or Cursor almost as much as I start searching with Google. And if other people start doing that, if Google is no longer synonymous with search, that kills their ad revenue, which could kill the company as we know it.
00:01:07
Speaker
An existential threat. But AI is also a massive opportunity for them. I mean, the company was built on search, and LLMs are the best thing to happen to searching random documents since the search engine itself was invented.
00:01:23
Speaker
It's also amazing for connecting systems together in a way we couldn't before. There's probably a billion-dollar business just in: find the questions in this document, turn them into SQL,
00:01:36
Speaker
run those queries, turn the answers into a slide deck, and share that with the head of marketing. That probably is a billion-dollar business. And it's now possible to build something like that.
00:01:48
Speaker
And Google have a huge opportunity to connect those systems together with LLMs. If they can answer the question I started with, how do you build systems that use AI as part of their processing pipelines?
00:02:02
Speaker
Well, they have one answer and they've just open sourced it.

Christina Lin's Role and AI's Influence at Google

00:02:06
Speaker
So joining me this week to discuss it is Christina Lin of Google. And we're going to talk about the effect of AI on Google, about Agent Development Kit, which is their Python toolkit for building agentic pipelines, and about A2A, which is their answer to MCP and their attempt to get agents talking to each other directly.
00:02:30
Speaker
This stack could replace LangChain and MCP and several other answers to this question. Heaven knows Google kind of need it to. They are motivated to try. So let's find out what it involves and what kind of systems we could build with it.
00:02:46
Speaker
I'm your host, Chris Jenkins. This is Developer Voices. And today's voice is Christina Lin.
00:03:03
Speaker
My guest today is Christina Lin. Christina, how are you? Great. Thanks for having me here, Chris. You're back. It's very cool you're back. You were one of the early guests, working for a different company then. I know. Things have changed a lot. Now you're at Google, right? Yes.
00:03:20
Speaker
Yeah, what's it like being in the giant megacorp? I think it's rather challenging, and things are moving a lot faster, I think just because of AI generally. And there are a lot more folks I can talk to. The community is very vibrant here. So yeah, I love everything here.
00:03:44
Speaker
Okay, well, as an unofficial spokesperson for Google, official spokesperson for a tiny cog in the giant Google empire. Yes. I want to get your broad perspective before we dig into the technology, right?

AI's Impact on Google's Mission

00:03:59
Speaker
Because I look at Google and I wonder what they're doing. Let me frame it this way.
00:04:06
Speaker
Google have always had this mission to organize the world's data. That was how they used to frame it. And I could see AI as being a fantastic opportunity for a company that wanted to organize vast amounts of data, the world's unstructured data, right? I can also see AI being a massive threat to Google, because we all know that Google's main business model is advertising on search.
00:04:33
Speaker
And I think AI is changing the way people search for things. So it's an opportunity, but it could also be an existential threat. From the inside, where do you see AI affecting the thrust of the company?
00:04:48
Speaker
So AI has been changing Google from top to bottom, right? I mean, not just the old search, but also how we operate and how we design our products, all these kinds of things. So I think, on top of...
00:05:04
Speaker
If you think about how we develop software today in Google, 25% of our code has been generated by AI. So in those terms, we are being a lot more efficient at getting software out, getting products out faster for our developers and our users. At the same time, I don't think people are moving away from search. It's just that the way they see search is different.
00:05:27
Speaker
People don't tend to go through every single one of the results anymore. Now they just want to see the summaries of these search results. So Google has been changing the way we present search results. I don't know if you've been using Google Search recently, but you see this summary at the very top of your search results. That's something that we're constantly evolving.
00:05:49
Speaker
But we're not just removing all the things that we had before, because if you want to dive into the details, you can still do that. And to be honest with you, I feel like Google has been in AI way ahead of everybody else. We were there doing Google Research, Google DeepMind has been doing AI, and we were the first company to put AI compliance and policies in place ahead of everybody else. So I think Google is very bold about innovating on AI, but also trying to be responsible. That's my view of what Google has been doing in the AI space.
00:06:31
Speaker
Yeah, yeah. But there must be some degree of worry in there, too. It can't all be positive. Because for, what, 10, 20 years, every time I wanted to know something, I'd go to Google and search for it. And now maybe a third, maybe a half of my queries go through something like Claude.
00:06:50
Speaker
Right. I mean, that's going to be a big change, but I think it's a good change. But nothing is all good; there are going to be threats, right? There's going to be competition coming up. So I think Google is ready for the competition.
00:07:02
Speaker
I'm not sure. I'm not a fortune teller, so I don't know what the future will look like. But from what I see and what Google offers today, I think it's a very strong foundation to build things on top of. So I think there's still a long way ahead, because we're not just talking about text search. We're also thinking about multimodal search: searching a film, searching a podcast like the one we're recording today. How do we make all this searchable and efficient? I don't think other competitors are there yet, but Google is way ahead of everybody else in terms of multimodality. So there are different things you have to take into consideration, and how we interact with the computer is going to be very different from today. Today it's mainly text-based.
00:07:52
Speaker
But if you've seen the recent releases from DeepMind, what they've been doing is real-time virtual realities. So you're going to be communicating with your computers with your gestures, with your voice, or with your movement. There are different things that the computer can detect. So it's still a long way to go.
00:08:17
Speaker
So we'll see. We are closer than ever to that Star Trek computer that talks to you and maybe projects a holodeck to describe what you're talking about. Yeah, it's speeding up. I would say it's ever-evolving now, and things are moving at a faster speed.
00:08:37
Speaker
Okay, well, you must be in a fun place. We need to get into exactly what you're doing and how it can enable us to have some of that fun too, right? I know. Well, one step at a time. We'll get there eventually, but we're going to start somewhere.
00:08:53
Speaker
So one of the things you're working on is an agent development toolkit. Agent Development Kit. Yes. Got to get that acronym right, or otherwise it becomes ADT, and that's busy. ADK. Yes.
00:09:09
Speaker
So I'm going to get you to start off by helping me understand the structure of these things. One thing I confess I don't quite get, and I don't think anyone quite gets yet, is why you build agent pipelines, right? I see some companies acting as though they're going to replace all their regular software processing with AI software processing, and that seems ridiculous.
00:09:34
Speaker
What's a good case for building an agentic pipeline?

Agentic Pipelines and Workflow Evolution

00:09:39
Speaker
What even is that? So if you think about how we wrote our pipelines a year ago, it's very static. It's very linear. You put in your intent, that's your user prompt, and then you send it off to an LLM, which is a model that processes your request, and then it will...
00:10:02
Speaker
come up with a response and hand it back to you, right? So that was the very original pipeline, and it's very static. We've seen that before, and it's been done that way forever. But now we're seeing a lot more complex use cases.
00:10:15
Speaker
So there are things like, hey, what happens if I want to do multiple LLM requests? What happens if I want to do this one and then maybe jump to another one? And then how do I aggregate all this together? Things get very complicated, right? So that's when... Explain to me what...
00:10:32
Speaker
Before we go on, explain to me why you want to do that. I mean, it seems like these modern models are all really quite similar. Right. I think it's because now, if you ask an LLM to help you analyze a plan that you want to do on Saturday, that's one single task. What happens if you want to ask it to book your entire night out?
00:11:02
Speaker
So then you have to trigger another request going out somewhere. And what happens if you want to go to three different places? So there are different things you want to do. Of course, then we get into: do I want to do that in one single LLM call? Because there are different tasks you probably want to do. Some you can do in parallel; sometimes you want to do them in sequence, because sometimes you want to do the planning first and then simultaneously book your stuff. And if you have option A with a backup plan in case that one's full, then you have a sequential flow. And sometimes you want to do a loop, because that hotel or that specific place you want to go might be busy, so you want to keep coming back and trying to book your space. So there are just many different workflows that can happen, and that gets very complicated. And these can all be handled by LLMs, by the way. You don't have to do it manually or programmatically. And that's why we want to talk about these pipelines.
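The sequential, parallel, and loop patterns Christina describes can be sketched without any framework at all. The snippet below is a toy model of that orchestration, not ADK code: the "agent" steps are plain stub functions, and the real planning and booking logic is imagined.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for agent/LLM calls. Each takes a shared context dict
# and returns an update to merge back in. The bodies are stubs.
def plan_evening(ctx):
    return {"plan": ["dinner", "movie"]}

def book_restaurant(ctx):
    return {"restaurant": "booked"}

def book_cinema(ctx):
    return {"cinema": "booked"}

def run_sequential(ctx, steps):
    """Run steps one after another, threading the shared context through."""
    for step in steps:
        ctx.update(step(ctx))
    return ctx

def run_parallel(ctx, steps):
    """Run independent steps concurrently and merge their results."""
    with ThreadPoolExecutor() as pool:
        for result in pool.map(lambda s: s(ctx.copy()), steps):
            ctx.update(result)
    return ctx

def run_loop(ctx, step, done, max_tries=3):
    """Retry a step until a condition holds, e.g. a busy venue finally accepts."""
    for _ in range(max_tries):
        ctx.update(step(ctx))
        if done(ctx):
            break
    return ctx

ctx = run_sequential({}, [plan_evening])                  # plan first...
ctx = run_parallel(ctx, [book_restaurant, book_cinema])   # ...then book simultaneously
```

In a real agentic pipeline the interesting part is that an LLM, not your code, decides which of these shapes to follow; the point here is only that "sequential", "parallel", and "loop" are the three control-flow primitives being composed.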

Defining Agents and Autonomous Systems

00:12:02
Speaker
Right. Yes. So they might be separate contexts going to the same model, but different instances of it. Exactly. Yeah, yeah. Yeah, I could see instantly how I'd want to have a completely separate context for running the tests on my code,
00:12:18
Speaker
and not letting the other agent cheat and disable the tests, right? Exactly, yes. And then sometimes we go into multi-model designs or single-model approaches, or sometimes I don't even want to use models, right? So things like that, you have to figure out. And then: do I want to bring in a workflow or not? Do I want to have the LLM do everything together for me, or do I want to separate it out? These are just design choices and design questions that you have to think about when you are designing your systems.
00:12:53
Speaker
This seems like something I could put together manually. Yes. But you're building a toolkit to just make that easy, right? Exactly. Yes.
00:13:04
Speaker
And we call that an agent, mostly because things get complicated. Traditionally, we just had it done by an LLM, and we called that a chain, right?
00:13:16
Speaker
Like, LangChain was there before, doing all the chain stuff: that's one request coming back, one interaction with the LLM. But the problem right now is that the LLM is a good brain, because it has a very large thinking ability to help you process things; intellectually and cognitively, it can do that. But when it comes to actually executing, you have to have it interact with the outside world, right? So sometimes it has to have functions or APIs to interact with. And that brings us to: if my LLM has already decided to do things for me, can I have them automatically executed for me? So I want it to be autonomous, right? That's why we want to bring in agents. So when we talk about the Agent Development Kit, we are actually developing an agent: an agent that can think for you and that can do things for you. That's how we define an agent.
00:14:10
Speaker
This framework can help you define and design that agent. So the fact that it can act is what distinguishes it from just being an LLM?

Framework of Agent Components

00:14:19
Speaker
Yes, that's correct. Right.
00:14:21
Speaker
Okay. So I have in my head this architecture, which is: the LLM is the big magic eight ball of all human knowledge.
00:14:33
Speaker
Yep. A bunch of prompts to try and tease the right thing out of the magic eight ball. And then the functions you've given it permission to execute. Yes.
00:14:44
Speaker
That's a good starting point? Yeah, that would be something. I think you're totally describing the agent, right? So you've got the model itself. I always imagine it as a robot, where you have the processing unit, the CPU. That's the brain; that's the LLM, right? Yeah. And then the robot needs to take in directions from humans. That's the user prompt: your intentions and all those kinds of questions. And then the robot actually has to execute stuff. So that's the hands or the feet, the actions that it does. Those are the tools, right? So these are the things that make up what an agent does. And on top of that, you also have to handle a bunch of other things, like memory. Does your agent memorize things? Does it remember the entire session of what you've talked about? Or does it remember a week of information that you've given it? Or does it only interact if you want?
00:15:50
Speaker
It just depends on your design and all that kind of stuff, right? So all that needs to be built into the agent itself. So, as we dive into the actual kit, I think the first question is: is this a library or is it a framework? Is it a collection of tools, or is it also an opinion about how you put them together?
00:16:13
Speaker
Well, first of all, I think it's an opinion of how we think an agent should be put together, and that becomes a framework that you can use. Within this framework, we have libraries that make writing an agent a lot easier, right? So we do have opinions, saying that an agent has to have a model. You have to have your intent, which is your system prompt. And then you also want to have a list of tools for how you want things to be executed. These three are the mandatory things that you want to have in your agents, and there is other stuff that you can opt in or opt out of. So it depends on what you want to assemble. It's very similar to assembling your robot, right? It depends on what you want, and you can just assemble the pieces together.
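Schematically, the three mandatory pieces Christina lists (a model, an intent/system prompt, and a list of tools) fit together like this. The class below is a framework-free mock written to make the shape concrete; it is not the real ADK API, and the model name and tool are illustrative placeholders.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToyAgent:
    # The three mandatory pieces: a model, an intent (system prompt), and tools.
    model: str
    instruction: str
    tools: list[Callable] = field(default_factory=list)

    def tool_catalog(self):
        """What the LLM effectively 'sees': each tool's name plus its docstring."""
        return {t.__name__: (t.__doc__ or "").strip() for t in self.tools}

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"sunny in {city}"  # stub body; a real tool would call an API

agent = ToyAgent(
    model="gemini-2.0-flash",  # placeholder model id; local models work too
    instruction="You are a helpful assistant. Use tools when needed.",
    tools=[get_weather],
)
```

The optional pieces discussed later (memory, session handling, built-in tools) would be the opt-in fields added on top of these three.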
00:16:59
Speaker
Okay, let's go for a concrete example. I can think of an agent I might want to include in my personal pipeline that I don't think exists yet, which is for Jujutsu, the version control system, right? I can easily find an agent that will work with Git.
00:17:18
Speaker
I now want to use a different version control system. What do I build to say: here's a sub-pipeline for committing and status and all that stuff? So what you do is you will probably have a tool that interacts with Jujutsu, right? Or your new version control platform. The intent of your agent won't change, so the model that you use won't change. The system prompt won't change, because it's handling all the version control stuff: I am a version control agent. The user prompt just depends on what you input. It is the tool that will change underneath the hood, right? So what you can do is have a tool that connects to your newly added version control platform.
00:18:07
Speaker
And you can also have it connect to Jujutsu as the other option. So in your prompt, you can say: hey, can I commit my code into Jujutsu?
00:18:19
Speaker
And basically what the agent will think is: all right, I'm going to commit this piece of code into your Jujutsu repository. And then you can say, well, what happens if I want to commit it to my new version control repository? And it's going to do that. Or you can just say, hey, commit to my repository. And what the LLM can do, depending on what methods or functions you have in your tools, is: you might have a list of repositories, so the agent will look for all the available repositories in each version control system and see which one it is. It will automatically pick the right one and commit your code to the right system. So that's how you can do that.
00:19:01
Speaker
So do I write some Python that says, here's a Python function that looks for dot directories to figure out which kind of version control I'm using. Here's a function that does Jujutsu status and returns that as JSON.
00:19:16
Speaker
Actually, you don't have to do that. Basically, you just have to have a list of tools that the agent will have access to.
00:19:28
Speaker
A couple of things that you probably need are: a list of repositories, how do I commit, and how do I interact with it. Each of these tools will provide a name and what it does. Think of it as a toolbox, right? You have your Jujutsu toolbox and your other version control toolbox, and these are all the tools available. Your agent will be smart enough to go and look into the toolbox, see all these available tools, and think: all right, I probably want to use this particular tool to see which repositories are available in my version control system.
00:20:04
Speaker
So I'm going to use this tool. And after that, it's going to think: all right, if I want to commit, should I use this one? So it's very important that you specify in your system prompt, telling your agent: you are a version control agent; it is your job to figure out how to use these tools to help users. And you want to make sure you're committing to the right system, and if there are similar names, make sure you check with the user which system they want to commit to. Things like that need to be clear in your system prompt. The clearer it is, the better your agent performs.
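A hypothetical version of that toolbox might look like the following. The function names, docstrings, and system prompt are all invented for illustration; the bodies are in-memory stubs rather than real Jujutsu or Git calls. The docstrings matter because, as discussed above, they are what the LLM reads to match a tool to the user's intent.

```python
# Fake in-memory state standing in for two version control systems.
_REPOS = {"jujutsu": ["my-app"], "git": ["legacy-app"]}
_LOG: list[tuple[str, str, str]] = []

def list_repositories() -> dict:
    """List the repositories available in each version control system."""
    return _REPOS

def commit_code(system: str, repo: str, message: str) -> str:
    """Commit staged changes to the named repository in the given system."""
    if repo not in _REPOS.get(system, []):
        return f"error: no repo '{repo}' in {system}"
    _LOG.append((system, repo, message))
    return f"committed to {system}:{repo}"

# An illustrative system prompt following Christina's advice: state the role,
# the job, and the disambiguation rule explicitly.
SYSTEM_PROMPT = (
    "You are a version control agent. Use the tools to act on the user's "
    "behalf. Always list repositories first to pick the right system, and if "
    "two systems have similar repo names, ask the user which one they mean."
)
```

The agent would be given `[list_repositories, commit_code]` as its tool list and `SYSTEM_PROMPT` as its instruction; everything else (picking the tool, filling in `system` and `repo`) is the LLM's job.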
00:20:49
Speaker
Okay. So there are two parts to this I need to clarify if I want to be able to use this. One is I know most LLMs are probably trained up to about 2024. Okay.
00:21:01
Speaker
but not much further, right, these days. So I'm going to write a prompt to you're a version control system. And it probably knows what that means because they were ubiquitous in 2024.
00:21:13
Speaker
Then I don't expect it's going to know a new system like Jujutsu. Do I then say, and here's the manual for Jujutsu, or here's where you find the manual, to understand what it means to be a version control system?
00:21:27
Speaker
So basically your tool needs to define what it is, right? The tool itself has to have some kind of definition of what it does and what it is, so the LLM will know ahead of time. Basically, it will go to these toolboxes and search for their names and what they do. So if you are a better designer of tools, you'll probably give your tool a better name, so the LLM will know the intent. Sometimes people don't do that, and then the LLM has a hard time figuring it out. It all depends on how you design your tools and how you provide that information. And I think that's a good segue into MCP servers. MCP is a protocol that defines how the LLM talks to tools, and it allows you to better define the intent and better define what the tools are. So it allows your LLM to discover the tools a little bit better compared to the normal function calls that you would do. Right. Before MCP servers came out, we did something called function calling.
00:22:36
Speaker
Basically, what that does is you provide a bunch of functions ahead of time for the LLM. And the LLM is going to guess the intent of your users and figure out what function needs to be called.
00:22:48
Speaker
And then you have to manually do all the runnables and executables

Tools and Parameters in Agent Execution

00:22:52
Speaker
for your calls. Like in Python, you have to make it runnable and then execute your functions. But now all you have to do is let your LLM know where they are. It's going to pick up all the information and decide which one to execute. And the library is going to execute it for you. So it's hiding all this complicated code so you don't have to do it yourself.
00:23:14
Speaker
So there's something going on in the LLM to say: I as a user type, please commit this code. And the LLM also has in the prompt: there's git commit available as a function; would you like to use that tool?
00:23:29
Speaker
And the LLM says, okay, the next step is to invoke that tool. Yes, something like that. It will be like: invoke this tool, and this tool will need specific parameters, right? Because sometimes you need to feed in parameters. So what the LLM will do is look at your information, look at your prompt, and see: how do I extract all the parameters from the information I have? So it's going to say: all right, 90% of the time you should call this tool, and this tool will require these parameters; I'm going to assemble all that and then pass it to the library. The library is going to execute the tool with the parameters you pass in. Say you're committing some files: it's going to say, these are the file names that I'm going to commit,
00:24:15
Speaker
and this is the commit message, and this is the user. So basically, it can pass all that in and just do it for you. So my job to build an agent, then, is to find some interesting functions and annotate them for the LLM's sake?
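The "library executes the tool plus the parameters" step can be sketched as a toy dispatcher. Here the structured decision is hard-coded where, in real function calling, the model would return it after extracting the parameters from the conversation; the `commit` function is a hypothetical stand-in.

```python
import inspect
import json

def commit(files: list, message: str, user: str) -> str:
    """Commit the given files with a message on behalf of a user."""
    return f"{user} committed {len(files)} file(s): {message}"

TOOLS = {"commit": commit}

# In real function calling, the model emits a structured choice like this,
# with parameters it extracted from the prompt. Here it's a fixed example.
llm_decision = json.loads(
    '{"tool": "commit", "args": {"files": ["main.py"], '
    '"message": "fix bug", "user": "chris"}}'
)

def dispatch(decision):
    """What the library does for you: look up the tool, check the parameters fit
    its signature, then execute it with the model-supplied arguments."""
    fn = TOOLS[decision["tool"]]
    inspect.signature(fn).bind(**decision["args"])  # raises TypeError on mismatch
    return fn(**decision["args"])
```

The signature check is the part a framework hides from you: if the model hallucinates a parameter the function doesn't take, the dispatch fails loudly instead of calling the tool wrongly.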
00:24:34
Speaker
Well, yes, you can annotate them. There are default tools that come with the ADK itself. And you can integrate with an MCP server. So MCP will give you a list of tools.
00:24:49
Speaker
Oh, so someone else might have already annotated their functions for me? Exactly, yes. Right. And I do this in Python? So with ADK, the most mature library is in Python. But we do have Java as well as JavaScript. And Golang is coming up.
00:25:09
Speaker
It doesn't surprise me that those are the three languages. I would have thought, coming out of Google, Go would have been first. No, Python is the language of AI at the moment. I would have thought they'd do Python, Go, Java.
00:25:21
Speaker
Well, I mean, we have a very large Java user base, so Java was one of the early ones. Okay. But you've definitely got the resources to tackle all three. Definitely. Well, you'd be surprised. I think we need to find a Golang expert who also knows AI to do that kind of stuff. So... You heard about this job posting here first, Go folks.
00:25:42
Speaker
Okay. So let's stick with Python. I'm writing a Python thing that says: here's an MCP server with annotated functions, or here are the functions I annotated.
00:25:54
Speaker
I define which model we're using. Yeah. Is this open source? Can I run this against, like, Ollama models running on my computer? Or do I have to use Google's cloud stuff? Oh, you can definitely use any models that you want. I've used it with Gemma. Gemma, again, is a Google model, but it's an open model. I ran it on my own laptop on Ollama as well, and it all works. It also supports the OpenAI standard, so any other model that you want to use is usable.
00:26:28
Speaker
Okay. Okay, so... tools, choose my LLM. Do you have any tips on how to set the system prompt? Because that seems like a dark art to me.
00:26:39
Speaker
It is. I'm still mastering that dark

Personal Task Agents: Planning with AI

00:26:42
Speaker
art as well. I think the hardest part of building an agent is getting the system prompt right. Yeah, yeah. So basically, I think there is a trick: you probably want to give an example output, if you can.
00:26:59
Speaker
So your agent will have an idea of how it should provide the information back. Sometimes you want to have a structured output; sometimes you want a natural language output. And what does the output look like? I think having an example makes it a lot easier for your LLM to figure out exactly what you want from a response standpoint.
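One way to apply that tip is to embed a worked example directly in the instruction string. The prompt below is purely illustrative wording, not a recommended template, and it sketches the structured-output case from the version control discussion.

```python
# An illustrative system prompt that embeds an example of the desired output,
# so the model knows whether to answer with structured JSON or free text.
INSTRUCTION = """\
You are a version control agent. Answer ONLY with JSON in this shape:

Example user request: "commit main.py with message 'fix typo'"
Example output:
{"action": "commit", "files": ["main.py"], "message": "fix typo"}

If the request is ambiguous, ask a clarifying question in plain text instead.
"""
```

The example output doubles as documentation: anyone reading the agent's code can see the contract its downstream consumers rely on.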
00:27:27
Speaker
Okay. So let's go back to your booking-a-Friday-night-out example. I need to choose a decent LLM. I need to give it an example of what I consider to be fun on a Friday night. Yes. I need to give it access to various booking APIs for restaurants and cinemas and stuff.
00:27:50
Speaker
um And then I need to design some kind of fun evaluation agent. Yes, like depends on what your recreation is. um So what I did an example, i did a ah Fun Night Out demo like a couple months ago. The way I did it is I use Google Grounding. So I don't know if you're familiar with Google Grounding, Google Search, all those kind of stuff, but it can help you to retrieve the latest um information on the internet. So I'm not getting on a Friday Night Out kind of
00:28:22
Speaker
plan that's a year old, because year-old information is too old for me; I want something that's new. Rather than it hallucinating the idea of a Friday night. Exactly. I don't want to see a movie that's a year old, right? I want to see something that's the most recent. I want to go to the hippest, most popular restaurant in the area. So that's why I would use grounding to help me. And it's one of the built-in tools that comes with the ADK itself, so I would just add it as part of my plan: hey, can you figure out, in the Boston area, which is where I live, what is the best place to go if I like anime, if I enjoy watching movies, or if I want to go to a very nice Japanese restaurant? I can put that into my prompt, and it will help me get the latest and greatest information.
00:29:15
Speaker
So that description just sounded like running a Google search, right? What is grounding that isn't just search? Well, if you do a normal LLM query, it can only draw on the things it was trained on, so that's not the best, right? It helps to have that extra step of checking the latest web results. And the difference between grounding and a normal LLM response is the way it refers back to the source, the original content. So...
00:29:53
Speaker
If you look deep enough into the weeds and the code itself, the returned format is very different. Normally when you do a request and response with an LLM, you get a response and it's text-based, basically. But if you do grounding, you get extra metadata. You get a bunch of search results. You get what we call grounding chunks, which give you all the original places the information came from, so you can do a fact check. And it will also give you the specific sentences showing where each claim came from. Say you like a movie: it's going to say, okay, this specific movie was very popular in the past three years, and it gives you that specific sentence so you know what to refer back to. So there is evidence of how it's pulling the data, so there's less hallucination, just like what you said, and you can figure out where the data is coming from.
00:30:53
Speaker
So its response is kind of annotated with citations. Exactly. The way you'd expect a paper to be. Yes. Is there a step at the end where it then goes and independently double-checks those citations? It doesn't independently double-check, but it will do a summary of all the things it searched for you. Because it's building on top of whatever it found on the internet, and that's the evidence, right? So it gets all that, summarizes it, and then puts it all back to you.
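As described, a grounded response carries citation metadata alongside the text. Here is a rough sketch of how that annotated shape can be consumed; the field names, the restaurant, and the URL are simplified stand-ins, not the exact Vertex AI grounding response schema:

```python
# Illustrative model of a citation-annotated ("grounded") response.
# Field names are simplified stand-ins, not the real Vertex AI schema.

def cited_claims(response: dict) -> list[tuple[str, str]]:
    """Pair each supported text segment with the source URI it came from."""
    chunks = response["grounding_chunks"]
    return [
        (support["text"], chunks[support["chunk_index"]]["uri"])
        for support in response["grounding_supports"]
    ]

grounded = {
    "text": "Kaiju Ramen is the most popular spot near Boston right now.",
    "grounding_chunks": [  # where the facts were retrieved from
        {"uri": "https://example.com/boston-ramen-guide",
         "title": "Boston ramen guide"},
    ],
    "grounding_supports": [  # which sentence is backed by which chunk
        {"text": "Kaiju Ramen is the most popular spot near Boston right now.",
         "chunk_index": 0},
    ],
}

for claim, source in cited_claims(grounded):
    print(f"{claim!r} <- {source}")
```

The point is that the chunks and supports travel with the text, so a fact-check step can map every claim back to its source.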
00:31:23
Speaker
Okay, okay.

Complexity and Optimization in Agent Systems

00:31:24
Speaker
So how complex do these pipelines get? It seems to me that what we've discussed so far could be done with one LLM, one prompt, and a bunch of tools.
00:31:39
Speaker
But how complex can ADK get, and how complex have you seen it get? Oh, I've seen it get very complex in some areas, because the problem with putting too much into your system prompt is that it takes longer for the LLM to process. Like what we said,
00:32:02
Speaker
the most costly thing in running an AI system is the token count. The more tokens you put in, the more costly it is, right? So how do we limit that? And the more tokens you have, the more linkage it needs to do, and the longer it takes for the result to come back. So there are limits to what one request can take.
00:32:28
Speaker
So there comes a point when the system gets really complicated and you want to start separating out different intents, the responsibilities of different agents. Take a different example from the Friday night out: say you're booking travel, something bigger than a night out. You want to book your hotel and you want to book your flights. Those are two separate things you want to do, right? So you probably want to isolate them, so it doesn't have to search for the hotels and search for the flights and figure everything out together. Then there may be a point where you want to aggregate them together at the very end.
00:33:07
Speaker
But when it's doing the evaluations and the cost comparisons and that kind of stuff, maybe it's easier and faster to have them split out and done in parallel, and then come back and aggregate them together at the very end. So there are different things you probably want to do for
00:33:23
Speaker
that kind of example. And that's why you want separate agents to do different things. It's called separation of concerns. I think this comes from the very traditional software development realm: okay, when should I break it down into microservices? When should I have one large service that does everything, right?
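The split-then-aggregate shape described here, with hotel search and flight search running in parallel and their results merged at the end, can be sketched in plain Python, standing in for what a framework-level parallel agent would do. The two agent functions are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for two specialized agents, each with a single
# responsibility (separation of concerns).
def hotel_agent(trip: dict) -> dict:
    return {"hotel": f"3-night stay in {trip['city']}", "cost": 450}

def flight_agent(trip: dict) -> dict:
    return {"flight": f"round trip to {trip['city']}", "cost": 320}

def plan_trip(trip: dict) -> dict:
    # Fan out: each agent works independently and in parallel...
    with ThreadPoolExecutor() as pool:
        hotel = pool.submit(hotel_agent, trip)
        flight = pool.submit(flight_agent, trip)
        results = [hotel.result(), flight.result()]
    # ...then aggregate at the very end, e.g. for cost comparison.
    return {"parts": results, "total_cost": sum(r["cost"] for r in results)}

print(plan_trip({"city": "Lisbon"})["total_cost"])  # 770
```

Each sub-task stays small and focused, and only the aggregation step needs to see everything.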
00:33:41
Speaker
Yeah, the same principles still apply as for regular functions, right? Well, I think so. I don't think you can get away from that. Yeah, okay. So does that mean you're seeing people build up systems of agents that are almost as complex as the systems of functions you would build in traditional programming?
00:34:06
Speaker
Yeah, I definitely see a lot of them, because you also want to make sure they can be deployed in different places, and you have to think about how they're going to scale up. And sometimes a particular model is better for certain tasks, right? If I want something specifically for image processing for my specific agent, I want it tied to a separate model.
00:34:28
Speaker
A separate agent and a different model to actually do the processing, right? These are the things that make people separate them out. And I've seen them deployed across the whole enterprise; even within the same business unit, they will have multiple agents doing different things. Okay. And are you seeing mostly the social stuff, or mostly programming, coding-related stuff? Or are you getting examples from all walks of life?
00:34:56
Speaker
Yeah, I think so. This is not specific to programming, because with programming I would ask: do we want to build agents for everything, or do we want to build normal applications? There's another discussion we normally have about when you should introduce agents and when you should just build an app, right? That's something developers have to think about. Is it cheaper to just hard-code your logic into your programs, or is the task so flexible that it's much easier to let an LLM determine it for you?
00:35:30
Speaker
Yeah, it seems like, coming out of Silicon Valley, there are plenty of people calling for rewriting simple addition as an agent. Well, I'm a huge advocate for just writing simple addition. If you don't need LLMs, don't use them. Why are you asking it to do one plus one? No, you don't want to do that. You want it to process difficult workflows that used to need a human. That's where you want to start introducing agents into the process.
00:35:56
Speaker
If you can already do it with programs, do it with programs, right? There are things you probably want to do, like refactoring. Yes, you can do refactoring, but you don't want to refactor your code into an agent. You want to refactor your code with an agent. You use an agent to help you upgrade or update your code, but you don't necessarily want to turn your code into an agent.
00:36:16
Speaker
Yeah, yeah. We are definitely finding a...

Challenges in AI Agent Development

00:36:20
Speaker
We're finding a new class of use cases for which this technology fits well. But I don't think that eliminates the old class of use cases, which were just fine as they were. Exactly. Right now, I think the problem is people like to vibe code everything. That's the biggest problem.
00:36:37
Speaker
Yeah, yeah. I fear that vibe coding is going to make some people, in the medium term, miss the power of these things in the hands of an expert. Exactly. And as a person who has been vibe coding for at least a year now, I'd say it feels good at the very beginning. You feel like you can do everything. But when you start maintaining it is when the nightmare comes, right? And I also find that
00:37:07
Speaker
an LLM is very good at the happy path, because this is intent-based programming. You're providing your LLM with your happy path: this is how my program should work. So the program it generates is often the happy path, the way it should work. But when it comes to edge cases or exception cases, it's often very bad. So sometimes I have to go back and generate all these edge cases and test cases to make sure they're handled by my happy-path application. Things like that happen a lot.
00:37:44
Speaker
In fairness, that happens with regular programmers too, where they just code for the happy path. We're all guilty of doing that when deadlines crunch in. But when you do vibe coding, you don't really think about that. You think the application just works, right? So...
00:37:56
Speaker
If you're new to programming. It's just a magic box. Exactly. Okay.

ADK Tools for Debugging AI Systems

00:38:02
Speaker
Well, in that case, let's talk about the things that an expert programmer expects to find that maybe a vibe coder won't go looking for. Such as: how do you debug these things?
00:38:14
Speaker
For debugging, there are a couple of different things you can do. For ADK, there are multiple tools that can help you. I think there's a very nice tool inside ADK that is so underrated. It's called ADK Web. I don't know if you've ever coded with agents, but the problem is I often have to build my own little test harness where I can enter my prompt, it interacts with the LLM, and it comes back with a response. Something I'd have to write myself.
00:38:46
Speaker
But ADK Web creates a very nice UI. It does all the user interface for you, so you don't have to. For every prompt you put in, it automatically gets your application started, interacts with your LLM, and records all the tracing. And what I mean by tracing is one of the things that ADK is very good at: observability. Within this tool you get basic tracing of how long it took the LLM to run the prompt and how long each tool call took. And when you go into multi-agent systems, where I have one root agent and multiple sub-agents within different workflows, you can see which agents it calls, because sometimes you're not calling every single sub-agent you have. And you can see the workflow of where everything goes.
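A toy version of that per-step tracing, timing each LLM and tool call the way ADK Web displays it, might look like this. This is an illustration of the idea only, not ADK's actual tracing API, and the step names are invented:

```python
import time
from contextlib import contextmanager

trace: list[dict] = []

@contextmanager
def span(name: str):
    """Record how long one step (an LLM call, a tool call) takes."""
    start = time.perf_counter()
    try:
        yield
    finally:
        trace.append({"step": name,
                      "ms": (time.perf_counter() - start) * 1000})

with span("llm:root_agent"):
    time.sleep(0.01)                 # pretend LLM call
with span("tool:search_restaurants"):
    time.sleep(0.005)                # pretend tool call

for entry in trace:
    print(f"{entry['step']}: {entry['ms']:.1f} ms")
```

Reading the recorded spans back gives exactly the "which agent and tool ran, and for how long" view described above.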
00:39:37
Speaker
And you get to see the time it takes, you get to see the flow, and you get to see the response. Within the tool you can also do testing. You can predefine your test cases in order to evaluate whether your program is running the way you want it to. So it's one big package of testing tools for developing locally, so you don't have to build that yourself.
00:40:02
Speaker
Give me an example of a test, though, because it's not like I can say, if I give it this prompt, I expect this text back; the text is slightly different every time. So what does a test case look like? So for a test case, you've got to define the criteria first. There are two main criteria you want to set: the tool trajectory and the expected response. I forget the exact names for them, but those are the two. The tool trajectory covers tool calls. When we talk about agents, we say it should take the hammer to do the job, right? But what happens if it picks up a screwdriver instead? That's not a good tool trajectory. So you want to make sure it's calling the right tool to do the right job. And second, there's the response: is the response going in a similar direction to what it should be?
00:41:00
Speaker
So in ADK, we're using an algorithm called ROUGE. ROUGE lets you see whether the intents of two responses are similar enough, in which case they get a higher score. You can define how close they have to be: exactly the same, or just similar. And I'm okay with that, because with natural language it's really hard to get an exact match; with structured data, you can. So there are things you can tweak in how you want to evaluate the response coming back from the agent.
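A minimal sketch of those two criteria: the tool trajectory is compared exactly, the response fuzzily. `difflib.SequenceMatcher` stands in here for the ROUGE scoring ADK actually uses, and the threshold plays the role of the configurable similarity cut-off she mentions:

```python
from difflib import SequenceMatcher

def tool_trajectory_ok(expected: list[str], actual: list[str]) -> bool:
    # Trajectory is strict: the agent must call exactly the right tools,
    # in the right order (the hammer, not the screwdriver).
    return expected == actual

def response_score(reference: str, actual: str) -> float:
    # Stand-in for ROUGE: a 0..1 similarity between reference and actual.
    return SequenceMatcher(None, reference.lower(), actual.lower()).ratio()

expected_tools = ["git_status", "git_commit"]
actual_tools = ["git_status", "git_commit"]
print(tool_trajectory_ok(expected_tools, actual_tools))  # True

score = response_score("Committed your changes with a new hash.",
                       "I committed the changes; here is the new hash.")
print(score >= 0.5)  # clears a loose similarity threshold
```

The trajectory check is pass/fail, while the response check is tunable per test: tighten the threshold when wording matters, loosen it when creativity is welcome.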
00:41:37
Speaker
Okay, let me check I've understood that. So if I'm writing an agent that deals with version control, and I give it a prompt like, okay, commit that, I'm expecting it to invoke the status function, right? Yes.
00:41:53
Speaker
And the commit function. So I can write tests for that. Yes. Then I'm expecting it to come back with a new hash. And I write my test case as: it should emit a new hash, or something.
00:42:07
Speaker
Yeah, so basically you say, I'm going to get a hash, but you probably want to give that a very low weighting. You don't really want to compare it, because every commit will be different. For this one, you would probably just look at the tool trajectory, checking whether it's calling the right tools. But for the night-out example, it's: am I getting the right recommendation?
00:42:28
Speaker
There you can look at the tool trajectory, but that's not what's important; what's important is the response you're getting back. So depending on what you want the agent to do, you can adjust the numbers, how you want to evaluate your agent's work.
00:42:46
Speaker
Okay. So I might write a test case whose response has a table with, like, time-of-day and activity headers. How are you comparing those? Because it seems like you need another LLM to be able to compare those natural-language headers.
00:43:03
Speaker
So the response is mostly natural language, by the way. It's not a table or something like that; a table is probably not a good case for this type of thing, because the way we expect the LLM to respond is natural language, right? I'm too used to asking my LLMs to give me markdown as a response. Well, markdown is basically natural language too. So it's a mathematical algorithm we call to look at the actual information
00:43:35
Speaker
summarization intent. You're using that algorithm, so you're not actually using a specific model to do it. You can, though. There are static rubrics you can introduce for this kind of thing. I actually haven't done it before; I'm mostly using ROUGE because it's the out-of-the-box one, it's easier, and I'm just checking the response. But I think you can introduce different rubrics to figure out whether they are similar. In your case, if you want a specific comparison between the two tables, you can probably do that. But in most cases ROUGE will be good enough to check the similarity between the expected result and your actual result.
00:44:16
Speaker
Right, and you say this test passes if it's 80% similar or higher. Yeah, and sometimes I set 20%, because sometimes I want it to be very creative. For the night-out plan test, I want 20%, because I want it to come back with different results every single time, with different times and stuff like that. So it depends.
00:44:35
Speaker
Right. Okay. This feels a bit fuzzy for my personal taste in unit testing, but I see why. I'm glad there's something to test. But for the tool trajectory, you want to make sure it's 100% the same, right? So I would say, yeah. So we should always call git status. Exactly. Yeah, yeah.
00:44:55
Speaker
Okay, so I've got a mix of options. What's that like in practice? When you run the test suite, is it trying to run five or ten cases in parallel?
00:45:07
Speaker
So there are two options in ADK. You can run it as a single test. That's a single session, a quick request-and-response type of test case that you can build. Or you can use an eval set.
00:45:19
Speaker
We call it an evaluation set, where you have a whole set of requests, responses, and the tools that get called. It's like a multi-turn conversational interaction, where multiple agents get called and it goes through the whole thing, so it contains a whole history. Normally what I do is run one session that produces all the right answers and extract that information out; that becomes my evaluation case. Or you can use synthetic data: sometimes I ask a model to generate synthetic data for me that follows all the rules, so I can check it. Then I compare this evaluation set against the output. So they're two different things: you can do one as a unit test case and the other for integration tests.
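One way to picture an eval case of the kind described, expected tool calls plus a reference response extracted from a known-good session, is as plain data. The schema, tool names, and prompt here are illustrative, not ADK's exact eval-set file format:

```python
# One multi-turn eval case: each turn records the user prompt, the tool
# calls we expect (the trajectory), and a reference response to score
# against. Schema is illustrative, not ADK's exact eval-set format.
eval_set = {
    "name": "night_out_plans",
    "cases": [
        {
            "turns": [
                {"prompt": "Plan me a Friday night out in Boston.",
                 "expected_tools": ["google_search", "book_restaurant"],
                 "reference": "A movie at 7pm, then ramen in Allston.",
                 "min_similarity": 0.2},  # low bar: creativity is welcome
            ]
        },
    ],
}

def expected_trajectory(case: dict) -> list[str]:
    """Flatten the expected tool calls across all turns of a case."""
    return [tool for turn in case["turns"] for tool in turn["expected_tools"]]

print(expected_trajectory(eval_set["cases"][0]))
```

Recording a known-good session and extracting it into this shape is exactly the workflow she describes; synthetic data just fills the same structure from a generator instead.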
00:46:07
Speaker
Okay. Yeah, I can see that. Okay. So I've reached the point where I can check that it's working the way I expect. I've defined it. What does deployment look like? How

AI Agent Deployment Challenges

00:46:20
Speaker
do I productionize it? Deployment is very similar to traditional microservices. I think this is where all the platform engineers go: ah, nothing is ever easy. Basically, it's a Python application, right? So just wrap it up as a Python app and deploy it onto your preferred environment. If you want to run it on a VM, or on Kubernetes, just wrap it up as a Docker image and deploy it on top.
00:46:48
Speaker
The only thing is you probably want access to the LLMs. LLM deployment is a separate concern. It's a separate topic, and a huge one. How do I deploy my LLMs? Do I want a vendor-hosted LLM, or do I want to host my own?
00:47:04
Speaker
And should I run it on vLLM? And how do I fine-tune my model? It's a whole separate story. But agent ops, agent deployment, is very simple. How you would normally deploy a Python app is how you deploy an agent. Yeah.
00:47:18
Speaker
Another difference... Because it is just Python code in the end, right? Apart from the LLM. Yeah, 100%. The other thing you probably want to think about is memory management. An agent is very different from an HTTP server. When you think about a request and response, it's crisp, it's easy, it's simple. But when you have an agent, it's a multi-turn conversation. So the sessions... Yes, it's stateful. Exactly. So the chat histories and all that kind of stuff need to live somewhere, right? All that information needs to be stored. So how is it going to be stored? Part of what comes with ADK is that you can define how to manage your sessions. When should I terminate my session? Should I store it temporarily in memory, or outside of my memory, so that when my agent's hosting image dies or my runtime dies, I can still retrieve it? There are options. You can store them in a different database or file storage, or you can store them on our
00:48:28
Speaker
platform, right? So you can do that. Other than that, recently people have been talking about context engineering. You want to provide extra context for your agents, things like policies, like security policies. These are important things, and these things shouldn't even be answered by the LLM. Oh yes, I want to get into that too. Yeah, so for things you want to give your LLM as extra information, you need to put them in as context. So how do you store them, and how do you retrieve them when they're needed? They all need a separate place to live. So when you deploy to production, you want to think about all the components that go with it: do you want to spin up separate storage, a separate database, or separate instances to hold those memories?
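The session question, whether chat history lives in process memory or somewhere that survives a dead runtime, comes down to two interchangeable stores behind one interface. This is a toy version of that pluggable-session idea, not ADK's actual session-service API:

```python
import json
import os
import tempfile

class InMemorySessions:
    """Fast, but history dies with the process."""
    def __init__(self):
        self._data: dict[str, list[dict]] = {}
    def append(self, session_id: str, message: dict):
        self._data.setdefault(session_id, []).append(message)
    def history(self, session_id: str) -> list[dict]:
        return self._data.get(session_id, [])

class FileSessions:
    """Survives a dead runtime: history is written out on every turn."""
    def __init__(self, path: str):
        self.path = path
    def append(self, session_id: str, message: dict):
        data = self._load()
        data.setdefault(session_id, []).append(message)
        with open(self.path, "w") as f:
            json.dump(data, f)
    def history(self, session_id: str) -> list[dict]:
        return self._load().get(session_id, [])
    def _load(self) -> dict:
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

# Same interface, different durability trade-off.
store = FileSessions(os.path.join(tempfile.mkdtemp(), "sessions.json"))
store.append("s1", {"role": "user", "text": "book a hotel"})
print(len(store.history("s1")))  # 1
```

Swapping the file backend for a database (or a managed platform service) changes only the store's internals, not the agent code that reads and appends history.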
00:49:21
Speaker
Right, yeah, because I'm going to want to store people's prompts in something like Postgres, or BigQuery, I guess I should say, in your case. Yeah. Or sometimes you want to store them... right now the most popular is probably one of the RAG use cases, where I just want to retrieve the relevant parts of our history. We might have a very long conversation history over the past year; I only want to retrieve the parts that are relevant and put them in as my context. So, yeah.
00:49:55
Speaker
Yeah. The way they implemented it in Vertex AI is that they will do that kind of search for you. They call it the agent Memory Bank. It retrieves the related information back to your agent. So there are different things implemented inside the whole agent development kit.
00:50:15
Speaker
Right. Yeah. Specifically on the... there's probably a term for this and I don't know it... the kind of safety-guardrails thing. Say I write an agent that does customer support, but I want to make sure that it never, ever says: you can have one for free.
00:50:33
Speaker
How do I write that kind of guardrail? So there are many different safety aspects to agents; we could talk about them all day. You want to make sure no harmful information comes out. You want to make sure it's been checked, that it's not giving you free stuff. Not all of these guards come built in. For your use case, not giving away free stuff, you probably want that as part of the policies defined in your agent. That could be part of your context, so whenever this agent is called, it's built in. It becomes part of the rules that get sent to the LLM, the LLM takes it in as part of the rules of how it should behave, and it answers accordingly. That's one way. Am I defining that as part of the system prompt, or is it separate? It could be part of the system prompt, but it could also be dynamically retrieved and then included in the system prompt. So there's a fixed system prompt and a dynamic system prompt you can
00:51:43
Speaker
add into it. So it depends where you are. But yes, a system prompt would be a good place to put it. So this is woven into the thing the user asks, rather than being a kind of post-processing step.
00:51:55
Speaker
I would say so, because it's a rule, right? So I would just build it in rather than have it as an afterthought. I think that's easier: I don't have to post-process every single response, and it's easier for people to maintain in the future as well. It's right there when you see it. So yes. Okay, okay.
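Weaving the policy into the prompt rather than post-processing could look like this: a fixed system prompt combined with dynamically retrieved policy rules. The policy text, team name, and function names are invented for illustration:

```python
BASE_INSTRUCTION = "You are a customer-support agent for our store."

def fetch_policies(team: str) -> list[str]:
    # Hypothetical dynamic lookup; in practice this might come from a
    # policy store so rules can change without redeploying the agent.
    return {
        "support": ["Never offer products for free.",
                    "Never reveal internal pricing."],
    }.get(team, [])

def build_system_prompt(team: str) -> str:
    # The guardrail is part of the rules sent to the LLM on every call,
    # not an after-the-fact filter on its output.
    rules = "\n".join(f"- {p}" for p in fetch_policies(team))
    return f"{BASE_INSTRUCTION}\n\nPolicies you must follow:\n{rules}"

print(build_system_prompt("support"))
```

The fixed part stays constant; the dynamic part is fetched per call, which is the fixed-plus-dynamic system prompt split described above.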
00:52:19
Speaker
Sorry. Go on. But also in terms of security, there are different layers. There's LLM security as well. For the harmful things you don't want to reach the LLM at all, there's what we at Google call Model Armor. It's a protection layer that blocks every malicious call before it even gets to the model. There's a little algorithmic side model that protects it, running as a separate process. So a malicious attack won't cost you anything, because it's been blocked.
00:52:52
Speaker
There are other security concerns, like MCP calls: you want to make sure unauthorized agents cannot get access to your MCP server. So there are things you want to set up to authenticate and authorize against your MCP servers.
00:53:07
Speaker
And when your agents try to talk to another agent, you want to make sure they are authorized to talk to each other. So there are things you want to implement up front to make sure that's in place. You can do that with OAuth right now with ADK and all that kind of stuff.
00:53:23
Speaker
That raises an important question for something I'm working on on the side at the moment. If I'm happy with my LLM and my interface to it and my prompts and all that stuff, and I just wanted to build MCP tools with ADK and not take the whole package, can I do that?
00:53:41
Speaker
So ADK doesn't build MCP; it integrates with MCP. You build your MCP server and just integrate with it. Oh, I thought I could define tools. You can define tools, but the actual underlying MCP is still coming from the MCP libraries. We do have a wrapper around it so it integrates with ADK.
00:54:03
Speaker
Okay. Okay, I'm slowly building up all the pieces in my mind. One thing you mentioned that we haven't really talked about is RAG, right?
00:54:15
Speaker
Mm-hmm. Give me an embedding for the conversation we've had so far; go to a database and find similar documents. That's a fair summary of RAG.
00:54:26
Speaker
You haven't mentioned where RAG fits in this. Does it? I would say RAG is part of the tools. I think it's called agentic RAG now: people use RAG as part of the way they retrieve data. Oh, so it's no longer separate? It's not a separate thing. It's a way of retrieving your data. It's similar to using SQL to retrieve your data, but in this case it's using vectors, vector embeddings. Very similar: the way you would query a database is very similar to how you do a RAG query.
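That query-a-database framing of RAG (embed the query, rank stored documents by vector similarity, return the closest) fits in miniature below. Toy two-dimensional embeddings stand in for a real embedding model and vector store, and the document titles are invented:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Toy 2-d "embeddings"; a real agent would call an embedding model
# and a vector database, but the query shape is the same.
docs = {
    "hotel cancellation policy": [0.9, 0.1],
    "flight baggage rules":      [0.1, 0.9],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k stored documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2]))  # ['hotel cancellation policy']
```

Wrapped as a tool, this is just another retrieval call the agent can make, which is the "agentic RAG" point being made here.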
00:54:58
Speaker
OK. Yeah, that makes more sense, because the MCP protocol would support that. So why have two protocols? Exactly. Which reminds me of another thing I needed to ask you: why have two protocols? Google is coming up with a different protocol that isn't MCP. So tell me about that. I knew you'd bring this up. So, MCP versus A2A: what's the difference? I would think about them as different directions. MCP is a vertical, directional
00:55:29
Speaker
interface: it's how an agent talks to the tools, right? For tools, there are certain things you have to guarantee: it provides specific tool names, and it has to assemble all the parameters for execution. These are all the essentials for an MCP server to run. It's a specific protocol for an agent to get things executed on the MCP server side, to get the tool executed, right? So that's that.
00:55:58
Speaker
But what happens if you have multiple agents? That's the horizontal line, where you have agents trying to talk to other agents.
00:56:10
Speaker
That's when you want to use the A2A protocol. It's an agent-to-agent protocol instead of an agent-to-tooling protocol. So that's how I think of the difference. Right, yeah. So having got to this point where agents are calling tools, we're now into the world of sub-agents, and that needs its own protocol. Yes, because I might have an ordering agent that needs to talk to a payment agent and then a logistics agent. These could all belong to different companies, and they all talk in natural language. Sometimes they take in different modalities, all that kind of stuff. So you want to make sure these are all part of the built-in protocol. And because agents mostly talk in natural language, you want to make sure you capture all of that and send it to the other agents.
00:57:04
Speaker
Yeah. Okay. Naively, that seems to me like it could just be: I'll send the sub-agent some text, it will send me some text back. Off the top of my head, that protocol doesn't need to be more than text back and forth. It's not just that. What am I missing? You've got to have the histories, your chat histories. And sometimes we have different files: voice files, image files, video files are all part of it. On top of that, you also have security. And how do the agents discover each other? There's a mechanism called the agent card. An agent card is very similar to how you and I would exchange
00:57:45
Speaker
things in the past. I give you my business card: this is what I do, this is my address, this is how you find me. Similar to that, an agent provides an agent card to the other agents, and the other agent can look at your card and say: do I need you or do I not need you? I might ask you to do things for me. So it's very similar to that.
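An agent card of the kind described, a machine-readable business card, can be modeled as simple data. The fields echo the A2A idea (a name, an endpoint, advertised skills) but this is a simplified sketch with invented values, not the exact A2A schema:

```python
# Simplified agent card: fields echo the A2A idea (name, endpoint,
# advertised skills), but this is not the exact A2A schema.
payment_card = {
    "name": "payment-agent",
    "description": "Takes payments and issues refunds.",
    "url": "https://agents.example.com/payment",
    "skills": ["charge_card", "refund"],
}

def can_handle(card: dict, needed_skill: str) -> bool:
    """'Do I need you or not?': discovery by reading the card."""
    return needed_skill in card["skills"]

print(can_handle(payment_card, "refund"))       # True
print(can_handle(payment_card, "book_flight"))  # False
```

Discovery then becomes a matter of collecting cards and filtering by the skill you need, before any conversation starts.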
00:58:05
Speaker
Agents going to cocktail parties saying: what business function do you provide? Exactly. That's probably it. Yeah, yeah, I can totally see that. Okay. Then, I have used sub-agents in Claude, right? Yes.
00:58:21
Speaker
But they're all local to one machine. Are you also providing networking? Does ADK let me distribute sub-agents over different physical machines? So ADK has all the sub-agents in a single instance. There's something called the root agent that acts as the core coordinator for all the sub-agents. So every root agent will have multiple sub-agents under the hood.
00:58:49
Speaker
Right. And you can also have multiple agents talking to each other in different instances. That's what we call a remote agent. You can integrate remote agents, and a remote agent can be one of your sub-agents; you can assign it as a sub-agent as well. The easiest way, and don't overcomplicate it for yourself, is to have your sub-agents in your local instance, but you can also have a sub-agent as a remote agent running in a different instance.
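The root-agent arrangement, one coordinator with some sub-agents in process and some remote, reduces to a dispatch pattern like this. Everything here is a hypothetical stand-in, with a plain function call standing in for the A2A hop a remote agent would actually require:

```python
from typing import Callable

# Local sub-agents are just in-process callables...
def booking_agent(task: str) -> str:
    return f"booked: {task}"

# ...while a remote sub-agent would sit behind an A2A call; a plain
# function stands in for that network hop here.
def remote_logistics_agent(task: str) -> str:
    return f"shipped: {task}"

class RootAgent:
    """Coordinator that routes each task to the right sub-agent."""
    def __init__(self):
        self.sub_agents: dict[str, Callable[[str], str]] = {
            "booking": booking_agent,
            "logistics": remote_logistics_agent,  # remote, same interface
        }
    def handle(self, intent: str, task: str) -> str:
        return self.sub_agents[intent](task)

root = RootAgent()
print(root.handle("booking", "table for two"))  # booked: table for two
```

Because local and remote sub-agents share one calling interface, the root agent does not need to care where each one runs.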
00:59:24
Speaker
Okay, yeah. You've made me see it's now a matter of time before someone builds an actor-model framework that has this all built in. Well, we'll see. I don't know, it's still early. I can see that coming. I definitely can see it. One framework to rule them all. I think there will be a repository for agents in the future. We don't see it yet; right now we have MCP server repositories, but I think there's going to be an agent repository somewhere, where I can just figure out which agent I want to interact with in the future.
00:59:57
Speaker
Yeah, I can totally see that. For all the things we would have deployed a REST API for, now we'll be deploying agent APIs. Yeah, I think that'd be a fun future. Okay.
01:00:10
Speaker
Then how do I get started on it? What would I do if I wanted to go and build something with ADK? So just Google "Agent Development Kit" and you'll find all the information there. Depending on what languages you prefer, there are different getting-started pages. Or you can use Gemini CLI, one of our coding tools, which is heavily integrated with ADK. If you want to vibe code your agent there, I think it's your safest bet, because it has all the documents pre-pulled in, so it will generate the right syntax for you.
01:00:51
Speaker
Yeah, of course it does. And if I wanted to use this to build something like my own Gemini that was specialized to Rust... I don't know, that seems like a fun example.
01:01:03
Speaker
Would that be feasible, do you think? I think so. A good hobby project? I think so, but I haven't dug into that too much yet. Okay. What have you done? What are you building with it? Maybe that's the last question. Well, I did a fun project. I've been touring around the country right now, getting people up to speed on what ADK is. So we built this super cool RPG-related theme.
01:01:32
Speaker
i i feed you this