Become a Creator today!Start creating today - Share your story with the world!
Start for free
00:00:00
00:00:01
Trusting Agentic AI with Dr. Dawn Song image

Trusting Agentic AI with Dr. Dawn Song

Hanselminutes with Scott Hanselman
Avatar
0 Plays2 seconds ago

In this partnership episode between Hanselminutes and the ACM Bytecast, Scott talks with Dr. Dawn Song, MacArthur Fellow and leading researcher in computer security and AI and co-director at the Berkeley Center for Responsible Decentralized Intelligence about how privacy-preserving computation, fairness, and accountability can help us design intelligent systems we can actually trust.    

https://agenticai-learning.org

This episode is sponsored by Mailtrap.
Check out https://l.rw.rw/hanselminutes for modern email delivery that developers love.

Recommended
Transcript

The Mystery and Power of Large Models

00:00:00
Speaker
As I mentioned, even though LMs are so powerful, but it's amazing that none of us really understands how it works. That seems concerning. Someone ought to figure that out. Right. It hallucinates. It can be jailbreak and yeah various issues.
00:00:19
Speaker
And think about it. That provides the intelligence of our authentic AI systems.

Episode Sponsorship by MailTrap

00:00:28
Speaker
Hey friends, it's Scott. I want to thank our new sponsor, MailTrap, modern email delivery for developers. They integrate straight into your code with their SDKs. You get unified transactional and promotional email delivery, 24-7 support. You contact humans, not AI chatbots. We'll give you 3,500 emails monthly in the free tier, and you can try them out at mailtrap.io today. That's M-A-I-L-T-R-A-P.io today.

Introduction to Dr. Dawn Song

00:00:58
Speaker
Hi, I'm Scott Hanselman, and this episode of Hansel Minutes is in association with the ACM Bytecast. Today, I have the distinct pleasure of speaking with Dr. Dawn Song. She's a professor in computer science at UC Berkeley and the co-director of the Berkeley Center on Responsible Decentralized Intelligence.
00:01:14
Speaker
She's also the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, and more and more. And I'm just thrilled to be chatting with you today. Thank you so much, Dr. Song, for spending time with

From Physics to Cybersecurity: Dr. Song's Journey

00:01:25
Speaker
us.
00:01:25
Speaker
Great. Thanks lot for having me. So you're, uh, you're, you have got such an impressive background. I'm just curious, uh, you know, when you started your journey in your academic journey in security, did you think that the work you would be doing would be so recognized? It would be such a big, fun, long career.
00:01:48
Speaker
Oh, I see. Okay. and Yeah, thank you. Thanks for the question. and Actually, when I started working in cybersecurity, so first of all, ah the field was really, really small.
00:02:00
Speaker
and I mean, the conference that you go to is only like maybe a hundred, couple hundred people. and And also, when I started, i just actually transitioned, switched from being a physics major to computer science.
00:02:19
Speaker
so So yeah, so I actually did my you know undergrads in physics and I only switched to computer science in grad school. And when I first switched, I was trying to figure out what I want to focus on, like you know the domain.
00:02:35
Speaker
And actually found security really interesting. And also I liked the combination of theory and practice. So that's why I chose it. and you know given,
00:02:46
Speaker
And right given the fresh transition and that was the field was very small, so it was I think it was difficult to predict what's going to happen in

Impact of the MacArthur Fellowship on Dr. Song's Research

00:02:55
Speaker
the future. But I do know that security was important and was going to be a lot more important. So I'm very happy that I chose the path.
00:03:05
Speaker
Yeah. It's funny. Sometimes people ask me like my career, like, did you plan all of this? And it's easy to say looking back, oh yeah, it was all a plan, but it, you just work hard. It, you did your best and people, people recognize it and you follow your, your sense of smell to like, what is the next thing?
00:03:22
Speaker
Yes. That's actually a really good way. I put, yeah, I'm putting that. Yeah. Yeah. Now, the the MacArthur Fellowship and some of the other recognitions that you've had, like you're an ACM fellow, you're an IEEE fellow, these are rare and you're stacking them up. I'm curious, when you got something like a genius grant like the MacArthur Fellowship, did that change what you chose to follow or do you still plan your agenda based on your on your gut and where the research takes you?
00:03:48
Speaker
ah Thanks. That's a very good question. I think in some sense it does give me maybe like more freedom, more a sense of courage is to really

Dr. Song's Diverse Research Interests

00:03:58
Speaker
explore things that like I find interesting that I feel that can be impactful in the future.
00:04:07
Speaker
and So yes, actually my trajectory after the MacArthur Fellowship has really even further broadened my research domain. And I think I actually have been taking a quite a unusual path than I think a lot of and lots of people. I like that idea that it gave you courage in the sense of like, it's a very big validation. And it's also like the direction we're headed is a good one. I'm going to now take some risks, make some make some strong decisions. Did it change how you formed your team? Did it change your your feelings about taking risks?
00:04:43
Speaker
Yes, yeah, that's ah that's a very good question. So I would say, like in my research career, it has been quite different from a lot of people, from most people, and that I have, as you mentioned at the beginning, i actually have explored fairly broadly, and also at the same time, you know, deeply in a number of different domains.
00:05:04
Speaker
So yes, ah so after the MacArthur, you know, fellowship, I, right. So as you mentioned, initially my career started in security, security and privacy.

Balancing Academia and Innovation

00:05:16
Speaker
And, and also always been interested in, know, how the brain works, ah how, and want to build intelligent machines. So, so yeah. So after the, my Catholic fellowship, i actually, and then I also did some startups and my startup was was acquired and I was asking myself what do I want to do if I.
00:05:37
Speaker
and you know retired ah had retired, then the conclusion was that I want to build intelligent machines. So that actually, you know, switched to my whole group and actually focused on deep learning. Before deep learning was actually hot. This was, you know, even before AlphaGo and, you know, the the last wave and so on.
00:06:01
Speaker
So yeah, so i I would say it's and you know for most people that would be pretty big change. I was in a meeting recently that I felt maybe had too many managers and a a person texted me in the meeting privately and they said, there's a lot of talkers in this meeting and not a lot of doers.
00:06:22
Speaker
And one of the things that I would give you a compliment about about your career is it seems like as academics go, you're a doer, like you create centers, you make conferences, you you are outward facing, you're talking to people, you're creating you know massively online courses.
00:06:39
Speaker
While other academics tend to kind of fold within themselves and they just kind of like disappear and write a paper for a year and then they kind of pop back up occasionally. And you said you did startups as well. How do you find that balance between what academia expects from a people talking in a room perspective and they let's do things, let's ship products, let's make lives better. youre You seem to be a doer, not a talker.
00:07:03
Speaker
I see. Okay. So first of all, I think, um I mean, everybody has their own path. Everybody has their own preferences and people make contributions in their own ways. And I wouldn't say people who are just working on their papers and maybe, you know, in their offices are talkers. I think they are, I mean, some of, ah you know, some great work actually came out of that, ah you know, that kind of settings as well. And so, yes, I wouldn't say necessarily right one you know one approach is necessarily better than others, but I think people they people have different aspirations.
00:07:42
Speaker
People like to do different things. I'm glad that and the path that I chose, the type of works that I have been doing, i have impacted a lot of people. And you know with the massive open online course, for example, helped you know like tens of thousands or hundreds of thousands of people to actually learn about cutting edge new topics and and so on. And also, you know the startups help transition and research technologies into the real world and all these things.
00:08:16
Speaker
Yes. So I think I'm very happy that my work has been able to help a lot of people. But I think people also write, they contribute in different ways.

Mission of the Berkeley Center for Responsible Decentralized Intelligence

00:08:26
Speaker
I appreciate that. I apologize if that was an indelicate question.
00:08:30
Speaker
it was and It was just meant to show the difference between you know really making things happen in a very physical, impactful way. But you're right, impact comes in different flavors, including our friends that are maybe more quiet in their writing.
00:08:42
Speaker
Now you co-direct the Berkeley Center for Responsible Decentralized Intelligence, RDI. Can you explain that mission and what that means? And then how do you select the areas that the center focuses on?
00:08:54
Speaker
Yeah, thanks. and That's a very good question. So the Berkeley Center RDI Responsible Decentralized Intelligence ah works at the intersection of responsible innovation, decentralization, and intelligence as AI, for example. And I would say Agenda AI is actually a very good example of the kind of work that we focus on.
00:09:15
Speaker
If you look at Agenda AI, we want it to be, it's really important that it's safe and secure and responsible. So we need to, we want Agenda AI to help with responsible innovation.
00:09:27
Speaker
And also, ah right, intelligence is a key part of a jeate i And also we hope that ah the Agente AI future that we built is not centralized, it's decentralized. you know Each of us, we may have our own personal and assistant, personal agents that represent us or help us to interact with others, with other agents and so on. And we'll have lots and lots of different agents that actually perform different tasks.
00:09:56
Speaker
I have different capabilities to write to help make a better world and for all of us and for society. And also at the same time it's safe and secure and responsible.

Advancements and Challenges in Agentic AI

00:10:08
Speaker
it It feels like for the ah people out in the community, like the non-technical people that AI is having a moment because it's being well-branded. We're hearing the word agentic just in the last year or two, but this has been something you've been thinking about for six, seven, eight years. Like what? what does it feel like to hear things you've been working on for six or seven years now start to break out into the the mainstream? Because I think even now, regular people have struggled to understand what what is an agent and what is agentic.
00:10:40
Speaker
Is it just an LLM that has the ability to call a tool or is it is there something more there? I see. Okay. Yes, that's a very good question. So first of all, I think it's not even, it's not just six, seven years, actually it's been much longer than that, right? I mean, I mean, it has been in the making for for many decades and even from my own transition into deep learning, as I mentioned, I started working in the field of deep learning even before actually the term really became became popular and before and no most people actually started working in the area.
00:11:19
Speaker
But even then, i think, yes, I would say almost all of us have been really surprised at the speed of advancements for you know frontier AI and so on.
00:11:33
Speaker
and There has been you know polls. And also, if you just ask most AI researchers who are working in AI today, back then, before like CHGPT came out before GPT 3.5 or GPT 4 came out, like what people expected for a lot of the tasks, people would expect that still it would take decades for those tasks, you know, to be able to be accomplished by But today, you know, here's where we are.
00:12:07
Speaker
And I think most people, almost everyone um has been very surprised. Yeah, yeah. Certainly the math, the work, the deep learning, the so you know the subset of machine learning, the multilayered neural networks. This is something that you said has been been worked on for decades. It popped when GPT started, when the transformer architecture was introduced. Do you think that there's an overemphasis on next token prediction on transformer architecture when there's so much other really interesting work happening in deep learning and in machine learning?
00:12:40
Speaker
Yeah, that's a great question. I think so. So of course, ah what has been shown now is this next token prediction paradigm has been very powerful.
00:12:52
Speaker
And also recently, the reinforcement learning based approaches also have been shown and to be really helpful, effective at improving the model capabilities and also in particular for agent capabilities and and so on.
00:13:08
Speaker
and And of course, I think now this is a big question is, is this transformer with um right the current ah you know training paradigm with RL and so on will this path be sufficient for us to get to where we want?
00:13:30
Speaker
And I mean, the truth of the matter is nobody really knows, but so far we' are continuing to see and still the fast progress. of model capabilities and also the you know the agent developments and so on. So, I mean, of course, I think we would love to see and more exploration, more diverse ideas and so on.
00:13:56
Speaker
And even the current paradigm, still there are many and limitations, shortcomings, know not very data efficient and and so on. so So we do hope that we can continue.
00:14:09
Speaker
to make further progress and identify new ideas, new breakthroughs and so on. And in the meantime, i do foresee that we'll continue to get, see the improvements and the model capabilities and

Security Challenges in Agentic AI

00:14:26
Speaker
so on.
00:14:26
Speaker
So at at a very, so as a very simplistic example, if I take a small GPT on my computer and I give it access to tools and I let it run around on my file system and edit files and do things, I have the basics of an agentic AI, but. Putting agents in that case. In an agent, yeah, a small agent, making a small basic agent. I'm basically.
00:14:47
Speaker
letting next token prediction run shell scripts on my machine and maybe productivity comes out of it. But one of the themes in your research bio is the intersection of deep learning and security. i think i think about this little agent that runs on my machine and then maybe a robot in my house that has arms and legs and has a model behind it. you know Both of those instances do no harm has always been one of the kind of ideas around robotics. The first rule is like, do no harm.
00:15:17
Speaker
Is that something that is possible for and a Gentic AI to be both secure and helpful, or are we always going to have that tension?
00:15:28
Speaker
oh Oh, that's, that's a very good question. So um Okay, so also first, when, you know, earlier you also asked about what is agentic AI. Thank you. And so, right, so the examples that you mentioned, these are very good examples of some of the things that agentic AI can do.
00:15:45
Speaker
But when talk about agentic AI in general, it's not just about, you know, one type of agent and so on It's actually, in fact, it's a very broad spectrum. In our recent and overview paper, we actually laid out and a general landscape for agentic AI.
00:16:02
Speaker
along a number of different dimensions. Along each of these dimensions, essentially the Agente AI systems can be you know less flexible versus more flexible.
00:16:14
Speaker
and So for example, you know the kind of tools that they use, whether the tools are pre-specified ah in a static set, or they can even use dynamic yeah dynamically selected tools during long time that they didn't even no the developer didn't even know.
00:16:31
Speaker
that I didn didn't even specify ahead of time and so on. And you know the level of autonomy, the level of how and how flexible the flow, the control flow and the workflow of the agent is. So it's a very broad spectrum.
00:16:46
Speaker
And given that, so what we also have shown is along with each dimension, as the agentic AI system becomes more and more flexible and more and more dynamic and so on, it also increases the attack surface.
00:17:02
Speaker
and And also when we talk about you know safety and security authentic AI, there are actually two main difference ah ah different aspects. So one is whether the authentic AI system itself is secure, whether it can be you know secure against malicious attacks on the authentic AI system itself. So for example, in the example that you mentioned, you have a little coding agent that works on your files and so on.
00:17:31
Speaker
You want to be careful that there's no malicious attacks, ah ah attacking the coding agent so that the coding agent somehow misbehaves, delete your database and then send out and also send out some like sensitive data ah right from your files to the attacker and and so on.
00:17:49
Speaker
So this is one type of concern. And then another type of concern is these agents, as they become powerful, ah attackers may misuse them as well.
00:18:01
Speaker
to launch attacks to other systems, to the internet, to the rest of the world and so on. So that's also and a responsibility that we have as we build these a agentic AI is you know what people say is with strong and capabilities, also there's the yeah strong responsibilities as well, right? Yeah.
00:18:27
Speaker
So, right, so it's both sides and both sides has its own set of challenges. And i would say cybersecurity has always been challenging, a challenging domain. We are seeing you know attacks every day today already and cyber attacks are causing like you know billions and billions of dollars of and financial loss and damages every year.
00:18:55
Speaker
And now when we add agentic AI, I think on both sides, actually things get a lot worse. So for the agentic AI systems, first of all, big because it's much more complex and much more dynamic.
00:19:08
Speaker
and so And also we don't actually understand how these large language models work. They have intrinsic vulnerabilities, issues and like jailbreak, prompt injection and so on.
00:19:21
Speaker
So agentic AI system itself is actually much harder to secure, to protect against sim malicious attacks and i on its own. And then on the other hand, when genetic AI system becomes more powerful, when attackers misuse them, the consequence can be much worse as well.
00:19:42
Speaker
And this also has been illustrated with some of our own recent work in actually evaluating what AI can do in cybersecurity like CyberGym and so on.
00:19:53
Speaker
yeah A little bit of ah of a side rant. I remember in the early 90s when they told us never to trust user input, right? And you always have your little text boxes and you always put validation on each text box and you're so careful to not trust user input. And now the internet is just one giant text box where we type pros and we're expected to trust user input, but that's the now the attack vector.

Complexities and Unpredictability of AI Systems

00:20:21
Speaker
While the stack is so deep,
00:20:24
Speaker
Like I have this Altair behind me and I have a PDP 11 over there. Those are computers where you can hold in your brain almost the entire computer. But now there's no such thing as a full stack engineer because a chat bot on the internet is a distributed system within another distributed system within virtual machines. And, you know, it's complexity all the way down.
00:20:45
Speaker
is Is it problematic that none of us can hold the full stack in our in our brain any anymore? Yeah. That is, yes, I think that's a very good question. it is actually ah huge issue. As I mentioned, so first of all, I mean, it's not just about whether we can hold it in our brain. Like the...
00:21:05
Speaker
As I mentioned, even though LMs are so powerful, but it's amazing that and none of us really understands how it works. That seems concerning. Like someone to figure that out. Right. Like a hallucinant, right? It can be jailbreak and various issues.
00:21:25
Speaker
And and the think about it. That provides the intelligence of our organic AI systems. so So we have this really powerful system and and also we give it to all sorts of um you know privileges so that it can do things on our behalf. In the future, we may give it our credit card numbers so we can shop for us. And then we give it privileges in our systems. Right. It can write to.
00:21:52
Speaker
take actions on our systems and so on. So it's so powerful. And with all these privileges that we gave but at the same time, we have no idea how it works. We don't know when it can break down. We don't know right how it's going to behave under different situations.
00:22:10
Speaker
So I think this really causes us a huge concerns. So that's why also some of you know my work has been focused on what can we do, how we can build more secure solutions for these and ah for these type of systems. And ideally, we want to also develop new approaches to still to act to even have provable guarantees of certain security properties, even for these agentic AI systems. I think that's something that we really need in order to actually have agentic AI systems to take critical actions for us.
00:22:45
Speaker
That's a great point. Like here we are making these giant distributed programs where the fundamental for loop in the middle is a black box that's non-deterministic and we can't trust it because it could suddenly decide to be angry and and cause cause problems. How do you design a system around that? How do you make it so the light switch that that can flip off doesn't hurt someone?
00:23:07
Speaker
I'm curious, what do you think about the stochastic parrot analogy? I think there's arguments that that it's probabilistic mimicry and the and the LLM is a kind of a parrot, but then there's also maybe that that's a simplistic analogy and it it under undersells the emergent capabilities of LLMs. I'm curious which side you're on.
00:23:28
Speaker
and see. Yeah, that's a very good question. i mean, again, to at heam as I mentioned, and we really don't have a very good understanding of how these ah LMs work at all. And and the we do see very interesting phenomenon. On one hand, these LMS, they can that win the gold medal in these Olympia you know math competitions, programming contests, and so on.
00:23:57
Speaker
And they can, in certain cases, solve very hard math problems. But on the other hand, you can easily see it actually makes very silly mistakes, ah very simple right problems.
00:24:12
Speaker
And also, you know we call the LM has this jagged intelligence. On certain things, right it does really, really well. And then on other things, right it does very poorly. And also, we have done and some recent work also trying to understand better um oh whether what is what i am is learning, whether a how well can it generalize, we actually develop some new benchmarks, omega, delta, and so on, to try to and develop these controlled experiments.
00:24:48
Speaker
to really understand how LAM is doing generalization, both with ah supervised fine tuning, as well as reinforcement learning and so on. So what our work has shown is that, I mean, so first of all, yes, I mean, in certain cases, LAM's capabilities, it is amazing. But then on the other hand, our work does show that is there's still and significant limitations in terms of how these LMs actually, how we can generalize.

Dr. Song's MOOC on Agentic AI

00:25:19
Speaker
ah In particular, as we increase both the difficulty level of the problems and also the compositional complexity,
00:25:31
Speaker
that doesn't generate and general generalize that well. And also um still, I think a can come up with some, you know, new ideas so on. But in general, like with our benchmark evaluation shows that still, when some problems are required, really new solutions, new type of solutions is still not very good at those. Yeah. I like that term jagged intelligence, like to to assume that one individual is uniquely smart in all things is to oversimplify. you know, I could be a poor driver and I could be a genius in math. I could, you know, there's a number of things that I am not single dimensional. So neither are the LLMs.
00:26:13
Speaker
ah You've been teaching for so long and now you're teaching massively open online courses. There's a really exciting one that you've been doing, the agentic AI. This is this is blown up. Did you how many people did you expect would come to the agentic AI MOOC and how many are coming now?
00:26:30
Speaker
Yes, yes. Yeah, thanks. So, right. So I first started this massive open online course MOOC on Agentic AI actually fall off last year, 2024.
00:26:42
Speaker
And when I started actually back then, and the Agentic AI our um agents wasn't quite a thing yet. And not many people were talking about it.
00:26:54
Speaker
But however, I could foresee that this is this is the future. This is the next frontier. So that's how I actually you know started teaching the class. And I think because it was the first, it was literally, I think, the first class of Agents, Agents, Agents, AI, and it's the first MOOC on the on the topic as well.
00:27:18
Speaker
So I think it really caught people's attention. And now we're actually running the third edition and for the class. And so we have overall over 32,000 enrolled globally.
00:27:30
Speaker
So that's been really exciting. And also suddenly, you know, this year, it's now called the Year of Agents. And even though...
00:27:42
Speaker
As I said, when I started the class, it's because i could foresee that this is the next frontier. But even I did not expect things to explode so fast. Like this year, and especially after the reasoning models and came out, and that really helped with the the reasoning capability and capability of agents overall. And we are really seeing the field ah exploded. So that's been really exciting to see.
00:28:11
Speaker
At what level should a person feel comfort around their level of computer science and AI before they join a MOOC like this? Like, is this for high school students? Is this for graduate students? Is this for practitioners like myself?
00:28:26
Speaker
what What should I come into a course like this knowing and being prepared for? Yeah, that's that's a great question. So I would say actually the the course is designed to have something to offer for and for people, all people at different levels.
00:28:42
Speaker
So of course the course is mainly designed for people with technical backgrounds and in computer science and so on. And we actually do systematically cover you know both in terms of different layers of the Agente AI stack, all the way you know from the foundation the foundations, the model and development capabilities and to genetic framework, all the way to applications, and both horizontal and vertical applications and so on.

Open Competition for Agent Evaluation

00:29:16
Speaker
um so And the class is technical, but on the other hand, and also, I think even just from the lectures, even for people who don't have too much background, there's still, I think, quite a bit that they can learn about just a general, uh, overall development and in the, in the space.
00:29:35
Speaker
Yeah. It's really a huge source of, of material to explore, to look back at spring and fall of last year. It's worth noting that the supplemental readings, the links to all of the quizzes, the slides, the videos are all available online. So People should go back and explore. This is really very formal and structured and deep with a lot of really great guest speakers that you've put together. And now you've even got a contest for the greater good that you're doing an open competition for agents that hopefully make people's lives better.
00:30:11
Speaker
Yes. Yeah. Thanks. Yeah. So we are running a competition. so for each edition of the MOOC, we actually have organized the competition. So for example, the last one ah in this ah past spring,
00:30:23
Speaker
We had close to 1,000 teams that participated globally. And for this semester, for this edition, we have a new competition which actually focuses on agent evaluation.
00:30:36
Speaker
So as we develop agents, actually, it's really important to have ah good agent evaluation and to have good methodologies for agent evaluation. Because, you know, we, I mean, they're just saying that we can only improve what we can measure.
00:30:52
Speaker
and also in general evaluations, um essentially the goalposts for development for the community.
00:31:03
Speaker
and So you know my group, and we've had earlier work in a space like MMLU and some of these other benchmarks have been widely actually adopted the community.
00:31:17
Speaker
And, uh, but a lot of those were focused on, I would say evaluation and the model level. But for agent evaluation, actually it requires, um, different, uh, it's a different focus given that the agent is not just a model. Uh, agent evaluation is not just a model evaluation. You actually have the model and also you have the agent itself, like also called the harness that actually, ah you know, uses the model to.
00:31:46
Speaker
to essentially perform tasks and so on. So the agent evaluation essentially has my has more components for the evaluation. And it's very important to have um open, standardized, reproducible evaluations for agents. And so far, this has been lacking.
00:32:07
Speaker
So we actually have developed a new paradigm. We call it a Gentified ah Agent Assessment, AAA. That actually helps to meet this need and to enable a new paradigm of this open reproducible ah standard standardized agent evaluation.
00:32:26
Speaker
And also we are developing a platform for this as well. And this agent evaluation competition is really to help bring the communities together to develop the best benchmark evaluation that's standardized, reproducible, and has broad coverage ah in diverse domains.
00:32:44
Speaker
that helps guide the community development. And then we really hope that ah more people can join the competition. we have actually over $1 million dollars in prizes and resources and provided by sponsors and partners of the competition, including Google DeepMind and many others and so on.
00:33:05
Speaker
and So I think this will be really fun. And also, it's a great opportunity for the community to come together to develop on public good so So we hope that more people can join us ah in this competition.
00:33:17
Speaker
Yeah, this is very exciting. And folks can explore all this stuff. There's agentbeats.org. They can see the code. They can take a look at rdi.berkeley.edu to learn all about Agent X and about Agent Beats.
00:33:29
Speaker
And they can learn about the MOOC at agenticai-learning.org.

Practical Applications of Agentic AI in Workflows

00:33:34
Speaker
I'll put links in the show notes for all of this stuff. As we get ready to close, I want to ask you, as a person on the forefront of this technology,
00:33:44
Speaker
in your day to day, what agents are you using that are helping you be a better professor, be a better thinker, be a better teacher? Are you using commercial products that you just have a subscription to, or are you writing these things custom? Are you using cutting edge things? What's an agent expert using for their own agents?
00:34:03
Speaker
And that's a very good question. So, so I'm actually trying to develop some of my own agents to, to better, automate some of my own workflows. As you you know, you mentioned like I do a lot of different things actually.
00:34:15
Speaker
mean, they do take a lot of time and even when I have assistants and so on, it can still take a lot of time and so on. And lot of these things now can really be automated or like largely hugely helped with agents and so on. So that's some of the things that I'm doing as well.
00:34:35
Speaker
so i like to So I've heard in the space of robotics, someone said that a robot should do things that are dull, dirty, or dangerous. And we use the term toil. So I assume you're trying to automate the boring stuff so that you can do the interesting, fun thinking.
00:34:49
Speaker
Yes, absolutely. That's fantastic. Well, thank you so much, Dr. Song, for spending time with us today. Great. Thank you. Thank you so much for having me. We have been chatting with Don Song in association with the ACM Bytecast, and this has been another episode of Hansel Minutes, and we'll see you again next week.