AI and Platform Engineers: A Critical Synergy
00:00:00
Speaker
Platform engineers are actually more important and more critical for the organization's success in adopting AI.
00:00:30
Speaker
Welcome to the Forward Slash podcast, where we lean into the future of IT by inviting fellow thought leaders, innovators, and problem solvers to slash through its complexity. Today we're talking to Matan Grady.
00:00:42
Speaker
Matan is the lead AI product manager at Port, a developer platform that transforms how engineering teams operate. He was born and raised in Tel Aviv, Israel, where he lives with his wife and two young kids.
00:00:53
Speaker
Matan's journey to product started in the IDF as a teacher, then shifted to human resources, where he discovered his passion for building systems and automations. This led him to product management, where his technical skills allow him to contribute hands-on alongside the engineering team.
Matan Grady's Journey and Passions
00:01:10
Speaker
He joined Port, which became the perfect blend of his passions and capabilities. Outside work, Matan builds automations like an Inbox Zero agent and an AI call tracker.
00:01:21
Speaker
The joke being that his AI agents keep working even when he's hiking with his kids. That's funny. A vegan for 15 years, Matan is excited to be at the forefront of how AI is fundamentally changing what it means to be a developer.
00:01:36
Speaker
Welcome to the show. Hi, James. Thank you for having me today. Yeah, well, welcome. You're in Tel Aviv right now, right? Yeah, I am. Oh, nice. So we recently had our director of platform engineering, Dylan
Trust and Challenges in AI Handling
00:01:56
Speaker
Courts, on the show, and we were talking about platform engineering a little bit.
00:01:59
Speaker
Um, it seems, even though AI is kind of stealing the show for everything, platform engineering is holding in there strong. So how is business over at Port? How are you all doing?
00:02:13
Speaker
Yeah, that's a great question. So what we call developer portals is something that is not going away; it's not getting erased.
00:02:24
Speaker
What we're hearing is that platform engineers are actually more important and more critical for the organization's success in adopting AI. So AI is stealing the show because everyone wants AI, everyone is trying to build with AI and build AI products. AI is everywhere, obviously.
00:02:42
Speaker
But once you see the value of it, then come, you know, the worries: will it do okay? Can I trust it with my secrets? Can I trust it with our production environment?
00:02:55
Speaker
And you know, we know that an engineer doesn't only code. Actually, most of the time he's not coding: he builds, he operates, he monitors, he deploys.
00:03:06
Speaker
And this is what platform engineers are about. And even more than that, AI agents that you know start to roam among us, they also need the platform. They
Building Platforms in a Post-AI World
00:03:18
Speaker
also need access. They also need tools.
00:03:20
Speaker
So I think all of this just sets the stage for platform engineers to succeed and thrive in this world. Awesome. Yeah, I was really excited to have you on the show because you have kind of a unique perspective. We've had product folks on in the past, but you're building products for software engineers and platform engineers.
00:03:44
Speaker
So that's kind of a different audience, and that's why I wanted to have you on. I think it's a really cool perspective that you might have. But what is it like? We've been doing this product thing for a long time, building software products, but
00:03:58
Speaker
AI, I mean, you touched on it a little bit already, but what is it like in this new post-AI world, trying to build products? How has it changed expectations with our users, and what does that look like now?
00:04:14
Speaker
Right. So in a platform kind of context, you're building all kinds of systems: you're building actions, you're building pages, you're building dashboards, you're building your data model. You're building all kinds of things in the platform and through platform engineering.
00:04:33
Speaker
And building could be code, you know, it could be low code. We have this joke that we had code, then low code, then no code, and now it's prompt, or chat. I currently assess products by: do they have a good MCP server?
00:04:54
Speaker
Do they have integrations with the leading chat providers? So definitely I think our patience has lowered. We can get everything in a chat, in a prompt, at a glance.
00:05:07
Speaker
And when you have a product that requires more work, even if it's an internal product or system, then you just think twice. So in that sense, I believe, and I see, that users are expecting this.
00:05:22
Speaker
Let me ask something, let me get the answer. You know, we want to deploy. Let's deploy this to production, let's see it happening. Let's have these kinds of magical workflows happening in our systems.
Future of AI: From Text to Seamless Integration
00:05:34
Speaker
This is kind of the dream and the promise, and it's very hard to get there. This is what we're trying to do, and this is what we're building. But I would say that's from the user side. And from the platform side,
00:05:46
Speaker
and we kind of touched on that, you see the AI. You see someone get access to, I don't know, Claude Code, and he was able to build something. Then AWS comes with some offering, he connects to this, connects a couple of MCP servers.
00:05:59
Speaker
And, you know, he comes back after a day and says, I have an app, a production-ready app. And it's frightening, it's scary. So why do we have this project that's been waiting three months? Why do we have this platform team building a new pipeline or a new migration project?
00:06:16
Speaker
And within this kind of chaos, we need to understand how and where to adopt AI safely, where it brings value. So that's also the platform side: they need to see how to leverage AI more, but also in a good way, right? So the results are good.
00:06:40
Speaker
And I wonder, it seems like the expectations of our users have changed these days. It used to be you worried about the user experience and the user interface a lot, right?
00:06:54
Speaker
How can I make novel and unique ways to present information and data to my users? But, you know, everybody's gaga over this ChatGPT thing, which is just typing in text, right? We're just typing and talking back and forth with a computer.
00:07:09
Speaker
Are you seeing a transition to where people are expecting just a textual interface with their products these days? Is that more of a demand? So I would split it into two.
00:07:21
Speaker
First of all, let's talk about the chat era. I think we are currently right in the middle of moving past the chat era, but in the chat era, I would say that the chat is a very strong interface, and it has many delicate UX pieces. Even if you go into ChatGPT or Claude or whatever chat you're using, it's not just typing text and seeing what happens. You type a text, and if you count how many seconds from the second you click send until the answer is ready, you have like 30 seconds probably at best.
00:08:00
Speaker
And they build a UX that lets you live through these 30 seconds: you see the reasoning, you see it activating tools, you see the live streaming of answers, and sometimes you see graphs.
00:08:13
Speaker
So I think it's actually fascinating that just a chat became our new canvas for work, and you see the chat companies competing on which feature the next one brings into the chat.
00:08:29
Speaker
But I would also say that we are somewhat past the chat era, where we say: I just want to give you a task, and let me know when you're done with it. Go find me a ticket, go purchase me some new clothes.
00:08:44
Speaker
And in this context, I think it's a new challenge, because you're not even in the chat. You're, I don't know, in the world. You're not even in the product. You just say, go build me an app, and then what?
00:08:58
Speaker
And I think that's a huge challenge. I think we are, I would say, a couple of years from this. Someone, I don't remember who, said this sentence: you know, Iron Man, Tony Stark, doesn't use a computer.
00:09:13
Speaker
So I believe we are like one or two years from being in this state where you don't really need a computer. Like, why should I chat if I can talk, and
AI's Role in Commerce and Oversight
00:09:23
Speaker
why should I dictate if, I don't know, maybe someday it will read my mind, or maybe glasses will see what I'm seeing? So I think the leading AI providers, the really visionary companies, are now thinking about how the interactions between humans and machines will look in three to five years.
00:09:42
Speaker
And I'm pretty sure it's not the chat. So let's see where it takes us. Now, am I going to get to fly around in a metal suit soon? Is that what you're telling me? That sounds cool.
00:09:54
Speaker
That's so amazing. I mean, it definitely solves the morning traffic problem. AI can do that now, right? Now, you bring up an interesting point, that we're transitioning from this chat, interactive, back-and-forth world to more of an asynchronous, agentic world where I say, I need this thing, go off and do it.
00:10:15
Speaker
Now, and maybe this is a transitionary period, do you feel like the interface, or the expectation from the users, will be more of a, well, you mentioned, go buy me some clothes.
00:10:28
Speaker
Right now, especially with my experience with ChatGPT and these things, they're a little flaky at times and they kind of go off in their own direction. I don't know that I would trust them to just go buy me some clothes and have the stuff show up on my doorstep and be okay with it. I would think there's probably that human-in-the-loop kind of contract. Maybe it might go off and say, I've picked out five outfits, let me know what you think. Is that kind of...
00:10:50
Speaker
Do you think that's the next evolution? Is it, I go and present you with options and then we figure it out from there? Or do you think ultimately it'll go to, just go buy me an outfit for prom next month, something like that? Right.
00:11:05
Speaker
Um, I think you nailed it. This is really what we're working on. When you talked, what I thought is that there's something psychological in us. Let's put AI outside of the equation for a second. Sure. We are employees, and I send you a task, and I'm your manager, and I say something like, go build this report. And then I'm waiting an hour,
00:11:31
Speaker
half a day, a day, and the day after I request a status update, right? And usually the status comes with some question marks: some good, some bad.
00:11:42
Speaker
So I think the same goes with AI. The main difference is that, A, it's much faster, and B, as you mentioned, it can do some chaotic things. We call it the agentic chaos. Yes.
00:11:53
Speaker
Yeah. So I think it was two days ago or so that OpenAI and Stripe announced this kind of agentic commerce, where basically you do exactly what you mentioned in a prompt, only you also connect your Stripe account, and it can actually charge and purchase on your behalf.
00:12:11
Speaker
And now, as you said, I'm not sure you would trust it. What if it buys the wrong outfit? What if it pays for ten pieces of this outfit? So I believe that, first of all, humans are here to stay; agents, hopefully, too. And if there are any agents listening, you know, we know you're up to good. We're here to live in peace and harmony.
00:12:31
Speaker
Sure. I believe in the human in the loop, and in the ability to say: in these kinds of scenarios, we must have humans approve or review, and in this kind of scenario, you can be more autonomous.
00:12:44
Speaker
I believe this is kind of the future of platforms we need: the ability to say when humans are needed and when they are not. We want to leverage humans when they need to collaborate, think, architect.
00:12:56
Speaker
We don't want them on repetitive, tedious tasks. And I agree that the way to trust AI is to start with baby steps, see it succeed, and go to the next steps, pretty much like you would do with a new employee.
Integrating AI at Port: Challenges and Transitions
00:13:08
Speaker
You wouldn't say, summarize the last quarter in your report. You would say, send me a one-pager on this project. And then, if it's good enough, you give the next task. So this is kind of how I see us onboarding our AI agents.
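That staged-trust idea, onboard an agent the way you would a new employee, can be sketched in a few lines. This is purely illustrative: the task ladder, class name, and promotion threshold are invented for the example, not anything Port ships.

```python
# Illustrative sketch of "onboarding" an AI agent: it may only take tasks
# at or below its current trust level, and it earns promotion by
# succeeding repeatedly. All names and thresholds here are hypothetical.

TASK_LADDER = ["one_pager", "project_report", "quarterly_summary"]

class AgentOnboarding:
    def __init__(self, promote_after=3):
        self.level = 0              # index into TASK_LADDER
        self.successes = 0
        self.promote_after = promote_after

    def allowed(self, task):
        """An agent may only take tasks at or below its trust level."""
        return TASK_LADDER.index(task) <= self.level

    def record_result(self, success):
        """A streak of successes earns promotion; a failure resets it."""
        if success:
            self.successes += 1
            if (self.successes >= self.promote_after
                    and self.level < len(TASK_LADDER) - 1):
                self.level += 1
                self.successes = 0
        else:
            self.successes = 0

onboarding = AgentOnboarding()
assert onboarding.allowed("one_pager")              # baby step: allowed
assert not onboarding.allowed("quarterly_summary")  # too big, not yet trusted
for _ in range(3):
    onboarding.record_result(success=True)
assert onboarding.allowed("project_report")         # earned the next step
```

The same shape works for any scope you care about: read-only versus write access, staging versus production, and so on.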
00:13:22
Speaker
Okay, makes sense. Now, Port has been around for a little while, right? They were around before this kind of AI craze.
00:13:33
Speaker
What was the transition like as the world learned, okay, this AI thing is catching on, and now it's caught on like wildfire? What was the transition like as a product company, from, okay, we roadmapped everything with no AI built in, to now, where everybody's expected to have some flavor of AI features in their product?
00:13:55
Speaker
How was that transition? Did the product team have to re-educate themselves? I'm sure the engineering team had to, but what was that transition like? Right.
00:14:06
Speaker
So from the perspective of today, I would say it was amazing, fascinating, also frightening at times. Oh, yeah. It started probably a year or so ago.
00:14:18
Speaker
So I think there was the chapter of chats, where we saw Claude and ChatGPT, what they call vibe coding. We started to take baby steps in trusting AI.
00:14:29
Speaker
And I believe at that point in time, approximately a year ago, we sat the product team down and tried to reimagine what the future would look like. And today, when you say future with AI, it's like a day or two out.
00:14:41
Speaker
At the time we said, okay, a few years out: let's assume the technology will get there. What will it look like? And we started to challenge ourselves with questions like, would engineers still need to approve pull requests? Would it still make sense?
00:14:57
Speaker
Or, why do I need to think about whether I want to deploy the latest version to production? Shouldn't I just assume production always needs to be on the latest? Or, if we have a security issue, why do I need someone to review it and say, yeah, we need to fix this?
00:15:13
Speaker
So we started to surface all these assumptions that we had in the pre-AI world and to tackle them. And out of these conversations, we agreed that we must get hands-on with AI.
00:15:29
Speaker
And, you know, my take here is that the only way to be, so to speak, on this trend is to just try it yourself. Try to avoid the LinkedIn, the X, whatever it is that says, I built an app in a day, or, all these products are gone, ChatGPT killed them all. Just go and try. And it's hard, because there are so many products, so many trends.
00:15:54
Speaker
And we just got our hands in the mud. We formed a team, I started leading it as a PM, with amazing colleagues from the engineering team and the marketing team, and we just started sitting and thinking, day and night. It wasn't like we had a roadmap and feature requests and this kind of one-year plan, not at all. It was: what came out in the news yesterday, what can we do today? Let's build it; by the end of the day we see it.
00:16:25
Speaker
Okay, it's good enough, it's getting us there; or it's not, let's delete everything. It was back and forth. And I believe it was three months, more or less, before we came up with a closed beta offering.
00:16:39
Speaker
And this started to give us lots of leads and conversations. And we started hearing about what worries platform engineers, why they want to adopt AI, why leadership wants AI to be at the forefront, and what the challenges are.
00:16:53
Speaker
And this set us on a new kind of path. And today, from our analysis, I can safely say that, you know, platforms and platform engineers are here to stay, as we talked about.
00:17:06
Speaker
And I believe there's much more to come there. As a product team, it required us to, first of all, try hands-on: try writing prompts, try to evaluate AI, try to see why it doesn't get things right. Try to ask the same question five times and see why the third answer out of five came out wrong. It's a new kind of product skill, I think: staying up to date with the news, and having engineers who never wrote prompts in the past now learn to write them and find the right infrastructure. So it was really a crazy ride, but we were focused as hell on, we need to get this out, and
00:17:49
Speaker
we had, I see it as, a kind of startup-within-a-startup vibe. And I believe this was very successful. From the place my team is currently in, we are starting to influence the other teams. We did the hypergrowth of our learning on AI, and now we are starting to shift it over and teach all the other teams, and eventually become an AI-native company, which I believe everyone wants to be.
Balancing AI Product Development
00:18:19
Speaker
So we are also on this journey. Nice. And, you know, as we were talking about at the beginning, you all are developing products for developers, and you have to be developers to develop software products. So do you feel like there's any bias? Is it difficult to take a step back and say, how do I think from an unbiased perspective about the products that software engineers use? What's that experience like, building for software developers?
00:18:50
Speaker
Right. In a way, you might think that, but then I think that most leading AI products today are really built by engineers for engineers; think of the coding agents, think of building apps.
00:19:04
Speaker
And I would also say that this kind of AI trend and hype in technology allows more people to participate in building software.
00:19:16
Speaker
And I think the team we formed has both. We have software engineers, and the way I tackle the bias is: if you try to build something with AI for developers, let's say connecting their IDE to the portal, then okay, let's first try it internally. Does it give you value, as an engineer on my team, to be able to connect your IDE to Port, for example? And if it does, what is missing, or what could make it 10x better?
00:19:46
Speaker
And only then do we go out to market. And as a PM, I am quite technical, but I'm not in the weeds. I'm not building and tackling the day-to-day struggles.
00:19:58
Speaker
So I believe I have a way to balance the team and say: but we heard from these customers who say this is the challenge. Or, we read this Gartner report that says these are the market changers. Or, we see this trend on LinkedIn; what do we think about that?
00:20:13
Speaker
So I think it's a lot of collaboration, challenging one another, and making sure we are not falling too much in love with the solution, as product folks like to say.
00:20:26
Speaker
Yeah. And really more with the problem. So even if we think we have a good solution, we always say, okay, but we have to get five customers who actually say it solved the problem. So let's not go too deep on that.
00:20:39
Speaker
But again, I think it's actually quite amazing to build software products for software people, by software engineers, or to build AI products for platform engineers who are trying to adopt AI. It's this kind of dogfooding flywheel that really benefits everyone, a win-win for all situations. But as with anything else, we need to make sure we challenge ourselves and stay grounded.
00:21:09
Speaker
Yeah, I love that idea of dogfooding. And some product companies don't have the opportunity to do that. It's kind of cool that you're building software tools for software developers, and you are software developers, so you can use your own tools. So that's really a fortuitous thing for you all.
00:21:25
Speaker
Now, what about, when it comes to this AI stuff, obviously the world is changing around us every single day; there's a new headline of something cool coming out. I would guess that software engineers are probably a little more savvy when it comes to that stuff. They're more aware of the new trends. Does it make it harder as a product company for software engineers, like, oh my goodness, I've got to stay up with all of the latest trends because this is my audience? Whereas if you were building a product for, you know, the mom and pop, like my parents or something,
00:21:56
Speaker
they're not going to care about an MCP server necessarily, at least not right now; maybe in a year, maybe two. But the developers are going to be a lot closer to that bleeding edge. Does that make it harder as a product company to build products for software engineers?
00:22:11
Speaker
First of all, I put it in my calendar to check in a year if my mom is looking for an MCP server. That would be a good one. I guarantee she will. She will. If I'm wrong, you call me back and tell me I'm wrong. I hear you, James. I will write it down.
00:22:31
Speaker
I would say yes and no. No, because I wouldn't say that software engineers, the technical people, are so savvy that they stay up with the news, try everything, and are asking or craving for this MCP server.
00:22:47
Speaker
What I do see is they hear the buzz, but they're not trying it out. Like MCP: I think it was in April or May that it kind of blew up, and everyone was talking MCP.
00:22:58
Speaker
And then we hear from customers, do you have an MCP server? At this point we did, but before I said so, I would ask, what are you looking for an MCP server for? And then it's like, hmm, not sure.
00:23:11
Speaker
I just know I need to ask that. Yeah. And, what are the top three MCP servers you've tried out so far? Well, actually, I just heard from a friend who uses them. So I think it's more of a trend: people like to say they're trying things, and you want to show that you're up to date. And if everyone is talking about MCP, probably there's something to it. If everyone is talking about AI agents, probably there is something we could leverage there.
00:23:39
Speaker
So the way I see it, our job is to cut the nonsense from the important things and show you why you need an MCP server, or what a good MCP server would do for you, or how you could send an AI agent to work, a coding agent, for example, and trust its outputs.
00:23:58
Speaker
This is the challenge I'm taking on. And I think what we saw is that the questions of, do you have this, do you have that, are really less important. I could get a question like, are you integrated with Stripe, and do you allow AI agents to charge a credit card? But I don't think that's relevant to our industry. So I think we need to be very opinionated and very focused on: what is the problem you're here to solve?
00:24:21
Speaker
I mean, it really depends, but AI didn't change the basic fact of what we're here to solve. The problem is still the problem we're solving. AI is just here, and it's a very good, fast-evolving technology we can leverage.
00:24:38
Speaker
And as long as we stay true to that, I believe the buzz, and the fear of not being up to date, can be minimized. Yet, you know, scrolling through my feed, you always feel left out and that you must explore in your own way.
00:24:53
Speaker
So it's hard, how you balance between those two. Yeah. I love what you said there: it doesn't change the fundamental business that we're doing, whatever business that is. And I think it's interesting; as an industry, for probably a year, we were in this mode of, everybody's just got to be doing AI, and it doesn't matter what business you're in, I've got to be doing AI stuff.
00:25:16
Speaker
But it feels like the industry has kind of calmed down a little bit. And now they've gone back to the basics, those first principles of, okay, what is our business? Oh, we're a bank. We take people's money. We put it in a big metal room.
AI's Impact on Industries and Business Objectives
00:25:28
Speaker
And when they come back and want it again, we can give it to them. That's very much trivializing banking; there's a lot more to it, I think.
00:25:35
Speaker
But yeah, getting back to those basics of, what is the business we're here to do? The fundamental aspects of the core business that you do have not changed. AI is just a tool. How can we do that business differently or better?
00:25:53
Speaker
Yeah, I think we're settling down. Yeah, and you know, in some industries, in some verticals, it might be very frightening, and it makes sense to rethink what the company does.
00:26:06
Speaker
So, like, one thing I keep thinking about is the financial industry. In the financial industry, when you need to be reimbursed for expenses, you have, you know, the month end, and you need to fill in forms and send receipts.
00:26:22
Speaker
So it doesn't make sense to me that this would stay. I wouldn't say the industry is lost. I would just say: do you really need a person and the software to document receipts?
00:26:34
Speaker
Probably not. There will be a solution. But does this change the basic nature of the problem, which in this case is to have accurate, up-to-date financials and make sure employees get what they need and their benefits?
00:26:48
Speaker
No, this is still the job. So just apply critical thinking to what you believe AI could solve, and always try to be one step ahead. The fact that it can't solve it now doesn't mean it won't.
00:27:03
Speaker
So you kind of need to say, I think it's 60, 70% okay now, so it probably makes sense to build the business assuming it will get to 90% over time.
Enhancing AI with MCP Servers
00:27:14
Speaker
And if, for those last 10%, we have the human in the loop or some kind of guardrails, then this is how you adopt AI.
00:27:23
Speaker
You never have 100%, and you never wait for it to be 90%, because then you're behind the curve. So...
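That "guardrails for the last 10%" idea can be made concrete with a tiny routing rule: auto-approve routine actions the agent is confident about, and send anything risky or low-confidence to a human. The action names and the threshold below are invented for illustration.

```python
# Hypothetical guardrail sketch: actions an agent proposes are either
# auto-approved or routed to a human, based on an allow/deny policy and
# a confidence bar. Names and numbers are illustrative only.

RISKY_ACTIONS = {"deploy_to_production", "charge_credit_card"}
CONFIDENCE_BAR = 0.9  # "assume it will get to 90%": accept only above that

def route(action, confidence):
    """Decide whether a proposed agent action needs a human in the loop."""
    if action in RISKY_ACTIONS or confidence < CONFIDENCE_BAR:
        return "needs_human_approval"
    return "auto_approve"

# Routine, high-confidence work flows through; risky or shaky work stops.
assert route("summarize_ticket", 0.95) == "auto_approve"
assert route("deploy_to_production", 0.99) == "needs_human_approval"
assert route("summarize_ticket", 0.60) == "needs_human_approval"
```

In a real platform the risky-action list, the threshold, and who gets paged would all be configuration owned by the platform team, not hard-coded.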
00:27:32
Speaker
So, yeah, I agree. I think you've got to take that staged approach. Now, we've talked about MCP servers a little bit, and I don't know that we've ever dove into what an MCP server is on the show. Tell me briefly, what is the gist of an MCP server?
00:27:53
Speaker
So, as someone from our company put it, think of a dog you play with. Yeah, I know. I'm all ears right now.
00:28:04
Speaker
Right. So you kind of play fetch with your dog and you do it in your living room, for example. Now, if you want to play in a different room and not in the living room,
00:28:17
Speaker
then you have a door, right? You can't throw the ball through the door. You need to open the door. Now, think of the other room as, you know, a new opportunity: new data, new excitement for this game of fetch.
00:28:30
Speaker
And think of the door as the gateway, or the MCP server, so to speak, that sits in the middle. Similarly, in the chat, you go back and forth: how are you today? I'm okay, help me build this document. Here are the documents for your review.
00:28:46
Speaker
And then you want to say, I want to compare this document with the actual code base, or I want to compare this document with our guidelines the way we have them in our knowledge repository.
00:28:59
Speaker
If you just ask the chat for something like that, it won't know, right? It doesn't know what your knowledge repository is, it doesn't know your code base. You need to open the door, a new kind of path.
00:29:09
Speaker
So this is kind of the very basics of MCP. But let's go one step further, and I'll go even a bit deeper. MCP is just thinking: humans need dashboards, they need text, they need typography, they need links.
00:29:27
Speaker
What do agents need? Agents need data, they need knowledge. How can we surface it to them? And this is where MCP comes in. And I believe the value of MCP is not just in saying, this is just APIs for agents.
00:29:44
Speaker
I think APIs are solvable for agents. The value is in thinking: what is the experience that an agent needs? Let's take an example. If an agent is building, let's say, a Notion page in your knowledge repository,
00:29:58
Speaker
then you'll need to think about what tools it needs to build the page. Maybe it needs to add headings, maybe it needs markdown, maybe it needs to summarize the page.
00:30:10
Speaker
You have to think about what tools it needs to succeed in the task. And these tools are the core of what an MCP server is. MCP servers come from all different third-party providers,
00:30:21
Speaker
so you extend your personal AI coding assistant, or just the AI chat, with access to all these third-party tools, in a way that's very accessible and adapted to agents.
00:30:35
Speaker
And it's just amazing. To give an example, I'm using Claude in my day-to-day, and I can say in the chat things like: what is the next meeting on my calendar? Help me prepare for it.
00:30:46
Speaker
Send a Slack message with the agenda to this channel, and write a Google Doc
Adapting to Accelerated Development with AI
00:30:51
Speaker
on this. And everything happens from the chat interface. And this is the power that MCP gives me. Okay.
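To make the "tools" idea concrete: an MCP server advertises each tool with a name, a description, and a JSON Schema for its inputs, and the client invokes tools by name. The sketch below only mimics that shape in plain Python; the calendar tool and its data are invented, and a real server would speak MCP's JSON-RPC protocol through an SDK rather than a dict.

```python
# Sketch of the MCP idea: tools described by name, description, and a
# JSON Schema for their inputs (the "experience an agent needs" rather
# than a human UI). The tool and its data are invented for illustration.

def next_meeting(limit: int = 1):
    """Hypothetical tool: return the next meetings on a calendar."""
    calendar = [{"title": "Roadmap review", "time": "10:00"}]  # stand-in data
    return calendar[:limit]

TOOLS = {
    "next_meeting": {
        "description": "Return the next meetings on the user's calendar.",
        "inputSchema": {  # JSON Schema, as MCP tool listings use
            "type": "object",
            "properties": {"limit": {"type": "integer"}},
        },
        "handler": next_meeting,
    },
}

def call_tool(name, arguments):
    """What a client does after listing tools: invoke one by name."""
    return TOOLS[name]["handler"](**arguments)

result = call_tool("next_meeting", {"limit": 1})
assert result[0]["title"] == "Roadmap review"
```

The agent never sees a dashboard; it sees the descriptions and schemas, picks a tool, and passes arguments that validate against the schema. That is the "door" in the fetch analogy.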
00:30:58
Speaker
Yeah, I was trying to think of the core platform, like what ChatGPT or Claude provide: they have a bunch of capabilities that are built in, but there are certain things, as you said, these third-party things, that they may not know anything about.
00:31:17
Speaker
It's exposing the functionalities and features of those third-party things to the core system, kind of like a plug-in. Or maybe a way to think of it: when you buy an iPhone and open it, it can do some things, like take pictures, and it's got some apps built in. But for other things, like if I want to track my run or my bike ride, I've got to go download Strava, for instance, right? Yeah, it's kind of like an app for the phone. Okay.
00:31:46
Speaker
All right. And is that becoming, maybe not table stakes yet, but is it becoming an expectation from our users that being able to plug in those MCP servers is important to them?
00:32:03
Speaker
Yeah, in a way. And I also think that in this case, technical people, software engineers, are really the early adopters. If we were talking about APIs, well, today nobody worries about APIs; they've just become a standard.
00:32:22
Speaker
But MCP just laid the groundwork for what's possible. And when you saw it, you started to ninja your way into it: grabbing a token from here, installing something locally there, to get the benefits of MCP.
00:32:36
Speaker
But I think that today, first of all, yes, it is an expectation. But I also think we are transitioning to a period where it's more hidden, in a way. MCPs exist, but they're not the experience we give you in the product.
00:32:52
Speaker
It was just a relatively quick way to give your AI agents and AI coding assistants access to third-party tools.
00:33:08
Speaker
But then when you think about it, why would you leverage third-party tools? And then you get to a world of more native integrations, or of just building everything inside one product. So you have all kinds of variations here.
00:33:20
Speaker
But I do hear that. To me, MCP is needed. I need MCP to work with tools, because I'm not writing or building all kinds of configurations anymore. I need a platform that's prompt-driven, human-facing, AI-supported.
00:33:43
Speaker
Yeah, and one thing that's kind of cool when it comes to LLMs using MCP servers: you talk about APIs, but it doesn't necessarily have to be that concrete API we used to think of, like an OpenAPI specification.
00:33:59
Speaker
It can be much more loosey-goosey for the LLM, because the LLM understands language. So the MCP server can describe its capabilities in just text.
00:34:09
Speaker
And it's like, okay, I can figure out how to work with you. It's kind of cool that that integration is malleable, right? Exactly, it's easier to adapt. So that's pretty cool.
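[Editor's note] A toy way to picture this "malleable" matching: the model reads plain-text tool descriptions rather than a rigid contract and picks the one that fits the request. The tool names here are hypothetical, and the keyword scorer is a crude stand-in for what an LLM actually does with language.

```python
import re

# Hypothetical tools with natural-language descriptions, echoing the
# calendar/Slack/Google Doc example from earlier in the conversation.
tool_descriptions = {
    "get_next_meeting": "Look up the next event on the user's calendar.",
    "send_slack_message": "Post a message to a Slack channel.",
    "create_google_doc": "Create a new Google Doc with the given content.",
}

def pick_tool(request: str) -> str:
    # Score each tool by word overlap with the request; a real LLM does
    # something far richer, but the flexibility is the same in spirit.
    words = set(re.findall(r"[a-z]+", request.lower()))
    def score(name: str) -> int:
        desc_words = set(re.findall(r"[a-z]+", tool_descriptions[name].lower()))
        return len(words & desc_words)
    return max(tool_descriptions, key=score)

print(pick_tool("what is the next meeting on my calendar?"))  # get_next_meeting
```

The design point: because the description is free text, the "API" can evolve without breaking a formal schema contract on the caller's side.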
00:34:21
Speaker
Exactly. I would also emphasize that, alongside the ways you can benefit from MCP servers, I talk to companies where MCP is forbidden and you can't connect an MCP server at all.
00:34:34
Speaker
There are also many risks to it. And when, let's say in a large company, you want to enable the benefits and the power of MCP, which is really connecting your third-party data to an LLM, it's not that trivial.
00:34:50
Speaker
You need guardrails, you need approvals, you need a way to delegate and change access controls between teams. You need platforms that can do that. And this is part of what we are building, but I just think it is a challenge, even getting to the point of using MCP in a professional setting.
00:35:14
Speaker
Yeah, I've just started to incorporate MCP into my daily workflows too, and it's kind of mind-blowing. So that's cool. All right, so you're developing products, as we said, for software engineers. And of course, all the buzz is that we're not going to need software engineering, that nobody's going to have a job in five years. What is the future like for a product company building products for software developers and platform engineers?
00:35:42
Speaker
What does the future look like? How are we evolving? What is everything going to look like five years from now? Right. I need to say that with AI, when you speak of the future, it's probably a month.
00:35:54
Speaker
Exactly, yeah. Five minutes from now. Armageddon. But what I'm seeing in the market is this. First of all, on the question of whether there will be software engineers: I would ask you, if you're the CTO of a company and you see that AI, at least in its promise, allows engineers to build 5x more, 10x more, would you hire more or fewer engineers?
00:36:23
Speaker
Probably more. You want to build more; no one wants to build less. You want to build a better product, reach more audiences. So I don't think software engineers are going anywhere. I do think that a few years from now, and sometimes I feel it's months from now, if you're able to successfully run coding agents in your organization, and we have customers who do that, you're at a point where developers and engineers are not really coding.
00:36:55
Speaker
They are reviewing, architecting, planning. And we talked earlier about the visuals and the interfaces; this is where an interface is very handy.
00:37:06
Speaker
I imagine the software engineering organization transitioning from today's daily routines, where you have the daily standup: this is what I did yesterday, this is what I'm doing today.
00:37:19
Speaker
Even if there's some sense of urgency, it's pretty smooth, so to speak. I imagine transitioning to something more like a call center or support center experience. You have all these dashboards; I think you have to have the visuals here.
00:37:32
Speaker
You have hundreds of agents running around, and you're just waiting for the point where the agents need humans. They know what to build: you have feature requests, bugs, escalations, security issues, failed deployments. You have all these sources of tasks. You don't need to invent anything. You have PMs
00:37:49
Speaker
and product teams thinking about what they want to achieve. And then you have agents doing all of this, and we agree they can't do it alone; they need humans. So once they need the humans, they bring them into the loop.
00:38:01
Speaker
Like: help me review this PR. I'm stuck on this deployment, I need permissions. Can I fix this incident? I think I can, but it's 2 a.m. and it's affecting this gold customer.
00:38:14
Speaker
And I'm not sure we can deploy at this hour. So this becomes a 24/7 engineering organization where developers become orchestrators. They become sort of tech leads.
00:38:26
Speaker
And in this kind of scenario, I believe, first of all, there will be more of them. And when this is the future, the role of the platform, and we spoke about it, the platform engineers, is really more crucial, because you start to think:
00:38:41
Speaker
how can we make something very autonomous, how can we make something very agentic, how can we make a process where, when there's a failed build in our deployment pipeline, it automatically heals itself and just deploys again?
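[Editor's note] The self-healing pipeline idea can be sketched as a simple retry-and-escalate loop: attempt the build, let an agent try a fix on failure, and only pull a human in after a few attempts. This is a hedged illustration; `run_build`, `attempt_auto_fix`, and `notify_human` are hypothetical hooks standing in for a real CI system and coding agent, not Port's actual implementation.

```python
def run_build() -> bool:
    # Stand-in for a real CI build: pretend it fails twice, then succeeds.
    run_build.calls = getattr(run_build, "calls", 0) + 1
    return run_build.calls >= 3

def attempt_auto_fix() -> None:
    # Stand-in for a coding agent analyzing the failure and patching it.
    print("agent: analyzing failure and applying a fix")

def notify_human(reason: str) -> None:
    # Escalation path: the human stays in the loop, but only on demand.
    print(f"escalating to on-call engineer: {reason}")

def deploy_with_self_healing(max_attempts: int = 3) -> bool:
    for attempt in range(1, max_attempts + 1):
        if run_build():
            print(f"deployed on attempt {attempt}")
            return True
        attempt_auto_fix()
    notify_human("build still failing after auto-fix attempts")
    return False

print(deploy_with_self_healing())  # True: succeeds on the third attempt
```

The structure matches the "AI-first, human on escalation" workflow described in the conversation: the happy path never interrupts a person.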
00:38:56
Speaker
And I don't need to worry about it. When there's an incident, why do I need to think twice? We have a runbook: we need to open a Zoom, open a Slack channel, obviously fix the issue, report to everyone, write a post-mortem.
00:39:08
Speaker
What's in there that AI can't do? I'm not saying it needs to do it alone, but how can I build this workflow to be AI-first, rather than human-first with humans sometimes using AI on the side?
00:39:20
Speaker
So this is the transition I'm seeing. And I think it will be a world where software engineers benefit from having more holistic skills, like PMing, UX, researching, experimenting, because the core of how to build good software is something you can now, in a way, delegate, instruct, and architect across tens or hundreds of agents.
00:39:49
Speaker
And you, in a way, have some spare time. So someone needs to think about what needs to be built and what good means in our product. And I encourage software engineers to think like that. Think like a PM.
00:40:01
Speaker
And in a way, as a PM, I'm trying to think more like a software engineer. How can I build it myself? Do I really need to open a task for my team, or can I somehow contribute it myself with the AI infra we have?
00:40:14
Speaker
Yeah, I love that take on how we may need more engineers. I guess we get stuck in our current thinking of, okay, this is just how the world works right now: a person comes up with an idea, and a team of five or six software engineers is going to take six months, and they're going to argue back and forth, and then finally we're going to have something we can put in front of our users.
00:40:40
Speaker
But obviously this changes that calculus, the estimates and all of that. So I think it's an accelerator. But I also think we were throttling our own brains and our own creativity. Because if a product person has one idea and it's implemented tomorrow or the next day, it's not like their brain stops working. They've had another idea: how can I make the world better for my users and my customers today?
00:41:03
Speaker
Again and again and again. We're not going to run out of new ideas for how to do things. We're not going to finish all the bright ideas in the world in a couple of months and just sit around wondering, well, now what do we do? We'll still have more and more ideas. It's just that we couldn't have gotten to those ideas as quickly without AI, right? Without that accelerator. So I love that take. You didn't say it that way, but that's the way I was hearing it in my brain.
Integrating AI for Efficiency
00:41:29
Speaker
And when you said there are multiple opinions, I remembered something from probably four or five years ago, when I was at a company and we redesigned our site. There were all kinds of design guidelines, ways for designers to think about the product.
00:41:51
Speaker
So we had three designers, each working on their own idea for two weeks, coming back after two weeks to say, this is what we have. And we had a vote on what we thought was best.
00:42:02
Speaker
In today's world, and it's not the future, it's today's world, you just open a mockup app. There are a bunch of those; all have a free tier. And you say: I want this idea one, this idea two, this idea three.
00:42:16
Speaker
You wait five to ten minutes and you have the three ideas already. You can spin up a quick prototype using your coding agent, and you just send a survey to your customers: what do you prefer? And within a day, you have an answer that is customer-based. And this is today.
00:42:30
Speaker
So I just think the marginal cost, the cost of building, is so low: why not build everything? Let's build all the ideas, build all the variations, and stick with what's working.
00:42:42
Speaker
The question then becomes: what are our success criteria? How do we know what we're optimizing for? Which is very crucial for AI, right? AI needs to know what success looks like; otherwise, it will fail.
00:42:55
Speaker
So I think these are just better questions to focus on. In my team, I'd rather focus on what success means than on whether we prefer Kafka or RabbitMQ.
00:43:08
Speaker
Right. Yeah, I mean, think about commerce back in the day. If we wanted to sell something from one country to another, you'd have to put all those things on a big boat.
00:43:19
Speaker
We still do some of that to this day, but it would take three months, right? Or to get in contact with another country, I'd have to get on a boat and take three months to get there. There was no email; there was no telegram at that point. You had to sail somewhere and go talk to those people.
00:43:34
Speaker
Well, the airplane changed all that. We can hop on a plane now, so more business was able to happen, because we were able to accelerate and do new things. Business exploded. It wasn't like, well, we just did spice trading and that's still all we do with planes. No, it compounded and allowed us to do more things. So, yeah, that's great.
00:43:56
Speaker
So I'm not going to be out of a job, is what you're telling me. I don't think you are. If anything, I think you'll host AI agents on the show. Yeah.
00:44:07
Speaker
Yeah, that'd be kind of cool. They'll have their own takes. It would be interesting to see their answers for the final segment of the show, the lightning round, see what they say then, and if I can stump them. I wonder: the same way we think that AI needs a human in the loop, if you host an AI agent, will it say that sometimes humans need AI in the loop?
00:44:29
Speaker
They may just kick me out of the show. The AI agents would just take over. Right, the listeners might like that better. All right. So, the next segment of our show. This has been fantastic; I'm really expanding my brain. I love it when I have guests on that I learn things from, and now I'm thinking differently. So that's great.
00:44:48
Speaker
The next segment of our show is the segment we call Ship It or Skip It. Ship or skip. Ship or skip. Everybody, you've got to tell us your ship or skip.
00:45:00
Speaker
Okay, first up for the ship it or skip it question. This is kind of an interesting one, a bit of a thought experiment, if you will. If you could only do one or the other, one being hiring an AI agent and the other being hiring a software engineer, a person, forever and ever, amen, what would you choose? To me personally, I would say an AI agent.
00:45:28
Speaker
And it's not because I don't value humans. I think that when you add a human, you just keep thinking and doing more of the same. You're saying, yeah, I will use those AI agents, I will leverage AI, but not now. Now I have this urgent stuff we already committed to, and I want to do it well.
00:45:47
Speaker
Let's just work with what we have. But if I'm getting another AI agent, I must change the way I work. I must think about how I can provide it the context it needs, how I can provide it the tools it needs to succeed, right? We've talked about MCP.
00:46:01
Speaker
So I think that onboarding AI agents to the team just makes you think differently and try to solve the problem from a different angle. And I think that's where you can actually 10x and upscale what you're doing, versus adding another human engineer.
00:46:19
Speaker
That's me personally. Yet I do believe we tend to bring more humans in and throw humans at a problem. I think we just need to look at the problem from a different angle.
00:46:31
Speaker
Yeah, I think we're not quite at the point where we've figured out the new normal, from a capability standpoint, of what one human being can do with AI agents. I don't think we understand that yet. As I said, there's a new calculus involved: how much can we do? What is our capacity?
00:46:54
Speaker
I do think we need to explore introducing more and more agents into our world on a day-to-day basis to really get the full feeling of, as you said, what can't an AI agent do in the software development world? There are a lot of things they can help us with.
00:47:10
Speaker
I do like that idea of introducing more and more agents to really push that envelope and see where we can get. But even with that... It'll open the floodgates, right? I think, as you said before, we're going to need more and more engineers because of this, because now the creative floodgates are open, right? And you know, James, if I can bring an example from my team, my product team:
00:47:34
Speaker
We had these rotations of all kinds of tasks that someone on the team needs to do. One of them is the changelog.
00:47:44
Speaker
So for every feature, every change that comes out, we need to document it: have a link, have a screenshot. And eventually we want to publish something that says, this is what we shipped this month.
00:47:56
Speaker
And this was a 100% human process. And as you say, this eats into PM capacity, and you obviously need to onboard new members over time.
00:48:06
Speaker
And what I did is really what you said: I just thought, why wouldn't AI do that, and what does it need from us? And we built an agent in our team that does exactly this process: picking up all the releases, building the release notes, sending them for approval to the PMs, and eventually publishing them. So this reduced the time it takes from, I don't know,
00:48:30
Speaker
a couple of days to an hour or two of just
Ensuring Product Success with AI Evaluations
00:48:33
Speaker
reviewing. So this is exactly what I think product folks and product teams need to do. They need to stop and think: does it make sense for a human to do this end to end?
00:48:44
Speaker
And if not, let's start to give at least a chunk of it to AI. And you'll see what AI can and can't do, and how to work with it. That's the beauty of it. And when it works, it really is amazing.
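[Editor's note] The changelog agent described above can be sketched as a small pipeline: gather releases, draft notes, route for PM approval, publish. All function names and the sample data here are illustrative stand-ins, not the team's actual implementation.

```python
# Sketch of an automated changelog pipeline with a human approval gate.

def gather_releases() -> list:
    # Stand-in for pulling merged changes from a release tracker.
    return [
        {"title": "Faster search", "link": "https://example.com/1"},
        {"title": "New audit log", "link": "https://example.com/2"},
    ]

def draft_notes(releases: list) -> str:
    # The agent drafts the notes; a human only reviews the result.
    lines = ["What we shipped this month:"]
    lines += [f"- {r['title']} ({r['link']})" for r in releases]
    return "\n".join(lines)

def request_approval(draft: str) -> bool:
    # Stand-in for sending the draft to the PMs; auto-approved here.
    return True

def publish(draft: str) -> str:
    # Stand-in for posting the changelog to its destination.
    return draft

draft = draft_notes(gather_releases())
if request_approval(draft):
    print(publish(draft))
```

The approval gate is the important design choice: AI does the end-to-end mechanical work, and the human contribution shrinks to a review step.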
00:48:56
Speaker
It is pretty impressive. I've been very impressed, especially in software development, with Claude, especially on UI work. It's pretty darn impressive. And this is something you brought up when we were prepping for this episode: this notion of evaluations.
00:49:15
Speaker
And this notion that all product managers should be able to do AI evaluations. This seems like it's becoming a hot topic in the product world.
00:49:27
Speaker
Tell me a little bit about that. What's your take? So, yeah, I'm hearing it a lot: AI evals, AI evaluations. The way I see it, it's like thinking of acceptance criteria for the feature.
00:49:44
Speaker
So for a pre-AI, let's say regular, feature, your acceptance criteria would be: what happens when I click this button, what happens when I open this pop-up, what is the text I expect this to show.
00:49:56
Speaker
With AI, there's a change, because it's non-deterministic. You don't know what the answer will be. If someone is able to get AI to answer the exact same thing multiple times, let me know. And as a product person, thinking about what success means in this kind of conversational chat interface is harder.
00:50:21
Speaker
And then you could say obvious things like: the chat needs to be accurate, it shouldn't hallucinate. Okay, but that's like saying the software needs no bugs. It says nothing.
00:50:33
Speaker
So evals are something that's gaining traction, and specifically the idea that product managers should write AI evaluations.
00:50:45
Speaker
And what is an AI evaluation? The way I hear about it, at least, and what people usually mean, is something like: you have this prompt, and let's say the prompt is, we are an app that helps you shop for clothing.
00:51:06
Speaker
And with that prompt, when the user asks, do you have size medium in these trousers? then this would be the good response.
00:51:17
Speaker
And let's say the response is: yes, we do have a couple of those, they'll cost five bucks. This is a very simplified version of what people usually mean when they talk about AI evals.
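[Editor's note] The simplified eval just described can be written down as a tiny harness: a case pairs a prompt and question with criteria a good answer should meet. `EvalCase`, `run_model`, and the keyword grading are assumptions for illustration, not a real eval framework; `run_model` returns a canned answer in place of a live LLM call.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    system_prompt: str
    user_message: str
    must_contain: list  # phrases a good answer should include

def run_model(system_prompt: str, user_message: str) -> str:
    # Stand-in for a real LLM call; hardcoded for illustration.
    return "Yes, we have a couple of those in size medium. They cost $5."

def grade(case: EvalCase, answer: str) -> bool:
    # A crude check: did the answer mention everything we expect?
    # Real evals often use an LLM as the grader instead.
    return all(phrase.lower() in answer.lower() for phrase in case.must_contain)

case = EvalCase(
    system_prompt="You are an assistant for a clothing shop.",
    user_message="Do you have size medium in these trousers?",
    must_contain=["medium", "$5"],
)

answer = run_model(case.system_prompt, case.user_message)
print(grade(case, answer))  # True: the canned answer meets the criteria
```

Written this way, the resemblance to classic acceptance criteria is visible: the non-determinism only changes how you check the answer, not the idea of specifying one.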
00:51:29
Speaker
And I have two takes on this. One, I don't think it's different from writing acceptance criteria as we always have, given/when/then: given a user comes to shop, when they click this button, then this happens. I don't think it's really different.
00:51:45
Speaker
So I don't think there's anything new there. I just think it's a very narrow way to look at something broader, which is: how do I measure the success of my AI product?
00:51:56
Speaker
And when that becomes the question, I believe it's not really about writing or not writing evals. It's: how do you know your product is successful? And I think that's critical for the success of your company and the product. It's just different metrics. For example, I want to be able to see traces of what users are asking and when the AI hallucinates.
00:52:17
Speaker
I want to see how many follow-up questions you ask after your first question; maybe that indicates it didn't get it right. Or I want to see how long your chat lasts, because maybe that shows you're engaged.
00:52:31
Speaker
Or, to go deeper, when I'm asked about a specific topic I don't have any knowledge of, I expect the AI to answer that it doesn't have any knowledge of it. So I just think it's really...
00:52:42
Speaker
...kind of what we did so far. We just need to understand that it's non-deterministic, so you need to think about it a bit differently. And the second thing is, I don't think it's relevant to PMs only. I don't think it's the PM's single most important skill. I think it's a product team skill. I think engineers need to understand how to build successful products,
00:53:05
Speaker
pretty much the same as the PM. I see myself as part of the product team: with engineers, with PMs, we need to make sure our product is successful. And how do we do that? By monitoring, by troubleshooting, by analyzing errors.
00:53:23
Speaker
So I think AI evals, as usually written about, are very specific to something that I don't think is a very high-value activity. I think the high value comes from deciding what success looks like and making sure you have a successful product.
00:53:39
Speaker
Sometimes that requires evals; other times it doesn't. So is your idea more that it's not just about the output that shows up on the screen, it's about the overall experience? And I think this notion isn't new.
00:53:57
Speaker
Think of marketing people and bounce rates: I changed my website and more people are bouncing, right? They're not staying there very long. So we have this behavioral observation that reflects whether the change I made to the actual output helped, whether it's better or worse. Okay.
00:54:17
Speaker
I like it. I think that's true. And you have new challenges. Say a new model comes out, or you want to change from Gemini to Anthropic's Sonnet. How do you know your product still succeeds, still works at the same rates as before?
Closing Remarks and Future Content
00:54:34
Speaker
It's a different challenge. In some cases, you don't know. So AI evals try to give you a way to know. But again, these are just more challenges. If you were to migrate from AWS to Azure, how would you know the product stays successful?
00:54:52
Speaker
You'd know, because you just changed your cloud infra. But when you change your AI infra, it could turn your entire product upside down. So I just think you need to acknowledge it, make sense of it, and see what skills you need in this kind of era.
00:55:09
Speaker
But it's definitely a new challenge. And I think it's been there; the technology just behaves a bit differently.
00:55:17
Speaker
Yeah, we've had this challenge with some of our clients already, where as new models come out, a system we built, even a simple RAG-based system, is going to feel different after a model upgrade. And then you have to figure out: is different worse? Is different better?
00:55:36
Speaker
Is different just different? What is it? So, okay. All right, I like it. Well, that was great. Now we transition to the most important part of the show.
00:55:49
Speaker
You know, this was the warm-up just to take your nerves off, so you're not nervous. We did everything we've talked about so far just to get you to calm down for the really important part
00:56:01
Speaker
of our show, which is the lightning round. There are correct answers. This is high stakes. We do have a scoring algorithm for the answers.
00:56:12
Speaker
And it's super, super important. Are you ready for the lightning round? I hope I am. I'll do my best.
00:56:39
Speaker
What was the last Halloween costume you wore? I'm trying to think when I actually wore a Halloween costume. One I remember was skull-like face paint, when we had a family costume.
00:56:58
Speaker
Classic. Very good. Why can't we tickle ourselves? Can't we? The question's written here, so I assume it's true. Yeah, I don't know.
00:57:11
Speaker
Yeah. So, okay, if it's written, I won't challenge it. Yeah, it's on the internet, so it's got to be true, right? Definitely. I think it's because it's funnier when someone else tickles you. If you tickle yourself, you're kind of laughing, but there's no challenge there. It's not as funny as someone else tickling you while you're saying, oh, please stop, and they keep going. Tickling yourself just takes the fun out of it.
00:57:36
Speaker
Right. When you go to the grocery store and get your groceries: paper or plastic? Paper. Paper, 100%. I didn't know if that was a regional thing, if it's different in Israel or not. But what is your ideal outdoor temperature?
00:58:00
Speaker
Hmm. I'm Mediterranean. I like the sea, I like the beach. So I would say I enjoy close to 30 degrees Celsius, which is pretty hot. Yeah.
00:58:15
Speaker
Okay, you like it hot. I like it hot. All right. Usually I'm the one who wants to turn off the air conditioning, and my wife is the one who turns it on, and then I bring the blanket.
00:58:30
Speaker
So I'm the warmer guy. I had to do the calculation, because a lot of our folks are here in the United States and we like to stick with things that nobody else in the world uses, so we use the Fahrenheit scale. That's about 86 degrees for us.
00:58:43
Speaker
That's warm. I mean, I don't mind it. I'm probably more down in the 76-to-78 range, somewhere in there, which is just a few degrees lower.
00:58:54
Speaker
But yeah, I like it. All right. Um...
00:59:01
Speaker
Do you currently own any stuffed animals?
00:59:05
Speaker
So, no, first of all. And I'm vegan, and I know it's not necessarily correlated, but I don't like stuffed animals or zoos or anything like that. I want animals to be in the wild.
00:59:21
Speaker
Yeah. No, I don't have stuffed animals.
00:59:25
Speaker
If you had to choose between climbing a mountain or jumping out of a plane, which would you choose? Climbing a mountain. Yeah? I'm not scared of heights, but I'm definitely scared of jumping out of a plane.
00:59:41
Speaker
Yeah, I don't like that idea either. What is a country that you'd be okay never visiting in your life?
00:59:56
Speaker
Not that you dislike it; just that if you didn't get around to going there, you'd be okay with that.
01:00:03
Speaker
I would say Russia. Russia? Yeah, I think there are some nice views there, but I think it's too cold for me all year round.
01:00:14
Speaker
So I'm kind of okay being at a distance from it. Okay.
01:00:20
Speaker
Okay, since you're vegan, this is a good question for you. Black beans or refried beans in your burrito? Probably black beans. Black beans? Yeah. We would have also accepted refried beans.
01:00:36
Speaker
That was another correct answer to that question. Oh, so that was a tricky question. No, we asked black or refried, you said black, and I said we would have also accepted refried beans.
01:00:55
Speaker
What size of bed do you prefer to sleep on? Well, it depends. Is it only me, or my spouse too? You can answer both ways if you'd like.
01:01:08
Speaker
We'll allow it. Like, if you go to a hotel, what are you asking for? If it were only me, I would probably ask for a queen-size bed. I don't need too much.
01:01:23
Speaker
Sure. But if I'm not alone, then usually I'm the one cuddling on one side and my wife is the one taking the entire bed, so I probably need the king size.
01:01:34
Speaker
Okay, so some things are the same all over the world. They're universal. Yeah, that's my experience as well. I assume so. Yeah.
01:01:45
Speaker
Do you know how to salsa dance?
01:01:50
Speaker
Not really. I think I had a lesson or two, and I've got the one-two-three, one-two-three, but that's probably it. So I would go with no. Okay.
01:02:02
Speaker
All right. And finally, for the lightning round: can you touch your toes without bending your knees?
01:02:11
Speaker
No. I think I could at one point, but I guess I'm just past that age now, so no. Okay. I do see my kids, who sometimes practice yoga, doing these crazy things with their bodies.
01:02:24
Speaker
They're very flexible. That stuff gives me a cramp just watching them do it. Yeah, I know. Kids are very bendy. Okay. Any closing remarks? Anything coming up? Are you speaking anywhere? Any big blog posts you're producing? Anything big coming out from Port that you want to let us know about? Anything like that you'd like to share with us?
01:02:49
Speaker
So, as I mentioned, we started our AI journey a year ago, and later this month we're launching our new version of what we see as the future of developer portals.
01:03:02
Speaker
So definitely stay tuned for that. As for myself, I'm writing a personal blog. Nothing really major coming up, but I like to write about technology, about where AI bluffs me, or about super cool automations I build.
01:03:19
Speaker
So you can definitely find me and my blog at matangr.com. And given we're speaking now, I'll probably have this post coming up in a week or two.
01:03:31
Speaker
All right, fantastic. Yeah, I'll have to check out your blog. Well, this has been great. Thank you very much. I don't know, is this late in the day for you? No, it's afternoon here. Yeah.
01:03:44
Speaker
Well, thank you for joining us from way over yonder on the other side of the world. That's great. We really appreciate you coming on. Thank you so much.
01:03:55
Speaker
Thank you for having me, James. Cheers. If you'd like to get in touch with us here at The Forward Slash, drop us a line at the forward slash at Caliberty.com. See you next time. The Forward Slash podcast is created by Caliberty.
01:04:06
Speaker
Our director is Dylan Quartz, producer Ryan Wilson, with editing by Steve Berardelli. Marketing support comes from Taylor Blessing. I'm your host, James Carman. Thanks for listening.