
#20 - Ben Hoyle - How to integrate AI into your company: Insights from ZEISS

E20 · Adjmal Sarwary Podcast

What does it take to bring generative AI from groundbreaking tech buzz into practical, trusted tools embraced by a 45,000-strong global company? 

In this episode, I’m joined by Ben Hoyle, staff scientist and advanced technologist at ZEISS Group, who shares his journey from computational astrophysics to AI leadership. Ben unpacks the challenging cultural shift required for large enterprises to successfully adopt AI, balancing top-down endorsement with grassroots empowerment.

We explore:

- How Zeiss approaches AI adoption by showcasing real employee use cases to spark curiosity and inspire practical applications.

- The critical importance of building and rebuilding trust in AI within organizations, especially after early data leaks shook confidence.

- Why “just throwing AI” at problems rarely works, and how to know when AI is the right tool versus sticking with proven tech.

- The evolving role of AI as an orchestration layer, enhancing rather than replacing deterministic systems.

- How scientists’ translation skills make complex AI concepts accessible across diverse teams.

- A compelling vision of a future where AI integration reduces screen time and reconnects us with more human experiences.

If you’re interested in understanding not only the tech but the human and organizational dynamics that will shape AI's real-world impact, this episode is for you.

Transcript

Introduction to the Podcast and Guest

00:00:00
Speaker
Hey, what's up everyone? This is Adjmal Sarwary, and welcome to another podcast episode. In this episode, I'm joined by Ben Hoyle, who takes us on a journey from astrophysics to the cutting edge of generative AI at Zeiss.
00:00:11
Speaker
We dive into how big companies are integrating AI into their workflows responsibly, the cultural challenges of adoption, and the fascinating ways this technology is reshaping how people work and solve problems.
00:00:23
Speaker
Ben also shares insights on building trust in AI, inspiring curiosity, and what the future might look like when AI becomes a seamless part of our daily lives. Enjoy.

Podcast Focus and Guest Background

00:00:49
Speaker
Hey everyone, and welcome to another podcast episode. If you're new here, my name's Adjmal. I'm a neuroscientist and entrepreneur. On this podcast, we explore the links between science, technology, business, and the impact they have on all of us.
00:01:04
Speaker
Today, we talk to Ben Hoyle. Ben is a staff scientist and advanced technologist at ZEISS Group, where he leads a team exploring cutting-edge technologies like generative AI, process automation, and homomorphic encryption.
00:01:16
Speaker
With a PhD in computational astrophysics and extensive experience spanning academia and industry, Ben brings a unique blend of deep scientific expertise and practical data science leadership.
00:01:27
Speaker
His background includes pioneering research in cosmology, managing interdisciplinary teams and co-founding a tech company, giving him a well-rounded perspective on how advanced technologies can transform both scientific discovery and business processes.
00:01:42
Speaker
All right, enough background. Let's get into it, shall we?

Transition from Astrophysics to AI

00:01:47
Speaker
Well, welcome everyone to another episode. I am very excited to talk today to Ben.
00:01:54
Speaker
Ben, it's great to have you. I'm very thankful to my good friend Sven Spöte, who is now your colleague and introduced us. We will talk about... I don't even know where to start.
00:02:09
Speaker
It will go from the science of astrophysics, and maybe a little cosmology, over to what you guys are doing at ZEISS. That alone would already be five hours long if we went through all of it.
00:02:23
Speaker
So we will just scratch the surface here and there. The main focus will be generative AI and its integration at ZEISS and at companies in general: the impact it's going to have and the impact it already has.

Public Understanding and Cultural Challenges of AI

00:02:39
Speaker
And I notice sometimes that those of us working so much with this are stuck in a little bubble. When I start to talk to people, let's say people in the actual real world, I sometimes get the feeling: okay, we are very far off.
00:02:59
Speaker
We need to bring this back to reality somehow, to see how it can actually help people who don't know how model training works, who don't know any of this. They just want to know: I do one thing, so can this tool actually do it better for me now or not?
00:03:18
Speaker
So let's start very simple. I would like to ask you to share a little of your personal journey into AI and generative AI.
00:03:30
Speaker
How did you start from your scientific background, and how did you get to where you are now? Yeah, sure thing. First of all, thanks a lot, it's really nice to meet you, and thanks for the opportunity to come on here and talk about my journey from astrophysics and cosmology through AI into generative AI, a lot of that time being at Zeiss, and about the learnings we've been collecting on how to bring this technology into the company in a nice way and how to use it.
00:04:07
Speaker
And you actually mentioned a really nice point first, which is that we're in a little technology bubble where we have some idea of what's happening and what's going on.
00:04:19
Speaker
And it's very easy for people who are not in this bubble, so let's say most people, to have been completely unaware of this technology, at least a few years ago, and to have had no idea about anything AI related. It's just a product of our backgrounds that we have the luxury of this type of experience. But what I definitely see is that these conversations center on generative AI. Let's just use the word ChatGPT, because when people who are not experts in the field talk about generative AI, they typically say ChatGPT, and that's a good enough use of the word for me.
00:05:09
Speaker
More and more people are starting to interact with ChatGPT or to see it being used in their day-to-day lives.
00:05:21
Speaker
This even goes so far as... I spent a month last winter on vacation in Fuerteventura playing beach volleyball.
00:05:33
Speaker
I was in a restaurant talking with some beach volleyball friends about generative AI, and some guy at the table just next to us chipped in and said, oh yeah, I know about ChatGPT, and he started telling us his experience with it.
00:05:49
Speaker
And I really think you start to find these experiences happening more and more often, now that a much broader range of the population has some experience using ChatGPT or a chatbot of some form.
00:06:11
Speaker
Whilst a couple of years ago it was really a domain of those in the know in a small bubble versus the wider population who weren't, I really feel that bubble has somewhat been broken, and at least some exposure to this technology is quite mainstream now; people know about this tech.

Encouraging AI Exploration in Companies

00:06:36
Speaker
For sure, for sure. And I notice this as well, and you can tell me if it happens to you too. I'm asked about it all the time by people I would never have expected, and they ask me about uses I had never thought of, which is what startles me the most, and not in a negative way. It's just like, oh,
00:07:03
Speaker
that's interesting. It's really impressive, huh? Yes, how creative humans are, and how they think about solving their own problems using this technology, or whether this technology can be used to solve their problems.
00:07:19
Speaker
Yes, completely the same. Every conversation I have with anybody I meet seems to be all about generative AI, and particularly when people figure out that I have some experience and knowledge in this field, conversations very often migrate in this direction.
00:07:39
Speaker
Of course, I enjoy it because it's really fun to have these dialogues. I do fear the people around me get a bit bored listening to me talk about generative AI, so I try to stop myself, make sure people aren't bored of me talking, and try not to monologue too much.
00:07:59
Speaker
But what I really love is that people are so creative in how they want to use this technology. And exactly as you say, we often don't know what problems people are facing and what their difficulties are going through life, whatever they're doing.
00:08:18
Speaker
So it's very difficult for us, from the outside, to think of a way they could be using this tool to support them, because we don't really know what their problem set is.
00:08:32
Speaker
And this maps directly into industry and business as well. Often I give a presentation to some part of the company and they ask, how can we use generative AI to help us in our tasks?
00:08:45
Speaker
And my first question is: what are your tasks? I don't know what your tasks are, just as I don't know what day-to-day issues some member of the public wants to tackle.
00:08:58
Speaker
I don't even know some parts of the company. In fact, with a company like Carl Zeiss with 45,000 employees, for the majority of the company I have really no handle on what topics these people are trying to address.
00:09:13
Speaker
So I'm not the right person to tell them how they could be using generative AI to solve their problems. I would say I'm the right person to show them what generative AI can do, to open the door for them to think for themselves about how it can help solve their problems, and then to act as a sounding board or some kind of internal consultant to help them see whether this is a good use of the technology.
00:09:47
Speaker
I totally agree with what you say. I mean, it's called GPT for a reason, right? A generative pre-trained transformer, and the emphasis is on how general it is: you can do a lot of things with it.
00:10:03
Speaker
It's not: you do A and B and you get C. No, you can do much, much more with it. But what I sometimes notice is that when I ask exactly the question you mentioned, instead of saying, okay, this is what you can do with it, I ask, okay, what are you doing in your day-to-day?
00:10:26
Speaker
And I can see the frustration in people's eyes, more like, hey, you're supposed to help me, right? It kind of becomes a catch-22. So where do you start?
00:10:41
Speaker
And I wonder, how do you do that with 45,000 colleagues who all have different ranges of expertise, different departments, different tasks, different use cases?
00:11:02
Speaker
How do you go about that in a good way? Yes, that's a really nice question. What we've done is take people, or use cases, where people have been using generative AI to help solve a task in some way.
00:11:28
Speaker
And then we sort of use them as a figurehead. We maybe make a little video, a little podcast, maybe even something like this, where we ask them about their experience: why they did what they did, how they did it, and which problem it solved.
00:11:46
Speaker
Then we make a little collection of showcases and publish it internally, so that anyone in the company can come along and see how other employees are using it to solve tasks that are important to them.
00:12:04
Speaker
That starts the ideation process off. They see that it's not only me or a few others just using it

Responsible AI Introduction and Compliance

00:12:14
Speaker
in some demonstration, but they really get to see: ah, this person who's also in my business unit, or who has a similar job to mine, used it for this task. Maybe I could also use it for that task.
00:12:27
Speaker
So I think it's really this translation: showcasing problems that people know they need to solve and can solve, and bringing that problem and solution space closer to the user, or to the target person,
00:12:52
Speaker
so it becomes more accessible, something they can feel somehow and get into their hands. If I give a demonstration about building a chatbot on some document base, that's very far away from how they think they need to use the tool.
00:13:13
Speaker
But if they see someone in a similar business department with a similar function taking requirements documentation for how to build some new instrument, building a chatbot on that requirements documentation, and then interacting with it,
00:13:31
Speaker
and they themselves are, say, a requirements engineer, now they can actually see how this chatbot or document retrieval technology can be useful for them, which they maybe couldn't see when I was just giving them a general example of, oh, you can do document search or knowledge retrieval.
00:13:52
Speaker
Oh, I have a chatbot interface to some knowledge retrieval source: that was too big a gap, or a chasm, for them to span.
00:14:05
Speaker
By bringing problems and solutions closer to their problem space, that can start to help them. So I think that's a nice way of trying to tackle this problem.
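To make the chatbot-on-requirements idea above concrete, here is a minimal sketch of the retrieval half of such a system: rank a few toy requirement snippets against a question and assemble them into a prompt for a language model. The requirement texts, IDs, and the final model hand-off are hypothetical illustrations, not Zeiss's actual setup, and a production system would typically use embeddings and a vector store rather than TF-IDF.

```python
# Minimal sketch: retrieve relevant requirement snippets, then build an LLM prompt.
# The documents below are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "REQ-001": "The instrument shall power on within 5 seconds of pressing the start button.",
    "REQ-002": "The optical assembly must be calibrated before each measurement session.",
    "REQ-003": "All measurement data shall be stored encrypted at rest.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs.values())

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank requirement snippets by lexical similarity to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    ranked = sorted(zip(docs.keys(), docs.values(), scores), key=lambda x: -x[2])
    return [f"{rid}: {text}" for rid, text, _ in ranked[:top_k]]

def build_prompt(question: str) -> str:
    """Assemble retrieved context plus the question into a prompt for a chat model."""
    context = "\n".join(retrieve(question))
    return f"Answer using only these requirements:\n{context}\n\nQuestion: {question}"

# The resulting prompt would be sent to whichever internally hosted chat model is available.
print(build_prompt("How quickly does the device need to start up?"))
```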
00:14:19
Speaker
And then training. Training is super important, so we provide trainings for all employees, and we have internal training platforms.
00:14:33
Speaker
And in these platforms we often highlight: hey, have you done this generative AI training? This might be something you find useful. So we have some type of recommender system within our training platform that suggests, maybe you want to do this training because it could be useful for you. That's cool, yeah. So we're trying to bring the tools to people and give them hands-on experience with these tools.
00:15:04
Speaker
That's how we're trying to solve this problem. But with 45,000 employees now, it's of course still a big task. It's a huge task.
00:15:16
Speaker
And I think it's super interesting what you said about showing some example cases so people can see for themselves: okay, maybe this helps me.
00:15:30
Speaker
I see it almost like an inspiration trigger: oh, I never thought about using it that way. And that happens to me almost every day now.
00:15:45
Speaker
Your mental toolkit of what you can use it for just keeps expanding, and of course you get some cross-pollination of ideas for yourself.
00:15:56
Speaker
And then you try that out. So I think that's super interesting. Coming back to how I see people use it in their day-to-day, they tell me about their examples and I think, oh, that's actually pretty cool.
00:16:10
Speaker
And then I start to use that as inspiration, or as the basis, to do something I want to do. For example, I've seen people, and I think it's more commonplace now, doing a lot of renovation work at home; they just take a picture of their living room and say, hey, change the color of the walls.
00:16:32
Speaker
I would never have thought of doing it that way; it just never crossed my mind. I'm not that much into renovating anyway, but still, I thought, hey, that's pretty cool.
00:16:44
Speaker
Yeah, totally agree. Just the broad use. I mean, as you said, these are generative pre-trained transformers, and it's really how general they are that is so nice: such a broad number of possible use cases, of which we've just been scratching the surface so far.
00:17:04
Speaker
But I really think it's exactly because, say with decoration, I'm not a great decorator myself, we don't really know what the problems are in that space.
00:17:18
Speaker
So why would we ever think you could use it to repaint your wall, to give you a rendering of what your wall would look like in a new color? But of course, for an expert in that field, that's the first thing they would think of: hey, maybe I can use this to redesign a studio. And then someone else says, that's a great idea, I'll build an app from it, and suddenly you have an app on your phone and you can do it.
00:17:41
Speaker
So it's really about asking what types of problem spaces you face, and then figuring out whether generative AI is a good solution for them.
00:17:54
Speaker
Yeah, exactly. And I think that's super important so you can go from inspiration to training. I have to speak for myself now, I don't know about you, but we as scientists are always a little closer to the abstract way of thinking about things. So when we talk about knowledge retrieval and so on, I know exactly what you mean and what I can do with it.
00:18:18
Speaker
But someone who doesn't have this abstract background and way of approaching things will say: what retrieval? Can I now talk to my documents or not?
00:18:29
Speaker
Yeah, they pick up a book and start talking to their book or something. Of course, it's not as extreme as that, but that's definitely an issue. It's terminology, right? Translating terms into vocabulary people are already familiar with.
00:18:48
Speaker
Yeah. So this process you described, the inspiration and then the training, because you have 45,000 colleagues, sounds to me like a massive undertaking of trying to shift, or introduce, a way of thinking about this into the culture at Zeiss.
00:19:21
Speaker
But that's easier said than done, I guess, right? Yeah, absolutely. We have a whole bunch of ideas on how to tackle this, but it's a big topic: how do we change people's mindsets, make them feel comfortable using this technology, and actually get them thinking about using it?
00:19:50
Speaker
And of course, different companies have different cultures. You would think that smaller startup companies are much more dynamic and ready to try new things.
00:20:05
Speaker
And many small startups do, of course, jump on this AI bandwagon or train, while bigger companies that have been around for a long time, Zeiss has been around for 180 years or something, are culturally a lot more hesitant to immediately jump on new technology or new ideas.
00:20:36
Speaker
Now, that used to be fine. Whilst technology has been accelerating for hundreds of years, the time steps between one hit of technology and the next were always very large. So even if you were a big company and you moved a bit slowly, that was fine.
00:21:03
Speaker
It's really just now, since 2010 or 2015, and definitely with generative AI since say 2021 or 2022, that this acceleration in technology is becoming more and more apparent.
00:21:21
Speaker
And then you really see that it's very important that even bigger, older, more established companies go through some type of cultural shift so they can adopt these newer technologies on a faster timescale than before.

Building AI Trust and Overcoming Past Failures

00:21:43
Speaker
Right, right. Sorry, I didn't mean to interrupt. No, I was going to elaborate on a couple of ways we try to do this. That would have been my follow-up question. Okay.
00:21:56
Speaker
So what we found... it's very much trial and error. This technology is very new for humanity and for the planet, and no one knows the right way to work with it, to bring it into companies, or how to be using this tech.
00:22:13
Speaker
And the tech changes constantly, from month to month, so the landscape of this technology also changes, which compounds the problem even more.
00:22:26
Speaker
So given that no one really knows the right way to use the technology, there are lots of trials, not only within our company but of course across all other companies, trying to figure out how we can motivate our employees to use this tech and how we can bring it into the culture in some way.
00:22:49
Speaker
And what we've found works very nicely, especially for a big company like Zeiss, is, first of all, to convince your board of directors that this is a technology you need to be ready for and need to prepare the workforce for.
00:23:11
Speaker
So you have this top-down pressure, and that of course is great for changing culture. And at the same time, at the grassroots, so at the lower levels of entry-level workers, including developers and back-office staff,
00:23:30
Speaker
we also try to give people access to tools and build communities where they see these tools being used and are able to interact with other people about the use of these tools.
00:23:43
Speaker
So we try to go from the grassroots level up and from the board level down, with the hope that we catch people either as we're growing up or as we're trickling down.
00:23:57
Speaker
So first of all, that's important from a culture perspective, having these structures in place. But then what we also found to be a fast track to this
00:24:09
Speaker
is to get the highest-ranking member of the company you can find, have them sit down and start using whatever generative AI system you've built, and have them do a live demo showing how he or she is using this technology right now, for some use case, and how it's helping them in their work.
00:24:35
Speaker
The instant that happens, all of the colleagues or subordinates think: hey, if the boss of the boss of the boss is using this technology, then maybe it's okay if I use it too.
00:24:48
Speaker
And so they start to feel comfortable that they're even allowed to start using this technology. Right, so it's being legitimized not by stating a policy but by showing it: look, I'm using this too, it's fine.
00:25:03
Speaker
Exactly. And it's some ranking manager showing this. Of course, the higher up the food chain you go, the bigger the impact, through the number of people reporting directly or indirectly to that person. So you get a bigger base of users or employees you can impress and show: hey, look, if this person can use the tech, then for sure you can as well.
00:25:29
Speaker
Yeah, for sure. And I think what you said is super important: it needs to be sanctioned by the top of the company, so it's made clear: look, it is fine that you use it.
00:25:44
Speaker
You are encouraged to use it. You don't have to, but it's fine if you do. Because I've seen the opposite actually happening in some instances, where
00:25:58
Speaker
there is no clear policy. People don't really talk about it openly, but kind of everybody knows that everyone is using it, which is leading to...
00:26:14
Speaker
well, I should say rather mixed results, as there is no helping hand either. It's almost like a black market kind of situation. Absolutely, it sounds like it. It sounds like a pretty terrible position to be in. And it's not a good position. And some of these companies,
00:26:35
Speaker
they told me: look, we know it's happening and we know it shouldn't be happening, specifically for data privacy reasons, because of the type of data being placed into, let's say, the free version of ChatGPT and things like that, because it's sensitive data.
00:26:56
Speaker
So they came to us and literally said: look, we need a type of system that uses LLMs and all of that, but it needs to be internal to our company, so that we can actually give something to our employees instead of keeping it in this under-the-table, hidden use, which is bad.
00:27:20
Speaker
Nobody will acknowledge it's happening because they're not supposed to, yet everyone knows it's happening. And that, I think, is super dangerous, not only for the data but for the culture of the company as a whole.
00:27:34
Speaker
It sounds like it could definitely lead to some toxic situations, where you have employees doing work at different speeds because one is using generative AI and the other is not. Some results are created faster, or one person just seems to have more free time than another. So I can definitely see how that could lead to some tricky situations in the workplace.
00:28:05
Speaker
I think this is maybe where a big established company can actually tackle this problem a bit more easily than, say, a mid-sized company.
00:28:22
Speaker
And that's because bigger companies like Zeiss already have departments who care about information security, data privacy, computing infrastructure, cloud infrastructure, AI development.
00:28:41
Speaker
So given that we already have these teams in place, and even all the computers we use are managed by the company, it's actually maybe easier for a bigger company to say: okay, we don't think anyone should use this tech, so we can turn it off for everybody.
00:29:02
Speaker
Of course, that's not optimal, but it can be done at a computer level so people can't even access the websites. That's not my recommendation for a solution, but it could be one.
00:29:17
Speaker
Or you have these institutions in place to help you figure out together: okay, how can we build a solution that works, that we can give to our employees, and that is compliant?
00:29:31
Speaker
And this is exactly what happened at Zeiss. We had this massive cross-functional effort. It was exactly around the time of the data leak from Samsung, I guess, that you were referring to, where companies were putting secrets into the open version of ChatGPT, which at that time was actually using human input for training.
00:29:50
Speaker
And so other people could get access to the kind of secrets being put in. This is when we had this block in the company saying: we're blocking external access to generative AI.
00:30:07
Speaker
Before that occurred, of course, myself and other teams in the company were already playing around with this technology and we already had showcases. We were already working closely with Microsoft, understanding use cases and how we could apply this tech.
00:30:26
Speaker
And we'd already been trying to deliver this on a smaller scale to parts of the company. Of course, when this hard block came out, we got a heap of attention from the board and also from the grassroots level. Everyone who'd been using this tech was suddenly told they were not allowed to use it, and they were then maybe even scared to use it.
00:30:54
Speaker
So we had massive pressure from both above and below to bring some solution to the company and give people access to this tech. And so we had this big cross-functional team caring about all the topics one should care about, as I mentioned: information security, cloud computing, data privacy, compliance, legal.
00:31:19
Speaker
That way we could bring this technology to our employees in a way that evens the playing field. Now everybody in the company has access to this tech at one level or another, and everybody has access to trainings on how to use it.
00:31:35
Speaker
And I would say that if you're a smaller company, this could still be difficult, because maybe you're big enough that you don't want to be putting your secrets into some public forum.
00:31:52
Speaker
Maybe smaller startups don't really care about that so much; they probably don't have a legal team, so they just go ahead, because in a startup you need to do what you need to do to get the job done. That type of mentality. And a mid-sized company, I guess, is really where you have trouble, because
00:32:14
Speaker
you know you're not allowed to just get the job done irrespective of what it takes, but you don't have all of the systems and the different teams in place to support you in bringing this tech into the company.
00:32:26
Speaker
So I can definitely see how that can lead to these issues. I think it's super interesting that you at Zeiss already thought about this from various angles.
00:32:43
Speaker
But I think many companies at the moment are either overwhelmed by the hype, which is massive, or they flip to the other side and are afraid of the risks, the leaks for example, or whatever potential issues can happen down the road, whichever those are, as with any new technology. We don't know, and that's often an aspect. What would be your advice to the people in charge of those companies?
00:33:19
Speaker
How should they approach the use or introduction of generative AI in a responsible way, based on the experience you've already had?
00:33:32
Speaker
Yeah, this actually touches on a really tough topic, and I think it goes in the direction of trust. How do you trust a system?
00:33:45
Speaker
How do you build trust in a system? How do you lose trust in a system and then regain it? Imagine a brand new company starts today; it just
00:34:00
Speaker
comes into existence, quantum mechanically generated, it just appears. And it knows nothing about the data breaches that happened a couple of years ago, when people were putting secrets into public forums or into OpenAI.
00:34:20
Speaker
Now, this company just exists right now, and if they were to look at the world around them, they would actually see: hey, there is no problem sending data to OpenAI, because they've now closed all of these issues with people learning from other people's data. You can buy plans, company plans, where they ensure data security.
00:34:44
Speaker
To what level that's good enough for you, you decide as a company. But there are now many providers who give you very safe solutions
00:34:56
Speaker
which you can trust and start using. So this company that just came into existence, if they looked at the playing field, they'd say: oh, sure, we can start using generative AI, because it's safe and we can figure out how to use it in a compliant way.
00:35:12
Speaker
But it's the companies that existed circa five years ago and still exist today, who have been on this journey and have read about these data breaches, that are now scared of putting any secrets into any large language model, because they think that information will leak out somewhere.
00:35:35
Speaker
So they had some trust in these systems way back when, that trust was broken when the data leaks occurred, and now how do we have them regain trust in this technology?
00:35:50
Speaker
And I think that's a really difficult topic, which we're also facing in the company, but in a slightly different flavor: how do we have people regain trust in a technology they've used and maybe didn't get good results from?
00:36:10
Speaker
Maybe people had a chatbot interface they were playing around with on GPT-3.5 or maybe GPT-4, and the results weren't very good, and they said: well, I've used that tech and it was useless.
00:36:23
Speaker
I never need to think about using that tech again, because it doesn't work; I used it already, I've used generative AI. Right. And of course, what many of these people don't understand is that this technology has developed so much that the generative AI we have today is so different from the generative AI we had then, in terms of capabilities, language, nuanced understanding, what types of tools it can call, and how it can do web searches to get up-to-date information.
00:36:57
Speaker
All this type of stuff where they're maybe not technical enough, or not interested enough, to be in the details. So they don't understand that the generative AI we have today is very different from the generative AI we had a couple of years ago, even a year ago.
00:37:15
Speaker
And so now I sort of see it as my job to try and build trust again, to have them try this technology again, because it is so different.
00:37:31
Speaker
And in a year's time it will probably be very different again. So how do we keep having them try this technology and stay open to the idea of building trust in it?
00:37:43
Speaker
I don't have any answers for that, but it's a topic I've been thinking about a lot. There are examples of systems not working well in the past. Think about cars. Way back when the first cars came out, they were unreliable; they would break down. Anyone over the age of 40, I'm sure, is used to their cars breaking down all the time, right?
00:38:11
Speaker
But now, a new car basically never breaks down; I've never been in a new car that's broken down. However, we're talking about a 40-year time gap for me to get re-exposed to this technology and slowly start to understand: hey, you can get in the car, and yes, the electric windows do work if it gets wet,
00:38:33
Speaker
and the car will start if it's cold. So I've slowly, over the last 40 years, been building up trust again in my car technology. Now I trust cars.
00:38:45
Speaker
How do we take that learning I had over 40 years, of how I rebuilt trust in a system, and apply it on a one-year time window to generative AI, for employees who've lost trust?
00:38:58
Speaker
How

Identifying and Integrating AI Use Cases

00:38:59
Speaker
do we do that? And that's not actually clear to me right now. Yeah, that's a massive undertaking, especially when you try to cram 40 years into one year.
00:39:12
Speaker
It's difficult, for sure. And there's another factor that comes in here, and I'd like to hear your take on whether it's beneficial or maybe even detrimental, which is this:
00:39:28
Speaker
if we look at all this hype, you have the people who might have tried it once and decided, well, the output was garbage, this technology sucks, no need to try it again. Correct. Then they hear all around them constantly: oh my God, do this with AI, it's the best thing ever. Yeah.
00:39:44
Speaker
The best thing ever, this and that. At the beginning they most likely think, eh, it's just talk, it's just marketing, whatever. And maybe after three months they might wonder:
00:39:58
Speaker
maybe I'm missing out on something. But this is not really regaining trust; this is FOMO kicking in, you know? Right. And so how do you then spark curiosity in people?
00:40:13
Speaker
I think maybe that's the key, right? How do you get people curious again? And maybe it's just swimming in a sea of information that tells you how great this generative AI technology is.
00:40:25
Speaker
I won't use the word hype, because I don't actually think it is hype, but I can definitely understand how one could use the word. People are now completely bombarded with these messages: generative AI is the next big thing. Hey, it can solve math problems.
00:40:44
Speaker
Hey, it can write books better than an author, or it can write books for authors or support them in writing books, or it can help write computer code, or write computer code for you.
00:41:00
Speaker
So I guess if they're bombarded with these messages, maybe that plants the seed of curiosity. But I do wonder. I hope so too. And I mean, it will happen. I cannot believe that in 10 years we'll be sat here having a conversation and there are people out there who don't use generative AI on a day-to-day basis, like not for everything.
00:41:25
Speaker
Because I think it will just be so embedded in our systems and our technology that it'll be impossible not to be faced with it, not to be interacting with this tech,
00:41:38
Speaker
just like it's impossible to get into a car that doesn't have electric windows. Basically, unless you buy some old-timer, every car you buy now will have electric windows. So in a sense we've been forced to use cars with electric windows, and so we're forced to regain trust in these cars. And yeah, I hope it's going to spark curiosity.
00:42:08
Speaker
Yeah, exactly. I think that's a nice take. So maybe we need to invest a bit of time thinking: okay, how do we spark curiosity in people? And maybe that's a nice way to get them excited again about trialing this tech.
00:42:24
Speaker
Yeah. And the reason I'm asking is because I hear a lot of CEOs of companies who are really flip-flopping back and forth.
00:42:38
Speaker
Either they're completely into it, but don't really use it in a productive way for their company, even though they're an advocate for it.
00:42:50
Speaker
Then you have the other side, which is completely against it. And then you have the ones somewhere in the middle who are so afraid they have to do something because they think they're going to miss the moving train
00:43:02
Speaker
that just keeps on speeding up and speeding up. I would say none of these three scenarios really comes from the curiosity side.
00:43:17
Speaker
It's all sort of... yeah, fear of missing out on this technology, of losing the direction of your company because you've not jumped on this technology early enough.
00:43:30
Speaker
Or fear that if you jump on the technology and it is all hype and doesn't produce anything, then you've invested a bunch of money in a direction that isn't taking you anywhere, and that can have detrimental effects on the company moving forward because that money hasn't been spent elsewhere.
00:43:47
Speaker
Yeah. Just to take a quick side quest, maybe it's a right or left turn, I don't know. In another interview, you said sometimes you shouldn't do it.
00:44:07
Speaker
Sometimes maybe you shouldn't do it when it comes to generative AI; maybe it just isn't the right fit. Do you have some examples of when that's the case? I'm sure there were some in your past where you saw people just throwing generative AI at a problem and you thought, well, that's not really useful.
00:44:29
Speaker
Yeah, absolutely. And let's also try to bring it back towards the CEO topic, because I do think that's an interesting direction to take the conversation.
00:44:40
Speaker
With any new technology, people start to use it in ways you don't expect, and maybe even in ways it's not fit for purpose. And I would say that without really understanding the technology, it's difficult to know whether your use case is fit for purpose. Let's take generative AI as an example: it's a black box.
00:45:17
Speaker
People don't really understand what's going on inside the black box, so they think: hey, I can use it for any problem, because I don't really know what's going on, so I'll just try to solve any problem with it.
00:45:31
Speaker
And the times when I would say that's a bad idea are when there is already an existing solution. Well, let's say this: you also mentioned technology changing quickly over time, so let's take this conversation as if we were back a couple of years ago, because my answer changes as a function of time.
00:45:54
Speaker
A couple of years ago, the Ben from 2022 or 2023 would have said: yes, there are definitely times when you don't want to use generative AI. For example, I've seen people give a big Excel file of data to a large language model and say, hey, sum up these rows and these columns, then build me a report based on what you find there, and do some cost analysis.
00:46:21
Speaker
At the time, that was a bad use of the technology, because there were, and are, existing tools: you can do exactly the same thing in Excel with an Excel function that adds up a bunch of rows, and that functionality in Excel is, whatever, 30 years old or something; I don't know the exact number off the top of my head. These calculations you can perform in Excel are written with functions that are very well tested.
00:46:58
Speaker
We know exactly how they behave. So if you can do a task in Excel using Excel functions, jumping to generative AI to solve the same task is actually a bad use of that technology.
00:47:14
Speaker
And by this I mean that two years ago we would just throw everything at generative AI, a large language model, and say: do this problem for me. If there are existing solutions that are tried and tested and efficient, then, hell, use them.
00:47:31
Speaker
We don't need to use generative AI for every task we encounter just because it's the new kid on the block. There are plenty of other tools out there, including other types of machine learning. Some problems are very well suited to standard machine learning, a bunch of decision trees or random forests, or to deep learning, say classifying an image of a cat or a dog.
00:48:00
Speaker
There are many tasks where you don't need the beast, the sledgehammer, that is a generative AI large language model,
00:48:15
Speaker
where maybe even today it's not the best tool for the job, and you can use some other existing technology to solve the problem. So don't use it to solve problems you can already solve with existing technologies; that would be my takeaway advice for when not to use generative AI.
00:48:37
Speaker
But like I said, that was all two years ago. Right now, today, we can use generative AI very well. We can just ask it: can you solve this task for us?
00:48:49
Speaker
And it can give us a very good idea of whether that task is well suited for a large language model to solve, or whether it would rather write a Python function to do some data analysis, or whether it suggests, hey, you can do this in Excel, so maybe you should use Excel.
00:49:09
Speaker
Or if you've got Microsoft Copilot, you've got access to generative AI, large language models, within Copilot inside Excel. You can just open up Copilot and ask it to do some calculations for you. There you're using the brains of generative AI, but embedded in a tool you're already using, embedded in your normal workflow, drawing on the power of the tool it's embedded in. So it can draw on the power of Excel functionality,
00:49:41
Speaker
and of course nowadays it can also write and execute Python functions within Excel. So now I think it's a completely fine idea to first ask generative AI the question: can I use you for this?
00:50:00
Speaker
Are you a good solution to this problem? Then use it as a sparring partner or a co-pilot to help you understand what a good solution for this problem is. And then, of course, you can have it write an Excel function for you to perform the tasks you want performed.
00:50:21
Speaker
You can give it a screenshot of the Excel file now, and it can come up with a function for you that you can then implement. So that's muddying the lines a bit between how people were using the technology circa two years ago, how people are using it now, and indeed what the technology is capable of.
00:50:45
Speaker
A couple of years ago, this concept of tool calling just didn't exist. Large language models simply couldn't call a Python function; they couldn't write a Python function and call it or execute it on a dataset.
00:50:59
Speaker
And now, of course, that's very standard technology; most big chatbot providers or large generative AI providers have this built into their chatbot interfaces.
00:51:14
Speaker
Yeah, I think that's very interesting. From my perspective, most of the time, if there is something very deterministic that can be done, there is no need to use generative AI for it, especially if you want the thing that's supposed to be deterministic to always spit out the same result with 100% certainty; then you shouldn't use generative AI for it.
00:51:41
Speaker
But then, exactly as you said, the development went towards: okay, how about we use generative AI as the translation layer to access these deterministic aspects, either by writing those functions, which will then be deterministic, or by calling deterministic tools in the first place, which...
00:52:04
Speaker
skyrocketed the functionality and also the reliability of the output. Absolutely. Basically you're using large language models as an orchestrator, or as the brains of orchestration software, to figure out at what time which tool should be called or which process should be handed over to.
00:52:26
Speaker
So using it as an orchestrator is a very nice use case right now. There are nice tools like n8n to help you do this. OpenAI just released a great orchestrator built into their ChatGPT services; not within Europe yet, but I'm sure it'll come soon, or actually, no, I think it is available within Europe
00:52:49
Speaker
already. Yeah, I think so; I think I used it the other day. Google also has a very nice solution for this. So building these complex workflows, with large language models as the brains at each decision node, deciding when I should be doing what and which tools I should be calling, is also a very good use case.
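As a rough illustration of the orchestration pattern discussed here, the sketch below uses a language model only as the decision-making layer that routes a request to deterministic tools. The tool functions and the `ask_llm` stub are hypothetical placeholders, not any specific product's API; a real workflow would swap in an actual model call and its own tool set.

```python
# Minimal sketch of "LLM as orchestrator": the model picks a tool, deterministic code does the work.
import json

def sum_columns(rows: list[list[float]]) -> list[float]:
    """Deterministic tool: column-wise sums, the kind of task Excel already handles."""
    return [sum(col) for col in zip(*rows)]

def word_count(text: str) -> int:
    """Deterministic tool: trivial text statistic."""
    return len(text.split())

TOOLS = {"sum_columns": sum_columns, "word_count": word_count}

def ask_llm(prompt: str) -> str:
    # Placeholder: in a real system this would call your chat model and ask it to
    # return JSON naming one registered tool and its arguments.
    return json.dumps({"tool": "sum_columns", "arguments": {"rows": [[1, 2], [3, 4]]}})

def orchestrate(user_request: str):
    """The LLM acts as the decision node; the selected tool runs deterministically."""
    decision = json.loads(ask_llm(
        f"Request: {user_request}\nAvailable tools: {list(TOOLS)}\n"
        'Reply with JSON: {"tool": "...", "arguments": {...}}'
    ))
    return TOOLS[decision["tool"]](**decision["arguments"])

if __name__ == "__main__":
    print(orchestrate("Add up these columns for me"))  # -> [4, 6] with the stubbed decision above
```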
00:53:15
Speaker
Exactly. I would say there's often lots of discussion about how large language models are not deterministic, and I would actually counter that: they are deterministic, because it is just a computer algorithm running on a computer.
00:53:32
Speaker
But it's whether or not you have access to the random seed, the seed of the random number generator that is drawing from these probability distributions over what the right next word is to complete the sentence.
00:53:46
Speaker
And OpenAI actually released this capability for their models, so you can set the random seed.
00:53:56
Speaker
Now, it's not completely clear that you always get the same answer, because your API call, your request to these models, might end up going to a different server, in which case, even with the same random seed, you might get a different answer.
00:54:11
Speaker
But it doesn't mean these things are not deterministic. It just means that it's so complex that it appears non-deterministic. For example, we could take a large language model we can download and host ourselves, say one from DeepSeek, host it on our own machines, absolutely set the seed of the random number generator, and then we will absolutely get deterministic results out.
00:54:35
Speaker
Yes, but it basically becomes so complex that it's probably not really an interesting conversation to be having anymore. But you're right: yes, they are deterministic, given all those variables are controlled very strictly. Exactly, given you can juggle a million balls in your hand and do everything perfectly. Then yes, for the end user it will not seem deterministic. Absolutely right.
00:55:07
Speaker
Absolutely right. Well, it didn't answer the question in exactly the same way as when I asked it before. And this is also probably because the majority of the population never really...
00:55:25
Speaker
ever had to deal with the concept of pseudo-random before. Probably most people have never run a random number generator on a computer and seen that if you put in a random seed and click go ten times, you always get the same flow of numbers out. So most people never knew the concept of pseudo-random as opposed to actual random.
00:55:52
Speaker
So if they do something more than once and get different answers, they conclude it's absolutely random, that there's no way it could be a deterministic system. Right. So I guess it's just one's background, what one is used to dealing with. But yeah, to the end user it absolutely appears random, completely agree. And it gets even more complex, because of course now they say, well, I asked this question before and it's answering me differently now.
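The pseudo-random point made here is easy to demonstrate with a few lines using Python's standard random module; this is just a generic illustration of seeded generators, not tied to any particular LLM stack.

```python
import random

def sample_numbers(seed: int, n: int = 5) -> list[int]:
    """Re-seeding the generator makes the 'random' stream fully reproducible."""
    rng = random.Random(seed)          # pseudo-random generator with a fixed seed
    return [rng.randint(0, 99) for _ in range(n)]

print(sample_numbers(42))   # same seed -> identical sequence on every run
print(sample_numbers(42))   # ...so the output only *looks* random
print(sample_numbers(7))    # different seed -> a different, but equally reproducible, stream
```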
00:56:21
Speaker
And it's like, well, yes, because of your context. Exactly right, that's also a factor: the sampling changes when you've put in the previous context from what you were just saying.
00:56:32
Speaker
Basically, it's a different setup. And then: but I gave the question to Grok, and I gave it to Gemini, and I gave it to ChatGPT, and they all gave me different answers.
00:56:46
Speaker
Therefore, it's non-deterministic. Or you did that, and two months later you did it again and you get different answers. That's because the models change, the architectures change.
00:56:58
Speaker
They're just a completely different family of models. So yeah, it's as good as non-deterministic. That's definitely true. And I think now we're exactly at this fine line of you having a lot of deep technical knowledge about this topic,
00:57:22
Speaker
...and, at the same time, of course, trying to foster and move this cultural transformation forward.
00:57:33
Speaker
I mean, how do you balance these two things? Because at ZEISS, the range of your colleagues is just as diverse, right?
00:57:44
Speaker
So how do you, and I'm wondering, is your science background helpful here, or does it make you fall into these patterns of trying to explain things in a very abstract way?
00:58:03
Speaker
I'm curious. Yeah, a nice question, thanks. And we can segue maybe back to the intro a bit, where you mentioned my scientific career. So I started off in astrophysics and cosmology, so let's say relatively complex topics.
00:58:20
Speaker
And I was a research scientist for about ten years after my PhD. But I was embedded in my family, constantly interacting with family and friends who didn't have an astrophysics background.
00:58:36
Speaker
And so already, as a scientist dealing with these relatively complex topics, I would always try to tell my friends and family, if they were interested, of course, what type of stuff I was doing.
00:58:51
Speaker
And so already I was learning how to take a topic like accretion disks around black holes, or high-energy particle physics and how that might interact with the cosmic microwave background radiation, or gamma rays, whatever, and translate those types of concepts so that my gran, who was over 90, could understand them. So I was already on this journey as a scientist, embedded in a nice broad family of people with different experiences...
00:59:28
Speaker
...and definitely most of them don't have technical backgrounds. So I was trying to explain what I do in a way that everyone could at least grasp, so they wouldn't think I was spending all day at university studying books and attending lectures, which is probably still what most people thought I did.
00:59:46
Speaker
But yes, already on that journey I was building experience in taking relatively complex topics and trying to distill them in ways that can be digested by the lay person.
01:00:04
Speaker
And then, in the last years of my research career, the last seven years or so, I was also active in the field of AI, researching new ideas in AI and applying them in astrophysics and cosmology, which are big-data regimes where a lot of image analysis can be done.
01:00:27
Speaker
Yes, it did. Exactly. Basically, astrophysicists were dealing with big data before big data was a thing for companies. Exactly. So we already had a good toolkit for dealing with and understanding these topics.
01:00:44
Speaker
And so when generative AI came out, I was already well versed, and I'd already trained neural networks, recurrent neural networks, on language tasks way back in 2016 or so.

Zeiss's Commitment to Technology and Innovation

01:00:55
Speaker
Predicting Shakespeare words, or trying to write new Shakespeare at the time. Of course, the results were terrible, but I believe they were terrible not really because of the technology, but because of the choice of technology, the size of the available data sets and the size of the models I was trying to train.
01:01:11
Speaker
I would really love to now go back and take the same recurrent neural networks I was dealing with in Keras way back when, or even another framework; I can't even remember the name of the one I used before jumping on Keras...
01:01:24
Speaker
...and TensorFlow, and see if I could take that same tech but just scale it, make it gigantic, and whether you'd still get very nice results if you could just give it enough compute and enough data. I'd actually be interested to see if that's possible, but I'll probably never get there, because I don't really care that much about the answer.
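As an illustration of the kind of character-level recurrent network described here, a minimal Keras sketch; the toy corpus, layer sizes and training settings are placeholders, not the original setup:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy corpus standing in for the Shakespeare text; sizes are illustrative only.
text = "to be or not to be that is the question " * 20
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

seq_len = 10
X = np.array([[idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = keras.Sequential([
    layers.Embedding(len(chars), 16),                 # character embeddings
    layers.LSTM(64),                                  # the recurrent layer
    layers.Dense(len(chars), activation="softmax"),   # next-character distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=3, verbose=0)                  # learns to predict the next character
```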
01:01:47
Speaker
So then I'd been working with this technology for very many years as an active research scientist. And when this generative AI, let's say large language model generative AI, really hit the market, hit the world, I was embedded in the team and we were already investigating this technology.
01:02:08
Speaker
And so from my science background I already had a very deep understanding of basically all things AI, so I could understand very quickly what was actually going on and how the technology could be used.
01:02:23
Speaker
And then I could take this understanding and have discussions with our business leaders, people from security and information technology, people from cloud computing and resources...
01:02:38
Speaker
...and building infrastructure, with data privacy and legal, and basically work across this interface. And so that's really the role I took on for several years: being an interface between these different teams, taking technical knowledge and being able to translate it in a way that the other teams could then process and put into their own language.
01:03:03
Speaker
And so that was a lot of my activities for a couple of years after Gen AI really hit the market. It sounds to me that your scientific background actually helped in doing all of this, because I know some scientists where their scientific background was actually standing in their way, because they were this...
01:03:30
Speaker
...let me say, the subject matter expert. They didn't see themselves as the translation layer in between, which was quite unfortunate, because they could have been.
01:03:43
Speaker
And that actually made everything much more cumbersome in a company that had, for example, a physicist or a computer scientist.
01:03:54
Speaker
And I don't mean a computer scientist in the sense of just a programmer, I mean the actual, quote unquote, computer scientist. Like, say, a computer research scientist. Exactly. A research scientist who is dealing all day with NP-complete or NP-hard problems, those types of thoughts, rather than how do I program this now in React. That was not what they were mainly dealing with.
01:04:16
Speaker
Yeah. Yeah, so I would say for me, um'm ah so it's very helpful having the science background. um I think really and what was most helpful is having being embedded in a family of, of say, non-technology specialists and yeah and actually playing this game of trying to explain to the family what I've been up to exactly and really... um ah being able to translate that, ah you know act as a translator earlier on in life in in this role.
01:04:49
Speaker
And I guess I did that because I just didn't want people to think that as a research scientist I was just at university attending lectures and drinking beers in the evening or something. Because, of course, there was more to being a research scientist than that, but most non-research scientists don't understand that.
01:05:11
Speaker
And maybe I just wanted, for my personal ego, them to know that I was a bit more than someone just taking notes and then having lots of free time. So maybe that also pushed me to take on this translator role in my private life, which then of course helped me within the company.
01:05:36
Speaker
You know, what I'm hearing is that maybe that's the suggestion to give to CEOs, in terms of whether you should or shouldn't use generative AI.
01:05:52
Speaker
Instead of just looking at those two options, it's more like it's your job to find out how you should do it, not whether you should do it.
01:06:03
Speaker
So maybe look for someone that can be your translation layer. Absolutely. Yeah. So I think CEOs should be pulling on their networks. It would make sense to listen to the hype, all the news, and try to digest some of that...
01:06:29
Speaker
...but at the same time have technical people they can call on and really brainstorm with, to figure out whether their ideas are completely crazy or whether they make sense, and then sit down with those technical people and, through those conversations, understand what is or isn't possible today, or what might be possible in the near future, and then make decisions for the company.
01:07:02
Speaker
Right. You know, I'm wondering, and I'm sure you cannot go into that much detail because those are most likely top secret projects at ZEISS, but when you have an idea, or a potential process or solution for how generative AI, or it doesn't even have to be generative AI, it can also just be normal machine learning or deep learning, when you have an idea like that, how do you go about implementing it into the broader picture at ZEISS in general?
01:07:46
Speaker
And we don't have to talk about specifics here, right? I'm just really curious. I think what makes me so curious about this is specifically because ZEISS is so big, ZEISS is so influential, and they've existed for quite some time already.
01:08:04
Speaker
And they haven't existed for all this time because they've been hanging around not doing anything. Exactly. It's exactly the opposite. They kept at it, they kept on refining processes and all of these things.
01:08:21
Speaker
And I'm pretty sure we can all learn something from that. So, and this is just my hypothesis, I think all this experience that ZEISS has accumulated is coming in very handy now for taking this new technology and trying to integrate it.
01:08:41
Speaker
And I'm curious, do you follow a blueprint? Is there a blueprint? Yeah. So I would say in some sense there is a blueprint, and that's really the DNA of the company.
01:08:53
Speaker
I mean, the company is a marriage. Carl Zeiss is a partnership, a strategic partnership or marriage, if you will, between a world-class engineer at the time, Carl Zeiss, and Ernst Abbe, a physicist, whom Carl Zeiss went to for help in figuring out the best way to design lenses for microscopes. Yeah.
01:09:18
Speaker
And Ernst Abbe went away, solved a whole bunch of wave equations, and figured out how you can optimally create lenses. And so right from the get-go, the company was founded on this marriage of technology and science, and cutting-edge science indeed.
01:09:37
Speaker
And throughout the course of the company, this thread, this DNA if you will, this interwoven thread of science and technology, has always remained very much in focus. The company invests a lot of its yearly profits back into R&D.
01:09:59
Speaker
So we make sure we always invest in technology, so we're always drivers of technology. Carl Zeiss produces devices, in a partnership with ASML, that are used for lithography, for burning microchips, and all the best microchips on the planet, including GPUs, are made with technology that comes originally from ZEISS. So we're somehow also helping drive this AI wave, this generative AI wave.
01:10:28
Speaker
We have many areas of technology we are very active in. Carl Zeiss has won a bunch of prizes, including the German prize for science and technology.
01:10:50
Speaker
I forget the actual name now. I forgot it as well. The Deutscher Zukunftspreis. Carl Zeiss has won it, in partnership with others, several times over the last several years, and I think we were nominated again this year, which shows that the company is still very much focused on technology, applied tech, science.
01:11:17
Speaker
We're still a pioneer. Exactly. We're still a pioneer. And because this is so deep in the DNA of the company...
01:11:31
Speaker
...we feel empowered to trial a new technology, to bring it into the company, and to try to give others exposure to this technology.
01:11:42
Speaker
So at least I, and many others in the company, are empowered and feel empowered. And we try to empower all employees to be willing to try out new technology and understand that tech.
01:12:00
Speaker
Of course, it's not a role that everybody wants to take, and that's fine, but it's a role that many people do want to have. And so we have technology pioneers who go out there, find new tech and start using it, or they develop their own tech.
01:12:14
Speaker
And then, through getting their hands dirty with the tech and understanding it, we already know what one needs to do to bring a technology into the company. If it's an open source technology, we can host it in a secure environment where we run our own tests on that open source code, to make sure that if we were to use it, or give it to other members of the ZEISS community, information wouldn't be flowing out into the wild.
01:12:48
Speaker
So we already have these processes in place for bringing in external open source code, for example, hosting it internally and giving it to the employees. We also have processes for building showcases.
01:13:02
Speaker
And if we have a showcase, maybe to try to tackle some new type of technology or some new problem, then we have a platform for showcasing it within the company, so we see if people get interested in it. And if so, then we figure out a funding stream in some way to turn it into an actual project.
01:13:24
Speaker
One example I can give you of that is an initiative called internal data science hackathons, which is an initiative I started about four years ago now.
01:13:36
Speaker
Every three or four months, we get together and identify a whole bunch of internal...
01:13:46
Speaker
...business people who own data but maybe just don't have the resources to mine value from it. We act as data science consultants, if you will, to see whether that data is good enough to try to mine some value from.
01:13:59
Speaker
And then we host these internal events where we bring together a whole bunch of data architects, data scientists, cloud engineers and developers, and we basically throw everyone together in one big hackathon, we call it a datathon, to try to mine new business value from existing data.
01:14:21
Speaker
And through these types of projects we've looked at a whole range of new technologies, like homomorphic encryption and, of course, generative AI, all types of machine learning, time series prediction, basically you name it and we've been looking at it.
01:14:40
Speaker
Bayesian neural networks and all types of very cool stuff, image analysis, deep learning image analysis. Now, in the last couple of years, what we've seen is that these internal datathons, where we used to have ten or fifteen different data science projects, have slowly been turning into a Gen AI-a-thon. Basically, every topic we now address is a Gen AI topic of one form or another.
01:15:10
Speaker
Maybe we have one, let's say old-school, data science problem, for which I would anyway still suggest people use generative AI as a sparring partner as they develop the technology to solve it.
01:15:24
Speaker
But everything else is basically generative AI topics that we've been building showcases on, trying to solve particular problems using generative AI technology.
01:15:39
Speaker
And through doing this, we get a lot of exposure within the company to this new tech, and then people can get excited and start projects where we build this technology out, scale it up and bring it into some scalable environment where we can give access to lots of people.
01:16:02
Speaker
Wow, I don't even know what to say. To me, this sounds like utopia for scientists or engineers. What scientists and engineers thrive on is basically problem solving, right? Exactly. And there seem to be plenty of people that say, hey, I have a problem.
01:16:27
Speaker
I don't really know exactly what the problem is, but hey, can you do something with this? It's almost like throwing a puzzle out there and letting people figure out whether there is a puzzle here, or whether we can make one out of it, and then solve it using all these fancy techniques they learned at university or from interacting with colleagues.
01:16:52
Speaker
And at the same time, I think it becomes a self-reinforcing flywheel almost. Exactly. It just keeps... you get a certain kind of, how do I phrase this, enthusiasm. You build traction within the company. Yes, traction, that's the word. Exactly. You build traction in the company where...
01:17:22
Speaker
...this is just normal. It's not seen as a waste of time, it's not seen as something special. It's seen as the way that you can explore new problems and new solution spaces. Exactly.
01:17:36
Speaker
Because often I see the question arising first as: what can we use this for? And oftentimes it's unclear. I think that's been a theme in this conversation of ours.
01:17:50
Speaker
Oftentimes it's unclear what you can use it for. Like walking around with a hammer: what can I hit with this hammer, what needs hitting? Rather than actually coming from a problem space and then asking, okay, what's a good tool for this problem?
01:18:05
Speaker
Sometimes it's a hammer and sometimes it's not. Yeah. I would also say, sorry to interject, that one of the things that is also part of the role, and what's important, is to talk to business people and have them actually articulate the problem they really have. Sometimes people want to use a technology to solve a problem, but they're not addressing the underlying problem; they're just trying to solve some...
01:18:35
Speaker
...connected problem. If you actually sit down and talk to them and really understand what topic they're really trying to address, then that can actually sit somewhere else.
01:18:46
Speaker
Then a different technology choice could be the right choice to help them solve that underlying problem. Yeah, exactly. That brings us back to the translation layer, right? You have to dig a little bit to see if what they say the problem is, is actually the problem and not just a symptom.
01:19:06
Speaker
Exactly. It's sort of like this design thinking, I think, where you actually sit down with the person who's experiencing the problem, you start to think about solutions, and then you converge on what you think would be a good solution.
01:19:24
Speaker
And then you actually iterate with that person to see: is this actually solving the problem you have, or do you actually have a different underlying problem? Yeah.
01:19:35
Speaker
So this internal datathon that you mentioned, it sounds to me almost like you have your own sort of Kaggle platform. And for those that don't know, Kaggle is a platform where companies, or actually anybody, can throw up some data and say, hey, I have this problem, here's some data, can someone help me out?
01:19:59
Speaker
And host it like a competition, and whoever manages to perform best can win a reward, or things like this. Exactly. But I have not seen something like that within a company.
01:20:13
Speaker
But I guess ZEISS is big enough to actually be able to do that, including having the talent to do it as well. Yeah, so definitely we have the talent, and definitely we're big enough to do something like that. We have a broad enough range of problems.
01:20:25
Speaker
Yeah, I would say that we are not quite at that level. Kaggle is great because you just go there, get any type of data set, and you can try any type of algorithm with it.
01:20:35
Speaker
Great for exploration, but I'd say what we really want is that, at the end of the datathon, we at least have a path to solving that problem for that person.
01:20:49
Speaker
And so we really try to be solution-focused, if you will. Of course, we have some exploration of new technologies, just to see what they can do and what type of problems we could solve given the current status of that technology today.
01:21:04
Speaker
But most of the time we actually want to try to solve some problem for somebody. And once we've solved that problem, or we have a path to a solution, we don't really need to host that data set in some curated fashion so that someone can come along later and play around with a different algorithm to see if they can solve the problem again.
01:21:26
Speaker
That would be nice, but I don't think it's needed within a company, unless you somehow want to build that into your DNA, this constant trialling of solutions.
01:21:39
Speaker
I can definitely see a place where that would be a good fit. But we don't have something exactly like that within ZEISS, though I guess it's similar enough. Yeah, I thought about it the other way around, as in the business people, or it doesn't have to be only the business people, the ones that have the problems...
01:22:00
Speaker
...could post something up internally, and then the hackathon teams can pick and choose: oh, this sounds cool. Almost like a buffet of issues. Now let's see how we can come together and sort it in a way where we say, hey, this sounds cool, let's work on this today.
01:22:20
Speaker
That sounds cool. And different teams, of course, have different preferences and things like this. Yeah, so I think that would be a really awesome idea, actually. But we don't have anything exactly like that. I mean, that's a great vision, and I think it would make sense.
01:22:36
Speaker
I guess the only issue is that to actually sit down and describe the problem properly does take quite a bit of time. It does. It's a challenge, yeah.
01:22:48
Speaker
It does. I guess they could just ask generative AI again for that. I mean, definitely, I sort of claim everything could be done with generative AI in one fashion or another today. But yeah, it is nice.
01:23:02
Speaker
And then, so now you're going from the datathon to a solution, and you said then you can go into scaling it up.

AI Integration with Existing Workflows

01:23:14
Speaker
With the scaling up, do you look at it from just the technological perspective? Because, you know, when it comes to startups, it's always about: can this app be scaled? And then of course you're all about load balancers and microservices and so on.
01:23:33
Speaker
But that's not what you mean, right? When it comes to scaling, I think you mean adoption of the solution by others internally? Yeah, so I would say both topics, and we basically address them separately. So we have teams who do exactly care about infrastructure, building up our gen AI infrastructure so it's scalable exactly as demand requires.
01:24:00
Speaker
And so there's been a lot of effort on that over the last couple of years. But then exactly, once we have tools that we bring onto this platform, how do we ensure that these tools are accessible to the employees, that they feel comfortable using them, and that they know how to use them correctly?
01:24:24
Speaker
And that's exactly a different set of tasks, with even different teams looking at that problem. And that by itself is a challenge.
01:24:38
Speaker
And I think even the bigger challenge, right? I mean, from my perspective, the technology is one thing. You make five clicks and bam, you have 500 more servers, fine.
01:24:50
Speaker
But you don't just click and have 500 more users. That's not how it goes. Absolutely. That's a much slower process, and it exactly requires changing people's mindsets and giving them the confidence to use these tools. What we found very early on is that employees wanted a set of rules, guidelines: how can they use these tools in a way that's compliant with ZEISS values and in a way that's secure, and what type of data are they allowed to give these tools.
01:25:27
Speaker
And so this was really one of the prerequisites of being able to use our internal tools: we created a set of guidelines that we could give together with the tool to the employees, so that they could at least have a first idea of what they can do with this tool in a way that's compliant, so they're not breaking any company rules.
01:25:54
Speaker
So that was then also there to inspire them, I guess, to increase the likelihood of something positive coming out, and of them not thinking, oh, this tool is crap.
01:26:07
Speaker
Yeah, absolutely. Although I would say that is different. The first thing is guidelines: how to use these tools in a compliant way, in terms of what type of data you can give these tools.
01:26:18
Speaker
The next thing is exactly how to use these tools in a good way, how to use them such that you get good results. And of course, way back in 2023 and early 2024, prompt engineering was a thing, so it was about how to write your prompt in a way that's optimized.
01:26:34
Speaker
And those days are now a bit behind us. It's still a bit important, but nowhere near as important as it used to be. And so we also had prompt engineering guidelines and courses that we would give to our employees, to have them start using these tools in a way that led to a good outcome.
01:26:56
Speaker
Then, as these tools have naturally been evolving over time and we've been growing with them, figuring out how best to use them and adding additional functionality, we've needed to update these trainings to say, hey, look, now you can chat with your own documents, you can chat with internal wiki pages...
01:27:18
Speaker
...you can use it with data of different classification levels. And so basically we want to keep engaging the user base so that they continue to use the tools and are inspired about how to use them in new ways that, to take the conversation back to the beginning, we don't even know they want to be using yet. Yeah.
01:27:49
Speaker
And you know, this makes me wonder, now I'm curious about your opinion on this: which pressing questions do you think still need to be investigated in this field?
01:28:03
Speaker
And I know this is a very broad question, you can take it wherever you want, either from the cultural side, the human element, or the technological, AI or algorithmic side.
01:28:16
Speaker
I'm just wondering, from your day-to-day, what do you think...
01:28:24
Speaker
...still needs to be figured out?
01:28:28
Speaker
I love the question. I'd love to take the answer in a couple of directions. Let's say the first direction is: let's pretend that AI technology won't improve any more from today.
01:28:48
Speaker
And let's take the conversation in this direction. So we just assume that whatever new generative AI algorithms we come up with just don't give us any improvement.
01:28:59
Speaker
We're fixed with the level of technology we have today. I already think this leads to massive gains. But we only see these gains when we start to integrate them into our day-to-day workflows.
01:29:11
Speaker
So part of the company has access to Microsoft Copilot, and this is a way to chat with all of our internal company documents.
01:29:25
Speaker
And before we had access to this tool, trying to give our documents to a large language model to ask a question of was just difficult. It was a pain.
01:29:38
Speaker
We had some solutions, we were developing some solutions, but honestly, it was just a pain. And it wasn't integrated. You had to go to one web page, you had to remember what the web page was.
01:29:50
Speaker
You had to drag and drop some documents. It was very clunky and it just wasn't integrated into our workflows. Now, using Microsoft Copilot, a mass of our data is accessible to us, integrated into our standard workflow. So...

Future of Digital Assistants and Data Sharing

01:30:11
Speaker
Microsoft Teams is our messaging system of choice, and we use...
01:30:18
Speaker
...and so now we can stay within Teams and find masses of information about different parts of the company, about processes, and basically about a massive range of documents that we have within the company, our own documents too, that we can start to ask and answer questions of.
01:30:40
Speaker
And so the integration of these tools, the integration of this technology into the tools we're used to using, I think is the next step to adoption. I don't see this as a technological challenge; we just have to wait for it to happen, and it will happen. I keep asking Alexa when it's going to be connected with the brains of a large language model so I can do anything I want with it, because Alexa is just an API call to a back-end model, and they may all start talking if I shout too loud. So why doesn't it already have all of the power of GPT-5 baked into it that I can just do stuff with? And the answer is, it's just an integration question; we just need to wait a bit of time for it to be integrated. So I think in the...
01:31:33
Speaker
So maybe it's just patience, having the patience to wait for these tools to be integrated and to be able to share data between these different services. MCP servers are the new kid on the block, as of around November last year.
01:31:54
Speaker
And now all companies want to offer MCP servers where you can access data seamlessly between different interfaces. And so we'll be bringing all of our data from all of the different systems we use directly into Teams, for example, and we'll be able to interact with it there without needing to change our workflow or change the tool we're using.
01:32:17
Speaker
And I think that is the biggest gain to be had, but I don't see it as a challenge, if you will, other than we just have to wait for it to occur, help it occur, but also sometimes sit back and wait for it to occur.
01:32:34
Speaker
Yeah, for sure. I think you're right, because you can already do so much with these things. And I sometimes try to explain it to people by thinking about it as if...
01:32:50
Speaker
...this was an assistant of yours. What would your assistant need to be able to perform a certain task to your liking?
01:33:04
Speaker
Of course, they need the context, the information, to perform the task. They need to know what the task should be, and they need to know in which format they're supposed to provide the output.
01:33:17
Speaker
If you define all of these things very well, you'd be amazed what these things spit out. Most people just don't do that. Yeah.
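A minimal sketch of what defining these three things can look like in practice; the document snippet, wording and structure below are invented purely for illustration:

```python
# Hypothetical example: the three ingredients discussed above, spelled out explicitly.
context = "Meeting notes, 2024-05-02: the team agreed to move the pilot to Q3 and to add two test sites."
task = "Summarise the decisions made in the meeting notes."
output_format = "Return a bullet list with at most five bullets, one decision per bullet."

prompt = (
    f"Context:\n{context}\n\n"
    f"Task:\n{task}\n\n"
    f"Output format:\n{output_format}\n"
)
print(prompt)  # this string would then be sent to whichever assistant or model you use
```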
01:33:28
Speaker
Because it's not so integrated yet. Yes. You have these different systems that you have to go to. I mean, often I find myself being a copy-paste machine. You know, I have some bug, I copy-paste it, I put it into Codex or whatever and say, fix my bug, and it fixes the bug. And then I go back and see if it's working again. Yeah.
01:33:49
Speaker
And so I find myself being this copy-paste machine because these systems are just not yet completely integrated. We're very close. I would say you need to add one more thing to this assistant idea: you need to give the assistant the same rights that you have, or the rights that you're willing to give an assistant you delegate certain tasks to. That assistant needs to be able to act on your behalf.
01:34:14
Speaker
And for that to occur, it needs to have access sometimes to some of your data, sometimes to some of the processes you have, and sometimes to your bank details. It all depends how much you trust that assistant, and for which task, as to what level of...
01:34:33
Speaker
...access you give it to your systems. For sure. So that was an answer about the challenges assuming technology doesn't change.
01:34:46
Speaker
However, I don't believe that for a second. I think technology will keep improving. We've already seen this, and there's no reason why this isn't a runaway train that keeps improving in ability.
01:35:01
Speaker
And if you just think about the money being thrown at this by all the big companies, all wanting incremental and exponential increases in technology, it means we will keep seeing improvement. Even if it's a slow increase in tech, which I also don't believe, I expect a very quick acceleration and increase, these models will get better, and they will become more integrated.
01:35:26
Speaker
So now the question, if you ask me, is what do you do in a world where these systems just keep getting better for the next few years? And that's a very difficult question to answer.
01:35:41
Speaker
But what I've been thinking about quite a lot, and it really comes back to this: right now, when I use generative AI, which I use for very many tasks, it's typically better than I am.
01:35:57
Speaker
At almost everything. It's very often more human than I am. I gave a presentation just recently to a bunch of software developers about how to use generative AI tools in software development.
01:36:11
Speaker
I made one slide, and I wasn't exactly rushed when I made it, but maybe I didn't really process the message I was sending. My message was basically: might you all become hobbyists in the future? Will only hobbyists write code in the future, was the bottom line.
01:36:30
Speaker
And the morning of the presentation, I thought, wait, maybe this message is just a bit wrong, a bit harsh. And so I gave this slide to an internal large language model, GPT-5, and I said, hey, what do you think about this message?
01:36:49
Speaker
I'm giving it in a presentation to a bunch of software developers, and the model came back with something like: you probably want to tone it down a bit, try this type of rephrasing. It was something like, I love your passion, I love your vigor, but let's try to deliver it in a way that people feel comfortable with.
01:37:09
Speaker
And so it's also more of a humanist, if you will, very often than I am. And so I find that even with today's technology, it's better than me at almost all tasks.
01:37:22
Speaker
And assuming this technology will keep getting better, which I definitely expect it to, the question is, okay, what's left for us? And the direction of the answers I've been moving towards is: one, we should do the things we enjoyed doing before generative AI came along.
01:37:50
Speaker
Well, we can keep doing them. We can keep doing those tasks. If you're a software developer and you love writing code, hey, keep writing code. That's okay. I know people who like doing up old cars, ones without electric windows.
01:38:06
Speaker
If that's your thing, go for it. Totally fine. There's a big market for people buying old cars that have been refurbished. There'll be a market for people to buy code written by a human. Look, an...

AI's Impact on Human Craft and Interaction

01:38:18
Speaker
actual human wrote this code, not a bot.
01:38:20
Speaker
And so I think that if you are passionate about something, just keep doing it. Stay passionate, because that won't go away just because there's a new intelligence on the block that can do our tasks for us.
01:38:36
Speaker
And humans like doing human things with humans. We like interacting with humans, talking to humans, going out for a drink with humans, eating in a restaurant with food made by humans.
01:38:51
Speaker
There are a bunch of jobs which you could replace right now, but you don't need to, because humans like that there are other humans doing those jobs.
01:39:03
Speaker
Compare: why do people run 100 meters? Why is the 100-meter sprint a thing? We have a bunch of people lined up running 100 meters. From an evolutionary perspective right now, who cares?
01:39:20
Speaker
The only people who care are humans, right? We care. That's why it's a thing, because we care. Why do we watch 22 people on a football pitch running around kicking a ball at each other?
01:39:31
Speaker
Because we care. We care about that. Are there animals or machines which run faster than humans? Absolutely.
01:39:42
Speaker
Do we pit humans against horses in a race? No. We want to see humans running against humans. Even if we have AIs and robots that can run faster than humans, we don't care so much about watching humans race against something else. I would definitely like to see robots and humans race against each other, but there'll be a lot of people who will pay just to see humans run against humans...
01:40:07
Speaker
...or humans play football with humans. There'll also be new markets where I get to see humans running against robots, or robots running against other robots. So I think with this new technology, humanoid robots, or just the increased ability of these machines, more markets will be opened up.
01:40:31
Speaker
And one of the causes is that many people will always want to interact with people. Some people don't; fine, they can go off and live in their bubble and never need to interact with humans again.
01:40:45
Speaker
If that's their thing, totally cool. But most people will still want to interact with humans. Most people want to go to a restaurant and sit there and be served by a human. And a lot of people will love to see a chef cooking food for them.
01:41:00
Speaker
And it'll be a one-off experience that the chef just made for you today. Something non-deterministic, because it's a human doing it. Exactly. And so I think that...
01:41:15
Speaker
...thinking about what to do in the future that you enjoy and that will bring value to you is something we should start thinking about right now.
01:41:27
Speaker
But I don't see that it will be such a big impact on our lives, because we already know what we like doing, and we can just continue to do that, super-powered generative AI or not.
01:41:42
Speaker
Yeah, I think so too. As you say, a lot of new markets will open up. I think a lot of our preferences will shift, maybe even more towards the things we actually like to do, rather than sitting behind a computer all day long moving numbers from one Excel column to another. Yeah.
01:42:03
Speaker
Because I don't know many people that actually like to do that; they just have to do it because they need to have a job. And those are not stupid people. Those are very smart and creative people, some of whom, I can't speak for all, but some that I know, actually like to work with, you know...
01:42:22
Speaker
...wood, or make furniture, or things like this. And sure, you have the manufacturing aspect as well when it comes to furniture, right? We all know IKEA and the mass production of it.
01:42:36
Speaker
But you still have woodworkers and carpenters that make custom furniture, and they make a very good living from that.
01:42:47
Speaker
Absolutely. Because humans put worth and value into things other humans do. You know, functionally, a table you get from IKEA is identical to a table you get from some woodwork master, some craftsman, whose grandson went into a forest and chopped down a 200-year-old tree and dragged it back with a horse, and this craftsman made this piece of...
01:43:21
Speaker
...the table out of one piece of wood in one cut, and you see all of that. And he decorated it in a particular way, and it's engraved with your names, and he had you in mind as you went and spoke to him about what type of table you wanted him to build.
01:43:34
Speaker
Functionally, two tables, one from IKEA, one from this craftsman, but it's the story that we tell ourselves that gives it value, and those stories we will still tell ourselves, and they will still bring value. And so exactly, if people are unhappy in their current roles, then I do expect they'll be freed up to explore the type of work that they think brings some value.
01:44:06
Speaker
And there will be an explosion in markets of potential ways to sell this. Hey, we might even be selling tables to generative AIs who are collectors.
01:44:18
Speaker
Maybe generative AIs will really care about understanding: I wonder what was going through the mind of that woodworker when he built this table just for me, as an AI, and then we can look at that.
01:44:29
Speaker
So that also could be a new market, selling to AIs in the future. That's super interesting. In this direction, I really look forward to a future where we're really freed up from screen time. I'm actually a strong believer that we'll be using screens a lot less, and the internet a lot less, in the future than we are today. And this, even using today's technology without...
01:45:03
Speaker
...advances in technology, this can still occur. So right now, what we could do is: I could tell Alexa that I'm going to Berlin, and I live in Munich, and I could just start walking towards the train station.
01:45:16
Speaker
An Uber could come and pick me up and take me towards the metro station or the train station, and on the way I can just say, hey, let me out here.
01:45:31
Speaker
I want to go and get a drink from a supermarket. I walk into the supermarket, come back, continue my journey, get on the train, go to Berlin, walk to a hotel, maybe I've picked a random one, and I check into that hotel.
01:45:45
Speaker
And what could happen today is that this entire transaction, this entire interaction, could happen without me needing to physically pay money to anybody, without me having to show my phone to pay for any of these systems.
01:46:05
Speaker
It's just a question of: is there an agent-to-agent payment framework in place? The answer is no, it's not yet in place, although the technology has existed as of a couple of months ago.
01:46:19
Speaker
And it's not integrated into these workflows. So I can't talk to Alexa, tell her I'm going to Berlin, and have it start to track where I am, figure out if it wants to send me an Uber and whether it's going to book it, have the transaction occur when I'm in the Uber, and, if I walk into a shop and just walk out with a product, have me automatically charged for it through identification of the product in my hand.
01:46:46
Speaker
So these things can all absolutely occur with today's technology; they're just not integrated. Now, imagine that future. How little would I need a screen in front of me?
01:47:00
Speaker
I wouldn't need a mobile phone in my hand for figuring out train times, hotel options, booking hotels, where the supermarket is.
01:47:11
Speaker
I wouldn't need to know any of that. I could just have maybe a phone in my pocket, or something like a phone in my pocket, that I don't need to pull out and look at. And that entire workflow could occur without any advances in technology today, just an integration of existing tech.
01:47:31
Speaker
And that's something that really excites me, that we most likely won't need to be sitting in front of screens or constantly on our mobile phones. I really think there is a future, if we want it, where we can basically live as if we're off grid...
01:47:48
Speaker
...of course, without actually being off grid. And for those people who want to be on their mobile phones and play Candy Crush or be on Instagram, hell, they can do that. That's fine.
01:47:58
Speaker
There'll be a market for them as well. But I just really like the opportunity that the future will give us not to be constantly on our phones. And through that, I really think we get back to more human interaction, more contact time with people.
01:48:16
Speaker
I think we move a bit back towards the 1980s, if you will, when we didn't have smartphones and people weren't constantly lost in their phones on public transport.
01:48:31
Speaker
At least that's one future I see that could come about with the technology we have today. It's just not implemented everywhere. Right.

Conclusion and Future Discussions

01:48:41
Speaker
No, I think that's a beautiful vision to have. And I think you've by now answered the question I was about to ask, but let me ask it anyway, maybe I'm wrong. Which is simply: what do you think are the biggest hurdles that need to be addressed regarding AI in industry?
01:49:05
Speaker
Do you think it is really, as you just described, the interconnectivity, the interconnectedness, the integration, or do you think there is something else?
01:49:22
Speaker
It doesn't have to be an either-or, it can also be an and. Yeah, so I don't see...
01:49:33
Speaker
Just adoption. I think adoption occurs. Integration, I think, just occurs.
01:49:46
Speaker
Connectivity just occurs. So I actually don't see any big hurdles that we need to address, other than that we need to be a bit patient.
01:49:58
Speaker
We just have to sit back, have a cup of tea and wait for the time when all of this is just working, because it will absolutely start working.
01:50:10
Speaker
I'm excited about it. Me too. I think it's going to be, or it already is, very exciting being at the forefront of all of this. I'm curious and hopeful about what's to come, for sure.
01:50:26
Speaker
Me too. Absolutely. Ben, I want to thank you very much for your time and for this great conversation. I mean, you see all the papers I have in front of me; I have like four more pages of questions to ask you. So I'm sure we will repeat this at some point.
01:50:45
Speaker
It would be great. But for people listening right now, if they want to reach out to you, where can they find you? Well, first of all, thanks a lot for having me here. It's been a real pleasure. I really enjoyed this conversation.
01:51:01
Speaker
Yeah. And if you'd like to reach out to me, you can find me on LinkedIn. You can. I'll add it to the show notes.
01:51:13
Speaker
Yeah, thanks, so people don't have to search around. It's added to the show notes that they can find you on LinkedIn. Yeah, that's probably the best way. Perfect. Thanks a lot.
01:51:24
Speaker
No, thank you. And yeah, thanks again. And to everyone listening, have a great day. I'm sure this isn't the last time we talk. Hey everyone, just one more thing before you go.
01:51:36
Speaker
I hope you enjoyed the show and to stay up to date with future episodes and extra content, you can sign up to the blog and you'll get an email every Friday that provides some fun before you head off for the weekend.
01:51:48
Speaker
Don't worry, it'll be a short email where I share cool things that I have found or what I've been up to. If you want to receive that, just go to adjmal.com, that's A-D-J-M-A-L dot com, and you can sign up right there.
01:52:02
Speaker
I hope you enjoy it.