Ethan Mollick: Why AI Is a Leadership Problem, Not Just a Tech Problem

From the Horse's Mouth: Intrepid Conversations with Phil Fersht

Lead Like You Mean It with Ethan Mollick, Professor at Wharton and Author of Co-Intelligence

“You don’t need an AI course. You need to get your hands dirty.” — Ethan Mollick

What You’ll Hear in 30 Minutes:
- Why AI is exposing leadership gaps more than technical ones
- How LLMs are reshaping research, training, and decision-making
- Why interns and junior roles are being replaced before we’re ready
- What makes an effective internal AI lab, and why most aren’t
- How AI’s personality could become a competitive differentiator
- Why CEOs must experiment with the tools to lead with credibility

Guest Snapshot:
Ethan Mollick is a professor at the Wharton School and author of Co-Intelligence: Living and Working with AI. Through his blog One Useful Thing, he helps organizations and individuals make sense of the generative AI wave. In this episode, Mollick shares how AI is challenging everything, from how we lead to how we learn, and why the best leaders are the ones who are hands-on.

Timestamps: 

00:00 – Ethan’s Otter Test and the AI Acceleration Curve 

01:33 – Rethinking Research, Workflows & Analyst Roles 

04:53 – Apprenticeships Are Broken. Now What? 

06:30 – AI Is a Leadership Problem 

08:22 – Building an Internal AI Lab That Works 

10:41 – Why “Perfect Data” Is a Red Herring 

14:08 – How to Prep LLMs with the Right Prompts 

16:09 – AI Personalities and the Next Differentiator 

18:50 – The CEO's 5-Minute AI Briefing 

22:02 – No One Has the Answers—But You Still Have to Lead 

24:22 – Can Google Win? What Comes Next in the LLM Race

Explore More

📘 Co-Intelligence: https://amzn.to/3Yk0QYQ 

📰 One Useful Thing: https://www.oneusefulthing.org 

🔗 Follow Ethan: https://www.linkedin.com/in/ethanmollick/ 

🔗 Follow Phil: https://www.linkedin.com/in/pfersht/

Transcript

Introduction of Ethan Mollick

00:00:02
Speaker
Hello, today I'm absolutely thrilled to be joined by Professor Ethan Mollick of the Wharton School. What I love about Ethan is that he is the leading academic who has really figured out, without being a technical individual, how to use the plethora of large language models and agentic tools out there today, and really educates us on how to get the best out of them,
00:00:26
Speaker
how to focus on education, on research, on business, and all these other things. He's a fantastic author of the book Co-Intelligence, and he has a blog called One Useful Thing. So welcome, Ethan Mollick.

Understanding AI Acceleration - The Otter Test

00:00:41
Speaker
Let's start with a little bit about otters, right? Your otter test wasn't just delightful; it's been an unexpectedly precise chronicle of AI's acceleration.
00:00:52
Speaker
How do you see this rapid technological evolution affecting how we think about intelligence and creative work? So the otter test, by the way, is that I've been asking AI to draw an otter on a plane using Wi-Fi as my test of AI drawing ability over the last three years.
00:01:08
Speaker
I think that generally benchmarks are weird for AI. It's very hard to know what AI is good or bad at without lots of testing, and test sets get contaminated. So there's all sorts of confusion about where things are going in the AI space.
00:01:21
Speaker
So idiosyncratic tests help a lot. Both our own little benchmarks and the larger-scale tests that we have from widely used benchmarks suggest some sort of acceleration happening, probably through a combination of the fact that the models keep getting better and we keep learning more techniques to scale them even better. So it's a really fast takeoff. At least by benchmarking approaches, AI is just improving really rapidly.
00:01:46
Speaker
Right, right.

AI Agents and Autonomous Work

00:01:48
Speaker
We're passionate about doing research at HFS; obviously we're an analyst company, but in the past it took time. And in your recent work, you talk about how the bottleneck isn't the research anymore, it's figuring out what research to do. So can you expand a bit on how people can use models more effectively to do research, and how you see our industry, and yours, education, changing as a result?
00:02:12
Speaker
One of the biggest trends in AI is agents: having AI be able to do autonomous work. General-purpose agents that kind of substitute for a human aren't there yet, and we don't know when they will be, if they will be.
00:02:24
Speaker
But specialized agents like deep research agents are quite good. I don't know if you've had experience yet with any of the deep research tools; all of the major providers have them. I would probably suggest Gemini's or ChatGPT's or maybe Claude's research tools if you haven't tried them. These things work so that you ask a question and you get something like a 40-page report back. I have been speaking to lots of people in analyst-heavy industries. And if you think about it,
00:02:48
Speaker
a lot of white-collar work is this way, right? We pay for marketing reports. You pay to have outside counsel write a report for you about the risk associated with a job. You do research on whether an accounting practice is correct or not.
00:03:00
Speaker
You obviously do research as a stock market analyst. And the question is, how good are these systems at this kind of job?

Transforming Research with AI

00:03:06
Speaker
And the answer, from talking to lots of senior workers in these various knowledge work fields, is: quite good.
00:03:12
Speaker
You know, data sources have been relatively limited; they tend to use the open internet. But they produce pretty great results, and now they're expanding to more private data. So that opens a lot of interesting questions.
00:03:23
Speaker
I can now generate really high-quality research results on demand. What does that leave for human researchers to do? I mean, they don't do everything. They don't have the same kind of opinions. But what does that do for mediocre research?
00:03:34
Speaker
What does that do now that I can get research on demand? How does that change my decision-making? I think there's lots of really interesting questions to ask that we don't have answers to. Yeah, I mean, it's transformational in the analyst business. We would hire juniors straight out of school or college to go fetch information, make calls, run

Apprenticeship in the Age of AI

00:03:54
Speaker
surveys, things like that. But I've been using it a lot for my job.
00:03:57
Speaker
I use it more as an augmentation tool. So I'll write something and I'll maybe pump it through ChatGPT just to see if it can add any fresh perspective or ideas, things like that. And I've found it's incredibly good at finding information, and I barely need to go to juniors anymore for additional support and help. I can get a lot of what I need straight at my fingertips.
00:04:19
Speaker
One, do you have a similar experience? And two, what does this mean for the youth of today who are starting in the workplace? They've got access to way more information than we had 20, 30 years ago, but at the same time, they don't have the emotional maturity to get on. So how does this impact, do you think, the whole cadetship model for academics, research analysts, these types of people?
00:04:44
Speaker
Let's start off with school, right, which is where we start this kind of training. Everything's in chaos right now. I mean, people are cheating continuously with AI. It's bad. We'll figure it out. There's all sorts of evidence that properly used AI works like a great tutor, an accelerant of learning.
00:04:59
Speaker
We'll learn how to flip classrooms, where you use AI outside of class and do a lot of activities and testing inside of class. We'll figure that stuff out. I'm not that worried about that. But I am worried about the other piece you're discussing, because I teach people to be generalists at a place like Wharton, and then I send them out to you to be specialists.
00:05:14
Speaker
And they learn the same way they have for 4,000 years, which is apprenticeship. They work with somebody who's more experienced. In return, the more experienced person gets somebody to do the grunt work for them, and the junior person, by writing the same intro to a report 50 times, learns why it's important to do and how to do it right.
00:05:30
Speaker
That's already broken, and it will certainly be breaking this summer for most people, because the interns are not dumb and they want a job, and the AI is better than them at their work already.
00:05:41
Speaker
So they'll turn in AI work, and middle managers would rather turn to AI than to an intern. So that breakdown is a really big deal. I think the way around it is to start being deliberate about how we train people. So mentorship isn't just "do the work for me and you'll learn passively."
00:05:55
Speaker
It has to be active. And similarly, if you find, as you've found, that you probably need fewer interns or junior people than before, you're going to have to consciously decide what your pipeline looks like. Rather than just hiring as many juniors as you need and promoting the ones who are doing well, you're going to have to invest more in getting the right set of juniors on board even before you need them, so you can train them enough to get them to a level where they can be helpful.
00:06:18
Speaker
Right, right. Okay.

Leadership in AI Transformation

00:06:20
Speaker
So let's think about how this is impacting the enterprise. In your postings, Ethan, you've said AI is fundamentally a leadership problem.
00:06:30
Speaker
So what kind of leadership is needed now that AI is reshaping work at this task level that we've been talking about? I don't know when this is going to air, but just a couple of days ago, Amazon put out a memo about how AI is going to transform work.
00:06:44
Speaker
We saw the same kind of thing come out of Duolingo and Shopify. And I think these are good statements of urgency. They're kind of terrible as leadership statements about what the future holds,
00:06:57
Speaker
because they don't actually tell anyone anything. They're like, "your jobs will change, use AI." But part of the goal and role of a leader is to tell you why you're doing it, what you're doing with AI, what the point of this is, what your job will look like in the future. Give me a vision of what work will be like.
00:07:15
Speaker
Actively shape how the company will transform in a world with AI involved. And I think that is important both for the employees, who want to know what's going on, and also strategically, because work is going to change.
00:07:26
Speaker
And that doesn't just happen organically. That has to be a choice. And so I want to see leaders making more choices. Then I want to see how they're setting up incentive systems and structures in order to actually make those choices become real.
00:07:37
Speaker
And I worry when it's just kicked down to more junior people, or to the second or third level of management, as "transform with AI." That's not a marching order. That's just confusion. Yeah, I'm completely with you there.
00:07:48
Speaker
So what do you think makes an effective lab environment for exploring and exploiting AI with your team? So this is part of what I've been talking about a lot, which is the idea that you need three elements. You need leadership, like we talked about. You need the crowd: everybody working on AI. And you need the lab.
00:08:06
Speaker
And the lab is really, we would say, an ambidextrous organization. It does two things at once. One thing it does is take the ideas that come out of the crowd. People in your organization are figuring out how to use AI all the time to do work better.

Fostering AI Innovation in Enterprises

00:08:20
Speaker
And you want to turn those into immediate products everyone can use a day or two later: validate that this prompt works really well, test and benchmark how good the deep research tools are, and also start to build for the future, toward larger, more agentic systems that might succeed. And the lab isn't a particularly technical place.
00:08:37
Speaker
It's a mix of people who are really good at using AI in the crowd and people who are technically savvy. But the idea is that you're actually making real things and making them fast. Right. So thinking about how these tools are helping us do our jobs better, I've recently been tinkering around with that Chinese company's tool, Manus.
00:09:00
Speaker
It's quite amazing how it can run an entire marketing department for one person. But can you share a bit of your experience? Because I know you've been pretty deep with it, and how you've found it has driven enterprise possibilities.
00:09:15
Speaker
So Manus is an example of an agent. It's a model that runs on top of Claude 3.7, and maybe they've updated to 4 by now, but I hadn't seen that yet. And it shows you how much you can do relatively straightforwardly with a system that is just mildly agentic, where it basically sets its own goals and pursues those goals.
00:09:35
Speaker
The models are smart enough to do a lot of work already. We just haven't started using them in the proper kind of way yet. And part of that's a technology problem, and part of that's an organizational problem, too. So what do you do with Manus? It could create a website for you. It's the same thing that the LLM models can do.
00:09:49
Speaker
So I think we have to start thinking a lot about what we want these systems to do. And I think waiting for people to tell us that is going to result in a lot of missed opportunities. Yeah.

Challenges of AI in Enterprises

00:10:02
Speaker
And I'm just going to get to the point here. I'm seeing a lot of companies struggle with agents because they just don't have a clean enough set of content and data and process that the technology can use. So you get these vendors who give you the big gleaming demo of how these tools can work in a perfect environment, but none of these environments are perfect,
00:10:29
Speaker
and they seem to require an enormous amount of adaptation and change to get working with these tools. How do we get over the hump in terms of saying, look, we just need to revisit how we operate as a business so we can use these tools effectively? Because it feels like, despite the noise, the hype, the excitement, and the fact that these tools are incredible,
00:10:53
Speaker
enterprises are still faced with the same big problem. The data is a big mess, content is all over the place, and it's very hard to structure and very hard to operate in these environments. I mean, what what what do what do companies do here?
00:11:08
Speaker
I think, first of all, it's worth thinking about what we mean when we say the data is a mess or the data isn't structured, because you do something different with the data here, and that can be misleading to a lot of people. Unlike a traditional machine learning algorithm, which is still really important, by the way (finance runs on this stuff), the traditional machine learning approach is to find patterns in data and make predictions about the future. So finance was the most transformed by this, because I can do analysis of a market, or analysis of sales, or figure out what movie to recommend to you,
00:11:37
Speaker
since I know other movies you've watched. So you hire PhDs, you hire data scientists, you get your data organized, you build your data lake, and you do this kind of analysis. Still very important. But those kinds of systems had this weakness: they weren't good at working with language. So large language models were applying some of these same techniques to language.
00:11:55
Speaker
It turns out we get a very different kind of beast as a result. So LLMs are pre-trained. The P in GPT stands for pre-trained. So they already have data in them; you don't need to train a model. I know you know this, but just to make this clear for the listeners or watchers.
00:12:09
Speaker
So your data is less important in some ways because the model kind of works like a person. It's not going to go through vast amounts of data and spontaneously find a pattern or results any more than a normal analyst would.
00:12:21
Speaker
So when you say "get your data organized for AI," large language models are often quite good at this. Some of the use cases I'm seeing for LLMs are actually a deep-research-style tool where it's like: check my Dropbox, and then check my SharePoint, and then go to the internet, and then also log into this enterprise software system that we have, and then give me a consolidated view of what this all means.
00:12:42
Speaker
So LLMs fill a little bit of a different place, in terms of what data you give them; you give access through RAG systems and other approaches so they can look at your data. But it's not the same kind of data problem as we had for large-scale machine learning systems.
00:12:56
Speaker
That being said, what you're saying about process absolutely resonates. Companies haven't thought through process. You need to explain to the LLM what it's doing and where it fits in. You need to design those elements of your organization.
00:13:08
Speaker
So I am in favor of leaping in, seeing what you actually need, and doing experimentation to learn about that. But I think a lot of companies create barriers, insisting they need to do things before they're LLM-ready.
00:13:20
Speaker
And I don't necessarily think that's true. You will find out what you need to do to make the LLM work or not, and nobody really has that information yet. Is having access to all of your data sets useful? Generally not. One of the issues I see is people saying, we need to go through all of our past proposals to get a good proposal.
00:13:36
Speaker
Actually, with an LLM, you'd be better off giving it two or three really great proposals that you wrote purposefully, plus some good instructions for writing proposals, written in plain English by your best proposal writer, and having the LLM work from that, rather than spontaneously going through all your systems.
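As a concrete illustration, the pattern Mollick describes here can be sketched as a small prompt-assembly function: a plain-English style guide as the system message, two or three hand-picked exemplar proposals as few-shot context, and the new brief last. The message format is the common chat-API shape; the exemplar texts, function name, and any downstream model call are assumptions for illustration, not part of any specific product.

```python
def build_proposal_messages(exemplars, style_guide, client_brief):
    """Assemble a chat-style prompt: the style guide as the system message,
    hand-picked exemplar proposals as few-shot context, the new brief last."""
    messages = [{
        "role": "system",
        "content": ("You write client proposals. Follow this guidance from "
                    "our best proposal writer:\n" + style_guide),
    }]
    # Curated exemplars, not a bulk archive: each one is worked into the
    # context as a model to imitate.
    for i, ex in enumerate(exemplars, start=1):
        messages.append({
            "role": "user",
            "content": f"Example proposal {i} (match this tone and structure):\n{ex}",
        })
    # The actual request goes last.
    messages.append({
        "role": "user",
        "content": "Now write a new proposal for this brief:\n" + client_brief,
    })
    return messages
```

The resulting list would then be passed to whatever chat-completion endpoint you use; the point is that deliberate curation plus explicit instructions can beat dumping an entire document archive into a retrieval pipeline for this kind of task.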
00:13:51
Speaker
No, you are preaching to the choir. Our message to enterprises, and our research, shows the tech is actually the least of the problems.
00:14:02
Speaker
It's all the process, data, and skills around the technology to make it work that have to be addressed. And I think our perennial problem in this industry is that we like to buy the shiny new thing when the real issue is the ugly stuff that needs fixing. I think we've been avoiding that for generations, and this just highlights it: when the technology is actually there, you can see how good it is, you can see what peers and competitors are doing with it, and you've got no choice but to address the hard stuff to get this right.

AI Personalities and User Engagement

00:14:37
Speaker
Let's flip a bit now to, I think, one of the more fun areas that you talked about at our summit, which was about AI developing a personality. And I was playing around with Google Gemini last night, and it was actually kind of rude.
00:14:50
Speaker
And I thought, this is refreshing. I'm so tired of ChatGPT sucking up to me. But what's your view on the personality of AI, and what's going to work better and better as we evolve?
00:15:03
Speaker
So there is a sort of natural personality to these AIs, and they're trained in a particular way to reinforce it one way or another. I think the biggest thing we're seeing, especially for the more consumer-oriented models,
00:15:14
Speaker
is that the AI companies have realized that personality matters. People want to have something that sucks up to you a bit more. That's what most people seem to want. You know, like people will stick with models that are friendly and ask them questions.
00:15:27
Speaker
So personality engineering has become a much bigger deal inside these organizations, moving from just being sort of helpful and friendly to being an anthropomorphic character that you deal with. And so that is going to be a point of differentiation, for better or worse, among AIs.
00:15:40
Speaker
There's some early evidence that you can alter the personality of AIs to make them more engaging, so people spend more time with them. So I think personality becomes an interesting issue overall. You can alter personality with some system prompts, but as soon as you start altering the personality, you're probably altering performance in ways that aren't necessarily clear.
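A toy sketch of what "altering personality with system prompts" looks like in practice: the same user message gets framed by different persona instructions. The persona texts and function name here are illustrative assumptions. As Mollick cautions, changing the persona may also shift accuracy in unclear ways, so any persona prompt should be benchmarked rather than treated as a free cosmetic knob.

```python
# Illustrative personas; real products tune these far more carefully.
PERSONAS = {
    "friendly": "You are warm and encouraging. Ask follow-up questions.",
    "blunt": "You are terse and direct. Skip pleasantries and flattery.",
}

def with_persona(persona, user_message):
    """Prepend a persona-defining system message to a single-turn chat."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_message},
    ]
```

Running the same benchmark question set through each persona, and comparing accuracy rather than just engagement, is one way to check whether the personality change is costing you performance.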
00:15:59
Speaker
We've also been doing research on just working with them like a person. It turns out saying please doesn't actually matter very much for accuracy. Well, actually, it's even more interesting: it matters a lot on very particular questions.
00:16:09
Speaker
So we tested it against a large question data set. On some questions, being polite to the AI makes it much better, and on some, being polite makes it much worse. And it sort of averages out to no effect.
00:16:21
Speaker
It's a complex issue. It's learning to, I guess, start up a relationship with a computer. But we're also dealing with these LLM companies that are consciously trying to start to think about what relationship they want us to have with the computer.
00:16:34
Speaker
And it turns out we're pretty manipulable. So it isn't just a natural development, right? It's a back and forth. Yeah, it probably depends on your own personality as well. Do you like to bark orders, or do you like a nice polite conversation? And away you go.
00:16:46
Speaker
So let's get to, I think, the other big

Executives Experimenting with AI

00:16:49
Speaker
issue. This is very significant in the tech industry right now. We're seeing a lot of major cyber issues going on, and geopolitics is opening up companies to more AI exposure, more tech exposure in general.
00:17:05
Speaker
So as you look at this rapidly evolving landscape, if you had five minutes with the CEO or CIO of a big Fortune 100 company, what would you warn them about, or urge them to do today, as they go down this AI path?
00:17:20
Speaker
So at the executive level, first of all, I would say: look, these systems are good enough already to disrupt work in real ways. Ignoring this isn't viable. I would also tell them the thing I tell all of them, which is that no help is forthcoming. The consulting companies don't have better answers about how to use AI than you do.
00:17:37
Speaker
The software vendors are still figuring this out themselves. Even the AI companies, when I talk to them, are still trying to figure out what their systems do. So you have to engage in some exploration and some work to figure out what happens.
00:17:49
Speaker
This can't be 100 percent KPI-driven. It has to be done through experimentation and trying to improve performance over time. All of that makes this urgent. I would also urge that they use AI themselves. They can't just delegate this down. They have to try this out.
00:18:03
Speaker
What I really enjoyed, Phil, is that you tell me these stories about your experiences. If you don't have people doing that kind of engagement at the executive level, you're not going to know what these systems can do. And they'll say, "I need to make time," or "I need an AI course." No, you just start interacting with these systems and you'll learn what they do one way or another.
00:18:18
Speaker
And then I think it's the urgency of making decisions about this, and not just punting those decisions down to other parts of the organization, that becomes really important. You do hit a very good point. I mean, my dad is 82 and just upgraded his ChatGPT account. I'm like, wow. It's interesting; he's an academic, right?
00:18:38
Speaker
But then I work with so many business leaders. Some are getting it and spending a bit of time with these tools. Others are clearly not. And my take is, I can tell within five minutes if someone's BSing me about
00:18:51
Speaker
their experience with AI versus actually getting their hands dirty. Do you think this is going to get more pronounced in the coming year or two, where if you're just not getting your hands dirty, you're going to fall away as a leader?
00:19:04
Speaker
Do you think that's actually going to happen? Is this going to come very much down to our own competency as individuals, in terms of how successfully we perform at work? By the way, my parents, I found out, and I didn't even tell them to do this, have both switched their prescription glasses to Meta glasses. They're the same age as your dad. So age is not the barrier.
00:19:28
Speaker
I think it is less about having the CEO invent a way of being really good at AI, or inventing a new approach themselves, and more about situational awareness. They have to understand what the situation is with AI, and the only way to do that is viscerally.
00:19:42
Speaker
And it will also be helpful to them, and give them a sense, as the person who coordinates the organization and, most importantly in some ways, carries risk for the organization. Somebody has to make choices about how things operate. Someone has to make risky decisions. If you're not informed, you can't make those. The whole reason someone works with an analyst organization is to help them get the information they need to make important, risky decisions.
00:20:05
Speaker
You also have to be informed yourself, right? That's why you come to a conference. That's why you listen to a podcast like this. This is another area of being informed. You don't have to be the master of AI, but if you're not informed what these systems could do and where they're going, you can't make decisions.
00:20:17
Speaker
Wow. Well, we're in a very interesting place. I'll ask you one final question; I can't resist this one. One thing I got very excited hearing from you is that you've really got your hands dirty with a lot of the tools we're all learning about in the market.

Future Leaders in the AI Market

00:20:34
Speaker
Who do you think is going to win this race as we look out two, three years into the future? Which leading AI companies stand out for you as the next big dominant players?
00:20:47
Speaker
There are a few things that underlie that question. First of all: a race to what? Right now, if you look at this, there are three or four closed-source AI companies that are all very competitive with each other.
00:20:58
Speaker
There are a couple of open-source companies that are eight or ten months behind. And then there's not much ability to catch up beyond that. So the question is, what are we racing towards? Is there some point where an AI model becomes self-improving fast enough that it becomes the dominant model? So is it a race that someone wins?
00:21:13
Speaker
Is it an ongoing technology race that looks like a lot of technology races, where things plateau after a period of time and everyone catches up? I don't know the answer, but I would say that the big three LLM companies are probably going to stay the big three: Anthropic, OpenAI, and Google.
00:21:29
Speaker
And outside of those three, there are a couple of really good Chinese models out there, Qwen and DeepSeek among them. Meta sort of fell out of the race a little bit, but they're spending a lot of money to get back in.
00:21:42
Speaker
I don't know. It kind of depends on what happens. You know, xAI's Grok has scaled up very quickly and could very much be a competitor. We just don't know the answers to those questions yet. Yeah, well, good answer. What do you think of the potential Google has? Because if they have the courage to cannibalize their ridiculously big search business and force-fit Gemini in and practically give it away, they could win, right?
00:22:10
Speaker
But I think it's a battle between protecting their legacy and where things are going in the future. If you were the CEO of Google today, what would you do? Well, I would say Google is a big company. It does many things at once. They're already disrupting search a lot with what they do. Their deep research reports are excellent.
00:22:26
Speaker
NotebookLM is really good. They have this AI mode. And I don't know how much those things already start cannibalizing their search and ad business. I mean, it's very clear the writing is on the wall, to some extent, for the old way things used to work. People are going to engage in more agentic commerce one way or another, whether that's actually asking an agent to go do work for you, or just because you'd rather ask the AI what you should buy than do a Google search yourself.
00:22:50
Speaker
Before I buy any gift, by the way, I always do a deep research report on it. And so I think the element that's really important is: how do they profit from this? How do ads work in an LLM world?
00:23:03
Speaker
How do you advertise to an LLM? I mean, there are a lot of really open questions here that I think Google's well positioned to solve. On the other hand, ChatGPT is not going anywhere. They've got a billion users; that's one out of every seven or eight people. And so I think those two will at least remain competitive in the future.
00:23:19
Speaker
Wow. Well, they're going to know more about us than we probably know about ourselves, right? That was always true. Good.

Conclusion and Podcast Promotion

00:23:28
Speaker
This has been a wonderful conversation. We very much enjoyed you addressing our company conference last month as well, Ethan. I look forward to sharing this pod with the world and to the next time we meet. I really appreciated it today. Thank you.
00:23:41
Speaker
Thank you.
00:23:45
Speaker
Thanks for tuning in to From the Horse's Mouth, intrepid conversations with Phil Fersht. Remember to follow Phil on LinkedIn, and subscribe and like on YouTube, Apple Podcasts, Spotify, or your favorite platform, for no-nonsense takes on the intricate dance between technology, business, and ideological systems.
00:24:05
Speaker
Got something to add to the discussion? Let's have it. Drop us a line at fromthehorsesmouth at hfsresearch.com or connect with Phil on LinkedIn.