The Everything Machine and the Trillion-Dollar Bet [Replay]

S4 E40 · Bare Knuckles and Brass Tacks
103 Plays · 4 days ago

What if the story we're being told about AI's inevitability is hiding something underneath?

That's the question Jessica Parker and Kimberly Becker put to George K. on their podcast, Women Talking ‘Bout AI.

This conversation is a replay from their feed. It followed the money: the special purpose vehicles, the obfuscatory financing, the concentration of risk in a handful of companies and a single island in the Taiwan Strait. But what they kept arriving at wasn't really a financial question. It was a human one.

Who has skin in the game? And what happens to the rest of us when the people building this technology can't answer what outcome they're actually trying to produce?

The conversation covers why the dot-com analogy is the wrong frame for the current investment craze, why an AI crash could starve the narrow applications that actually work, and why the "everything machine" promise was probably never going to pay for itself.

It also gets into what chatbot tutors get wrong about teaching, why we keep analogizing ourselves to whatever technology we just built, and what it might mean that generalists could be the ones who come out of this ahead.

The kind of conversation where you leave with more questions than you came in with. Which is exactly what we're after.

Transcript

Tech Promises and Skepticism

00:00:00
Speaker
If I came up to you on the street and said, like, I am inventing this thing and it's going to help you do everything. You're like, okay, well, like what? Like everything, anything in your life.
00:00:11
Speaker
Can it wash my car? It can set appointments for you automatically. Can it feed me? You can take photos of your fridge and it'll develop recipes, whatever. Okay, this sounds great. What do you need for it? A trillion dollars.
00:00:25
Speaker
If anyone else said that to you, you'd be like, that's crazy talk, snake oil, nonsense, get out of my face.

Introduction of Key Guests

00:00:39
Speaker
Yo, this is Bare Knuckles and Brass Tacks, the tech podcast about humans. I'm George Kay, and this week, George A. and I were both on the road traveling, and it got a little hairy in terms of recording.
00:00:50
Speaker
So we're going to air a different interview, which is one that I did this week on a podcast called Women Talking About AI. It's hosted by Jessica Parker and Kimberly Becker. You may remember Kimberly from her appearance on this show where we talked about her work in linguistics and the effect of confidently wrong LLM text generation.
00:01:11
Speaker
Now, to be fair, I think I'm the third male guest in three seasons. I was very sensitive to the idea of mansplaining, so I did ask before recording about the audience and the level of...
00:01:25
Speaker
technical understanding. So to err on the side of caution, I do a bit of explanation around some of the technologies and try to be very specific about generative AI versus other AI, deep learning, machine learning.

AI Financial Dynamics

00:01:41
Speaker
But the episode really touches on the financing and the bubble, right? We talk about the circular financing, we talk about financial engineering, what the biggest hyperscalers and labs are hiding in the structure of that financing, and whether they can even articulate the outcomes that are desired by this so-called general purpose technology.
00:02:03
Speaker
We also talk about supply chains. We talk about how you can prepare yourself for the future labor market, the role of technology in education, and what happens to the rest of us while these very big questions go unanswered.
00:02:16
Speaker
So, hope you enjoy it. Here it is: my appearance on Women Talking About AI.
00:02:26
Speaker
Welcome everyone to Women Talking About AI. Today we have our third male guest. I'll take it. Third, I think? Yes, I think so. In three seasons.
00:02:37
Speaker
George Kamide. And I met George on his podcast, Bare Knuckles and Brass Tacks, where he has a co-host also named George. And they have a wonderful show that is an accessible but also in-depth look at tech in general. George is a community builder and AI watcher, and so he is well positioned to answer some questions we have. So welcome to the show, George. Yeah. And when we were pinging ideas off of each other, Kimberly, about what we wanted the show to be about, you sort of gave me four options that George could talk about. And one of them was around the AI bubble and the inevitable market correction that people think

AI and Historical Tech Bubbles

00:03:16
Speaker
is coming. And that just fascinates me. I am definitely not an expert in it; I think that'll be clear today. But I'm interested in it, and I follow the news and the discussions.
00:03:27
Speaker
And so that's what I chose. So I'm glad that you were willing to come on and talk about that. And also we had David Shapiro on to talk about post-labor economics, and those were some of our best performing episodes. So I do think these topics are top of mind for folks, especially since it's in the news.
00:03:44
Speaker
I mean, when the media is beating us over the head about AI every day, including, I don't know, weird vibe reporting about how we're all going to be out of jobs, I think that it is very much a question that needs to be addressed a little bit more skeptically.
00:04:03
Speaker
Yeah, and I think discussed from different perspectives. We've talked about it a lot, but I think today we're going to dive into some specifics. And a lot of times it's just me and Kimberly talking about it. So I'm curious to get your take.
00:04:16
Speaker
So I'm going to start with a bit of a prelude to just situate our audience. So when we talk about the AI bubble, it's when you see these stories in the news and there's concerns around a market crash. And a lot of comparisons are being made around the dot-com bubble that happened in the late '90s. And I do think that comparison is useful, but I do think it's limited. It doesn't capture the full story of what we're seeing now. And perhaps we can get into that a bit. Yeah.
00:04:48
Speaker
But instead of the dot-com bubble, I think more about the fiber optic cable build-out in the 90s. And so I'll just give our listeners a little bit of information on that, because they might be less familiar with it than the dot-com bubble. In the years leading up to 2000, a lot of telecom companies laid around 80 million miles of fiber optic cable across the U.S. And they were betting on internet demand growing forever.
00:05:12
Speaker
And they were eventually right, but their timing was off. It took years for that demand to be fulfilled. And for, like, five or six years, 95% of that cable was still unused.
00:05:24
Speaker
And the companies that laid it either went bankrupt or totally collapsed. And so we still use that cable and the technology worked, but the timing was off. And that infrastructure did become the backbone of the internet and everything we do online.
00:05:38
Speaker
But those early companies really paid the price. And I think that's an interesting comparison because of the big infrastructure concerns about the trillions of dollars being spent on data centers.
00:05:50
Speaker
And I'll start with the infrastructure question for you, George.

AI Infrastructure Concerns

00:05:52
Speaker
Like, I feel like the big difference between this and fiber, even though to me that's the best comparison, the biggest difference here is that the infrastructure we're using for AI...
00:06:04
Speaker
depreciates. We're not going to be using the same infrastructure in 50 or 100 years, as in the fiber optic cable scenario. And so let's just start there. What concerns do you have about the infrastructure issue and how much is being invested in these data centers and GPUs?
00:06:22
Speaker
Yes. So I'll start out by saying I'm not by any means a financial analyst or an investment banker, but the lens that I bring to the world is through anthropology. It's what I have done in every facet of my career. So I spot patterns, I see behaviors, and I see culture as the map of the universe that somebody takes to the world, right?
00:06:45
Speaker
Now, to answer your question about the infrastructure: absolutely a depreciating asset. That is where the dot-com analogy begins to break down a little bit. And that's generally proffered as a defense of this build-out, right? Like, oh, we might have the timing wrong and some people might lose money, but look at what we did, and eventually it all shook out. It is also worth pointing out that a similar overbuild happened with the railroads. Famously, people would build railroads literally to nowhere on the bet that people were going to go to Kansas one day.
00:07:18
Speaker
Now, fiber and railroads are infrastructure that stay there. Even when the fiber wasn't being used for the internet, it could be used to process different things, to send different signals, and you didn't have to tear it out of the ground and redo it. And to your point,
00:07:34
Speaker
We don't know what the hard life cycle is of a GPU after a training run. So for your listeners, when they say like, oh, they're training a new model, right? They're running data through racks and racks and racks and millions of these processing units 24 hours a day for long periods of time.
00:07:57
Speaker
Initial estimates are about three years, right? So a data center feels more like a warehouse full of a depreciating commodity, like a bunch of oranges or avocados.
00:08:10
Speaker
And so you got to buy more avocados, right? And so that creates a little bit of a different structure in terms of pricing. But similar to the dot-com bubble and the telecoms, I think the most concerning part is the amount of financial engineering and obfuscatory financing behind these things, right?

Financial Engineering in Tech

00:08:33
Speaker
Bloomberg famously had this chart of NVIDIA and Microsoft and Oracle and CoreWeave, and it looked like a goddamn Spirograph, the amount of circular deals between one another. And the problem
00:08:49
Speaker
is that a lot of the debt is being taken on in ways that make it very opaque. So one of the problems with the bubble is over-leverage, taking on a lot of loans in order to build out things. And if they don't pay off, that's a hard set of math to overcome.
00:09:06
Speaker
But what is very tricky, especially maybe more recent in people's memory, is the financial engineering that makes it just really hard to understand where is the risk, who has taken it on.
00:09:18
Speaker
And it kind of all leads back to NVIDIA, more or less, as the biggest player in the GPU space. But there are also, on top of that, these layers of race economics, a race to secure certain things. So for example, NVIDIA's earnings report is basically the oracular event of every quarter in the stock market. If anything comes in below expectations, expect to see wild sell-off corrections, stuff like that.
00:09:50
Speaker
But the problem is NVIDIA is now also motivated by that and has incentives around like advanced

NVIDIA's Role and Risks

00:09:59
Speaker
purchasing. Like, if you dig into the earnings calls, you know, sometimes demand is projected because people are trying to lock in pricing now for Blackwell GPUs, which is the latest thing. And then maybe...
00:10:11
Speaker
You have Blackwells in your data center that you're running, but now you feel like you need Vera Rubins, which is the next iteration. And so you have this accelerating race dynamic in the purchasing, and then taking on the debt to do the purchasing.
00:10:25
Speaker
And it is just not clear right now that, at least in the generative AI pool of things (we're talking about those huge models that everyone knows about: Claude, ChatGPT, et cetera), they can possibly recoup any kind of revenue to match the, I think at last count, $690 billion being spent.
00:10:50
Speaker
And so again, someday somebody pops the delusion bubble and is like, hmm, I feel like I need to call in those debts. And then you get these cascading effects, which are very deleterious.
00:11:05
Speaker
It's not as systemic as the housing crisis, right? We're not talking about like banks drying up and all credit freezing and literally like AIG, like the machinations of payments between large institutions just stopping. That was the scary part.
00:11:21
Speaker
But it's not going to be pretty for anyone when you consider that the so-called Magnificent Seven stocks represent anywhere between 30 to 34% of the S&P 500. Who's invested in that? Think of all the index funds that public pension funds are tied to, all of that. So now you're talking about people getting hurt in market corrections of which they're not really a part. It's not like the California Public Teachers Union is running model training, but they will be affected by the price of GPUs. And I just think there are patterns there that we have seen in previous bubbles that indicate we have very bubble-like dynamics going on. And then it gets very scary from that culture standpoint: no one wants to jump first until somebody does, and then everybody does.

Yeah, that concentration risk is very real. And I think there's another interesting element to that that I want to touch on with Taiwan. Yes. But the first concentration risk, which we've already alluded to in terms of the differences between this bubble and the dot-com bubble, is that the dot-com bubble was spread out across thousands of companies, a lot of them pre-revenue or early-revenue companies.
00:12:40
Speaker
Now the concentration is in the Magnificent Seven, which make up an unprecedented portion of the S&P 500. So when one of those goes under, there's a lot of concentration there. It can totally...
00:12:54
Speaker
Yeah, I mean, that's one of those things that feels like a little bit of an echo of 2008, because the Metas and the Microsofts of the world don't want the debt to show up on their balance sheets. They take out loans or they build debt financing agreements through special purpose vehicles, right? And so the debt isn't owned by me, it's owned by those people over there.
00:13:19
Speaker
And the reason they can do that is because there is this widespread belief that, like, Meta can't go bankrupt, right? And so you start to hear a few of those things like, well, they're good for it, because we just take it for granted that there are these really big companies. But I think recent runs on,
00:13:42
Speaker
like, Blue Owl, which is one of the big private credit players. Ah, yes, I heard that. You know, they've just had to say, no, no, you cannot redeem this debt, or put limits on redemptions. It's just beginning to show there's enough shakiness there that it feels very precarious. And also, to your point, when you were building out fiber, it was largely the telecoms. And yes, it was subcontracted out to any number of people on the ground, but it's largely underground. A data center is a huge physical entity, and there are a lot of construction jobs associated with it. Not as many as the AI industry would say, like, it's a job creator, because once the building's done, the building's done.
00:14:26
Speaker
But there's a lot of work that goes into it. And if you dig in very deeply, and Ed Zitron has done the best reporting on this, there's a lot of fanfare: Oracle signs an agreement to do this, that, and the other; Disney to invest a billion dollars, and so on. There are a lot of announcements. The actual reality is much harder to track. In fact, I don't know of any sites that Oracle has so far promised where any movement is actually happening.
00:14:57
Speaker
But you probably have contractors bidding on those things. So you have a lot of people downstream that are counting on these would-be jobs, and if it all freezes up, that again begins to affect a lot of people in a lot of other places. Yeah, and you had mentioned the obscure financing.
00:15:16
Speaker
I can't remember the exact details of this, but maybe you do. I think the example was Meta, where they're doing interesting things to move some of this debt off the balance sheet. Yes. I'm trying to think of the specific example that Meta did recently that I read about.
00:15:31
Speaker
Yeah, it's generally through special purpose vehicles. And so it's a thing that people buy into, and they can loan at a much higher rate than, say, an investment bank. But the whole reason to do it is, again, to be able to...
00:15:45
Speaker
say, deflect: that's not my debt, it belongs to these people over here, despite the fact that you are eventually on the hook for it. We have not yet seen, I pray to God, we have not yet seen collateralized debt obligations for AI, but I wouldn't put it past somebody to create one. And then you end up in that mortgage-like area where people are buying tranches of debt whose outcomes they otherwise can't see. And we don't know who the owners are. And again, the more obscure the risk, the shakier things get, because then people don't know where to pull the pin, and they can kind of panic, and then it just creates a cascade.
00:16:29
Speaker
Again, through a system that affects a lot of people who don't have direct skin in the game. I want to touch on the Taiwan piece because I think that's very relevant to these

Geopolitical Risks in Semiconductors

00:16:39
Speaker
concerns. Because we we hear, you know, everyone talks about NVIDIA.
00:16:44
Speaker
But I don't think a lot of people realize that 90% of
00:16:51
Speaker
these silicon chips come from Taiwan. And I've seen reports analyzing what would happen if China decides to take over Taiwan and limit access to those chips. What do you think about that risk, and how would it affect us if that were to come to fruition, which feels more and more likely as time goes on as a real possibility? Yes, I mean, the semiconductor supply chain is hella specialized.
00:17:26
Speaker
TSMC is the main manufacturer. And ASML, I mean, I feel like we're on Sesame Street, but ASML out of the Netherlands is the primary manufacturer of the lithography machines necessary to fab chips at the nanometer scale, right?
00:17:50
Speaker
So you have that concentration. Yes, and that is why, ostensibly, the CHIPS Act was passed. But to build a factory, and the mind share and skill set necessary to fabricate chips at that sophistication, is a decades-long project, right? It's not like I built a plant in Arizona and, ta-da, we have onshore chip manufacturing that's on par with TSMC.
00:18:20
Speaker
Well, not only that, but it's so expensive if we're going to offer that same commodity here in the U.S. Part of the reason why, my understanding is, the CHIPS Act wasn't successful is that, given the choice, companies don't want to wait and invest 3x more money when they can just get it from Taiwan quickly and without... Yes, and there are some exciting developments in the ability to diversify that. AMD and a couple of other new chip manufacturers are coming online to build for different kinds of models, which maybe we can get into.
00:18:56
Speaker
But yes, in terms of your question: there's one throat to choke, and it is an island situated in the middle of a geopolitical conflict that is many decades old.
00:19:08
Speaker
However, if your listeners hadn't yet heard, we're kind of going through a dress rehearsal right now. Helium is a huge component of silicon chip manufacturing because, when it is transported, it is kept at near absolute zero. So it's a vital cooling component.
00:19:30
Speaker
But when it gets shipped to those manufacturers, they can only hold it for so long, because as the temperature rises it turns back into gas. It's a commodity that dissipates, right, and decays over time.
00:19:43
Speaker
Roughly one third of the world's helium supply was taken offline when the war with Iran started, because Qatar produces a huge amount of helium through liquefied natural gas, as does the U.S. And that helium usually makes its way through the Strait of Hormuz to South Korea, to Taiwan, to a lot of places in Asia.
00:20:09
Speaker
I'm not close enough to the pricing of GPUs, but I promise you that anyone whose job it is to purchase GPUs is putting in those orders now, and has been, and is probably trying to get ahead of a supply pinch later on. So I'm sure for the next few quarters NVIDIA is going to look like a rock star. But until the helium supply comes back online and trade can pass through that strait normally, there's a problem. The secondary problem,
00:20:45
Speaker
which is worrying many more people with much more expertise than myself, is that Asia really relies on that oil. And you have a problem here, and I think Paul Kedrosky and some others have pointed this out: Taiwan is a tropical country.
00:21:04
Speaker
And if the government must decide whether to use energy to cool its citizens or to power its semiconductor manufacturing, you have now put up a choice between the economy and the populace, which is not great.
00:21:21
Speaker
And then you have these other countries way out in the South Pacific that rely entirely on gas for generator power. And if they cannot get it... I mean, in the U.S. we are in a privileged position. We complain about $4 gas, but we are talking about, like,
00:21:40
Speaker
the president of the Philippines saying don't take the elevator. We're talking about, I think, Laos shortening the school week to three days. Fishermen can't even take their boats out. That energy pinch is super real.
00:21:55
Speaker
And you can bet that China will start cutting deals with people. And if given the choice, there is going to come an ask. So there's already a lot of geopolitical movement going on right now that, I don't want to use the word threatens, but it certainly poses a realignment of a lot of stuff that we take for granted in the Pacific, upon which our AI industry relies.
00:22:24
Speaker
Yeah, and that's why, I mean, like anything, when you're talking about markets, it's so

Complex AI Market Predictions

00:22:30
Speaker
layered. It's never just a simple discussion of, like, oh, this is a new technology. Is this like the dot-com bubble?
00:22:37
Speaker
Like, the geopolitical component is obviously huge in all of this. And I don't think anyone was expecting this war with Iran. Yeah. And so the implications, we're still figuring it all out, and everyone's making these projections, but there's a lot of risk involved. And we're all going to feel it at some point if it doesn't get resolved. And it's obviously not just one problem to resolve. There's a lot of things happening. Yes, it's always more complicated than it appears.
00:23:06
Speaker
So we've been touching on this possibility of a burst and how complicated it is to make accurate predictions. For someone who's not watching markets every day, what do you think is the one thing they should pay attention to?

NVIDIA's Market Influence

00:23:21
Speaker
Yeah. Like, the pulling of the pin, I think, is a good way to say it. If you feel like there is one thing that could just trigger a burst, if that one thing is something people could pay attention to, what would it be?
00:23:37
Speaker
Yeah, all eyes are on NVIDIA. It sort of sits in an enviable and unenviable position of being at the center of these things, simply by dint of the fact that it has the best chips.
00:23:51
Speaker
But it means that a lot is reliant on NVIDIA. And I think that's where things begin to slide.
00:24:03
Speaker
Either they have supply chain pinches, or orders aren't keeping up. But you can't underestimate the amount of, how do I want to say this, energy, financial engineering, and PR that will be called upon to assure everyone that everything is okay. And I don't say that just about NVIDIA. I say that all parties involved have a vested interest in ensuring that everyone thinks everything is okay.
00:24:39
Speaker
You know, the large labs are spending gobs on electricity and training runs because they are building what Karen Hao correctly and derisively calls an everything machine, right? If I came up to you on the street and said, like, I am inventing this thing and it's going to help you do everything. You're like, okay, well, like what? Like everything, anything in your life.
00:25:05
Speaker
Can it wash my car? It can set appointments for you automatically. Can it feed me? You can take photos of your fridge and it'll develop recipes, whatever. Okay, this sounds great. What do you need for it? A trillion dollars.
00:25:19
Speaker
If anyone else said that to you, you'd be like, that's crazy talk, snake oil, nonsense, get out of my face. But that's essentially the promise that they've made in the market. And again, they have different avenues. Anthropic has really bet on the enterprise and coding.
00:25:35
Speaker
OpenAI can't decide what it wants to be when it grows up. It releases new products all the time and spins them down, whatever. But they've taken all of this on to build the everything machine, and if it doesn't pay out, that's going to be a problem.
00:25:51
Speaker
And there's only so long that companies buying those technologies can kind of swallow that snake oil before they are beholden to their shareholders about, like, how much did you spend on these licenses? What is the real productivity outcome?

AI Projects Failures and Skepticism

00:26:09
Speaker
And if they have a harder and harder time answering that, it gets very tricky very fast. Yeah, I have a couple of thoughts on that. I mean, the recent Khanmigo announcement was interesting. You know, they're like the Khan Academy AI chatbot tutor for kids. Microsoft invested so much money in it, and they shut down because the uptake wasn't there.
00:26:30
Speaker
I think that's interesting to think about. Oh, is that because people realized that teaching is harder than just the Socratic method with questions? Oh my God. Yeah. Yeah.
00:26:42
Speaker
So, I mean, it validated me and Kimberly, because we were like, okay, it wasn't just us that couldn't figure out how to do this. They had so much money from Microsoft and were giving it away for free.
00:26:53
Speaker
But you were touching on this, which is that it is this everything solution, all general purpose. And the revenue and the spending: when you look at corporate spending and then you look at the earnings for these companies, like OpenAI and Anthropic and xAI, it doesn't add up. Corporations are still not spending enough. Users are not spending enough, because we know that the true cost is still being heavily subsidized by investors. Yes. And so I think about one of the arguments that I sort of had early on, which I think was wrong, which is like, oh, well, eventually technology will make it cheaper, like it always does. It's used to improve upon itself, and the cost will go down.
00:27:40
Speaker
But then I really started thinking about these infrastructure costs and how much is being invested. And so I think there is still a huge, and rightfully so, concern about the valuations of these companies and their earnings. Because corporate spending, I think the last time I checked, was like $250 million. It's not even close to what it needs to be to justify this. And you brought up the fiber optic build-out. And again, I would refer people to Paul Kedrosky's amazing blog, but he very clearly shows that spending on data centers has already outstripped all of the money that was spent on fiber optic, by orders of magnitude, right?
00:28:17
Speaker
So where you see lots of money, you also see capital flows for investment and power. Right. And I think that just becomes a tricky combination. My fear of the bubble bursting in the near term is, obviously, lots of people following the S&P get hurt that have no skin in the game, especially public sector employees.

Potential AI Winter

00:28:44
Speaker
But also, I do worry about an AI winter again, because the amount of money that has flowed from VC into this is such that if it locks up or goes south, it will...
00:29:02
Speaker
take so long to be able to raise money again, to do anything again. And the thing that doesn't get talked about, and I will be pedantic for the sake of your listeners, is that the everything machine is the generative AI model: these models that use transformer architectures that are really compute and energy intensive.
00:29:26
Speaker
What it is doing is creating this gravity well that pulls energy and investment away from narrowly focused applications where AI is doing amazing things: more deterministic, more diagnostic, less predictive.
00:29:41
Speaker
Recently on our podcast, we interviewed a founder and CEO out of Morocco who has trained models on expertly annotated and validated sources to help small farmers identify crop failure. Like, what is the difference between a vitamin deficiency in a tomato plant versus a mold? Because that affects two things: how you treat it, and, if you can treat it in accordance with EU regulations, you have now opened up an economic avenue for a farm, right? That is an amazing use of AI, all the way down to the edge, where it can be operated on a phone. And they do have cloud models, but it just doesn't eat up resources, right? He's sharing resources in this data center that the EU built.
00:30:32
Speaker
That's amazing. The MIT researchers who discovered a new class of antibiotics, an entirely new class of antibiotics, with a deep learning model: it doesn't take a city block's worth of power to run, but it also took very high quality wet lab work.
00:30:51
Speaker
So there's just this huge difference with people who are using what's called the data-centric paradigm and very narrowly focused applications: what is the outcome I want, and then designing for that.
00:31:05
Speaker
Right. I want to discover new ways to attack MRSA. Okay, cool. This was an entirely new antibiotic mechanism that probably would have taken us years to discover, rooting around manually. I want more of that AI for my children.
00:31:19
Speaker
I do not need girlfriend chatbots. No one does. Or Sora. I'm so glad that they stopped focusing on Sora. And didn't they take it offline? Yes, because it was a thing no one asked for and has no tangible outcome. And again, you listen to this panic, right? Like, oh my God, Sora is going to upend...
00:31:42
Speaker
Hollywood. I was like, how can a probabilistic output, again, you cannot control the output consistently, replace the level of consistency of a Christopher Nolan film that is in his brain, shot for shot?
00:31:56
Speaker
Like, he knows what he's shooting, right? Patty Jenkins knows what she's shooting. And the idea that you can create a bunch of random-ass clips and string them together and call it a movie. Yeah, I mean, people clearly don't understand moviemaking, including...
00:32:08
Speaker
But I think you're touching on something: these really domain-specific, niche use cases are the things I think about when I think about the possibilities of AI that get me excited, especially because I was a healthcare researcher. But a crash, sorry, a crash, my fear is, and we're already seeing a little bit of this backlash, especially at the local level against data centers, it sours all public opinion on it.
00:32:32
Speaker
And then it enables politicians to run on sort of an anti-AI populism. And then all this good stuff just dies in the wilderness. It doesn't get the funding, doesn't get the investment.
00:32:45
Speaker
And then we sort of missed our chance. I mean, it'll come back around again, but it might not for decades. Yeah. And I also think about just how unexpected things are, like how stocks have gone down in the software market. No one ever thought that was going to happen. Yeah, lots of second-order effects for sure, you know, that we just don't know about. That's the anxiety, right? Like, what we were told in the neoliberal fever dream:
00:33:18
Speaker
Go to college, become a lawyer, a doctor, a radiologist, whatever, because you're safe. Like, the trades aren't safe; some classist interpretation that working with your hands was less-than. And now it's the paralegals who are in peril.
00:33:36
Speaker
You cannot overstate the psychic wound that that creates, because people will feel like they've been sold a lie. Like, you told me to take out these loans to go to college. If I did that, I would get this. I would get this seat in the economy. I would be higher than these people. And now you're telling me...
00:33:57
Speaker
That you're going to use an LLM to do this instead of the expertise that I cultivated is very damaging to public trust. And I think we use that word when we're talking about civil institutions and civic structures.
00:34:15
Speaker
But that lack of trust at a societal level in the social contract, it's really hard to get that back. And it can create a lot of violence in the near term.
00:34:27
Speaker
I think if people just feel like they don't have a chance and have no autonomy or agency over their future, that's when you get really weird things happening in politics.
00:34:40
Speaker
Well, and just in society in general. I mean, that's why there are more gang-related activities in poverty-stricken communities, you know, because when there is a power vacuum, people will...
00:34:55
Speaker
fight back in a way that takes their power into their own hands. Which is why I love that you opened with, I'm an anthropologist at heart. And, you know, what I hear people calling for is humanities people to be at the table and to say what this means. So I love that lens from you, especially.
00:35:16
Speaker
Yes. And I also have this preoccupation of late, maybe because I have young kids. Yeah. So if you think about the way we've organized human labor for, I don't know, 4,000 years, it's always been a specialty. There's the blacksmith, there's the cobbler, there's the wagon wheel maker, whatever.
00:35:34
Speaker
We basically ported that over to knowledge work. And, you know, here is HR, here is this person, here is this type of software engineer. It creates this highly verticalized skill set, right? You start at the bottom, you work your way up.
00:35:49
Speaker
But because we have those bands of specialization, we also have horizontal organization, right? There's this department, there's sales and marketing over here, there's software development over here, and that's kind of a pass-the-baton way that we do things, right? Oh, there's this huge, massive project. You have somebody at the top, usually executives from different lines of business, and the project kind of moves between those vertical bands. If you have things that are built to do narrow specialties very fast, like machine learning,
00:36:21
Speaker
it really starts to erode that vertical specialization. And I think if you wreck that vertical specialization, you also affect the horizontal movement.
00:36:31
Speaker
And so the one thing in cybersecurity, which is the industry I'm most familiar with, that I do not think we're thinking about is how we reorganize teams for machine-speed tooling, right? Things are going to come in so fast now that they just run into this wall of human meat, because we haven't re-architected around the bottleneck.
00:36:55
Speaker
Right. And my positive vision for it is, and I try to tell college students this, and I admit it's the least helpful advice, but if you're listening: you need to be an interesting person.
00:37:12
Speaker
You need to essentially have a Renaissance education, because when confronted with specialization, I think generalists win.

Education in an AI-driven World

00:37:23
Speaker
And I also think people who can pull from many different contexts will win. Not that I'm advocating for LLM-centric enterprises, but if you put me in front of Claude and ask me to develop a marketing plan, I'm going to kick ass at it versus a 22-year-old marketing graduate. I've just put in the reps. I have the time. I have the experience.
00:37:46
Speaker
And I just think we need to get away from this ROI measurement of education and get into this idea that you've got to know a lot about a lot, because you're going to have to bring that to the table, and the people who have more to bring have more to add.
00:38:19
Speaker
It's so interesting that you said that, because I was just reading yesterday that Yale brought together this special committee to produce a report on... It's like, we're the problem.
00:38:31
Speaker
We're the problem. Like, we've ruined higher ed with these Ivy League legacy... But I thought it was interesting. I still don't know if I agree with it because I'm still mulling it over, but they basically said, we believe the purpose of higher education is ultimately to create, preserve, and share knowledge.
00:38:52
Speaker
This is a debate we've had with David Shapiro: what is the true point of education? Like, I do think there's a long-overdue reckoning in higher ed, because I see the worst of the worst, and I see how it's changed just over the past decade since I've been in consulting.
00:39:10
Speaker
And I think it's a good thing; we do need a reckoning. And I think the trades have always been very valuable. I had a two-year degree that I don't use, but it got me a lot of life experience and helped me go on to the next thing. I've been talking about being...
00:39:32
Speaker
very next-step focused. It was very: do this, you graduate from this school, you get this grade to do this thing, to do this thing. And the thing I wish is, I wish I had taken weirder classes, because you just had access to random stuff in college. They did not explicitly do it, but the culture said, well, you should just focus on your major, because there's this thing and then you get this thing and then you do that thing. And I look back and I was like, why was I afraid to take botany? When else would I have used an electron microscope? I should have done that, is what I would say. Right.
00:40:15
Speaker
And when we talk about these... oh God, do not get me started on chatbot tutors or LLMs for education. Okay, so I think we've learned: just don't do take-home essays anymore.
00:40:28
Speaker
What if we returned to the Socratic method? I had this hard-ass college professor, shout-out Dr. Hans Tiefel, warfare and ethics.
00:40:39
Speaker
You could not pull that trick, anyone who's gone to college has done this, where you kind of skimmed the reading or you didn't, and you would just wait for someone else to comment in class and kind of regurgitate what they said in order to look like you were adding to the conversation. He was this no-holds-barred, did-not-suffer-fools German survivor of the bombing of Bremen, and he would just say, do you have an original thought, or are you just going to repeat?
00:41:11
Speaker
And it was so hard, and no one skipped the reading. But I am a much better writer for it, because we basically had to write a thesis in every essay. You had to generate original ideas. It was so hard.
00:41:25
Speaker
Okay, fine. And you're like, I guess you just have to write that in class, because people will ChatGPT it. But you still had to defend your ideas. And I think if you got people to do that in class, two things would improve. One,
00:41:38
Speaker
having to articulate it in real time. Pretty tricky, but we've been doing it for literally thousands of years, since ancient Greece. And then two, one of the skills that I think is very lacking everywhere: people feel nervous about speaking in front of other people. Yeah.
00:41:56
Speaker
And I was like, that's what you do the rest of your life: your job interview, arguing for budget for a project, arguing for a promotion. I don't know any instance in which you're not having to speak in front of people. You're not going to be able to chat your boss on Slack and say, I think I need a raise. It's not going to happen.
00:42:12
Speaker
Yeah. So the more practice you get defending your ideas in real time, I think, is a good thing. And I don't think that's the whole solution to the problem, but I think people are flummoxed about some of these issues.
00:42:24
Speaker
And we have ways of dealing with them that we have used for a long time, and it's just that we're scared to go back to those methods. The ed-tech industrial complex would try to sell you on this idea that it's a tutor that can differentiate, blah, blah, blah. Okay, I don't really have time to take down the technical problems in that argument in terms of the architecture, but that solution is clearly invented by someone who has no idea what teachers actually do. Right? Like,
00:42:58
Speaker
ask any school teacher and they know: I am aware that this kid's parents are going through a divorce. If you're in a Title I school, you're dealing with a largely traumatized population. They are already differentiating.
00:43:15
Speaker
And it is a generally caring vocation. I don't want to generalize, but I think somebody who tries to sell you on, again, a tutor for your kid through a computer wildly misunderstands the socio-emotional value that that teacher is providing to that student just by dint of a stable day, a stable environment, loving encouragement, stuff like that. It wildly misunderstands the process and is trying to drill us all on, again, this neoliberal fever dream of this input,
00:43:53
Speaker
you know, ROI outcome. It's not the right answer to the question. Yeah, I love that framing: it's a good answer, but you started with the wrong question. Faulty premise. Yeah, sorry, EdTech. I really feel like most of EdTech is predicated on a false premise. Yeah, it sounds like a great solution, but you started with the wrong problem.
00:44:19
Speaker
So I'll give you a concrete example from my friends in early childhood education. Large language models, for one, are very good at context understanding. I've always said they're really good at analyzing text,
00:44:36
Speaker
and sort of meh at generating text, because essentially you have what I call the tyranny of the mean, right? You're just trapped in the bell curve, and it's the outliers that make things interesting. So generating text, sort of whatever.
00:44:50
Speaker
But here is what they can do, and this would be the dream. What do teachers spend a whole bunch of time doing that they would not otherwise be doing? Taking notes. A natural language interface would be really great if they could just talk at their computer, right? Instead of having to type all these notes into an almost universally bad UI, what if I could just say: okay, this person, this person, today they did this, and I think it's this. And they can just have this journal, and then they can call back to it
00:45:26
Speaker
and not ask the LLM to do the differentiation for them, but help them with that recall, rather than having to form-fill a whole bunch of nonsense categories on a report card.
00:45:39
Speaker
Right. And then free them up to do more of the reflection and the teaching. Oh, can you pull up what I said about Jimmy's reading like three months ago? And you start to have that resource, rather than just: number on a line goes up and to the right, I guess he's improving. We digressed, but I think I have a way to bring this full circle, because I've been thinking about this connection back to the bubble and the concentration risk.
00:46:09
Speaker
So for better or worse, AI lowers the barrier to entry. Previously, maybe a technologist wants to create an ed-tech product. They have no knowledge of education, but they still have to get a lot of buy-in and investment and a team and a vision. Now anyone can create an educational chatbot, or anything for that matter. I mean, that's why software is struggling; investors and people are realizing anyone can build their own consumer application. Like, I replaced my CRM with my own.
00:46:38
Speaker
So this goes back to the risk concentration, because a lot of these companies are wrappers, meaning that they still rely on OpenAI's or Anthropic's or Google's models.
00:46:50
Speaker
And so if OpenAI were to go under, let's just say, they're obviously the giant in the space, think of all the companies that have been built using OpenAI's models. Or maybe they're model-agnostic,
00:47:04
Speaker
but a lot of their tools relied on OpenAI. Like, there's just a lot of risk there as well. And the educational piece is a great example, because really anyone... We had a guest on, who I think is lovely, who talked about Toby's Tutor, which she built for her son, who's dyslexic and ADHD.
00:47:24
Speaker
But she's not an educator. She's not a teacher. And that just shows how low that barrier to entry is, and how many companies we've never even heard of that have revenue, and maybe also have investors like we did at Moxie, are still relying on these same major frontier models. And so if one of those goes under, that's just another component of the bubble bursting that I don't think is getting a ton of attention.
00:47:53
Speaker
Yes, I would have said this was more of a problem in like 2023, 2024, but I do get the sense that at least the savviest of VCs are asking a few more questions about what's under the hood and whether it's a wrapper or a feature, because...
00:48:14
Speaker
I remember a bunch of companies coming out after ChatGPT. A lot of them were very specific, like, oh, you can chat with a PDF or whatever. I was like, that's all just going to be in the model.
00:48:28
Speaker
Like, that's just going to be a native feature of the model, right? And a lot of those did vanish. So I do think,
00:48:40
Speaker
and I do hope, that there's more due diligence now than there was then. But to your point, if anything is relying on an LLM as an interface product, yes, it almost certainly relies on one of the larger models.
00:48:56
Speaker
And I don't want to say that's systemic risk, but that I think goes back to the point of what is the purpose, what is the outcome that you're wanting?
00:49:09
Speaker
And if there are narrowly focused applications and they've thought through those questions, ideally they've developed a better business model. So again, going back to DeepLeaf in Morocco: his point was, look, small farmers, their experience of the internet is WhatsApp and Facebook.
00:49:30
Speaker
I can't build software and expect them to use it like software. So their UI is primarily designed to look like WhatsApp: take a photo of the thing and chat with it. But the chatbot functionality, I didn't ask him what it's built on. It could be built on open source, because what is required of the chat is not that sophisticated.
00:49:56
Speaker
And we have seen similar performance on many of these things with much smaller models, and those are out there. That genie is out. So I feel like if you've built towards a better outcome
00:50:10
Speaker
and aren't reliant upon the features of a foundation model, you're in a healthier place. And I think the smartest VCs are asking those questions. I have two thoughts, and I know we're getting close to time.
00:50:22
Speaker
One, I do want to point out: yes, I think the VCs, the more sophisticated ones, are sophisticated investors. But there is a huge layer of investing that happens at the angel level. Like, I'm an angel investor; we had angel investors. So a lot of these tiny startups, and people don't realize how many there are,
00:50:40
Speaker
are being funded by non-sophisticated investors. And because of all the AI hype, we raised money in like five days from angel investors and exceeded what we asked for.
00:50:51
Speaker
And I know that that happens a lot. And a lot of times the VCs are diversified enough to where they can handle the risk. Yes. But that is not the case for these smaller funds.
00:51:03
Speaker
And so I think there are just a lot more people who would be touched by that than people realize. Yes, and I think that goes back to my original fear: it sends a chill through the system such that people will become so risk-averse that the good stuff just... I mean, perception is reality. So,
00:51:27
Speaker
You could try to explain how your deep learning antibiotic thing is way better and focused than whatever, but they're still going to have this visceral reaction that I got fleeced on a whole bunch of snake oil.
00:51:42
Speaker
and that's the fear, that the stuff that can materially contribute to engineering, science, whatever, just doesn't get what it needs, because everyone's just afraid. Yeah, I agree with that. Well, I loved what you said about the vertical and the horizontal in the workplace and being a generalist, because I have found that my current job at a nonprofit is really benefiting from my work at a startup and my work prior to that as like a teacher-researcher kind of
00:52:14
Speaker
tech support person in an English department. I mean, you know... Yeah, human creativity is combinatorial, so you should have as many ingredients in that combination as possible. And a couple weeks back, the title of our show, I think, was something like...
00:52:30
Speaker
Jessica, help me here. The patriarchy is a ladder and AI is climbing it. And so when you were talking about the vertical infrastructure: on that show, we talked about how when women organize things, it tends to be more of a cycle, a unified, circular kind of thing where everybody does the teamwork, you know. But when patriarchal systems are in place, we tend to have this vertical structure and then these neat little buckets and categories. Yeah, I was just struck that, yes, this will disrupt that kind of a system, where people can't just jump in and be flexible and adaptable when something needs to get done. You have to be both willing and able to jump in and do it. Yeah, what happens? Yeah, I asked...
00:53:20
Speaker
a bunch of cybersecurity leaders at a recent conference. I said, raise your hand if you have someone on your team, that rock star where you're like, oh man, if only I had three more Susans or Carls, that would be the ideal.
00:53:35
Speaker
Everyone raised their hand. Everyone's got that person on their team. And I said, what is it about that person that makes them that rock star? And I gave them, you know, five minutes to think about it. And it came back almost to a T, and these were different groups: out-of-the-box thinking, creative, flexible, adaptive, not a super hardcore expert in this one thing.
00:53:57
Speaker
Hmm. But if you look at hiring job descriptions, there is this huge disparity between what we want and desire and what we are hiring for. And this is something that we need to reexamine.
00:54:11
Speaker
But I think it is a very big cultural shift from what we're used to, and it's going to require a lot of time to shake out, I think, for all of us. You know this new model that Anthropic said they've created? Mythos.
00:54:26
Speaker
Mythos. And I was reading something last night from someone who's an open source coder. They have a full-time job, but they do open source in their free time. And I have no idea if this is true, but I'm curious to get your take on it. She was saying the reason Anthropic did that was basically a message to all the people who work on open source software that they need to fix their bugs, and it's causing that whole community to get really worried about what the potentials of this model are. And they're worried that OpenAI isn't that far behind Anthropic and that they're not going to be as careful in releasing their model.
00:55:04
Speaker
Do you know if there's truth to that? Because I don't know much about the open source community, but she made it sound like there are a lot of companies that rely on open source software because it's free,
00:55:15
Speaker
and that this new model poses a huge risk to that ecosystem. There are two different questions. So first of all, yes, for your audience: open source, which is freely available, licensed software developed by many people who can change the source code and adapt it, and which is usually maintained by very dedicated volunteer developers,
00:55:40
Speaker
is a critical part of all software. Most commercial software has some kind of building block that is open source, because if I'm building something, why would I do it from scratch if I've got the building blocks here, right?
00:55:53
Speaker
Open source has also always had bugs, a.k.a. vulnerabilities. Here is where it gets a little tricky; I'll try to take it in parts. So those vulnerabilities existed and were exploitable and were problematic before Mythos, right? And I have many security-leader friends who would tell you: if you can't keep up with Mythos, then your program was kind of broken from the start, because, right?
00:56:18
Speaker
LLMs have been good at discovering software bugs since like GPT-3. So it's not really new. The bugs that were pointed out by Anthropic, like in OpenBSD, which is a critical open source operating system for firewalls, not to get too far down the rabbit hole...
00:56:39
Speaker
It was shown that much smaller models, 3 billion parameters as opposed to, I don't know, I think some of the reporting said Mythos was 10 trillion parameters, could find those same bugs.
00:56:53
Speaker
So here is the first question: is Mythos more capable, doomsday, kill everything? Maybe. Maybe that's also marketing. Right. I mean, I will say openly that I use Claude occasionally, so I'm not an Anthropic hater.
00:57:12
Speaker
But if we go back to the everything-machine promise: you trained a 10-trillion-parameter model, again, I'm not sure if that's the number, but we know it's significantly larger than existing models,
00:57:25
Speaker
and you spent how much money to do that training run, and you come out, and the big to-do is that it can't do the everything machine, but it can find software bugs.
00:57:37
Speaker
I feel like that is a business problem. But what if I make it sound so epically terrifying that the bug finding is now a part of the business model?
00:57:50
Speaker
There's some chicanery there in the messaging. The name: Mythos. Right. Exactly. Legends, heroes. Yeah. And it's so dangerous I can't publicly release it. That's fine. So the two questions are: is it crossing a Rubicon?
00:58:08
Speaker
I don't really think so. The other point, which is more important, was about power, which is what your open source person is saying.
00:58:19
Speaker
And Rafi Kokori, the CTO of Mozilla, had an amazing editorial in The New York Times saying: if this is so important, why not give access to the open source developers who maintain those libraries? We know who they are; let them use this tool to go fix those bugs.
00:58:39
Speaker
Instead, you have given early access to the most powerful, most expensive companies on the planet. I get it, you're trying to do economies of scale, but it's really just concentrating power in a different way. Right. So this is just one of many examples where, if I have to round out the episode, it is to say: be excited about AI, but you have to

Evaluating AI Promises Critically

00:59:04
Speaker
peel it back. You've got to question everything. And again,
00:59:08
Speaker
big fan of Claude, have used it. The doomsday stuff, I'm not such a big fan of. And telling everyone this thing is more dangerous than anything, even after that really expensive training run, feels kind of like a feint.
00:59:21
Speaker
Like, I've got to cover up the fact that it's not the everything machine, but it's this really sexy cyber machine. That feels like a problem. I mean, who benefits, right? What's the incentive here? Yeah, and for some of these people, I think, sometimes you're in it and you don't even know what incentive structure you're at the mercy of. And I know we're at time, so I'll leave you with this: humans are really weirdly obsessed with analogizing themselves to the latest technology.
00:59:56
Speaker
So right now we call these things neural networks. That term was invented based on the neuroscience of the 1970s. You cannot tell me that our understanding of neuroscience has not changed since then, but we call them neurons, even though it's kind of not a neuron in our present-day understanding.
01:00:12
Speaker
That's fine. So it's conscious, it's a brain, whatever. Descartes was obsessed with the hydraulics in the French gardens. He posited that the way our thoughts work was like hydraulics, changing pressure in our circulatory system.
01:00:29
Speaker
We did the same thing with the telegraph. This is a common thing we do: take whatever is the technology of the day and say, that's how we work.
01:00:41
Speaker
And so if you understand that premise, I think it becomes easier to question a lot of the promises, all the things that anthropomorphize models and promise things that can't be done. Again, oh, we don't know if the models are conscious. And I was like, my man, we have like 250 theories of consciousness right now.
01:01:02
Speaker
What do you mean? We can't even agree on what consciousness is. You cannot tell me that you can define consciousness in an artificial intelligence model. Anyway. Oh, we should have you back on. I could just go down the rabbit hole. I just listened to Michael Pollan's latest book.
01:01:16
Speaker
Gosh, so fascinating. I loved it. Yeah, that's fascinating, too. If you dug this conversation, be sure to check out Jessica and Kimberly's show. Once again, it's called Women Talking About AI.
01:01:29
Speaker
It's a very humanist perspective and it is everywhere you can find your podcasts. We will be back with our regularly scheduled programming next week. But until then, as always, stay real.