
AI is doing real good and real harm, but the hype is hiding both

S4 E39 · Bare Knuckles and Brass Tacks

The AI hype machine is taking up all the oxygen we need to actually stop the harm happening today.

This month we heard from three guests who didn't compare notes. Didn't coordinate. And all three circled the same thing: the #AI hype machine isn't just wrong, it's actively making things worse.

Capital flows going to “everything machines” instead of applications that actually accomplish tasks. Gas turbines burning methane next to communities already carrying four times the national cancer rate. AI chatbots mathematically (not metaphorically, mathematically) engineered to reinforce delusional thinking in vulnerable users. Deepfake abuse still expanding, still mostly targeting women and minors, still unsolved. This is the real harm inventory.

This month. Right now.

Meanwhile the discourse is about whether a model might hypothetically stage a coup in five years.

We're not doing doomer porn. We're saying watch the industry's hands, not its mouth. The boring risks are already here. The extraordinary stuff — the farmer in Morocco beating generalist models with expert-annotated field data, the researcher finding antibiotics with true wet lab work — that's also already here! It's just not getting the same headlines or the funding.

System Check. This month's episodes, broken down against current events and whatever's rattling around our brainboxes.

Mentioned:

Recommended
Transcript

Introduction and System Check

00:00:08
Speaker
Welcome back. This is Bare Knuckles and Brass Tacks, the tech podcast about humans. I'm George K. And I'm George A. And the monthly wrap-up is what we call the System Check, where we go over some of the topics, the themes, the ideas that came up over the episodes we recorded this month. And we try to break them down against current events, just whatever's in our brain box.
00:00:32
Speaker
So, George, I think across the three that we recorded this month, with Amber, with Elmadi, with Ryan, the first topic that came to mind is the hype machine is running on bad faith and everyone is just about sick

The Tech Hype Machine and Its Distractions

00:00:48
Speaker
of it.
00:00:48
Speaker
And that was without us even curating that particular theme. It just seems to come up across all the domains. Yeah, I mean, my thoughts on it are, what struck me about those last recordings is that this wasn't really coordinated.
00:01:05
Speaker
Like Elmadi wasn't briefed on what Ryan said. Ryan didn't compare notes with Amber. Three people with three totally different corners of the problem all circled the same thing: the hype machine is running on bad faith and it's actually making the field worse.
00:01:20
Speaker
And in Elmadi's case, he's in Morocco solving essentially a $300 billion crop-loss problem with a model trained on, you know, expert-annotated field data. And that's the actual promise of this technology landing on the ground in a place that the Valley doesn't think about. Now, while he's doing that, the discourse in San Fran is about whether a model might hypothetically plan a coup in five years.
00:01:43
Speaker
You know what I call that in threat modeling? It's distraction. It's not an accident. Sci-fi framing pays, doom pays, the singularity pays. What does not pay is telling a boardroom the problem is the same problem it was 20 years ago and they're still not doing it.

Impact of AI Hype on Innovation

00:01:59
Speaker
So at what point does this stop being annoying and start actively, you know, becoming harmful? I would say we crossed that line 18 months ago. The moment the hype pulls security budget into chasing rogue agentic AI scenarios while your log aggregation pipeline's broken, hype becomes a threat vector of its own. And I've put up my own material online lately, and that's kind of where I'm at. I don't know where you are.
00:02:26
Speaker
Yeah. I mean, does anyone remember that the world was ending because of Open Claw? Has anyone talked about Open Claw this month? It's just this, like, continuous barrage of, oh, this is the next moment. Oh, this is the next thing.
00:02:43
Speaker
Oh, this... And it's just awful. And I think, actually, to your point about solving problems on the ground, that the hype is actually doing harm. You know, I

Overshadowed AI Innovations

00:02:56
Speaker
think that there's tremendous psychological harm to a workforce that's being told you're going to be replaced any day now unless you use AI, but also if you use AI, maybe you prove that you're worthless.
00:03:06
Speaker
It's just like this neoliberal dystopia. But then also the true innovative shit just isn't getting funded because, you know,
00:03:17
Speaker
it's all wrappers around LLMs or something, and not the juice, like the stuff that could do better diagnostic scans or the stuff that could do better chemistry. And it's just really maddening.
00:03:31
Speaker
I mean, that's kind of the whole point, right? We're not looking at solving actual problems. We're looking at, again, addressing shareholder value, which, again, ultimately is where our economy fails.

Security Practices vs. New Tools

00:03:44
Speaker
Yes. Yes. And I think... One thing that I heard from an interview with the neuroscientist Anil Seth is, maybe it's just that language is really intoxicating, right? And that's why we believe in the genius of these LLMs, as opposed to, as he raised, no one thought AlphaFold was conscious or whatever the hype-y nonsense is, but it had a practical use case.
00:04:14
Speaker
You know. Yeah. I think it is what it is. All right. So let's turn to the next topic, which is what I like to quote an old weightlifting coach on, which is brilliance in the basics.
00:04:29
Speaker
For listeners who haven't heard it, I think I've told the story before, but I used to have an Olympic weightlifting coach. He would be like, everyone wants to come into the gym and add a few weights to the bar, and they want to PR their clean or their snatch or whatever. But they don't want to do pull-ups, and they can't do 50 push-ups, and they can't do the stuff that maintains the musculature to prevent the injury. They just want to go do the sexy thing. And that's why they get injured.
00:04:57
Speaker
Well, I think we have covered that too, right? Amber was very

AI Marketing Myths

00:05:01
Speaker
clear. You know, she said that AI security is kind of this term that gets thrown around as just sort of the vague cloud security term with sparkle emojis. I think that was my favorite takeaway.
00:05:14
Speaker
But it was just kind of this emphasis on continually new tooling without doing any of the real work, which is the process and the guts and the stuff like that. And you yourself, you had a post about that too.
00:05:28
Speaker
Yeah, I mean, again, I think, you know, Ryan nailed it on the whole Mythos thing. The model does not fix the patch management problem, which is kind of what my whole post was about: hey, we have to look at the fundamentals of network architecture and security, and actually build the solution toward what we've always done correctly.
00:05:55
Speaker
Again, I just don't think it fixes patch management. It doesn't fix log monitoring, identity hygiene, network segmentation, third-party access reviews. There's all the shit that you still need, generally, a human security team to manage. And it doesn't fix any of the things that have, you know, shown up in breach post-mortems in the last 20 years. You look at breach reports,
00:06:14
Speaker
we're not addressing those problems, and it's all there. I think, you know, speaking of Mythos, it's the big hype du jour. And, you know, Anthropic's got its IPO coming up and this whole thing. Like I called it out, I think, again, it's just a marketing play.

Democratizing vs. Centralizing AI

00:06:27
Speaker
You don't, you don't have this, like, apparently super secret, super hardcore technology that you create a super secret club for, and then have a whole marketing campaign about it and publish it in Bloomberg, that, you know,
00:06:39
Speaker
you're just asking for a comment. Yes, I'm glad you brought up the marketing thing, because let me walk you through this scenario. Say I am a company promising an everything machine, right?
00:06:54
Speaker
Like, I need a trillion dollars, but don't worry, because this is going to be able to wipe out 50% of white-collar work. It's going to do something amazing, right? What are the other quotes? A data center full of PhDs, right? It's amazing.
00:07:07
Speaker
So I do this large training run. I don't know, I've seen some reports that Mythos is considered like 10 trillion parameters or something. I'll tell you, that's like hundreds of millions of dollars in training costs.
00:07:19
Speaker
Okay. So I finished the training run. Do I have the everything machine? Oh, shit. I only have a model that like detects code vulnerabilities slightly better than before.
00:07:32
Speaker
That's not an everything machine. What did I spend that money on? I don't know. How do I dress it up? Ah, it's so dangerous. I mean, it feels like, yes, it feels like if I dress it up as a literal mythic object, I can avoid the questions about that 50% of white-collar work that was supposed to disappear.
00:07:54
Speaker
I mean, again, I haven't logged into it. I haven't seen it. I'm not part of the special club. But from what I understand, you know, what Mythos does is it finds every unpatched flaw in the system that you already have, like,
00:08:06
Speaker
not been maintaining, essentially. You know, that distinction matters. The asymmetry is not that AI is making attackers smarter. The asymmetry is that AI is driving the cost of exploiting your existing technical debt towards zero, right? And that's really what it is. Yeah, Marcus Hutchins just called that out too, if you saw that post last night. Yes. Yes, and also LLMs have been used to look at these sorts of vulnerabilities and chain low-priority vulns together since GPT-3. And in fact, researchers used open-source 3 billion parameter models to find the exact same bugs in OpenBSD that Mythos did. So I think it's the answer to the wrong question, right? The wrong question is, oh my God, what do I do now? The question is,
00:08:57
Speaker
What were the knowns, the unknowns, and the unknown unknowns that your tooling, your SAST, your DAST, your scanners just were never going to pick up? Right. And I think it's more about retooling your process around that rather than needing a new shiny object.
00:09:14
Speaker
Yeah, and I've got two points. The first is Amber's framing on that episode, you know, it kind of struck me the most, and it's something I've been saying too: AI security is nothing but the unresolved supply chain crisis, just running faster.
00:09:29
Speaker
You know, and I think we never solved the supply chain problem from SolarWinds. We never solved the open source dependency problem. Log4j didn't fix us. We just moved on.
00:09:39
Speaker
And now we're bolting automated code generation, agent orchestration, and third-party model consumption on top of a foundation that was already cracked. So, you know, I spent most of my time as a CISO just trying to look at how we solve problems. And every time a board or a non-technical senior exec would ask me, how do we solve this problem, the honest answer was honestly never just a tool. A tool is part of it, but it was never a tool. The honest answer was we have to do the work that we already bought tools to do. AI doesn't change that. It just makes it more urgent. Mm-hmm. So the organizations that are going to survive this, I think, will be the ones that, you know... or I should say, they won't be the ones adopting the most AI.
00:10:20
Speaker
They'll be the ones that, you know, can still tell you when their last vulnerability scan ran and who owns the remediation.

Capital Flows and Real-world Innovations

00:10:27
Speaker
And again, that was kind of the whole point of what I was trying to make with my post this week.
00:10:32
Speaker
Yeah. Well, you also brought up the fact that you're not in the special club, right? So I think that also brings us to this idea of, you know, democratization versus centralization. This was across a few of the other episodes, and I brought it up in the Ryan Clark episode, right? That Rafi Kikourian at Mozilla had said, why aren't we making this so-called powerful tool super available to the developers who maintain these open source projects rather than the most
00:11:03
Speaker
wealthy and concentrated market cap companies, right? Because those are the people who maintain the shit that's broken, right? They would welcome the chance to look for the bugs. Excellent call-out there. I really liked that. And the whole point was like, do you remember when the internet was about open source and community and trying to help each other out? And that's another thing too, because I've had a lot of feedback on that AI governance thing I put out.
00:11:28
Speaker
And I've had a lot of folks, like friends of mine who are in the consulting circles, who were like, why didn't you just package that up as a service? And I was like, the whole point is I don't want to make money off it. We have to go back to helping each other solve problems. And I don't want to profit off of every little thing. I know you don't, like,
00:11:47
Speaker
we don't have to. We don't have to be that person. I think that's what makes, you know, RSA and Black Hat so insufferable now: the profiteering is just too much. Yeah, I think.
00:12:00
Speaker
Yes, to your point. There's also like.
00:12:06
Speaker
I don't know what that impulse is that like, oh, you should package it up. I get it. But I feel like the calculus behind that is not that smart. Right. Because, yeah, you and I are working on some other things.
00:12:20
Speaker
But the return on you selling AI governance consulting services, and the BD required, is like, oh, George just takes on a fourth job. And you're like, or I could just put this shit out there and maybe somebody will, you know... like, what is the real return that you would have gotten on that? Were you going to retire off your AI governance strategy consulting work?
00:12:44
Speaker
No, that's not it. Yeah. Yeah. Whatever. But I also think, back to, you know, our friend in Morocco solving real-world problems, there's this concentration of capital flowing into the same few companies, and, you know, I guess in some ways,
00:13:09
Speaker
maybe it's a blessing in disguise. Maybe that's a very privileged thing to say from the West, but he's had to architect things in a really constrained way because there's just not billions of dollars flowing into ag tech that's going to help small farmers, because it's not sexy enough and it's not, like, AI girlfriends or some shit.
00:13:28
Speaker
But, I don't know, maybe that's good, but I see a lot of stuff coming out of Europe. I'm excited about a few things I've seen out of Singapore and India, and it just feels like the concentration is not... I would rather have the flourishing we had in the space race. Obviously there were some, you know, specious reasons for trying to get to the moon, but it also gave us basically everything we live with today, because as many people as possible were allowed to throw their ideas at the wall.
00:13:59
Speaker
Yeah. And I think that's kind of the whole point: what are we focusing our efforts on? And I think, you know, eventually this AI hype bubble is going to burst on its own, or whatever you want to call it, the correction.
00:14:12
Speaker
We've already seen like half the data centers that have been, you know, proposed to be built have already been canceled or postponed indefinitely. So I just think. Yeah. Or they're just not online. It's like, it's a lot harder to build a data center than you think. So it's like all hypey announcements and like the bulldozers are not moving.
00:14:30
Speaker
There's bigger human problems to solve. That's what our friend kind

Tech Utilization Failures in Politics

00:14:34
Speaker
of showed us. There's bigger human problems to solve. And that can be done with smaller resources. Yeah. And look, you ask anyone in medicine: we should be solving the healthcare data provenance problem so that we can use AI to more quickly detect cancer and long-term diseases. And we should be using that to actually optimize research so that we can find cures to, like, lupus. And, you know, we could look at, like...
00:14:58
Speaker
You know, even in, you know, third world countries as well (I don't know if that's the proper political term anymore), you look at STIs and a lot of the transmissible diseases there that are still running, you know, pandemic-style.
00:15:11
Speaker
We could solve real human problems that improve the quality of life, but we as a Western society just don't want to do so. And even ourselves, we are being gouged right now at the pumps, at the grocery store, cost of living and inflation. And we are not doing anything. We're not forcing political will out of our leadership to actually use technology to improve things. Everything is just about stock prices and shares. And we're not a political podcast, but holy shit. I mean, that's kind of the problem.
00:15:42
Speaker
Yeah, I think I heard something that was about trying to change... Yes, the political will, like trying to change the state's relationship with technology and its citizens, and that interface. So, for example, there was an example given in that particular podcast.
00:16:00
Speaker
The IRS could easily build a pretty intelligent system that would pull on existing records. They already have them against our social security numbers and our tax filings from the past.
00:16:15
Speaker
Pre-populate, use optical character recognition from LLMs to read W-2s, whatever. And, kind of, I don't know, fuck TurboTax. You could just give away this free tool.
00:16:29
Speaker
And this thing that is super stressful, which is paying your income taxes, could be really easy for a lot of citizens. That could be done today. There is no technical hurdle to that problem. The tax argument is really interesting too, because I've always found taxes to be a really silly exercise, because I have to do all this math and file these forms to send to the government. Canada is the same thing.
00:16:55
Speaker
And the government is going to tell me, like, oh yeah, that's how much you owe, or no, you actually owe more. And I'm like, but you already know how much I owe. So why are we playing this game?
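To make the "no technical hurdle" point concrete, here is a minimal sketch (ours, not anything built or described on the episode) of what pre-population means in practice: the wage records the agency already holds, keyed to a taxpayer ID, get folded into a draft return that the filer only confirms or corrects. The field and function names are illustrative placeholders, not a real agency API.

```python
from dataclasses import dataclass, field

@dataclass
class W2Record:
    """One employer-filed wage statement, as the tax agency already holds it."""
    employer: str
    wages: float
    tax_withheld: float

@dataclass
class DraftReturn:
    """A pre-populated return the taxpayer only has to confirm or correct."""
    taxpayer_id: str
    total_wages: float = 0.0
    total_withheld: float = 0.0
    sources: list = field(default_factory=list)

def prepopulate(taxpayer_id: str, records_on_file: dict) -> DraftReturn:
    # The agency already indexes these filings by taxpayer ID; no manual
    # entry is needed for the common case of wage income.
    draft = DraftReturn(taxpayer_id=taxpayer_id)
    for rec in records_on_file.get(taxpayer_id, []):
        draft.total_wages += rec.wages
        draft.total_withheld += rec.tax_withheld
        draft.sources.append(rec.employer)
    return draft

if __name__ == "__main__":
    on_file = {"123-45-6789": [W2Record("Acme Corp", 62_000.00, 7_400.00)]}
    print(prepopulate("123-45-6789", on_file))
```

The point of the sketch is that the hard part is data the agency already has, not any novel AI capability; the OCR-for-W-2s step mentioned above would only matter for records filed on paper.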
00:17:08
Speaker
To me, it's an asinine exercise. And that's, again, where we could improve automation and efficiency, but we're not doing it. Yes, and also, thinking again about these human outcomes, thinking backwards. So, for example, the frame of the debate sometimes is: Waymo wants to come into a city. We know Waymo is safer. I've ridden in one.
00:17:31
Speaker
It's kind of an exquisite experience. Yes, demonstrably, statistically safer than human drivers. Okay. From a labor perspective, obviously not good for taxi drivers in major cities, Uber, Lyft, elsewhere.
00:17:47
Speaker
But is that the question? Or is the real question: could we use AI to better understand traffic flows to improve public transit, which is infrastructure we already have and underinvest in, and you get fewer cars on the road? I don't know. It's like we're always missing the forest for the trees.
00:18:08
Speaker
But again, that gets into the argument of, there's the automotive industry, which has a massive lobby, and they want you to buy more cars. And then smart cities: we can have a whole conversation about smart cities and smart city infrastructure and how that's supposed to go.
00:18:22
Speaker
Then it also gets into the whole digital ID conversation. And that pisses off a lot of people. I'm just raising the points of all the related things when you bring that up. And I think, you know, I think of the bigger problem, where it's just the enshittification of everything, where cars used to be made to last. Like, I remember my family's old Ford Tempo lasted like 20 years, man. 16 years. 16 years for that car.
00:18:49
Speaker
And it went on countless trips up and down from Canada to the States. Now you're lucky if your car lasts you three to six years. And we just... things are more expensive and the quality is reduced.
00:19:04
Speaker
We're not actually spending money solving substantive problems because investors don't want to put money into that. And I think governments don't have the courage anymore to make policy decisions that are just based on pure governance. Because if you're an elected official in the West, be it Canada or the U.S., your reelection campaign begins literally the second day after you've been elected to office. So you're not actually thinking about good governance at any point.

The Importance of Data Quality in AI

00:19:30
Speaker
Oh, yeah. I'm a huge proponent of very strict term limits. Like, this is the clock. This is your time. You may get reelected three times in the House, two times in the Senate. You're out.
00:19:42
Speaker
And just, what is your legacy? What are you going to leave behind? I don't think that public office should be a career. I think that's a fucked-up incentive structure.
00:19:52
Speaker
All right. We'll take a quick break, and we will be right back with more fist shaking, maybe. Or maybe we'll get a little bit more positive. I don't know.
00:20:20
Speaker
All right, and we're back. So there's been a big argument around data quality, right? And that data quality argument is finally starting to get some traction. And, you know, it cuts against a lot of the scaling consensus, right? So we look at our friend Elmadi's model, which is focused, field-validated, you know, expert-annotated.
00:20:39
Speaker
You know, I went through and did the flip side of that, where I talked about model collapse, data decay, and what you get when you scrape the open web. And there's the Stanford AI Index and, you know, data-centric paradigms.
00:20:50
Speaker
Where do you think we are today, and where are we heading tomorrow, in terms of the data quality argument currently taking place in industry and in media? Yeah.
00:21:01
Speaker
Yeah, well, first off, I think that it's not loud enough in industry. I think it is and has been since, I think, 2024, which is when the first data-centric paper might have come out. So let me back into that. I got convoluted.
00:21:22
Speaker
I call it the scaling hypothesis. I refuse to call it the scaling laws because despite what Sam Altman thinks, you can't just call things laws and just like will them into being. It's not scientifically validated. So that was the idea that you just needed more data, more compute, and we're kind of still living in that. And the bigger you made the models, the...
00:21:40
Speaker
the better they got. And that was true up until a point. And then it hit this wall. I don't care what anyone says. All of the improvements that we have seen to date since GPT-4 and beyond have really been in the post-training phase, whether it's fine-tuning or RLHF or reinforcement learning.
00:21:59
Speaker
It hasn't been from just ingesting more data, and they have effectively run out. Now, from the academic side, where really true innovation happens, you know, discovery of a new class of antibiotics, all this other juicy stuff that doesn't get as much play in the press.
00:22:19
Speaker
Those are scientists working with, sure, deep learning models, statistical recognition, probabilistic models, whatever. But obviously they're scientists and they're academics, so it's focused on a problem statement and a unique data set.
00:22:34
Speaker
And in the case of the one MIT set of researchers that did the antibiotics, it was true wet-lab work and true data. And so I think, as you point out, the Stanford 2026 AI Index, which I think dropped this month, did have a section on data quality as correlating with
00:23:00
Speaker
output quality. And so we have started to see that shift: smaller institutions are finding that either by pruning the data or by just taking the time to front-load it with high-quality annotation, you can get massive performance improvements. And also, again,
00:23:17
Speaker
solve discrete problems. It's really hard to make an everything machine and make money doing it. And I just wish some of the cool stuff that is happening in academia, which will get licensed out or, you know, spun out into startups, but...
00:23:34
Speaker
that innovation is going to need more attention and more capital flows. And if the correction comes against the big frontier labs, LLMs, whatever, and that capital flow freezes, my fear is that the good shit never gets funded and it just sort of dies on the shelf or sits dormant for way too long.
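As a rough sketch of the "prune the data or front-load it with high-quality annotation" idea mentioned above (a generic illustration, not the methodology of any guest or paper discussed on the show): keep only training examples whose independent expert labels agree above a threshold before any fine-tuning happens. The field names and the agreement heuristic are assumptions for the example.

```python
def filter_by_annotation_quality(examples, min_agreement=0.8, min_annotators=2):
    """Keep only examples whose expert labels meet a quality bar.

    Each example is a dict with a 'text' field and an 'annotations' list of
    labels assigned by independent annotators. Agreement is the fraction of
    annotators who chose the majority label, a crude proxy for label quality.
    """
    kept = []
    for ex in examples:
        labels = ex.get("annotations", [])
        if len(labels) < min_annotators:
            continue  # not enough expert eyes on this example
        majority = max(set(labels), key=labels.count)
        agreement = labels.count(majority) / len(labels)
        if agreement >= min_agreement:
            kept.append({"text": ex["text"], "label": majority})
    return kept

raw = [
    {"text": "leaf shows early blight lesions", "annotations": ["blight", "blight", "blight"]},
    {"text": "blurry photo, unclear symptoms", "annotations": ["blight", "healthy"]},
]
print(filter_by_annotation_quality(raw))  # only the unanimous example survives
```

The design choice is the point: a smaller, well-agreed-upon dataset is often a better starting point for a domain-specific model than a much larger, noisier scrape.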
00:23:57
Speaker
You know, I think you're bang on. I would actually argue this is where the narrative is finally going to crack. Because I think the scaling argument has had a decade to deliver on its own terms, right? More data, more parameters, more compute, better outputs.
00:24:10
Speaker
For a while it worked, but I think what our friend in Morocco demonstrated and walked us through was a really good counterexample. More focused, field-validated, expert-annotated data on a specific domain actually produces real-world results.
00:24:27
Speaker
I mean, he did not need the entire internet. He needed the right data. And his models beat generalist ones on a problem that he's actually solving. So that really, really needs to be hammered home.
00:24:38
Speaker
The flip side of this argument is, you know, the one I spend most of my time on, which is that if you scrape the open web indiscriminately, two things are going to happen. First, you inherit every piece of garbage, every bias, every synthetic poison artifact that's been injected into the data supply for the last three years. And second, your models progressively train on their own outputs, because the open web is now full of AI-generated content. So that means the model's going to collapse. That's not theoretical. There are papers showing it measurably. Yes, mathematically for sure.
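A toy numerical illustration of that recursive-training point (our sketch, not the setup of any specific paper): fit a simple Gaussian "model" to data, sample new data from the fit, refit on the samples, and repeat. Because each generation only sees the previous generation's outputs, the learned spread tends to drift downward and the tails of the original distribution disappear.

```python
import random
import statistics

def fit(samples):
    """'Train' a one-dimensional model: estimate the mean and spread of its data."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def generate(model, n):
    """Sample synthetic data from the fitted model."""
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
real_data = [random.gauss(0.0, 1.0) for _ in range(30)]  # generation 0: real data
model = fit(real_data)

for gen in range(1, 26):
    # Each generation trains only on the previous generation's outputs,
    # a crude stand-in for scraping an increasingly AI-saturated web.
    model = fit(generate(model, 30))
    if gen % 5 == 0:
        print(f"generation {gen:2d}: learned spread = {model[1]:.3f}")
# In a typical run the printed spread shrinks generation over generation:
# the model collapses toward its own mean and loses the original tails.
```

Real collapse papers work with actual generative models rather than a Gaussian, but the mechanism is the same shape: estimation error compounds when each generation only ever sees synthetic output.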
00:25:10
Speaker
Working in operations, you see it in the outputs. And I think Stanford's AI Index for this year finally put some of this on the record, because transparency on training data is dropping. You know, the foundation model transparency index scores fell, I think, from like 58 points to 40 points year over year. Yeah. The most capable models disclose the least. And, you know, that's the tell. When frontier labs will not tell you what's in the data, it's because the data is a liability, to be honest with you, and the frontier labs just don't want to

Ethical Concerns in AI Data Practices

00:25:44
Speaker
articulate that. yeah Or they settle out of court.
00:25:46
Speaker
Yeah. And here's the part that I think is not being said loudly enough. And I think it might possibly be the most controversial statement of this episode. And I did actually look at the script beforehand, and I wrote some thoughts down, and I'm like, OK, cool. Let's piss people off.
00:26:03
Speaker
I think the labs racing China on data volume are converging on a data ethics posture that actually starts looking like China's. Oh, good statement. Good statement. I will also say the race for compute capacity, which you pointed out from Dmitry Alperovitch's statement before the, was it the House or the Senate Select Committee on China?
00:26:30
Speaker
I was very surprised slash disappointed by that, because, as you've pointed out, two things are true. The data centers are not actually getting built as fast as they could be. Everything that was promised to be online this year: delays, delays, not happening.
00:26:48
Speaker
There's a lot of other logistics involved in data center stuff that makes it doubtful that they'll come online. And then, plus, China has placed all their bets on open source.
00:27:02
Speaker
And so they're kind of like running with that space, including lots of data sets. I feel like the race for compute is a trap.
00:27:13
Speaker
Right. Like, we're going to be strip-mining our side of the planet to build the shit that doesn't work, that doesn't do what it's supposed to do, while they quietly, you know, just develop super specific, ready applications that their economy can actually ingest and apply.
00:27:36
Speaker
Well, I'll say this, right? I think if your competitive dimension is to just scrape more and scrape as much as possible, you eventually scrape the things that you said you would not scrape.
00:27:47
Speaker
Of course. And that turns into a structural problem, where the only way out, I think, is data

Cultural Challenges in AI Adoption

00:27:51
Speaker
provenance. Like, I just spent last night talking to a whole bunch of senior Canadian government officials about data provenance.
00:27:57
Speaker
You know, and knowing what's in your training data, being able to attest to it, being able to remove what should not be there, that's going to be the key, because, you know, that's where the next generation of defensible AI is going to get built: when you have models that can actually be verified to be trained on the data that they're supposed to be trained on. Especially if you expect them to make critical decisions or inform critical decisions,
00:28:21
Speaker
processes, and workflows, right? Like, if you need it to do important stuff, you have to know that you trust the output, you know. I also want to say something that I think got lost in the episode, which is I really respected how Elmadi recognized the cultural
00:28:45
Speaker
frame of his end users, right? He said, you know, these small farmers generally understand WhatsApp and Facebook. So creating some weird software UI is just not germane to them. So let's just make it as easy to use as something that's very familiar. And the cultural anthropologist in me was so happy to hear that, because I think
00:29:08
Speaker
we have this hubris in the West that literally everyone should just bend towards Silicon Valley design decisions. And it just doesn't always have to be that way.
00:29:20
Speaker
But yeah, you're right. Yes. Yes. I think it's a trap. Last point on that. I was joking with our mutual friend, Mike, the other day.
00:29:32
Speaker
There was a headline, I think it was a few weeks ago, about a software update that created a huge traffic jam on some highway in China. I'm going to try to find the source of this article. And it was a bunch of Baidu-powered robo-taxis kind of locked up on the highway.
00:29:48
Speaker
And it was being shared like, ha-ha, look at Chinese software, blah, blah, blah. And I wanted to pause and say: much like plane crashes, the reason that's a headline is because it doesn't happen very often.
00:30:04
Speaker
But also, are you missing the point that they have hundreds of these machines on the road, operational, today? Can you point me to another American city where people are just jumping into robo-taxis? And that's what I talk about as: does the economy have the capacity to uptake the technology? Yeah.
00:30:23
Speaker
And if we are racing towards compute to build the everything machine and we sort of get distracted on that, I feel like that's a fool's errand, while the adversary develops an economy that is actually powered by very intense and discrete applications that are force multipliers. Anyway, that's right. Well, I think adoption, before we close this off and go to our last topic, I think adoption is a big thing. So, you know, it came up today. I was just talking with some Canadian government officials, and an example came up: one lady was talking about her dad, who had been a veteran for almost 40 years. She was talking to him about going to see the doctor, and he was getting a report from the doc about something he's dealing with, and there was some AI used to help the diagnosis, and he fundamentally didn't trust it because of the AI.
00:31:16
Speaker
And really because of the data. And the first thing he had asked her was, well, where's that data going, and all this? And she was kind of beside herself, because she understands the system. She understands how that model is trained. She understands it because it's government-approved.
00:31:30
Speaker
But then she's laughing to herself, because she's like, yeah, but you give all your data to Home Depot so you can get it. And you don't question that at all. Bingo.
00:31:42
Speaker
Bingo. That's the adoption. That, to me, was like, that's the adoption problem in a nutshell right there. Yes. Yes.

Immediate Risks of AI

00:31:49
Speaker
Yeah. All right. Well, we're going to close out with... we started with hype, which is sort of future-casting. We're going to stay grounded in the present as we come to the end here. And I guess what we want to say is:
00:32:04
Speaker
Not doomer porn, not Eliezer Yudkowsky saying it's going to murder us all, not killer robots.
00:32:14
Speaker
The real risks are here. They are fucking happening right now. We talked about human outcomes. You talked about, what was it, AI...
00:32:27
Speaker
chatbots, sexbots on OnlyFans, taking money away from real sex workers. Yeah. I mean, I don't care what you feel about sex work, but that is a real instance of somebody with concentrated capital taking from other people. I literally just today read a story about someone that had used some image generation software that they got through Google. They're an Indian med school student, and they financed their med school by creating a MAGA-themed influencer who's just, like, a pretty girl doing kind of the whole pretty-girl, sexually soft-core online kind of content that, you know, MAGA loves.
00:33:08
Speaker
And then they killed it. And it was a dude. It was a dude the whole time. That person doesn't exist. And I was like, uh, I can't even get mad at that. That was a real thing.
00:33:21
Speaker
So, yeah. Yeah. The extent to which people cannot recognize AI-generated imagery is kind of appalling. I thought we would be with it a little bit longer, but it's full bore. Anyway, we have those harms. We have...
00:33:36
Speaker
AI psychosis that is in fact real: documented cases, numerous tragedies. And the scale is amazing and terrifying, right? Like, even if 1% of OpenAI's, you know, 800 million weekly active users have an AI psychosis problem, that 1% is a fuck-ton of people.
00:34:01
Speaker
Like enough that it is a public health emergency. um And I'll try to also track down this paper. There is an MIT study that found that basically...
00:34:14
Speaker
That AI psychosis that kicks in, that mirror world that people start to live in after a certain time is a mathematical inevitability of the way large language models are trained.
00:34:26
Speaker
So it's not something that you can put on people, like, oh, they're weak-minded or these are weak individuals. It is almost a mathematical certainty. And I will try to track that down. So there's that.
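For scale, the back-of-the-envelope arithmetic behind the "1 percent" remark above (our math; the 800 million weekly-active-user figure is the one quoted on the episode):

\[ 0.01 \times 800{,}000{,}000 = 8{,}000{,}000 \]

Eight million people a week is indeed public-health-emergency territory if even the low end of the prevalence guess holds.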
00:34:39
Speaker
Yeah, and I think, look, at the end of the day, the real risks are boring and they're already happening and no one wants to talk about them, right? Amber kind of said it cleanest. It's not Skynet.
00:34:51
Speaker
It is the random thing that you downloaded from an X post or from Reddit, right? Or the case of Vercel: somebody downloading some Roblox shit that was loaded with a Trojan.
00:35:06
Speaker
That too, right? Like, let me walk through the actual harm inventory from this month alone, right? So, gas turbines. You had literal gas turbines running unpermitted next to, you know, majority-Black neighborhoods in Memphis, Tennessee (or Mississippi, I should say, sorry), you know, burning methane to power chatbots.
00:35:25
Speaker
You know, peak nitrogen dioxide rose by, what, like 79% or something after the xAI data center began operating, in a community where residents were already, you know, carrying cancer rates four times the national average.
00:35:38
Speaker
The Harvard School of Public Health estimated between like $53 and $99 million in annual health damages from a single Vantage facility in Loudoun County, driven primarily by, you know, three to six additional premature deaths per year.
00:35:56
Speaker
The NAACP, pardon me, forgive me, the NAACP has already filed a lawsuit. And so I don't think this is a future risk.
00:36:08
Speaker
I think it's, you know, a now thing, because this is all this month. And on the AI psychosis thing, right, MIT CSAIL and the Lancet Digital Health group are now publishing formal typologies and, like, Bayesian models on how sycophantic chatbots essentially reinforce delusional belief in vulnerable users. The Human Line Project has documented nearly 300 cases of delusional spiraling, with serious cases linked to at least 14 deaths, five wrongful death lawsuits filed against AI companies.
00:36:45
Speaker
The mathematical modeling is very real now. It's not speculation. And deepfake abuse is still expanding. Yeah. It's still mostly targeting women and minors. And that has not been solved, George.
00:36:59
Speaker
And the powers that be have no interest in solving it, it would appear. So, yeah. I just see it as a problem that, because the technology itself is still being shown as profitable, we're just, you know... like, notice the pattern.
00:37:17
Speaker
These harms do not have... it's actually not being shown as profitable. It's just being promised. It's all promises. I'm saying, notice the pattern, right? These harms, you don't have a single villain that you can point at and say that that guy is the bad guy in the AI story. And that's kind of the problem, because that's how people's minds work. The gas turbines in Memphis are not malevolent.
00:37:38
Speaker
You know, a sycophantic chatbot is not malevolent. A deepfake generator is just a model weights file. The harms are compounding at the floor level, in places, you know, where the hype conversation has no vocabulary.
00:37:51
Speaker
And, like, that's my real concern. It's not that a superintelligence wakes up and decides to kill us all. It's that the ordinary systems that we are building, in the ordinary way that we are building them, are already making people

Accumulating Harms in AI

00:38:03
Speaker
sick. And they're already driving people crazy. They're already hurting people with the least ability to fight back.
00:38:09
Speaker
The doom conversation keeps... Acid baths for the social contract, right. Like I'm saying, man, the doom conversation keeps pulling oxygen away from the work of actually stopping it.
00:38:23
Speaker
Yes. 100%. 100%. To quote Peter B. Parker in Spider-Man: Into the Spider-Verse:
00:38:36
Speaker
Don't watch the mouth, watch the hands. Anyway, we'll leave you there, listeners. I don't think we meant it to be as, sort of, as it turned out to be, but...
00:38:50
Speaker
We're frustrated, but we are trying to focus on the stories that illuminate new ways of thinking, or ask new questions, or at least force a different kind of thinking, because otherwise we're all kind of lemmings going off the cliff, or we're all just sheep headed in one direction.
00:39:06
Speaker
um So if you stayed with us this long, you know, hopefully keep listening, keep asking questions, stay sharp, stay critical. AI is a very powerful technology. It has a lot of promise.
00:39:18
Speaker
But it in and of itself also needs to be held to account. And please engage with us. Please reach out to us. We're on LinkedIn. Most people can reach us. You can look us up. We'll start figuring out other social media. I've got a funny cyber page on Instagram. You know, we can do other things, but primarily we're on LinkedIn. You can reach out to us. And please let us know if there are topics, or things you've seen, or examples from your real lives that we didn't cover, because we want to platform them. We want to platform you. We want to give you a chance to talk about it.

Audience Engagement and Podcast Support

00:39:55
Speaker
And let's engage, because if no one else wants to talk about it, then let this be the show that does. Yes. And for lack of a better category, I guess you can call us technology humanists. So if you're an artist, if you're a writer, if you're a musician, if you're building something cool that you think can solve real-world problems, we want to talk to you.
00:40:16
Speaker
All right. With that, we are out. Have a great week. We will talk to you next Monday.
00:40:26
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:40:40
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.