
Are We Building a Star Trek Future or One that Looks Like Minority Report?

S4 E8 · Bare Knuckles and Brass Tacks
126 plays · 13 days ago

This week George K and George A switch formats to consider the deeper questions behind recent tech headlines.

The hosts dig into the philosophical tensions driving today's biggest tech stories. When does technological dependency become too dangerous to ignore? How do we distinguish between genuine innovation and elaborate pump-and-dump schemes dressed up as progress? What are the real costs when entire economies become intertwined with a handful of companies?

They explore whether we're witnessing the early stages of a historic bubble or if we're already past the point of no return. The conversation touches on the ethics of deploying untested technology on vulnerable populations, the normalization of surveillance capitalism, and why regulatory capture might be democracy's biggest threat.

Most importantly, they ask the question that should keep every technologist awake at night: Are we building the future we actually want to live in, or are we just building the future that's most profitable for a few?

The news examined:

Mentioned in the discussion:

Transcript

Podcast Introduction and Scope

00:00:00
Speaker
We're here for the AI, folks. We're here for the AI. Just the shit that works and solves real problems. God damn it. Oh, I have been waiting for so long to just have an episode where we just beat the shit out of AI because I'm over it.
00:00:22
Speaker
Oh, it's true.
00:00:29
Speaker
Yo, this is Bare Knuckles and Brass Tacks, the tech podcast about humans. And this week, we got something different. George, tee it up. Right, so we are on our artistic journey of making this show a little bit wider in terms of its audience scope and its content.
00:00:49
Speaker
We are still always going to be a tech-based podcast, but now we're going to be kind of more tech-adjacent. We want to talk about what the implications of technology are and how it's affecting, you know, day-to-day lives. And we want to praise those that deserve to be praised. And we want to call out the things that we might find concerning.
00:01:09
Speaker
in our little limited scope and in our wonderful audience that follows us around. So it's a work in progress. And we told you, our friendly listeners, that we're going to go on this journey together this season.
00:01:21
Speaker
And we weren't kidding. So this week, we're going to start off with something a little different than what we usually

Major Headlines and US TikTok Deal

00:01:26
Speaker
do. And I'm actually going to be in the driver's seat a little bit. Mr. Kamide is going to actually be the intellectual here. And we're going to pick his brain on a couple of headlines this week.
00:01:39
Speaker
Yeah. Because if anyone's ever actually talked to George, like, you need to know that the guy is just not a pure tech guy. He is an intellectual. He's an anthropologist. He's someone who cares about society. And I think we need to dig into the gold mine of George's opinions.
00:01:55
Speaker
So we're going to kick it off. First topic: details emerging on the US TikTok deal with China. And we know friends that work at TikTok. So a day after the White House announced that it had a framework of a deal with China to divest its US TikTok business, the Wall Street Journal has reported some of the specifics. Per the Wall Street Journal, the app's US business would be controlled by a consortium including cloud giant Oracle, shout out Larry Ellison,
00:02:23
Speaker
and private investment firms Silver Lake and Andreessen Horowitz. We've just been talking about them. Users in the US would reportedly have to move over to a new app, which TikTok is already testing.
00:02:37
Speaker
A new board would be composed of mostly Americans, one of whom would be appointed by the government. And with a deal in the works, President Trump has extended, for the fourth time, the deadline for China's ByteDance to agree to divest the app's U.S. operations or be shut down, all the way to December 16th of this year.

Social Media's Impact on Society

00:02:58
Speaker
My friend, what do you think? Well, I think a bunch of rich people just got richer, if it works. um Let me see here. Let me take it in turn. I'll kind of work in reverse.
00:03:12
Speaker
I don't know how seriously the stick can be leveraged. As you mentioned, this is the fourth time we've tried to ban it. So while there's a lot of headlines that a framework has been announced, a private corporation in China that has a lot of ties to the CCP would have to agree.
00:03:36
Speaker
There's nothing to make them agree. There's not a lot of leverage there. So they may think of it as an empty threat. Okay, let's assume that they're like, fine, we will, you know, spin out this thing that makes billions of dollars a year.
00:03:51
Speaker
um All right, so it comes over to U.S. interests. The things that make me nervous, again, we have a massive consolidation of power at the top, right? we've got a lot of big players just getting bigger.
00:04:04
Speaker
I am not a fan of government officials sitting on boards of private corporations. That feels a little weird considering the amount of influence and frankly control TikTok has over the attention spans of many Americans.
00:04:25
Speaker
That seems a little weird to me. And then lastly, you know, whatever the divestment is,
00:04:38
Speaker
You know, this was ostensibly formulated because TikTok was seen as a threat to the national security of the United States. You could sketch that more broadly, national security threat to the West, right? The whole idea was you had ah potential propaganda organ,
00:04:59
Speaker
controlling the attention spans again of millions of Americans, millions of Western Europeans, millions of Canadians. um Are they boosting the algorithm in one direction to foment ah discord and civil strife?
00:05:13
Speaker
That doesn't change. It doesn't change. It doesn't matter who the owners are. Like the problem is still there. The problem is the algorithmic media feed that kind of like puts us all in our own bubbles ah where we don't know how to connect with strangers. We don't know how to talk to people down the street. We...
00:05:29
Speaker
pay more attention to followers in the virtual space than the people in our neighborhood. Like, that's also a problem. And I don't think the divestment is going to help that. And I also don't think it really changes the propaganda question. It's just, like, who owns the levers now?
00:05:45
Speaker
Anyway, that's my take. Yeah, I generally have to agree with that. I also feel that, um, due to the politicization of this decision... honestly, because I used to rail against TikTok for a long time because of, you know, China's 2017 law, which had, you know, edicts that said that they could not be trusted.
00:06:10
Speaker
Yeah. Yeah, that it can militarize you at any point, for any organization. Um, but I think the problem with this is that we are now missing, I think, the real issue with an app like TikTok, which, as you've correctly pointed out, is the fact that it's destroying people's attention spans.
00:06:29
Speaker
And it's really poisoning, um, when folks get addicted to it, the ability for people to actually get out and be human and connect with other human beings. And the design is to get them glued to their phone, right? Because that's how the thing works.
00:06:42
Speaker
I think that's more the societally impacting problem than merely, you know, people ah having oftentimes, you know, legitimate free speech and discourse on different political opinions, left, right, center, whatever it is.
00:06:57
Speaker
Um, and you know, like, any social media platform has people that do inappropriate things, and moderation is an entire field in tech. But I think we have to have an honest conversation as a society: what is the real problem with these technologies?
00:07:16
Speaker
And are we now trying to create medicines for diseases that aren't actually there, so that we can bypass dealing with the real issue? You know, I think that's what it is. Yeah, I just listened to an incredible researcher named Katherine Cross, who was talking about why social media is so bad for politics: because it's much easier to destroy than it is to build. It's actually quite difficult to build. You know that as the community activist that you are; like, you have to go build relationships across lots of different areas,
00:07:49
Speaker
interest groups, and then you have to get them out in the real world together, which is so much harder and takes so much more empathy and work than just, like, let me shout about something to people who think exactly like me on this thing that is meaningless. And there's a lot of research that has pointed to the failures of the Arab Spring being just as much about the weaponization of social media: it is very easy to get people into Tahrir Square, much harder to build the political coalitions that will sustain the fabric of democracy after, you know, the regime falls.
00:08:27
Speaker
Which is exactly why extremist movements can take over and really commandeer a movement. Because remember, after the Arab Spring, particularly in Egypt, that was when the Brotherhood moved in.
00:08:41
Speaker
Yes, because they were the only ones with political expertise, prowess, skills, and boots on the ground, you know? So anyway, for sure.

Geopolitical Issues and Tech Dependency

00:08:51
Speaker
That was good for that.
00:08:52
Speaker
So that was fun. All right, I think we got through that first piece. So: things are getting worse for NVIDIA in China. NVIDIA, which, ironically enough... I saw a post that said that the total market value of NVIDIA is actually greater than the combined value of the Canadian economy today.
00:09:12
Speaker
Holy shit. So that was fine.
00:09:16
Speaker
I had to verify that, but I'm like, that's that's kind of crazy. ah So getting to the point here, the world's most valuable company keeps finding itself caught in a lot of geopolitical tussle as the US and China go through their tensions, with multiple news outlets reporting that Beijing's cybersecurity regulator told major Chinese tech companies not to buy NVIDIA's RTX Pro 6000D chip.
00:09:43
Speaker
The less powerful chip was designed specifically for China to address U.S. security concerns over the sale of its more powerful models there, which is something that the U.S. has been talking about quite a bit across multiple sectors.
00:09:56
Speaker
NVIDIA CEO Jensen Huang said that he was, quote, disappointed, but that the company could only service a market, quote, if a country wants us to.
00:10:07
Speaker
um This comes after Chinese antitrust regulators also said this week amid trade talks with the U.S. that NVIDIA had violated its anti-monopoly rules.
00:10:18
Speaker
and NVIDIA has also had to contend with U.S. export bans on chips that limit its sales to China. So, chips. Go ahead. I mean, is there a more crucial technology on the planet right now?
00:10:33
Speaker
Um, I like to think that there are people with long trench coats and they've just got, like, GPUs in them, and they're like, hey, you want to buy some GPUs? Yeah. I mean, the semiconductor and chip manufacturing industry is so highly centralized.
00:10:53
Speaker
It was bound to come to this. Even if we weren't in an AI boom right now, it was going to come to this, because you have TSMC, right, which is the world's premier manufacturer, but the equipment is only made by ASML, basically, in the Netherlands. You have these tight concentration points and a very specialized supply chain.
00:11:17
Speaker
And so once everyone wants it... it's not fair to say that chips are the new oil. There is no comparison. It's so invaluable, and it's so not readily available, that 100% it was bound to come to this. And I also think, for all the saber-rattling about, like, oh, we're going to bring chip manufacturing over to here or over to there, politicians who say that, both in the U.S. and abroad, fail to understand,
00:11:53
Speaker
like, the brain trust transfer, the knowledge transfer, building factories. You can't do that in under a decade, right? That's a massive thing to behold. So I'm totally

AI Investments and Economic Speculation

00:12:09
Speaker
unsurprised. I will say the thing that I really wanted to react to which was what you were saying about the total value vis-a-vis the Canadian economy.
00:12:20
Speaker
The thing that scares me most about the way investment is going right now, towards generative AI specifically, and this, like, thirst for GPUs: two things. On a technical level, it means all the investment is going into things that are more or less based on the transformer architecture, upon which we see Anthropic's Claude, all the major chatbots, ChatGPT, etc.
00:12:45
Speaker
um
00:12:47
Speaker
And what that means is taking money away from research into maybe more efficient models that maybe don't require as much compute. So we're sort of railroaded into, like, one kind of thinking, which is kind of sad to me on both a scientific level and also, like, an advancement level. Um, but the other part that scares me is... and this comes from Ed Zitron, who is a British journalist who maintains the amazing podcast and newsletter, Better Offline I want to say it's called.
00:13:20
Speaker
And his reporting... yeah, I just want to read this to you. So the Magnificent Seven, right, which used to be called FAANG but now includes Microsoft and NVIDIA.
00:13:33
Speaker
The Magnificent Seven firms, quote, spent around $560 billion on AI-related capital expenditures in the past 18 months, right? They're spending like crazy. They're building data centers. They're buying up GPUs.
00:13:48
Speaker
Training runs cost a lot. While their AI revenues during this period were only around $35 billion. Right.
00:13:57
Speaker
That math is kind of frightening when you consider that the Magnificent Seven accounts for 34% of the S&P 500's value, right?
00:14:08
Speaker
And so let's put that in perspective. You know how many state pensions, how many teachers' unions, how much of people's retirement relies on just tracking the S&P 500? And it is, like, one-third overweight in this one category. So, in the context of this story, things go tits up for NVIDIA, or something goes south and slides and creates, you know, either a bubble burst or a sell-off. Like, for a lot of people who are not related to AI, their retirement
00:14:42
Speaker
is predicated on this stuff, right? And I just think that makes me really sweaty, a little bit nervous about where things could head. It just feels like it's kind of a tinderbox for a sell-off. Yeah, and I think the problem is, when you allow companies to grow to that level of magnitude and create so much supply-chain dependency,
00:15:04
Speaker
typically something like this would become a public utility, right? Which is why, like... Or broken up, like the Bell Company, right? It became BellSouth, it became, you know, whatever else. Yeah, 100%.
00:15:17
Speaker
Yeah, and I think the issue is, like, we have to look at: what does this imply for society? What does this potentially lead to from a risk standpoint, from a financial risk standpoint?
00:15:31
Speaker
And, you know, is this just creating a situation where, when the levee breaks and the dam on this falls, and, you know, one of these major Magnificent Seven companies
00:15:45
Speaker
has a critical failure. And, you know, I'm not saying they're going to go completely belly up, but there's a complete or critical loss of value, and so those pensions, those savings, those investments lose value as well.
00:16:01
Speaker
You know, doesn't that just escalate the, we'll say, growing trend towards economic disparity, right? The widening gap between the wealthy and the poor. And I think what's really scary is, I just don't see people, in any kind of democratic action or mechanism, being able to defend themselves or stop this, because even the politicians now, you know, they're beholden to it.
00:16:36
Speaker
Like, they're never, ever, ever going to legislate. It seems like regulation has completely gone by the wayside. And, you know, now especially, there's a lot of deregulation that's been occurring.
00:16:48
Speaker
You know, at what point can we as a society say, hey, stop, rules are in place for a reason? Because the way technology is now developing, innovation is being very, very specifically directed in a certain path to continue the profit chain of the folks who built that path. And, you know, at what point can we stop this? Because, just like you, I really only foresee scenarios where this goes bad and a lot of people's lives are ruined. And 2008... Yes, you took the words out of my mouth. It feels like 2008, like just right before it, except that we have the benefit of hindsight, and it's not collateralized debt obligations that are being traded.
00:17:35
Speaker
But if you think about the amount of the economy tied up in this, it is super scary. And it's also, like, the military wasn't relying on mortgages, but the military is relying on NVIDIA, right? Like, all manner of data that makes the modern Western services economy run runs on these chips.
00:18:04
Speaker
It's a... that's a very scary moment. Then, pivot for a sec, because I also have read recently, and I think we've talked about this too, there are certain headlines that have come out and said something like 90% of these AI startups never actually reach a point of profitability.
00:18:22
Speaker
So now, hundreds of millions, probably billions, actually billions for sure, billions of dollars at this point have absolutely been lost. Yes. Is there a bubble? Yeah, I think there's a bubble, but I think we are probably actually at the beginning of the bubble.
00:18:45
Speaker
I've seen some increased skepticism on the limits of LLMs, right? When GPT-5 landed with a whistle, I guess, rather than a boom, right? It was, like, very unimpressive relative to the stepwise change between 3.5 and 4.
00:19:05
Speaker
Have we run into the training wall? Blah, blah, blah. We also saw that MIT study that came out that said, like, you know, 85% of business pilots or whatever hadn't borne fruit.
00:19:16
Speaker
And I think that's because everyone is rushing in without really thinking. And you know this from your days in the military: slow is smooth, smooth is fast. But everyone is just speedily throwing stuff in.
00:19:28
Speaker
But a lot of economic decisions are being made around this. And a lot of people's lives are going to be affected by things that they do not control, which is a recipe for not pitchforks and torches, but maybe, maybe.
00:19:43
Speaker
I mean, you can only put your boot on enough necks before they, you know, they've had enough.
00:19:54
Speaker
I appreciate that, because I've not heard a single other person who I've asked about this say that we're actually at the start of the bubble. Like, a lot of people think we're, like, at the point where it's going to... Yeah. I feel like we're in the reality distortion field, full stop.
00:20:14
Speaker
Hey listeners, we hope you're enjoying the start of Season 4 with our new angle of attack, looking outside just cyber to technology's broader human impacts. If there's a burning topic you think we should address, let us know.

AI Safety Measures and Regulations

00:20:27
Speaker
Is the AI hype really a bubble about to burst? What's with romance scams? Or maybe you're thinking about the impact on your kids or have questions about what the future job market looks like for them.
00:20:39
Speaker
Let us know what you'd like us to cover. Email us at contact at bareknucklespod.com. Fair warning to listeners: in this next segment, we have a discussion about Adam Raine and OpenAI as it relates to his suicide, and there are some details some listeners may find disturbing.
00:21:01
Speaker
If you would like to skip ahead, you can go to the 27-minute mark, where that part of the conversation is concluded.
00:21:11
Speaker
We'll continue on the AI path. Next headline: to protect underage users, ChatGPT may start asking for ID. So OpenAI announced new safety measures for ChatGPT this week that include parental controls, which the company plans to roll out by the end of the month, and a long-term goal of developing age-prediction technology that can deduce if a user is actually, ah, three kids in a trench coat.
00:21:39
Speaker
So just saying, the company says that if the new technology, which it hopes to have ready by the end of the year, has any doubt if a user is a minor, ChatGPT will quote, default to the under 18 experience and block graphic content.
00:21:54
Speaker
In some countries and situations, users will be asked to verify their age with an ID. As for what this implies for parents: the new parental controls include the ability to specify how ChatGPT responds to their child, disable features like memory and chat history, receive notifications if the child is in acute distress, and allow for law enforcement to become involved.
00:22:16
Speaker
Mental health safeguards. Shout out to you, George, and what you guys are doing in MOC. So ChatGPT has been accused of encouraging, or doing nothing to prevent, self-harm, suicide, or even homicide, which has led to numerous lawsuits. And actually, I believe the state of Illinois is banning AI therapy assistants as well. It's a separate topic.
00:22:35
Speaker
In response, OpenAI said it's working on a GPT-5 update that would de-escalate potentially dangerous situations and make it easier for users to reach emergency services and trusted contacts.
00:22:49
Speaker
Now, my final take on this: context is everything. The new guardrails were unveiled, probably not coincidentally, hours before a Senate Judiciary Committee hearing to examine the harms of AI chatbots.
00:23:05
Speaker
What's your take? I mean, it's really hard for me to not
00:23:11
Speaker
unload a series of four-letter words. Um, the reason I am angry, my ire, is because we talked about this the day that it came out, right, which was the very unfortunate suicide of Adam Raine, whose parents are suing OpenAI and have included Sam Altman as a defendant.
00:23:37
Speaker
Because when they looked through all of his chats, there was plenty of evidence, and this has been covered extensively, that he was discussing self-harm. And it was determined that a lot of discussion inside OpenAI had been around age verification, or even protocols to de-escalate, right?
00:24:00
Speaker
And they chose not to, because we're all in this live-fire experiment that they're running on us. And it was only when things went really south, in a very public and unfortunately legal way, that they're like, maybe these things that we coded a while ago and our engineers were talking about are something that we should put into the product.
00:24:21
Speaker
So, it just makes me angry that they've been sitting on this capability. Um, and also, it doesn't...
00:24:33
Speaker
I feel like I'm taking crazy pills. I feel like I'm taking crazy pills. I went to this forum the other day with these professors, and it was the
00:24:48
Speaker
most low-grade bullshit university discussion I have ever been to. They're all drinking the Kool-Aid that LLMs know shit, right?
00:25:00
Speaker
It's a statistical, probabilistic distribution. It doesn't know anything. So when you look at these texts between people who are in states of distress, it's just continuing the story, right? It's, like, feeding into these fantasies.
00:25:20
Speaker
It's not great for anyone who is in mental distress because we as humans are very bad at continuously reminding ourselves that it's just predicting the next most likely token, right? It's not actually an embodied self.
00:25:36
Speaker
I will say this to any of the parents listening.
00:25:41
Speaker
It's hard reading, but you have to follow this case. Adam Raine attempted to hang himself more than once. He took a goddamn photo of scarring on his neck and uploaded it to ChatGPT.
00:25:58
Speaker
At no point did it de-escalate. In fact, if you read the transcript, it encouraged him to not show other people. And when he told it later, I tried to lean in and show the bruises to my mom so that she might say something, ChatGPT said
00:26:21
Speaker
Something to the effect of, it's hard when no one can see you, but I see you. No, you fucking don't. You don't have eyes. You're not a sentient being.
00:26:32
Speaker
And when I go to these university talks and I hear people basically buying into this narrative that OpenAI and Sam Altman have built, that LLMs are some sort of path to an alien intelligence, I feel like I'm taking crazy pills, because I want those people to disabuse the lay population who don't work in tech of that notion.
00:26:58
Speaker
Because when they don't, that negligence and omission poisons the discourse. So then we're all sitting in this echo chamber believing that LLMs are, like, these creatures that can answer our questions, therapeutically or not. And, you know, sneak preview: we are going to be talking with a Stanford clinical psychology professor about the role of AI in therapy.
00:27:21
Speaker
It's just maddening. And so that is why I'm angry. Yes. Introduce safeguards, but you also should have done it like from day one. It's just driving me fucking crazy.
00:27:32
Speaker
Yeah. I agree with you. I think they're...
00:27:39
Speaker
How can I kind of parallel this? They're like a kid that has been breaking the rules and is watching their friend, who's their neighbor, getting shit from their parents for doing the exact same thing that they're doing.
00:27:56
Speaker
And so now they're acting like they're cleaning up their act, right? But they've been guilty the whole time. And so I think a lot of the big companies... you know, it's not just OpenAI, it's not just Sam Altman; all the companies are guilty of this.
00:28:12
Speaker
There has been a rush, face-first, to get capability into customers' hands. And no one is really taking the time to think about the consequences and implications.
00:28:25
Speaker
And it's only getting worse and worse. Yes. And I think it's easy to say that like they have some enormous capitalistic profit motive, which they do, I'm sure.
00:28:36
Speaker
You know, Sam was president of Y Combinator. But the deeper problem is this cultural belief in creating superintelligence. There is a culture there that you can't really extract from the design. They kind of all believe it.
00:28:55
Speaker
And to your point earlier about NVIDIA and Congress and elected officials: like, they don't know any better. So they're just imbibing from the fire hose whatever mythology is being pointed at them, right? They're going to say, like, oh, should we get chatbots in the U.S. government to help us make decisions? It's like, are you fucking kidding me? Oh my lord. Holy shit. Like, these are the people who think the internet is made of pipes.
00:29:20
Speaker
And now you're also trying to tell them that you're fabricating an alien intelligence on Earth. Like, oh my God. We need more technologists in political positions of power.
00:29:32
Speaker
Yeah, and that's a whole other thing. Like, we just need... forget technologists; just people who understand tech. Yeah, we need fewer lawyers, please. So I think that was a good one. Um, and yeah, I'm pretty sure we're going to get a lot of feedback on that. So, audience, please send it in. Love, hate, whatever.
00:29:50
Speaker
Please let us know.
00:29:54
Speaker
So I think we got time for one

AI-Enabled Smart Glasses and Privacy Concerns

00:29:56
Speaker
more. Um, and we're going to talk about a really interesting product from Meta. So their smart glasses are apparently getting smarter.
00:30:05
Speaker
So, you know, after they've gotten us all addicted to checking the like counts on our phones, Mark Zuckerberg and company really want us to look up at the world, because the computer will be on our faces instead.
00:30:18
Speaker
So this week, Zuck introduced Meta's newest AI-enabled smart glasses at Meta's annual Connect conference. If anyone actually was at that conference and would like to talk about it, please, please, please let us know. That would be dope. I would love to talk to you.
00:30:33
Speaker
Yes. Um, and so the biggest innovation included in what Zuckerberg called Meta's fall 2025 lineup was its new Ray-Ban Display glasses, of course.
00:30:45
Speaker
Now, these glasses feature a display on the right side that can show text alerts, apps, photos, and even live translations, which goes well with the universal translator that's actually come out. And that's another thing we can talk about.
00:30:57
Speaker
They're controlled by a neural band, worn on the wrist to detect hand motions, which mostly worked during Zuckerberg's live demo, except when he had to pick up a video call.
00:31:10
Speaker
Um, but the Meta CEO said that they'll be available for sale for that low, low price of $799 USD. Naturally. Okay. Oh my God. There's so much to unpack every time we talk about Meta, because it's so meta. Oh my God.
00:31:30
Speaker
Okay. So let's just, let's just walk back in time. Right. Yep. So this is a company that changed its entire name.
00:31:45
Speaker
to encapsulate a technology that never took off, right? The metaverse, right? We spent, like, what, $34 billion building a thing that no one wanted? Okay, let's pivot. We will do things like pay $100 million for AI engineers to build superintelligence.
00:32:02
Speaker
Okay, so far, um, we've seen open-source LLMs. We've also seen...
00:32:11
Speaker
what, advanced machine learning to automate the ad-buying process? I don't know. I mean, you can make cool shit, but it also matters what you've done in the past. And I don't know that I... I mean, I do know: I do not trust Mark Zuckerberg as far as I can throw him.
00:32:27
Speaker
Right. Like,
00:32:30
Speaker
This is a company that has repeatedly shown it doesn't really care about safety. The people who build it have never suffered real-world harm, so they just don't even know what it's like.
00:32:43
Speaker
Um, they don't put in safeguards. They just strip-mine data and use it to fill your life with meaningless drivel and ads for shoes you don't really need. I don't know. So let me take the two things separately. Like, the tech is kind of cool, right? The amount of stuff that they can squeeze into a headset that sits on your face is pretty rad, right?
00:33:09
Speaker
I mean, I remember Google Glass. I remember it sort of, like, being promising, but, like, Google doesn't know how to design hardware really well. So you just looked like an asshole, right?
00:33:21
Speaker
Um, but Ray-Bans, like, everyone recognizes them. So the form factor is cool. And also, let me be clear: I am kind of super here for the augmented reality that the Vision Pro promised.
00:33:34
Speaker
I don't know that it has delivered it yet, but the idea that I don't have to look at a monitor and that I could finally get like Tony Stark or like Wakanda style, like room size displays and I can move my hands and like drag this thing from over there to put it in the folder over there. That is super badass.
00:33:51
Speaker
I am here for that future. But to have just, like, text notifications show up in front of my eyeballs that are probably scanning my retinas and taking in bio signals to sell me better mattresses because my sleep is off?
00:34:08
Speaker
I don't know. Like, it's just not very cool, you know? And it's not cool because it's also coming from a company that has shown repeated disregard for data and for care. And, if anyone needs a reminder, we also had that report out just a few weeks ago.
00:34:28
Speaker
A policy paper. So from the brilliant minds who brought you Meta, we also have a policy paper, which they bothered to write down, i.e. legally discoverable, that was leaked, that said it was more or less okay for generative AI applications to flirt with 11-year-olds.
00:34:50
Speaker
Like, that was deemed legal. Okay. That was deemed within bounds, you know? So when you tell me that that's the DNA inside that company, that the heads of your policy team bothered to write out something so atrocious, and then you're like, look at my cool new glasses, aren't they cool? I say, fuck you.
00:35:09
Speaker
That's what I say. I'm sorry. Not really. Yeah. Well, let's look at it this way, right? And here's the question I'd pose to anyone that thinks about this when it comes to Meta, or a lot of these companies, but especially Meta.
00:35:24
Speaker
What does Meta actually sell? Like, how does it make money? They strip-mine your attention and put it up for auction. It makes money via ad sales, targeted advertising, the ad auction system, and AI-powered ads, which speaks to exactly what these kinds of, you know, human-tech interfaces can provide them.
00:35:52
Speaker
Right. So they have other revenue streams: they have Reality Labs, they have their payment processing system, they have subscriptions and other monetization tools. But all of that is small beans, and the hardware that they sell, like any of their glasses, any of their actual merchandise, that's a drop in the bucket.
00:36:07
Speaker
Right. This is still data. A rounding error. A rounding error on the P&L. Correct. Because this is still a company that acquires, shops, and peddles data.
00:36:22
Speaker
More specifically, ours. The fundamental core of their model has remained the same since the early 2000s.

Societal Implications of Technology

00:36:31
Speaker
We are the product.
00:36:33
Speaker
Right. Right. That's all it is. Like, you know, there's a funny meme that's like, when you think about where AI is heading, we have two possible futures. We could either live in Star Trek, where we're exploring for exploration's sake,
00:36:48
Speaker
and we have this diverse cast of characters on the Enterprise, or we can head to Star Wars, where we have this intergalactic cabal that controls everything and we live in misery.
00:37:00
Speaker
I think there's actually a third way. So I tried to point this out years ago, and I guess I didn't have a podcast, so no one listened. But...
00:37:11
Speaker
As soon as targeted advertising and retargeting and all that came out, I was like, oh my God, we live in Steven Spielberg's Minority Report. Like Minority Report, the big part of the story is about pre-crime and intention and human consequence.
00:37:28
Speaker
But if you're paying very close attention when characters are walking around in that world, everywhere they go, their retina gets scanned and they get fed ads that are personalized to them, you know? So, like, I want a future that's more Star Trek than it is Minority Report or fucking Blade Runner, where it's just, sell me cheap shit all the time, feed it into my eyeballs. Like, yeah, gross.
00:37:54
Speaker
I mean, yeah, and now we're actually getting into Minority Report-style dragnets because, you know, everyone's favorite investment company, Palantir, is designing it.
00:38:06
Speaker
Yeah. And I think we saw the historical beginnings of it when the NSA was running its PRISM program. Right. So, like, these things are interconnected. And I think people need to realize that there is an evolution and a causality that you can very clearly track.
00:38:24
Speaker
And the direction it's going in, George, like, we are not going the way of Star Trek. Yes, I want more money going into the machine learning models that, you know,
00:38:39
Speaker
analyze fluid dynamics and prevent infections in hospitals, which is where people get infected and die. Like, let's save lives with simple, focused machine learning.
00:38:50
Speaker
Oh, or let's give the company that sells us cheap shit, divides us, and strip-mines our attention down to the size of a goldfish's the power to create, quote unquote, superintelligence.
00:39:00
Speaker
Blah. No. And people in hospitals are still dying of C. difficile, which we should be using AI to mitigate. Right. Right. Come on.

Podcast Conclusion and Listener Feedback

00:39:12
Speaker
We're here for the AI, folks. We're here for the AI. Just the shit that works and solves real problems. God damn it. Oh, I have been waiting for so long to just have an episode where we just beat the shit out of AI because I'm so over it.
00:39:33
Speaker
Oh, it's true. All right, friends. With that, I have to thank you for listening. Thank you for tuning in. And of course, thank you to George for just being the intellectual firecracker that he is.
00:39:47
Speaker
Please tune in next week. Tell us about our guest, George. Yes, we will have Dr. Sarah Adler, who is a clinical associate professor at Stanford, but she is also the CEO of a company that is using ethically safeguarded AI in therapy applications. And we are going to pick her brain about how exactly that works.
00:40:10
Speaker
Excellent. Well, see you next week, all.
00:40:14
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:40:27
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.