
So, Are We Gonna Cure Cancer or Just Double Down on Mining Attention?

S4 E10 · Bare Knuckles and Brass Tacks

This week George K and George A switch formats to tackle the AI revolution's messiest questions—from autonomous coding agents to digital actresses and deepfake scams.

The hosts examine what happens when innovation moves faster than ethics. When Claude Sonnet 4.5 promises 30 hours of autonomous coding, what's the real trade-off between productivity gains and security fundamentals? When talent agencies want to represent AI-generated actresses, are we witnessing the death of human performance art or just another moral panic? And when Brazilian scammers can steal millions in $19 increments using celebrity deepfakes, who bears responsibility—the platforms, the regulators, or the users?

They explore the uncomfortable economics behind AI video generation, where companies promised to cure cancer but instead delivered infinite dopamine-mining slop. The conversation digs into data center energy consumption, the exploitation of human attention, and why your grandmother clicking Facebook ads might represent democracy's newest vulnerability.

George A brings a practitioner's lens to AI governance, arguing for education from elementary school up, metadata standards for content authenticity, and balanced regulation that protects innovation without enabling exploitation. George K challenges the fundamental premise: if supercomputers are being pointed at our dopamine receptors just to sell more ads, what happened to building technology that actually improves human life?

Most importantly, they ask: Are we building applications that create a better future, or are we just doubling down on the attention economy?

Transcript

Introduction to 'Bare Knuckles and Brass Tacks'

00:00:00
Speaker
We'll start off with the big picture of this. Actors and human arts, human performative arts, are a cornerstone of an evolved society, right? Oh man. So human beings conveying artistic expression, human beings, real human beings, that is the art.
00:00:29
Speaker
Welcome back to Bare Knuckles and Brass Tacks, the tech podcast about humans. I'm George K. And I'm George A. And today we are taking another swing at the news. So this time, George A. is in the hot seat. I will be doing the rundown of some big items in the news, not just AI, but a little bit of it.
00:00:51
Speaker
And yeah, we'll get George A.'s reaction as a security and privacy professional and general good human. So I'm very thankful that you are back in your leading interviewer role, because it's better.
00:01:07
Speaker
Well, let's see what

Anthropic's AI Model: Advancements and Implications

00:01:09
Speaker
we have in store. So first up: if anyone uses Anthropic's LLM chatbot Claude, you might have noticed they released a new version of the model, which is called Claude Sonnet 4.5.
00:01:22
Speaker
And it was released as the company's bid for AI agent and coding supremacy. The model is designed for autonomous, quote-unquote, agentic tasks, particularly coding, and can reportedly work up to 30 hours without human intervention on complex projects like building software applications from scratch.
00:01:41
Speaker
Anthropic is positioning this as a breakthrough in letting AI systems complete multi-step tasks independently. But while companies like Anthropic, OpenAI, and Microsoft promote agents as the next phase after chatbots, one that could unlock productivity gains and potentially replace human labor,
00:01:58
Speaker
the technology is not yet widely adopted for complex autonomous tasks in everyday use. So, George, you run security in a company that uses a lot of developers and software

Security in AI: Hype vs. Safeguards

00:02:09
Speaker
engineers. What is your take?
00:02:12
Speaker
Well, look, the way that I see this is we have to look at this in a reasonable manner, first of all, right? So let's get away from the hype. The first thing we need to consider are the innovations and the guardrails, right?
00:02:26
Speaker
So we have to appreciate there is a large potential productivity leap, you know, from an agentic AI like Claude, or Sonnet, uh, was it? Yeah, Sonnet 4.5.
00:02:37
Speaker
Um, but we still have to insist on security fundamentals in the implementation, right? We still have to have a degree of auditability, and there have to be explainability controls before adoption.
00:02:48
Speaker
So you have to have your requirements actually written down, and you have to understand what you want to use this model for, what it's going to connect to, what data is going to feed it, what data is going to train it, and what the outputs should be, right? These are the things that need to be thought through before you even do any purchase or implementation.
00:03:05
Speaker
I think a lot of organizations are running face-first at this problem. They think they're running face-first into this profiteering-based solution, but security is still security. And my response to all these AI implementation questions still goes back to security fundamentals and basics whenever we're doing any net-new technology implementation.
00:03:26
Speaker
That goes to my next point, around a pragmatic investment lens, right? Recognizing that full autonomy is still experimental, we have to avoid over-investing until ROI, maintainability, and risk exposure levels have been proven in a real enterprise context. And I think that's where we talk about the AI hype bubble, because people think AI is going to do everything and wash our cars and fire all the unqualified people. It's not.
00:03:52
Speaker
We have to look at what actual functional business processes these tools are going to replace and make more efficient, and that needs to be our focus. And so from an ROI standpoint, the ROI, if it's being done in an ethical manner,
00:04:10
Speaker
is that we need to take our people and retrain them in how to adapt to the new tools, which leads again to my next point: we have to go through a workforce evolution, not elimination.
00:04:21
Speaker
We have to support reskilling and augmentation of developers rather than wholesale replacement. We have to balance social progress, which is empowering our staff, which is still the best way of getting the most output out of them.
00:04:33
Speaker
And we have to approach it with a degree of fiscal realism, right? Cost efficiency. And we have to look at governance-first principles. We have to prioritize AI governance. We have to look at model provenance. We have to look at code integrity testing within the SDLC, especially leaning into the ISO 27001 framework and the ISO 42001 series, which is all the AI series, before we start granting autonomous execution rights. And then finally, we have to look at competitive advantage through caution, right? We have to encourage control points to stay technologically relevant, but demand clear metrics on risk reduction, compliance alignment, and total cost of ownership before scaling.
00:05:13
Speaker
And that's how I think we could approach it in a sensible manner. Yeah, I think it's really easy to buy into the hype. And what we covered the last time we did this swing, right, was that MIT study that found 95% of generative AI projects have not returned anything. And I think that's probably because, as you said, they adopt the tech without thinking through the process. So even if you were like, oh, now I can get rid of some of the code dev teams and just have this thing run code,
00:05:46
Speaker
what you end up missing is the human expertise that understands your environment, your customers, maybe the systems design. Because again, if we're talking about the median of the bell curve, it's going to design okay. But I can't remember where I saw this, there is like a lot of work out there right now
00:06:06
Speaker
for people who bill themselves on LinkedIn as, like, vibe code cleanup crew. They come in and take the vibe-coded nonsense, the stuff that only works with all the patches, and yeah, it's sloppy, and they tighten it up. And, uh, I think you end up asking yourself, like,
00:06:27
Speaker
you know, is the juice worth the squeeze? I'm legitimately surprised that in our current era of startups, someone hasn't actually tried to create a platform where you're correcting AI-created code using AI.
00:06:43
Speaker
Someone's making that right now. I'm sure someone's making that. Yeah. Yeah. All right. On to the next.

Ethical Concerns with AI in Hollywood

00:06:50
Speaker
So this one is coming from the BBC and actually the CBC.
00:06:54
Speaker
European AI company Particle6 has created an AI actress named Tilly Norwood, and that has sparked intense Hollywood backlash after actual talent agencies expressed interest in signing the digital character.
00:07:09
Speaker
So Dutch creator Eline Van der Velden said she wanted Norwood to become the next Scarlett Johansson, and obviously that has made a lot of actors upset. Actors including Melissa Barrera, Emily Blunt, Natasha Lyonne, and Whoopi Goldberg criticized the move.
00:07:28
Speaker
And of course SAG-AFTRA, the big union representing screen actors, stated Norwood isn't an actor but computer-generated content trained on human performers' work without permission or compensation, creating problems by using stolen performances to put actors out of work. Very much the same argument we've seen with other generative AI applications vis-à-vis art direction, drawing, stuff like that.
00:07:52
Speaker
And the union emphasized its contracts require notice and bargaining before AI performers can replace human actors in productions. It didn't get captured in the summary here, but another very juicy tidbit from SAG-AFTRA, which was right on the money, is that this isn't acting because it's not drawing on any lived experience. That's what you pay actors to do, right? To channel human experience instead of a simulacrum of it.
00:08:21
Speaker
But anyway, let's get your take, because you and I have also discussed AI companions. We've discussed the sex robots in a previous episode, all kinds of attempts to replicate human experience. This one is just for fictional stories rather than one-to-one companionship.
00:08:38
Speaker
Yeah, but look, we'll start off with the big picture of this. Actors and human arts, human performative arts, are a cornerstone of an evolved society, right?
00:08:53
Speaker
So human beings conveying artistic expression, human beings, real human beings, that is the art, right? And I appreciate the streamlining of entertainment.
00:09:05
Speaker
But, you know, at day's end, it's like when CGI took over all the special effects in movies. The first time I remember this really becoming an issue was Star Wars Episode One, when they were talking about the pod racing. Of all the prequels, that was actually my favorite, because I loved the pod racing.
00:09:23
Speaker
But when you compare the special effects there versus the special effects you saw in the original series, or you go back to, like, the original Predator or Alien, that kind of thing: they were a lot more janky, but at the same time, that was the art of filmmaking, and it was so much cooler.
00:09:39
Speaker
But if we take it back to the actual AI question, the first thing is around, you know, ethical use of data and likeness. And I am completely opposed to unconsented training on human likeness and performance.
00:09:53
Speaker
You know, I'm all about supporting fair compensation and consent while advocating for innovation done transparently that enables that performance, not takes it away from the professionals who have spent years training and delivering to do this.
00:10:07
Speaker
You know, and the other thing too is intellectual property governance, because I see this as a data governance and copyright failure. It's reinforcing the need for ethical AI sourcing, for really tracking how these models are developed and how they're actually commercialized, and for consent frameworks within any company using these generative models, not just in film but in general. I'm talking about, like, advertising for particular companies and stuff, right? Because those are still actors that are brought in and paid to do that. And then there's the labor impact awareness, because we have to
00:10:43
Speaker
really feel for displaced creative workers. And we have to focus on creating some balanced regulation that protects livelihoods without stifling economic and technological progress and entrepreneurship.
00:10:58
Speaker
And then, you know, I think some of the administration's policies now, absolutely removing AI guardrails, work completely against what's good for the labor market. And, you know, say what you will about it, I am still someone that believes in organized labor and believes in protecting and representing the careers of human beings. I do think that you have to train yourself constantly and you have to stay at the cutting edge of where technology is.
00:11:27
Speaker
But your job, your career, your profession should still be fundamentally protected if you are willing to do that work. And, you know, we have to look at, for some of these production companies, the business risk and brand reputation, because this also represents a massive reputational, legal, and compliance risk, especially when you're seeing unauthorized data use, you're not seeing model accountability, or there might be a breach of contractual norms.
00:11:51
Speaker
And when you're talking about massive production contracts and these huge projects that are running hundreds of millions of dollars now, it's not only a reputational failure for these studios if they happen to misappropriate or behave inappropriately; if you mistreat the actors, it becomes a very public reputational problem. And again, you're trying to get people to go back to the theaters.
00:12:14
Speaker
And in this environment, where people are a little bit more socially conscious, if they see that your studio is conducting business in an unethical way, they're not going to spend money to go see your movie, even if it is amazing.
00:12:28
Speaker
And finally, we need to make sure that there is innovation within boundaries, right? I am still adamantly a fan of responsible innovation. We are exploring digital talent or virtual influencers only under explicit ethical, legal, and consent-based frameworks to preserve both creative integrity and corporate credibility.
00:12:48
Speaker
You know, all of that has to be considered at the board level. And then when we're talking about individual productions, we can't just let producers see that the studios are moving this way and then dive into it and try to cut corners as much as they can on overhead costs so they can maximize profitability, even if the film project tanks relative to expectations.
00:13:12
Speaker
Yeah, there's so much here. Like, when I read that talent agencies wanted to represent Tilly (I'm not going to say "her"), it's like, contractually, how does that even work?
00:13:27
Speaker
It doesn't have legal standing. So I guess you're just licensing, like, a character. But to your point, if it appears in a production as a licensed individual, and then it turns out they find their stuff in the training data that was
00:13:45
Speaker
copped from others' copyrighted work, that means it's risk on top of risk on top of risk from a legal standpoint. But also, yes, more to your point, I just think: what a failure of imagination. You know, star power is star power because people believe that that human is capable of broadcasting something that others are not.
00:14:09
Speaker
And that's how new talent is discovered. And if you think about movies that have been hits in recent times, there has already been kind of a creative push against too much green screenery. Several actors have said how hard it is to just act in front of a green screen, because you're basically pretending all the time. It's hard to react when somebody just shouts a cue, like, "and now the giant troll smashes the whatever," and you have to conjure that in your mind; you don't even know what it looks like, because it hasn't even gone through post-production yet. I'll give you a perfect example of that.
00:14:43
Speaker
When I was a kid and Top Gun first came out, right? Now, in all the scenes where he's in the cockpit, that's obviously a blue screen or a green screen, but they had real Navy pilots simulating real dogfighting.
00:14:57
Speaker
And when you're a kid and you're watching that, or even if you're just, like, a dude going into college, right? Because you have to remember, the year Top Gun came out was actually a record year at the Naval Academy for people joining and trying to become pilots.
00:15:09
Speaker
It inspires you. It motivates you. You see the thing; it's real and it's cool. And I still remember how much I fell in love with the idea of being a fighter pilot. Obviously,
00:15:20
Speaker
you know, fate and God being what they are, I'm a bit too big, and I have glasses, I have astigmatism, so I couldn't do it. But still, when you're a kid and you see a real thing, that's what the movies were great for. And I think we're taking away what made movies great by doing this.
00:15:37
Speaker
And there's financial success to boot, right? Like the year that Barbie came out and Oppenheimer came out, and Christopher Nolan famously wanted no CGI for Oppenheimer, right? It was all in-camera effects and stuff.
00:15:50
Speaker
And Oppenheimer is a relatively obscure scientific figure if you're not into nuclear physics, and here it is, this big blockbuster. And Barbie had built sets rather than, like, let's just CGI it. Anyway, yes, to your point, let's just have
00:16:05
Speaker
fun and creative expression. Also, last point: when a private company creates quote-unquote actresses, man, is there a lot of stuff to unpack there about what they build into that. Like, what constitutes the "perfect look," quote unquote, right? What kind of biases are being layered in there across gender, race, everything? It's a really tricky business.
00:16:37
Speaker
Hey listeners, we hope you're enjoying the start of Season 4 with our new angle of attack, looking outside just cyber to technology's broader human impacts. If there's a burning topic you think we should address, let us know.
00:16:50
Speaker
Is the AI hype really a bubble about to burst? What's with romance scams? Or maybe you're thinking about the impact on your kids or have questions about what the future job market looks like for them.
00:17:02
Speaker
Let us know what you'd like us to cover. Email us at contact at bareknucklespod.com. And now back to the interview.

Generative AI and the Misinformation Challenge

00:17:12
Speaker
Let's go on to the next because we started with movies. Now we'll move on to video.
00:17:16
Speaker
So there's actually a series of different releases. This is all coming to us from different sources. But we have had this wave of generative AI video tooling come out, right? So we've got Google announcing that Veo 3 will be integrated into YouTube Shorts.
00:17:37
Speaker
Meta created the cringiest product ever, called Vibes, whatever. And OpenAI released Sora 2, which is still, I think, kind of in a closed beta. But obviously, with the ease with which these are created, there is fear that this will basically just create an infinite AI slop feed in everything that most people are looking at. And this was basically the concern: that supercomputers are being pointed at people's dopamine receptors.
00:18:15
Speaker
And obviously we have misinformation, we have deepfakes, we have the general commodification of human attention. And then, we all know social media isn't social; we're all lonely and talking to nobody in particular.
00:18:30
Speaker
And so now you're really talking to no one in particular, not even real humans interacting with real content. So I'm just curious to get your take, since these three big organizations have come in with the same sort of product, and it looks like, you know, they promised they could cure cancer with AI, and instead we get short videos of nonsense memes.
00:18:55
Speaker
And you know what, if I'm doomscrolling, I'm guilty of enjoying some of them. I think I've sent a couple to you. You know, like when you see, what was it, the "if Excel was a person" type thing. That sounds hilarious, but that's AI slop, right?
00:19:09
Speaker
So I think we have to look at the difference between something that's consumable, relatively innocent fun, versus where this thing gets manipulated at scale for the sake of profit.
00:19:21
Speaker
Um, you know, so digital well-being and social responsibility have to be the first things that are thought of if we're doing this correctly. I'm very uneasy about AI systems being engineered to exploit human attention, particularly since we're now in the era of misinformation.
00:19:37
Speaker
And so we really have to support what ethical design standards that promote digital wellness and informed consumption look like. And I think that means, you know, there has to be a whole-of-society education process. And I don't know how we get there, because companies are typically too busy trying to make money to think about that.
00:19:53
Speaker
But, you know, from the elementary school level on, at the point that we're going to allow children to interact with any direct AI agent, or the product output of any AI agent, we have to educate them to understand how to relate to and correspond with whatever thing we're putting in front of them.
00:20:14
Speaker
And we just don't; we lack that education, you know. And that goes into the next point, on information integrity risks, right? I kind of view AI-generated misinformation, deepfakes, and content poisoning as some of the most significant threats to societal and organizational trust, and then getting into brand reputations and the stability of our democracy, if we're even getting that far, as it can be manipulated by hostile actors both inside and outside of the state.
00:20:44
Speaker
You know, so we have to look at a balance between innovation and exploitation, right? We're thinking in a fiscally pragmatic way about market opportunities, but we should make the point that profit shouldn't depend on creating addictive algorithms or deceptive media, and we should advocate for transparency and user control. And we know specifically that the major
00:21:06
Speaker
social media companies, the Metas, the Twitters, have been called out by Congress, I think, nowhere near effectively enough about this exact problem. And I think our, you know, I won't say our, I'm not American, but your elected officials, and in Canada we have to deal with it a bit too, they don't have the knowledge, or they are perhaps... They're well behind the curve. Yeah.
00:21:30
Speaker
But here's another point too. I'm saying they might actually be fully informed, or much more informed than they let on, but the lobbies and the dollars that influence the things they say and the policies they push might predicate how hard they go in the paint when those corporate leaders are standing before them and testifying.
00:21:49
Speaker
Yeah, I mean, Meta's DC office is intense, like, the amount of staff and money going through that place. It is a compound. I've walked by it before. Yeah. You know, and so we have to think as well about the governance and compliance imperative.
00:22:05
Speaker
Like, I strongly, strongly support creating regulatory oversight and internal content authenticity frameworks. We need to figure out how we implement standard things like watermarking, like actually ensuring that the metadata is available, like actually doing disclosure standards for AI-generated media. Which, by the way, George, I am very appreciative that you lead by example when you use AI to support the posts and the media that you put out.
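To make the watermarking-and-metadata idea concrete, here is a minimal sketch of a disclosure label in Python. It is a hedged illustration, not any real standard: the manifest fields (content_sha256, ai_generated, generator, created_utc) are hypothetical, and real provenance standards such as C2PA embed signed manifests in the file itself rather than using a sidecar.

```python
# Minimal sketch: bind an "AI-generated" disclosure to a media file via a JSON
# sidecar manifest keyed to the file's hash. All field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_disclosure(media_path: str, generator: str, ai_generated: bool) -> Path:
    """Write <file>.disclosure.json next to the media file."""
    media = Path(media_path)
    manifest = {
        # Ties the label to these exact bytes; any edit breaks the match.
        "content_sha256": hashlib.sha256(media.read_bytes()).hexdigest(),
        "ai_generated": ai_generated,   # the cigarette-pack-style warning
        "generator": generator,         # tool or model used, self-reported
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.parent / (media.name + ".disclosure.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

def disclosure_intact(media_path: str) -> bool:
    """True only if a sidecar exists and still matches the file's current hash."""
    media = Path(media_path)
    sidecar = media.parent / (media.name + ".disclosure.json")
    if not sidecar.exists():
        return False  # no disclosure at all
    manifest = json.loads(sidecar.read_text())
    return manifest.get("content_sha256") == hashlib.sha256(media.read_bytes()).hexdigest()
```

In this sketch, re-encoding or editing the file breaks the hash, so a stale label is at least detectable; a real scheme would add cryptographic signatures so the label itself cannot simply be rewritten.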
00:22:32
Speaker
You were doing it before most people, before most companies were putting out there that this was AI-generated. And I think that lead-by-example approach can be taken to scale. And if people understand that, it's like a warning on a cigarette pack.
00:22:46
Speaker
Yeah, yeah. We'll choose to consume the cigarette, but you've got to know you're going to get cancer if you do it enough. And then finally, you know, we have to look at the strategic opportunity with guardrails, right? There is potential in ethical AI video generation for marketing, for training, for communication.
00:23:03
Speaker
But again, only if we invest in creating safeguards, ensuring consent and factual integrity are baked into the business model and into how we do it. I agree. I've seen some incredible security awareness training products that use video models to spin up really custom things to educate, for example, accounts payable or HR teams, right? Just using a prompt.
00:23:28
Speaker
That is incredible, but it's a closed system, right? It's not released for public consumption purporting to be Taylor Swift or whoever else. And then there are some nuances here, right? I think Google has said that whatever is used by Veo 3 to inject into YouTube Shorts will have a stamp that says AI-generated.
00:23:47
Speaker
Vibes is sort of more animated than realistic. And then for Sora, you're encouraged to upload video of yourself, called a cameo, which then allows others to create videos using your likeness. So there's a little bit more of a social component there.
00:24:04
Speaker
I appreciate your attention to detail on this particular issue. My take is still more of an ideological one. At its core, it's just like, again, y'all said this shit was going to cure cancer, and instead you're using it to strip-mine people's attention and sell ads to them. It's like, great, guys, super good use of time and GPUs.
00:24:28
Speaker
The other being that we know from incredible research, especially that of Dr. Sasha Luccioni at Hugging Face, that the energy consumption for video is so much bigger than just text.
00:24:45
Speaker
And so if data consumption is already a problem, with data centers being erected all over the place, eating up water and land, then, holy God, video is an exponential force multiplier for that rate of consumption, which is extremely problematic. Because again, what is the trade-off in value? AI slop versus our planet's water supply.
00:25:07
Speaker
So, you know, fair trade. To close off on this, the one thing I've been exploring more, and it's very cutting edge, I don't think we've even really talked about it: there's a movement now toward small language models, which are a lot simpler, kind of more endpoint- or server-based.
00:25:27
Speaker
And the energy consumption is a lot less, and it's a lot more efficient. And I think that's where this goes, because the cost of getting into the AI game now, with the data centers and the power generation and the LLMs, is just not tenable.
00:25:46
Speaker
Yeah, yeah. All right. Well, I sort of picked four stories at random, and it seems to have cascaded in a certain order. I didn't even intend that. So I'm just going to blame the hive mind.
00:25:58
Speaker
So our last story is police cracking down in Brazil on a ring of scammers.

Deepfake Scams and Social Media Responsibility

00:26:06
Speaker
So Brazilian authorities arrested four suspects
00:26:09
Speaker
for using deepfake videos of Brazilian supermodel Gisele Bündchen and other celebrities in Instagram ads to perpetrate online fraud. So there are a couple of layers there.
00:26:22
Speaker
Investigators discovered over $3.9 million in suspicious funds linked to the scheme, which promoted fake skincare products and bogus giveaways. The operation marks one of Brazil's first major attempts to counter AI-manipulated celebrity images in scams.
00:26:36
Speaker
Most victims lost small amounts, under $19, and didn't even report the crimes, creating quote-unquote statistical immunity that allowed criminals to operate at scale. Brazil's Supreme Court ruled social media platforms can be held liable for these scam ads if they fail to remove them swiftly. Those were kind of the two layers that I was interested in. It's like...
00:26:58
Speaker
One, it's very easy to create these deepfakes; two, it's apparently really easy to launch them in the Meta ads manager without any oversight.
00:27:10
Speaker
So what's your take? Oh, man. Um,
00:27:16
Speaker
Okay, so it's tough, because this is very complex. But essentially, our biggest, biggest priority here needs to be security and fraud prevention, right? I see this as a case study in scalable digital fraud, and it reinforces the need for deepfake detection, content authentication, fraud analytics, and platform governance.
00:27:39
Speaker
You know, and I'm saying this as I'm delivering an ISO audit for a client on Sidekick. But it's stuff that's top of mind for me: how can we ensure the integrity of the content that we're being presented, and the data that supplements it as well?
00:27:59
Speaker
You know, so then we also have to look at things like corporate accountability with limits, where we have to hold platforms accountable for negligence in removing, for example, criminal content. And we have to keep in mind that we can't be overjudicial on this.
00:28:14
Speaker
Like, we can't be overly punitive in our regulations, because we don't want to stifle innovation or impose impractical liability burdens, because that stops innovation as well.
00:28:26
Speaker
But we have to find a healthy balance between how we can make sure that the content that we're delivering is appropriate for the intended audiences and even for the audiences that can access it.
00:28:38
Speaker
while at the same time ensuring that we are not over-regulating the AI industry. You know, and that also goes into consumer protection and awareness, which right now is maybe experiencing a bit of a downturn.
00:28:52
Speaker
But, you know, we need to emphasize, again, public education campaigns and having shared responsibility between the users, the regulators and the platform companies to really reduce victimization in the emerging tech ecosystem.
00:29:05
Speaker
Because we're going through a bit of an industrial revolution at the moment; if you haven't recognized that, that's the era we're in. You know, and then we have to look at AI risk and compliance integration as well, right? I personally think of AI-generated deception as one of the most dangerous, critical, and fast-moving enterprise threats right now.
00:29:26
Speaker
And so we have to adopt and evolve our threat modeling to be able to recognize the potential for that to occur. And, you know, we have to look at things like brand impersonation, payment fraud, reputational manipulation vectors.
00:29:40
Speaker
These are things that now have to be considered as part of our threat models, not just from an SDLC standpoint of whether the code is vulnerability-free or a business process isn't impacted by, you know, a bad use case or whatever, right? So we really have to step back from this and see the societal impact of a lot of these more tactical digital implementations we're trying to make, because otherwise it's all little baby steps toward running off a cliff, basically.
00:30:08
Speaker
And then finally, I think we need to focus on policy through pragmatism, right? So I'm a big advocate of balanced legal frameworks. My GC and I talk almost on a daily basis about making sure that we are not limiting while still protecting the organization.
00:30:23
Speaker
We have to make sure that we deter abuse, that we encourage rapid takedown mechanisms for abusers, and that we really support cross-border cooperation, again, without over-regulating legitimate AI innovation. And when you're dealing with global firms, or firms that are working in multiple national jurisdictions, you're dealing with different compliance standards across those jurisdictions. So when you're looking at trying to do business in multiple international jurisdictions especially, though this can also apply at the state level,
00:30:56
Speaker
the strictest compliance standards that you have to adhere to have to become the lowest common denominator for you, right? Like, you have to go as strict as possible, because really, you could try to go light in one jurisdiction, but if you're missing something in Europe and GDPR gets you a massive fine, that's not going to do well for your business.
00:31:15
Speaker
And it becomes a much more complex problem, one that is going to require modern, cutting-edge-trained GRC professionals who are now at the table, guiding and being taken more seriously, because we need to rely on their knowledge, along with working with our general counsels and our CPOs, to make sure
00:31:35
Speaker
that our implementation plans are not going to put us into a legally dangerous position, or, at a societal level, into a place where we're encouraging things that are damaging to the overall health of our society and our democracy.
00:31:50
Speaker
When did we become a show about protecting democracy? I don't know. We're just trying to keep up with the threats. Yeah, I think this really highlights a few things. One, scale, right? Like how quickly they could amass $3.9 million out of $19 scams here and there; at roughly $19 a hit, that's on the order of two hundred thousand victims. Just low enough that people are like, oh, I lost $20.
00:32:13
Speaker
And they're not, like, continually skimming your bank account. That's really smart in terms of criminal play. And also, how easy, right? I really fear, especially since we know that the elderly are prime targets for everything from romance scams to call center scams to callback phishing. And just thinking about the number of old people clicking away on Facebook, with the feed jammed full of weird products, or things like your "doctor," quote unquote, coming online to tell you something. I just think
00:32:50
Speaker
that's really dangerous. So we have to educate the most vulnerable parts of our population, for sure. But I do think the platforms are going to have to bear some liability. It's also in their interest, right? People will stop using the thing that's full of fraudulent AI slop.
00:33:06
Speaker
So that's worth

Ethics in AI: Prioritizing Society Over Profits

00:33:08
Speaker
considering. But also I think to your earlier point about the metadata, when you upload videos into ads manager and stuff, it ingests sort of like the basic metadata.
00:33:18
Speaker
And I think if we can start to get standardized fields that help identify, you know, the creation of these things, that might go a long way toward detection, rather than trying to do detection after the fact. Just doing computer vision on it doesn't seem to work, but if you can require certain fields, maybe that would take us somewhere. Cool, man. Well, thanks for running down the news. There's obviously always more that we could go over, but we're respectful of our listeners' time. Any closing arguments or ideas that you want to leave people with? Kind of as we were talking about earlier, we need to change the conversation to get to a point where we're fighting for ethics and fighting for guardrails.
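As a companion to the required-fields point, here is a minimal sketch of what an upload-time check could look like, assuming a hypothetical ads pipeline that refuses creative without a complete provenance manifest. The required fields and the policy are illustrative, not any real platform's API.

```python
# Minimal sketch of "require standardized fields at upload": the ad pipeline
# rejects creative whose provenance manifest is incomplete, instead of trying
# to detect AI content after the fact. Field names and policy are hypothetical.
from typing import Any

REQUIRED_FIELDS = {"content_sha256", "ai_generated", "generator", "source_org"}

def screen_ad_upload(manifest: dict[str, Any]) -> tuple[bool, list[str]]:
    """Return (accepted, reasons); any reason blocks the upload."""
    reasons: list[str] = []
    missing = REQUIRED_FIELDS - manifest.keys()
    if missing:
        reasons.append(f"missing required provenance fields: {sorted(missing)}")
    # AI-generated creative must also carry a user-facing label.
    if manifest.get("ai_generated") and not manifest.get("disclosure_label"):
        reasons.append("AI-generated creative lacks a user-facing disclosure label")
    return (not reasons, reasons)

if __name__ == "__main__":
    accepted, reasons = screen_ad_upload({"generator": "some-video-model", "ai_generated": True})
    print(accepted, reasons)  # False, with both gaps listed
```

The design choice is the one being gestured at in the conversation: validating required fields at ingestion is cheap and deterministic, whereas post-hoc detection on the pixels is expensive and unreliable.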
00:34:10
Speaker
You know, we unfortunately happen to be existing in an era where there is this disruptive technology that has now been, you know, unleashed on the world. And we have become a society that is so profit-driven in the West
00:34:24
Speaker
that we are forgetting about the things that made the West the leader in innovation for so many decades. And that was the fact that people thought about what the stakeholder impacts were. And by stakeholders, I mean the consumers and the customers.
00:34:39
Speaker
And, you know, if we're going to continue prioritizing profit over people, we are going to irreparably damage our society the further we lean into this AI revolution.
00:34:52
Speaker
Agreed. I would leave listeners also with this: the Latin root for attention means to reach toward, right? So if you find yourself feeling AI psychosis because you've been having too many chats with ChatGPT, or you just feel that burnout after doomscrolling for too long, just start to ask yourself this question:
00:35:18
Speaker
What am I reaching toward? And if it's just mindless nonsense, maybe let's reach for something that is more edifying, either connection with other human beings or just something that might fill your brain with knowledge instead of ads to make billionaires richer.
00:35:37
Speaker
Correct. All right, y'all. Well, thank you for tuning in as always, and we will be back as ever next week. We are a bit delayed in our interview with Dr. Sarah Adler, I know we promised you that, but it is still coming up. We had some time zone tango issues as she was in Japan, but that one is still coming, along with a whole host of other interviews. So stay tuned.
00:35:59
Speaker
Thanks for listening. We will catch you next time.
00:36:06
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:36:20
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week. But until then, stay real.
00:36:35
Speaker
All right, here we go. Let me find my fucking notes. Okay, here we go.