Are we losing our craft?

General Musings with Kevin Powell

The articles I read in this episode: "The Promise That Wasn't Kept" & "Ensloppification"

In this episode, I read through "The Promise That Wasn't Kept" by Salma Alam-Naylor and "Ensloppification" by David Bushell and give my thoughts on what they're writing, examining the hype, flaws, and environmental concerns surrounding AI tools in the software industry and discussing the need for a more responsible, human-centered approach to integrating AI into our workflows.

Transcript

Introduction to Podcast and AI Topic

00:00:00
Speaker
Hello, my front end friend, and welcome to my podcast, General Musings. My name is Kevin, and here at my podcast, I talk about whatever is front of mind for me in any given week, usually in some way that's related to front end development.
00:00:10
Speaker
And

Kevin's Critical View on AI

00:00:11
Speaker
this week we're going to be talking about artificial intelligence and AI in general and all of that stuff. And if you already know me, you might know generally where my stance is on this, where I occasionally do use it for a few things.
00:00:25
Speaker
It's a very flawed product that I think was pushed way too early, and it continues to be. You know, I think

Structured Discussion: Salma Alam-Naylor's Article

00:00:31
Speaker
it's like we're approaching beta stages on something that's being shipped everywhere, which is kind of annoying.
00:00:38
Speaker
But I don't want to just rant about it today. I think I've shared my thoughts on it enough here over a few different episodes. Instead, I want to take a look at a couple of articles that dive into things and share my thoughts on them, because they're not just riffing in general; they're a little bit more thought out. The first one we're going to be starting with is called "The Promise That Wasn't Kept", written by Salma Alam-Naylor. The link for it is down in the description, and it's a really good article, so I'd encourage you to read it. I'm going to read through it as a big part of this.
00:01:13
Speaker
And so we're going to be starting off where she says: I recently wrote about AI and productivity, and how data from the 2024 Accelerate State of DevOps Report from DORA shows that widespread AI adoption in the software industry is contributing to a real and meaningful decline in software delivery performance.
00:01:33
Speaker
The emphasis was mine there. But that's a lot, right? You know, "contributing to a real and meaningful decline in delivery performance" is an interesting statistic.
00:01:44
Speaker
And she follows that with: approximately 76% of developers use AI tools in daily tasks such as coding, debugging, and documentation. Which sounds about right, I think, in terms of the general vibes I get from people talking: 76% of people using it for some stuff to potentially a lot of stuff.

AI's Impact on Development Practices

00:02:05
Speaker
I think, yeah, that makes sense. So she goes on: this week I posted a silly and whimsical post on Bluesky, which seemed to resonate with a lot of you out there.
00:02:14
Speaker
And it reminded me of a section in the DORA report about valuable work. So her post was: I was born to make websites, fun websites, silly websites, curious websites, websites that bend, delight, amuse and entertain, websites for people on planet Earth.
00:02:29
Speaker
And there will be no shareholder value, but the websites will be built and they will be enjoyed. So, yeah, that's, I think, a little bit of the vibe of a lot of people who came up with the old internet, where it was like, let's just make websites for the sake of making websites. That was part of the purpose of it.
00:02:49
Speaker
And to be fun and to delight and amuse. For so long, that was the internet, and for so many websites out there, that was literally the purpose of them. Obviously things have shifted very far from that, and I bet most people who make websites now have never even thought of doing that.
00:03:05
Speaker
But yeah, I think that at least having that essence there is so important, right? It makes them human, which is so important. But anyway, she goes on: AI has always promised to help people spend more time doing valuable work by automating the manual, repetitive, toilsome tasks so that software developers can be free to use their time on something better.
00:03:28
Speaker
Despite that, the report states that individuals are reporting a decrease in the amount of time they spend doing valuable work as AI adoption increases. The maths isn't mathsing.
00:03:39
Speaker
So what is valuable work, actually? I have observed a growing trend of developers focusing solely on the tools used to make software rather than what the software actually does.
00:03:51
Speaker
Many people are sharing their new apps on social media, attempting to provide context for their creation by listing the databases, runtimes, frameworks, UI libraries and AI code generation tools they used.
00:04:03
Speaker
But what does your app actually do? What problems does it solve? Tell me about the value you just created. I have 100% seen this. I don't know if it's necessarily linked directly to the AI thing or more just the influencer thing, the dev influencer space or whatever you want to call it, where there's just so much focus on "here are all the tools I used to build this cool thing."
00:04:30
Speaker
The product they're building is less the focus, and it's more about the things they used. Either they're showing off all these cool new tech things, because they're on bleeding-edge stuff and just want to explore it and follow the trends, or they're trying to make a tutorial and cover every single thing along the way. And I think that's led by the "influencers", quote unquote, because, you know, I don't really want to call them influencers, since I guess that would make me one, which is uncomfortable.
00:05:01
Speaker
But if people see things like that, then other people will emulate it and be like, oh, we should be building stuff with these stacks and showing off these things and how fast it is and all these other things.
00:05:13
Speaker
But yeah, why you are building it is an important thing to be answering, obviously.

Real-World Applications and Limitations of AI

00:05:20
Speaker
So Salma goes on: valuable work and meaning is not derived from what AI apparently makes us faster at, generating code.
00:05:28
Speaker
Meaning and value in software development is actually created through the impact of building things that make human lives better or easier or slightly less bad. Now it can be argued that much of the work in the technology industry in 2025 is not centered on making things better for humans whatsoever.
00:05:45
Speaker
but that's a discussion for another day. What's becoming clear is that the mass adoption of AI is shifting the focus away from human-centered software solutions that provide meaningful value and is reducing the entire industry to just the tools at our disposal.
00:05:59
Speaker
Just generate the code, bro. Just ship that one more app, bro. Yeah, I definitely see where she's coming from with that. I don't know if it's that bad, but I'm also in some different circles, or not circles, I don't know if that's the right word. I do think there is a lot of that, of just, oh, I can make this thing, so let's ship it and get the AI to do it, and go, go, go, make stuff. But again, what's the purpose of what you're making is always the most important thing at the end of the day.
00:06:36
Speaker
The new kitchen metaphor. If I employ someone to build a new kitchen for me, I really don't care what drills, hammers, nails or sandpaper they use to get the job done. I just want a valuable end result, a fancy looking and functional kitchen that makes good use of the available space, enabling me to cook delicious food in a delightful and comfortable environment.
00:06:54
Speaker
Ultimately, a great kitchen is created with vision, creativity, and by solving existing problems the old kitchen presented. The same is true for software. Value in software development cannot be determined by how many lines of code you can bash out in any working day, and especially whether or not you are using AI to do so.
00:07:11
Speaker
Real value is delivered through vision, creativity, experimentation, and using human brains to solve human-centered problems. I 100% agree with this. And I do think it actually goes back to what she was saying in the previous section, that line about how it doesn't matter how many lines of code you bash out in a working day, especially whether or not you're using AI. There has definitely been this push, and part of it might actually be coming from the companies themselves, right? Where...
00:07:46
Speaker
There's this "I made this with AI, look at these things we can do with AI", and then other people going, oh, I'm making these cool things with AI. But again, these cool things aren't serving a purpose, so they're not really that cool. There does seem to be a push toward creating more stuff, but not necessarily better stuff, which...
00:08:07
Speaker
I want to continue reading what she says here, but I did an interview a little while back, or quite a while back now, with Travis Neilson, who, if you don't know, did DevTips for a long time and is the reason I started my YouTube channel.
00:08:20
Speaker
He works at Google now, and he was working specifically on YouTube Music when we were talking. He was talking about the way they're using AI within YouTube Music and the plans for it,
00:08:31
Speaker
which I think was the right approach to using it, in that it was to solve specific problems. It had to do with the artwork and a few different things; I don't remember all of them specifically. And I mentioned a while ago, in another episode of this, maybe two weeks ago,
00:08:51
Speaker
a talk that Tejas had done about using AI to solve problems, like the problems we run into as developers, and how it can find things when you set it up with a task the same way any code would do it. Right? Like, here's this very specific task for you to do that it can run in the background to get certain pieces of information or do certain things that it can then surface to the

Creativity, Empathy, and AI's Limitations

00:09:16
Speaker
user.
00:09:16
Speaker
But in a way that you're deciding; it's not just "build this thing." The more general the way we use it, the less purposeful it is, and it's just kind of a waste of everybody's time and resources, and just burning the environment down for absolutely no reason.
00:09:31
Speaker
And it doesn't necessarily solve these human-centered problems. I do think there are ways we can use it to solve human-centered problems, and you see this also with scientific research and other things, where it's finding things. And I find a lot of AI proponents latch onto that, right, going, oh, look at this cool thing, it's finding new solutions. And it's like, yes, because we're giving it a really narrow scope
00:09:58
Speaker
and feeding it very specific information, and it's also experts dealing with it who can go through the slop it's creating and focus it down on getting something, so it can open up new avenues and do it faster than we can.
00:10:11
Speaker
And I think we can use AI to solve human-centered problems; the problem is most people aren't using it for that. So, the report backs this up, she continues in the blog post: there is also an art and empathy underlying a great product.
00:10:26
Speaker
This might be difficult to believe for people who think everything is a problem to be resolved through computation, but certain elements of product development, such as creativity or user experience design, may still, or forever, heavily rely on human intuition and expertise. 150% agree there, right?
00:10:43
Speaker
This also makes me think of Josh Comeau's website, where there are all these little user experience details built in that make his website better than most people's websites or blogs; there are just so many of them. With anything like that, where those micro-interactions are there, I've only seen AI create very simple ones. It loves putting hover effects on everything, right? So if you have it make something, there'll be cards, and the cards aren't even interactive, but for some reason it puts a hover on there. So it's just throwing meaningless
00:11:16
Speaker
interactions in there, because it found them in some codebase somewhere and it liked them, I don't know. Whereas when we're doing things in meaningful, very focused ways, there's thought going into it that the AI is never going to think about, because it doesn't intuit things, right? Give it a very general task and it will make some general thing that fits it. One of the reasons I think a lot of it is slop is because what we're telling it to do, or asking it to do, however you want to phrase it, is too general.
00:11:54
Speaker
But yeah, 100% there: you need the creativity that goes into things and into user experience design, and we can't necessarily rely on overly generic patterns to make these things better, if that makes sense.

Productivity, Industry Practices, and Environmental Concerns

00:12:16
Speaker
So Salma goes on: and what's even more interesting is that while seemingly high-performance teams and organizations use AI, the report finds that products don't seem to benefit. We're all just churning out AI-generated code, moving those tickets, making meaningless graphs go up.
00:12:32
Speaker
But to what end? Real software is actually not moving forward. We're just cranking out the same broken software with the same stupid bugs. In fact, I was editing this post while sitting with a fresh batch of black hair dye on my head at the hairdresser's. One wrong tap of a button on my phone deleted half of the article.
00:12:50
Speaker
I attempted to use the three-finger gesture to bring up the undo button on my iPhone, which, not surprisingly, did not restore what was deleted. I ended up having to find a deployment preview of this post that I fortunately deployed before I left my house, copy and paste half of the article back into the CMS, and reformat the headings, being very careful not to make a single wrong move.
00:13:12
Speaker
All the new kitchens and silly add-ons are shit. There is no value in that. Tools do not determine value created. The tools someone uses to build a kitchen are only as good as the skills of the person using them. A skilled craftsperson can probably use any old tool and produce a great result that holds up for years to come.
00:13:30
Speaker
A less experienced craftsperson who doesn't understand the fundamental concepts of space, structure, and value-based utility might be able to make a single kitchen cabinet look good to the untrained eye, only for the shelves to be at the wrong height so I can't store my stuff. The same can be said for software.
00:13:46
Speaker
I've been doing some renovations in my house recently. I haven't done my kitchen specifically, but just having tradespeople come in, the things they'll spot that I only know now because I've been living in this house for a number of years, things you don't see right away. Or, we were talking with a designer at one point, just the little things they'll notice
00:14:08
Speaker
and the little improvements, just because of their years of training. I think the problem with so many people getting excited for AI things is this feeling that you don't need to be an expert in something, which I think is
00:14:24
Speaker
completely problematic and a big issue. Yeah, let's keep going with what Salma says here. But I think that's one of the issues coming with AI now: people thinking you don't need experts when the AI can help you do something, but that's not quite how it works.
00:14:43
Speaker
There's nothing wrong with being inexperienced. We all have to start from somewhere, but we can't rely on those tools as a shortcut to gain valuable experience. Experience takes time to develop. You know, I didn't plan this, but obviously it goes along a little bit with what I was saying.
00:14:59
Speaker
And your tools are only as good as your fundamental knowledge and skill. If you skip the knowledge and skills part, and if you fail to learn about what you're doing and the implications of how you're doing it, the human value you have the potential...
00:15:12
Speaker
Let's read that again. And if you fail to learn about what you're doing and the implications of how you're doing it and the human value you have the potential to deliver, then you have little hope of building human value into your software.
00:15:28
Speaker
Because, for the most part, humans use software. Andreas Moeller said it best in "They lied to you": building software is really hard. So she quotes from that post.
00:15:40
Speaker
The true value of a software engineer is in our ability to analyze problems as well as design and implement creative solutions. To get good at these skills, you need to understand not just the tools at your disposal, but also the technologies you are building on top of.
00:15:54
Speaker
If you don't understand how an application works, then you have no chance of fixing its bugs and issues. So, I mean, that resonates so well even just with CSS for me. And obviously,
00:16:07
Speaker
the bigger the scope you're looking at, the more and more true it becomes, right? And that includes if you're trying to get AI to help you, because some people will read that and be like, oh, I can just use AI to fix those bugs or issues that come up.
00:16:24
Speaker
But I'm sure anybody who's tried doing that knows: sometimes it might actually do something, especially if it's a small scope. But then it becomes these issues where it doesn't fix it, or it does something different and it's broken in a different way, or in fixing it, it somehow gets rid of all your TypeScript definitions.
00:16:44
Speaker
There are so many weird things that I've seen people complaining about and that I've run into as well. There's a good Reddit thread; I know I posted a link to it in a previous newsletter. I don't think I mentioned it
00:16:58
Speaker
in the podcast, but maybe I did. Now that they've let Copilot into GitHub repos, you can ask Copilot to fix issues. If you have a GitHub repo that has issues, you can ask Copilot to fix them.
00:17:12
Speaker
I think it's in beta; I don't know if it's an open or closed beta. But it was a Reddit thread where the person had found six or seven threads, or GitHub repos, where people were trying to use it, and it just failed over and over. And obviously if you prompt it really badly it's not going to work, but there was a specific issue, it was being prompted in decent ways to try to get it to fix that issue, and it just got worse and worse. A comedy of errors, basically. And yeah, definitely,
00:17:49
Speaker
AI can help with certain things, but it can also ruin or break things, or cause more issues, or just not fix the thing while making you think it's fixing the thing it's not actually fixing.
00:18:01
Speaker
So Salma goes on: in the report, respondents also reported expectations that AI will have net negative impacts on their careers, the environment, and society as a whole. Whilst AI has empowered anyone to build and ship web and mobile apps, the tangible negative impact of the soulless, valueless software released on a daily basis across the industry cannot be underestimated.
00:18:23
Speaker
And no, I don't want to use your new AI tool to summarize an email. Oh my God.
00:18:30
Speaker
I hate it so much. I use Google Workspace for my business stuff, and it just added this thing where it includes summaries. Sometimes I get short emails and it still summarizes them; there's just a summary, and I'm like, the summary is as long as, or longer than, the email that was sent. And even on long emails, I don't need it. It's such a useless feature for me.
00:18:54
Speaker
And I don't know, maybe there are people who use it. But yeah, I do not need something to summarize my emails, or summarize documents, or summarize a meeting, or anything else. I want to use my brain so I can comprehend and learn and connect with what is in front of me.
00:19:10
Speaker
Yeah, the summarize things. I've used it to do summaries or descriptions for these episodes, because I don't always want to write them and I don't remember what I talked about, even though I do the edit, which, I generally don't edit these, or it's very, very lightly edited.
00:19:30
Speaker
And so I don't remember everything I talked about, so I'm like, oh, maybe I can use this to summarize. And I'm reading it, and it doesn't always get it; it misses some of the points I talk about, or it just puts in so many different things. Because if I go on rambles and change topics a few times, the bullet points it creates don't get the context of what was actually important. It's just "here's all the stuff that was talked about." If I were to look at that, it doesn't actually give me the context of what I talked about in that episode in a meaningful way that is useful for people.
00:20:06
Speaker
So I have to then look at it and edit it down, and it's almost more useful for me as a reminder, like, oh yeah, those are the things I talked about, now I can actually write something meaningful. But if it's summarizing a meeting, because I don't want to actually sit through the recording of the meeting, or if it's summarizing an email,
00:20:23
Speaker
and it's not something I've done, it's someone else writing to me, it loses the context of what was said in that meeting, I'm assuming. I've never seen it summarize a meeting, but I guarantee you it's the same as when it summarizes one of these episodes. The same with the email: it won't necessarily do it in a way that's meaningful towards the context of that email. So yeah, that's really problematic for me, and that idea of comprehending and learning and connecting with what is in front of me, I think, is so important. And what's the big rush anyway? For those types of things, you can skim an email and usually get the gist of it pretty easily.
00:21:02
Speaker
"Productivity does not equal value" is the next section of her post, where she goes on: returning to the topic of the previous article, I want to touch again on the topic of productivity. Productivity as a measure of value is extremely misleading.
00:21:15
Speaker
The very notion of productivity has been conjured up by big capitalism to keep us busy and to misdirect our attention so that we forget to question the broken system in which we have no choice but to participate.
00:21:29
Speaker
All of this prevents us from real growth and moves us further and further away from the pursuit of real value. Yeah, anyone who's worked at a job where you know you have to stay busy for the sake of being busy, or, it depends on what type of work you're doing.
00:21:46
Speaker
But when it's just work for the sake of work, and it's like, oh, you're not shipping that feature fast enough or you're not doing this thing fast enough, and there needs to be a certain speed for certain things. And I get it, at some point you can definitely be someone who's just wasting your time. But again, going back to Josh's blog, a company wouldn't do that; most companies, I won't say no companies, but most companies would never have all these little whimsical things that he's adding in there, because it's not "productive" to do that. Like making
00:22:23
Speaker
All these little things just for the sake of them being there. But it ends up being something that's extremely valuable because it just reinforces his brand so much. And it makes people remember his site and go back to his site.
00:22:37
Speaker
And it makes them smile. Like every time you're on one of his articles and there's the heart thing and you keep clicking on it, and then, I don't remember exactly what the animation is, but it gets all happy once you fill it up.
00:22:49
Speaker
Or, I think a lot of people don't know this, but if you move the mouse over the heart and never click on it and then move your mouse away, it frowns. Little things like that are the opposite of productivity, but there's so much value in them that it really is worth it. So it is productive for him to include those. Plus it makes him happier. Even for me, sometimes I should be doing something else; right now I'm working on a lot of courses and other things.

AI's Influence on Learning and Coding Skills

00:23:16
Speaker
And I'll get a question on Discord or on Bluesky or something where someone's having a problem, and I end up spending way too much time on it. And most of the time it's not solving a problem; it's someone who posts something like, here's a really cool button I saw, and I'm like, oh, can I make that?
00:23:33
Speaker
And for me, that's actually important, because I learn stuff along the way in doing it. It might turn into a video, and it also just makes me happy doing that; that's what I enjoy doing. I like doing courses, I like helping people, but it does become a grind, and I need breaks away from that where I can still be doing something.
00:23:53
Speaker
And it's a bit easier for me to justify it in a sense, because it's become my job and I can make a piece of content around it. But I've made stuff where I haven't made content around it and just been mucking around in CodePen for way too long. I need that type of thing to actually keep up with the other things I need to be doing. Yeah, I definitely think that idea of fake productivity does push us away from the pursuit of real value.
00:24:21
Speaker
You could argue there is space for both approaches in software development. Build the hard, compelling stuff using your human brain and use AI to generate the code for some more boring parts of your app. Take form validation, for example, scaffolding out a new project or setting up all the boilerplate to make some API calls.
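Just to make that concrete before we carry on, here's a rough sketch of the kind of form-validation boilerplate she means, the sort of thing people tend to hand off to AI precisely because they've written it a dozen times before. To be clear, this little example is mine, not from her article, and the field names and rules are made up purely for illustration.

```ts
// A hypothetical sketch of "boring" form-validation boilerplate.
// Nothing here comes from Salma's article; it's just an illustration.

type SignupForm = {
  email: string;
  password: string;
};

type ValidationErrors = Partial<Record<keyof SignupForm, string>>;

function validateSignup(form: SignupForm): ValidationErrors {
  const errors: ValidationErrors = {};

  // Very rough email shape check, not a full RFC-compliant validation.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
    errors.email = "Please enter a valid email address.";
  }

  if (form.password.length < 12) {
    errors.password = "Password must be at least 12 characters.";
  }

  return errors;
}

// Usage: an empty object means the form is valid.
const errors = validateSignup({ email: "hi@example.com", password: "short" });
console.log(errors); // { password: "Password must be at least 12 characters." }
```

Anyway, back to the article.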
00:24:37
Speaker
But those boring parts are only boring because you wrote the code to do the same thing before. You already learned how to do it without AI. But unfortunately, too many of us are getting sucked into the productivity hype cycle and engaging in daily conversations with energy-sapping computer machines that vomit out thousands of lines of code based on probability and existing mistakes that the large language models themselves are trained on.
00:24:59
Speaker
It's absurd. We have a whole new cohort of inexperienced kitchen builders who have invested in the latest must-have drills, hammers, nails, sandpaper, but have no idea how to build real value into what they're using those tools to create. And so we're seeing an influx of infinite inferior kitchens that offer no human value.
00:25:19
Speaker
They may look good when you walk into the room, but they will inevitably... I can't say that word; hopefully you know what I meant. "Inevitably" is like "specificity" was in the old days.
00:25:32
Speaker
They will fall to pieces, we'll just skip that word. But they will fall to pieces as soon as you put the kettle on to make a cup of tea. Yeah, this has been one of my big, big concerns with AI and the rise of AI: this potential for all these new people coming in who are able to use it now and see it as, oh, I can just do this instead of learning the thing well.
00:25:58
Speaker
It will scaffold this project for me; they have no idea how it's working, but they can get it to scaffold the project. You can say, explain this to me, right? But you're not actually learning anything in doing that.
00:26:09
Speaker
If you were to have it scaffold something for you, explain it to you, and then you were to build it yourself and almost use it as a tutorial, maybe then you could actually use it as a learning tool. But if you're just reading its recap of what it did, it's the same as listening to somebody teach you something without getting your hands dirty actually doing that thing. You're not learning it.
00:26:29
Speaker
You're learning about it; you're not learning how to do it. But yeah, I think with people who've been around for long enough, some are definitely on the hype train for all of this and just jumping right on, but a lot of them are seeing the issues with it, whereas new people coming up into it are just like, oh, we can just use this for everything.
00:26:50
Speaker
And it's an issue. Salma goes on: the data speaks for itself. Vibe coders are reporting outages and critical security vulnerabilities in their apps, losing months of work that didn't use version control, oh god, and the inability to truly learn new things.
00:27:06
Speaker
Are those all... one of those is a Reddit thread; I'm not going to click through to all of them. The world is already cooked, and yet we're cooking it more. What's more, there's the environmental impact of AI, and I'm glad she's bringing this up, which is just beginning to emerge. An MIT article titled "Explained: Generative AI's Environmental Impact" outlines how the rapid expansion of generative AI presents significant sustainability challenges, including electricity and water overuse, hardware-related emissions, and increasing pressure on power grids.
00:27:37
Speaker
The world is already cooked, and yet we're cooking it more, literally and figuratively, by shipping insecure software into the void that we have no idea how to debug, scale, or extend. In the not-so-distant future, LLMs will be trained purely on LLM-generated software, and the world will eat itself.
00:27:53
Speaker
I challenge you to find the value in that. So, yeah, a very fantastic article by Salma there that I can't really agree with more. But we're not finished yet, because after that I came across this one, "Ensloppification" by David Bushell. We're going to zoom in a little bit here and dive into his article, which obviously is another critic... criticizes... a criticism, there we go, that's the word I'm looking for, of AI.
00:28:22
Speaker
But he has some good points here, too, that I want to discuss. So he starts off: I've been reading "Advising Reasonable AI Criticism" by Declan Chidlow. From my perspective as a self-professed AI Luddite, the far too reasonable points made by Chidlow don't fit with my anti-AI narrative.
00:28:39
Speaker
I've been forced to think and reflect, which I don't like doing this far into the week. And he quotes Declan: many anti-AI proponents are proud to never touch AI systems and wear their ignorance of the current state of the technology as a badge of honor.
00:28:55
Speaker
Likewise, many AI evangelists refuse to acknowledge the flaws of AI models and view them uncritically, without care for the flaws. From both sides, this is an embracing of anti-intellectualism.
00:29:09
Speaker
It isn't cool to be misinformed. And yeah, I get that completely. I do think if you're going to be anti-AI, you don't want to be the old man on the roof just shouting down and saying change is bad, right? If that's the standpoint.
00:29:26
Speaker
And then you get the people, exactly like they say, on the other end, who are defending it despite its flaws and not acknowledging them. And so I think that's what advising reasonable AI criticism is: you know, let's find the middle ground there,
00:29:40
Speaker
a little bit. And if you're going to criticize it or embrace it, at least do so from a point of view that's informed. I try to do that. I'm clearly more anti-AI than a proponent of it; I think that's quite clear. But I do think...

Future Implications and Responsible Integration of AI

00:29:58
Speaker
I do realize we're not going back now; all of this isn't about to vanish. I just really wish it wasn't so half-baked. And again, I think there are ways we can use it that can be beneficial; it's just that most of the ways we're using it aren't.
00:30:15
Speaker
So David goes on here: I'd say Chidlow verges toward AI apologism in places, but overall writes a rational piece. My key takeaway is to avoid hostility towards individuals.
00:30:25
Speaker
I don't believe I've ever crossed that line, except the time I attacked you for ruining the web. He's linking to one of his other articles there. But he also says: I reserve the right to punch up and call individuals like Sam Altman a grifter in clown's garb.
00:30:42
Speaker
A disclaimer from David: before I continue, I have to acknowledge that despite largely agreeing with Chidlow's post, I have no desire to engage in reasonable AI criticism myself. I have an agenda. I fully admit that I'm playing to a crowd.
00:30:55
Speaker
Blogging with titles like "Slopaganda" and "Ensloppification" is obviously antithetical to respectful critique. The way I see it, one needs a balanced scale for reasonable discussion.
00:31:07
Speaker
It feels fruitless when big tech, Google, OpenAI, Anthropic, etc., can just tip the scale and launch criticism into oblivion with billions of dollars of marketing. Their slop agenda.
00:31:19
Speaker
I have no respect for that. So then he goes on: no, thanks. I, too, am bored of it. Frankly, I'd rather quit my career than live in a future they're selling. It's the sheer dystopian drabness of it.
00:31:31
Speaker
So this makes me think a little bit of what Salma was talking about, too. And I do feel like he mentions Salma's article in a few minutes here. But yeah, the dystopian drabness of it. A hundred percent.
00:31:42
Speaker
Mediocracy as a service. I am extremely privileged to have a job that I enjoy. My work is creative. The challenges are rewarding. I don't take that for granted. When I imagine AI in the mix, it does not spark joy. I tried the tab-completion slot machine; not my cup of tea. I tried image generation and was overcome with literal depression.
00:32:01
Speaker
I don't want a future as a prompt artist. I'd rather pack up my privilege and find something else. So that's where I stand right now. I am also not fond of how the sausage is made, you know, the giant plagiarism machines. If that doesn't bother you, you're in wonderful company.
00:32:17
Speaker
We'll come back to that in a second. Or no, we'll continue in a second. I like how they've sort of shifted now, a few of the people anyway, from going, oh, we're not stealing stuff, to being like, well, if we didn't steal, all of this would be impossible.
00:32:31
Speaker
And yeah, it's an interesting pivot to take. What's the deal with Tony Blair and Nick Clegg crawling out of the woodwork? We have past and present UK government all champing at the bit to feed us the machine. That's not conspiracy. It's happening in broad daylight.
00:32:46
Speaker
I'm not up on UK politics, so I'm not 100% sure there. De-skilling. Here he mentions Salma Alam-Naylor: she speaks to me, he says, in "The Promise That Wasn't Kept", which we just looked at.
00:32:58
Speaker
And the quote he takes from that article was: what's becoming clear is that the mass adoption of AI is shifting the focus away from human-centered software solutions that provide meaningful value and is reducing the entire industry to just the tools at its disposal.
00:33:12
Speaker
Just generate the code, bro. Just ship more apps, bro. David continues, AI is pushing the de-skilling of the web to a tipping point. This movement leaves no career for me. Is it any wonder I am an AI hater?
00:33:25
Speaker
Great name. He quotes a reply: these tools are also not affording me time to write the fun parts of code, nor are they enabling what I think is truly the driving force behind why I care about web accessibility,
00:33:39
Speaker
that accessibility is at the root of more creative and more well-loved software and tooling for everyone. Skills in the web are too important to lose, yet worryingly fragile in the wake of big tech.
00:33:52
Speaker
The Figma Sites "disasterclass", oh, I like that name, has shown us it doesn't even take direct AI integrations to further de-skilling efforts. Oh, but don't worry, Figma has AI too. The lesser-revealed Figma Make was demoed live at the same conference.
00:34:07
Speaker
The product manager even boasts about engineering skills not being required. The live demo failed. It's like Microsoft and the blue screen of death when they were showcasing Windows. Was it Windows 98? I don't remember which version it was.
00:34:21
Speaker
"Now remember that generation from earlier. Let's take a look." Okay, oh, he's just quoting when the demo broke there. But the ensloppification continues.
00:34:31
Speaker
I wouldn't be so insulted nor saddened if people just slowed down. Yes, this is so important; let's pump the brakes a little bit, everybody. Integrate the tech carefully. Wait for the tech to mature. That's not how big tech works.
00:34:46
Speaker
But these are things that could literally be helpful. Honestly, I do think they could be, and I'm going to keep reiterating that: there are ways they can be used, and they could potentially be good, if we pumped the brakes and waited for this to actually become a mature technology that can do what we need it to do and not just make slop.
00:35:10
Speaker
And if we integrated it carefully; that's such a big thing, integrate the tech carefully. There's just this push, and I don't know if it's because they need investor dollars or what it is, but all these big companies like Google and Figma and Adobe, everybody, are pushing it so much that it makes people making smaller apps feel like they need to as well.
00:35:31
Speaker
And then there are all these integrations coming in now that are completely useless. It just becomes this pattern, because whenever a big company does something, the little companies will do the same thing, because they feel like they have to. We saw that with forms, right? When Google's Material Design added the floating labels, and then it turned out floating labels were one of the worst things you could ever do for accessibility purposes.
00:35:53
Speaker
But because Google did it, other people started doing it on their sites, thinking it looked cool, and of course Google must be taking the right approach. But it was actually a really bad way to do things. And yeah, you can't look at the big companies and assume that because they're doing it you should be doing it too, but that's just how it works.
00:36:12
Speaker
But yeah, that's not how big tech works, as David says. One whiff of a dollar and the force-feeding begins. More than a few of you are embarrassing yourselves with gluttony.
00:36:23
Speaker
And if you're just listening to this and not watching it, this is my favorite part of the entire post, just because he animated "You're prompting it wrong." I love that reply. But yeah, "you're prompting it wrong" is just floating there.
00:36:36
Speaker
It's fantastic, and the most infuriating thing you can ever hear, right? Bro, I've seen what you're prompting. Our definitions of quality are so mutually exclusive, I'm not convinced we're from the same planet.
00:36:48
Speaker
Ask for facts and AI gives you fiction. Apparently everyone is happy with that. And he has a quote here from John Herman:
00:36:59
Speaker
In its drive to embrace AI, Google is further concealing the raw material that fuels it, demoting links as it continues to ingest them for abstraction. Google may still retain plenty of attention to monetize, and perhaps keep even more of it for itself now that it doesn't need to send people elsewhere.
00:37:19
Speaker
In the process, however, it really is starving the web that supplies it with data on which to train and from which to draw up-to-date details, or, one might say, putting it out of its misery.
00:37:30
Speaker
This is something where I'd love to know what the long game is, because as new stuff comes out, if people aren't encouraged to make content
00:37:43
Speaker
on these things, because most of the AI stuff is answering questions, especially in terms of Google anyway. It's answering questions when someone's Googling something.
00:37:55
Speaker
So the whole point is to answer their question, and it's the same with ChatGPT or Claude or anything: you ask it something. But if people stop posting new things, what's it going to get trained on? Because I know, from a coding perspective, it does not get trained on the spec at all.
00:38:11
Speaker
Probably because it can't learn well enough how to use things from that, I'm guessing. But when it's able to search things, it just grabs stuff from crappy sources sometimes. I've asked it things about sort of modern CSS, and it gets it wrong and makes stuff up, I guess because within its training data it can't find something, even though it might have existed for over a year.
00:38:36
Speaker
And I'm just like, well, if it's unable to get these things now and it's not up to date, I just don't know what the benefit is for these companies at some point when, again, it starts eating itself eventually. There are so many AI-written articles out there now
00:38:54
Speaker
that they're building a house of cards in a way. I just want to know, from the perspective of somebody who works at the top level of this, what the goal is, because I feel like it is a house of cards and they're just building it taller and taller. I don't know, maybe I'm wrong there, but yeah, I am curious. David finishes off with: words for words' sake, code for code's sake. AI pollution is making the web a disgusting place to work, but I guess the idea is I won't have to work much longer, right?
00:39:25
Speaker
Soon I'll deploy my AI agents to task while I kick back, sip a skinny frappuccino, and watch the world burn. So, yeah, obviously two criticisms of AI that I tend to agree with for the most part. And I do think that, as David said, or no, it wasn't David, it was the quotes from Declan,
00:39:51
Speaker
just about having reasonable criticism, I do think that's important. I get where David's coming from in terms of, well, sometimes we have to go to the extreme because the people pushing these things are at the other extreme.
00:40:05
Speaker
And so, rather than just going, OK, but isn't, you know, and trying to find the reasonable space, you do need some voices that are more on the far end.

Closing Thoughts and Listener Engagement

00:40:18
Speaker
The worrisome part of having an extreme opinion on anything is that the extremes will never agree with each other. Whereas I think if you take a more reasonable approach,
00:40:31
Speaker
you get there in the end. You can change people's minds sometimes, or at least maybe not change someone's mind, but get them to realize certain things. But I do think, as much as David here was saying he was taking a hard stance, the same as with Salma's article, they're very...
00:40:50
Speaker
They're not just saying AI is bad because it sucks and makes bad code, right? They're being a little bit more detailed and explaining things. I think they're both valid criticisms of where it's at, and I wanted to share them with you. So I hope you enjoyed that. If you like this type of episode, please let me know.
00:41:06
Speaker
I think it probably turned into a much longer one than I usually do. But just this idea of going through articles and sharing my own thoughts on them, the last one I did like this was relatively well received. So if you do like this type of approach, please do let me know in the comments. I have to be careful with that, though, because I'm talking about it now and we're at the end of the episode.
00:41:26
Speaker
So probably the only people still listening are the people who enjoyed this, so I'm asking for some biased feedback now. But still, let me know anyway. Or if you made it to the end because you're listening to it as a podcast or in the background while you're working and you're like, eh, whatever,
00:41:40
Speaker
I don't need that, that's fine too. If you are watching on YouTube, you can just leave a comment, or if not, hit me up on Bluesky with your thoughts. And let me know what you think about AI as well, if you have actual good use cases for it.
00:41:53
Speaker
I think we have to be very careful; my biggest concern is always the environmental impact that it all comes with. But if you do have good use cases for it that have helped out, or counterpoints to anything as well, I always like hearing counterpoints to things to make sure that I'm not just being biased for the sake of bias. So let me know if you do have any counterpoints to anything that was brought up, or that I said along the way, that you disagree with, because I don't want to live in a little bubble either.
00:42:24
Speaker
And yeah, I think that's it for today. So thank you so much for listening. And of course, until next time, don't forget to make your corner of the internet just a little bit more awesome.