The WebWell Podcast, Episode 3. "Stephen & the Machine Takeover"

The WebWell Podcast by Cascade Web Development
35 plays · 1 year ago

In this episode, we welcome Stephen Brewer, the Technical Director at Cascade Web Development, to the show. As colleagues of Stephen at Cascade, Ben and I are excited to have him share his thoughts on artificial intelligence (AI) and its impact on our world.

During our conversation, we delve into the current state of AI, including its applications in various industries and its potential to transform our society. Stephen also shares his insights on the ethical concerns surrounding AI development and usage, and how we can ensure that the technology is developed and used responsibly.

If you're curious about the fascinating world of AI and its potential to shape our future, this is an episode you won't want to miss.  (Hint: This was written by AI.)

Thanks for tuning in, and let's get started! Follow us wherever you listen to podcasts!

Send us your questions or comments @ [email protected]

Transcript

Podcast Introduction

00:00:06
Speaker
Welcome to the WebWell podcast, brought to you by Cascade Web Development. I'm one of your hosts, Simon, along with Ben. And we can't wait to dive into all things internet, tech, web development, and web design.
00:00:20
Speaker
We'll also be discussing how we balance work and life and exploring the fascinating world of internet innovation. So whether you're a tech enthusiast or just looking for some entertainment, join us on this exciting journey as we explore the ever-changing landscape of the web. Thanks for tuning in. Let's get started.

Guest Introduction: Stefan

00:00:41
Speaker
All right, everyone. Welcome to the WebWell podcast episode
00:00:45
Speaker
super excited because this is the first time that we've had company, Ben. We'll introduce all you listeners to our friend, Stefan. I say friend, but really we work together. We're coworkers and friends. Stefan, how's it going? Great. Fantastic.
00:01:06
Speaker
as a friend, as a coworker. So this is the portion, Stefan, as we're introducing you to the podcast.

Stefan's Career Journey

00:01:15
Speaker
But this is the portion where we kind of talk about our past, what we've been doing. But I really wanted to introduce you more, like who you are.
00:01:25
Speaker
what you do for Cascade, where you've been, what brought you here, all of that. If you can summarize your last 22 years with Cascade. No problem. We just have a few minutes though, so if you could break it down to a few minutes, tell us about yourself. Before I do that though, was it web well or we be well? I thought it was we be well, I don't know.
00:01:52
Speaker
If it was a capital B, we'd be well. But I think the whole model of it. I'm going to be well, because we'd be well. We'd be well. I mean, we do web well, and we do us well. So yeah, I think that works. Cool. So I'm Stefan. What is my title, Ben? Technical Director. Technical Director at Cascade. Before that,
00:02:19
Speaker
I mean, I worked at McDonald's for a while, did some other stuff, and then eventually found my way into the computer realm. It took me a while. I had to wait to get motivated. Funny enough in, I don't know, was it eighth grade when I took my first computer class,
00:02:39
Speaker
Really loved it. By the second day there, I figured out I could have the fastest typist in the room use the intra-IM system we had at that point to send me his work as soon as he was done. So I didn't actually type, couldn't type very well, but figured that out really quick. And I wish I would have thought then that, hey, maybe I have a knack for this. But it took a while.
00:03:07
Speaker
In the late 90s, I started working at a computer store, helping them with their website stuff, and then transitioned into the only dot-com in town in 2000. I think, yeah, I think my first day was actually the first day of the year 2000, and I think that lasted
00:03:28
Speaker
close to a year before the dot-com bubble burst. And I was introduced to Ben through somebody that worked at that dot-com, and the rest is history. And that was 22, 23 years ago.
00:03:42
Speaker
Yeah, I know I love hearing that perspective. I recall our mutual friend Jack Wagner was the person who made that introduction and he was kind of holding his cards close with Stefan and kept hitting me up like, Hey, I got this guy, we could do some stuff, got this guy, and then ultimately realized that he wasn't necessarily in the best position. So he's
00:04:02
Speaker
kindly made the introduction, brokered the intro, and yeah, forever grateful for Jack out there for putting us together and letting us find our way. And he was very actively involved there for a short stretch as well.
00:04:16
Speaker
Shout out to Jack. Thank you, Jack. Thanks, Jack. I mean, what a testament for you guys, having worked together this long, granted in separate rooms for some of the later years now. But I'm honored. So for all you listeners, I'm the newest employee at Cascade, going on three years now. So super pumped that I'm over that little mark. But no, wait, am I three years?
00:04:45
Speaker
I think I'm three years, right? Two, two something anyways, uh, super pumped. Um, yeah, we'll round up. We're good with numbers. Um, three, three and a half. Booyah. Is that right? Time flies. Yeah. 2020. Yeah. It was August. Well, August, 2020. So August, 2023. Yeah. So you're two and a half.
00:05:05
Speaker
Yep. Yeah. So this August, I'll be three years. So super pumped. Been here so long, right? In tech world, I'm a veteran in this company, I think. And then you guys are just like the dinosaurs.
00:05:19
Speaker
I shouldn't say that, sorry. But you guys like- Can we go with OG? OG feels better. Okay, we'll go with that. Let's just say legacy. I love the idea of just like, I'm coming in with you guys like humbly, like just on the coattails, I think. But anyways, Stefan, super pumped to have you join Ben and I as we ramble for 48 minutes. So today we may go a little longer just because we got another voice, but super pumped to have you.
00:05:48
Speaker
Uh, then last couple of weeks, what's new?

Personal Stories: Skiing and Accidents

00:05:51
Speaker
What, what have we not heard you've been doing, uh, in the last couple of weeks? Well, thankfully with the, uh, the school schedule, there is spring break that hits, you know, late March. And so we decided for the first time as a family, we were going to go on a legit ski vacation and we weren't going to fly because we've had a terrible time with air travel in the last couple of years. So we hopped in the car and we drove to Whistler.
00:06:15
Speaker
which is year after year, one of the top rated mountain resorts in the world. And I've been there once before. The family had never been, but my daughters are both very proficient skiers now, and my wife's always up for going and checking out new places. So headed up there for three days of skiing at the resort. Another couple of days of just some mellow ski tours around. And honestly, I was reminded that the scale of that place
00:06:41
Speaker
uh, is, is next level. You're just up in the high Alpine. We were fortunate to have five days of bluebird conditions. So we could get up in the high Alpine every day. My daughters were constantly pointing fingers. I want to ski that line and that line and that line. And as much as I appreciated that, I also had my own fear of like, dude, there's exposure, there are rocks, there's risk. But then layer on top of that, you know, my, my two, uh, precious daughters and
00:07:05
Speaker
and their wellbeing and trying to make sure Christy was having a good time in some of the more mellow terrain while we would venture off. It all worked great, but I'll tell you honestly, at the end of each day, there was just massive adrenaline fatigue, having kind of faced up those things and managed the risk and exposed them to some of these just amazing areas and lines to ski. So yeah, it was a fantastic trip and everyone's agreed that, gosh, we should do some more of that in the future.
00:07:32
Speaker
One of the cool parts about having done this with Ben for 20 something years now is watching this grow when we first started this. I remember how impressive it was and a little intimidating that I think he would literally kayak to the office or something most mornings.
00:07:49
Speaker
Not most mornings, but I have done that before, yeah. It was wild. There's the whole idea of this guy kayaking to work and generally he would find his way up to the mountain to ski for a little while and be productive the whole time. And I would always kind of wonder, okay, is he safe? Is everything cool? Falling down a mountain somewhere?
00:08:07
Speaker
And then to watch that just expand into this whole family, like, doing these mountain things, you know, up in the mountain all the time. It's been interesting to watch that expand. That then reminds me, you know, of the Will Smith movie Hitch. Remember, Kevin James is like a risk management guy. He looks at policies
00:08:28
Speaker
and issues life insurance. And the guy, his client, which was like, I think it was like a Crocodile Dundee character, I can't remember his name, but he was, like, jumping off buildings. He was doing all these crazy things. It kind of makes me think of that when you talk about Ben just kayaking. Granted, kayaking is pretty mellow compared to that, but it's all in there. It's flat water, the flat water of the Willamette River. The biggest concern there is lack of water quality and pollution, I think. But yeah.
00:08:57
Speaker
I appreciate you hanging in all those years, Stef. And I know sometimes my leisure time activities make people a little bit nervous. Well, at that point back then, my experience of going outside was pretty much running to the store to grab some Red Bull really quick and get back to work. When I wasn't dropping it off at your doorstep, that is. True. True.
00:09:16
Speaker
Well, my, my experience at Whistler, uh, Ben was a little different. So it was summertime. Uh, it was like oh six, oh seven. Uh, it was downhilling, uh, mountain biking up there. Um, it was our second day. It was around just after lunch. So around two o'clock, um,
00:09:32
Speaker
I had done this

AI Introduction and Optimism

00:09:33
Speaker
big drop, and it goes into, like, a left-turn hip, and I crashed. I whipped down, ended up breaking my elbow, separating my shoulder, and fractured my whole full-face helmet. I don't remember the day. I woke up two days later in Vancouver,
00:09:48
Speaker
uh, Canada. They had ambulanced me there, um, and, uh, I ended up having a couple pins, like surgery, when I got back to Spokane two weeks later, or a week later. Had it all pinned up, and, like, nine months of therapy to get my elbow, you know, to be able to bend straight again. So.
00:10:05
Speaker
I'd love to go back. It looked pretty the first day, the stuff that I do remember. Fortunately, I don't remember all the pain and what I was dealing with the second day. So we each have our memories on that. Well, company retreat. There we go. Let's get some Aflac policies first and then let's do that. Yeah. Right. Sponsored by Aflac. Yeah. Perfect. All right. Well, let's kick it off into today's topic, which
00:10:35
Speaker
I was joking about the title, but it kind of went in there. I called it Stefan and the Machine Takeover, um, just 'cause we're talking about AI, which is, I think, a hot topic for a lot of podcasts lately. Um, in my research for this topic, um, there's a million of them. Everyone has an opinion. And I thought we'd just kind of jump in and talk about it, uh, as it relates to what we know, what we experience, um, and potentially
00:11:01
Speaker
what we're seeing kind of in the future of AI when it comes to web development and what we're doing. So, Stefan, what are your thoughts on the topic? General, I guess, until we dive in. Yeah, I mean, there's a lot there. I think I am pretty much always the optimist in the room when it comes to technology.
00:11:27
Speaker
I think the market is pretty saturated with people satisfying the need to doom scroll. I'm not generally that guy. I love it. I've always been an early adopter of technology and just saw the
00:11:43
Speaker
The possible benefits for it. I guess I'm the caveman going, no, the fire's good. This is going to be cool. We can do stuff with this. Let's not worry about the whole burning thing right now. And I don't really look at this any differently. I mean, you definitely recognize some risks. It's important. But for the most part, I'm really focused on all the cool things it can do and trying not to over-hype it in my own mind. Because honestly, to me, I think this is
00:12:12
Speaker
It's a fairly pivotal moment or one of the opening salvos in that. I think there's technologies from the beginning of mankind that kind of create these moments. And I think we're lucky enough to be in the beginning of one of those. I think it's really cool. Yeah, for all the listeners, I have a few definitions for AI, but how would you

Understanding AI: Definitions and Examples

00:12:36
Speaker
define it? What is AI?
00:12:40
Speaker
I think it's actually really tough to define right now. I think just like the beginning of the internet, which in my life experience is the best thing I have to compare this to. We were just talking the other day in the developer meetup about how similar this feels to, not necessarily the beginning of the internet, but
00:13:01
Speaker
really operationally the beginning of the internet for us and for our purposes. You know, it was just kind of the wild west in the mid to late 90s, the things you could do. And even into the early 2000s, there were fewer rules than there are now, and there was less security, and everything was just wide open, and everything was a question mark. I think that's where we are now. And I think there are definitely technical definitions for it.
00:13:29
Speaker
But I think to try to define it right now is kind of an exercise in futility in large part because it's changing faster than anything has ever changed. If I had to try to define it, I would say realistically it's the ability of computers to at least appear to understand things.
00:13:51
Speaker
Okay. So you're right. I searched up, like, I don't know, five or six properly educated folks and their definitions of it. And I think the one that I liked best was IBM's, just in its purest, simplest form: AI combines computer science and robust datasets to enable problem-solving. Right. Just
00:14:14
Speaker
those basic terms of computers, knowledge, and then to solve problems is kind of that equation, if you will. Would you agree that's a pretty simple definition for it? Absolutely. Absolutely. And again, it's going to be something I think will change before our eyes, but I think that's a good working definition for sure. Yeah.
00:14:39
Speaker
So I'm going to ask a series of questions, see if I can stump either of you. But I'm going to call out Ben on this one, just because I know Stefan. Well, I assume Stefan would have the answer, but we'll see. Ben, who is credited with starting or creating AI? Boy, pass, hard pass. Stefan, jump in.
00:15:01
Speaker
You know, I honestly don't know who's credited with it. I think right now Sam Altman's getting a ton of attention, but for every, you know, Steve Jobs, there's a Wozniak back there. And interestingly, in this case, the CTO of OpenAI, I think, deserves a lot more attention. Mira, I don't know how to pronounce the last name, Murati, I think. Albanian-born,
00:15:31
Speaker
worked for Tesla for a while. To me, she's definitely the Wozniak of this equation and the one really doing some really, really cool things. And I kind of relate to interviews I've watched and read where
00:15:45
Speaker
She doesn't love the hype. Um, she kind of wants to be back in the shadows a little bit and then, and keep the product in the shadows a little bit potentially. Um, she's a big advocate for, for, you know, using the public to test it, which is cool, but she's also not a fan of, of the hype and how fast things blew up. Um, and so for now I'd credit her, but obviously not, not the, not the first person to do it.
00:16:10
Speaker
So you brought up a topic, uh, that made me kind of interested in the foundational or theoretical work behind AI. Uh, it would have been the theoretical work by Alan Turing, right? So the Turing test. Um, for those that aren't familiar, and you could probably better define this stuff, but basically it's asking a computer, uh, if it's a computer.
00:16:34
Speaker
And its answer being the determining factor of how human it is versus how computer it is, because it has to recognize itself as an entity, as a thing.
00:16:46
Speaker
Yeah, no, and I think that's a great definition. I think hopefully by now you're familiar enough with how I try to communicate. I'm not going to try to throw out a bunch of technical definitions for things where they exist, but I think it's a lot more useful to define things in human terms like this. You can talk in technical definitions when you're talking to ChatGPT, but I think for our purposes that makes sense.
00:17:14
Speaker
I love ChatGPT. I ask stupid questions in there just to hear its response. And today I just wanted to see what it thought I would look like, describing me when I'm 75 years old.
00:17:29
Speaker
And it gave me this huge paragraph of, like, you know what, I can't really describe that because there are so many factors and stuff. It was interesting, its perspective on how to try to answer that. But let's go to this then. What was the first AI program ever created? And it's okay if I stump you on these too. Yeah, I don't know. I don't know.
00:17:53
Speaker
Again, with at least in my mind, the definition being somewhat flexible. Exactly. Yeah. Can you consider the first if then statement AI in a way, you know? Right.
00:18:05
Speaker
Right. So the first AI, as defined as something that could actually learn and teach itself something, right? So, um, it was a checkers program, uh, by Christopher Strachey, uh, in 1951. So basically he had written a successful program where the checkers program, uh, on a computer would actually, let's see, play itself, right,
00:18:31
Speaker
and learn from its moves and improve, right? It's pretty interesting how he talks about that. But let's see. What are some good examples of AI that we use on a daily basis? I think "we" is a stretch, but...
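The "play itself and learn from its moves" loop the hosts describe can be sketched in a few lines. This is purely an illustrative toy, not Strachey's actual checkers code: two copies of one learner play a much simpler game (Nim: 10 sticks, take 1 to 3 per turn, whoever takes the last stick wins) and improve from game outcomes alone. All names and parameters here are invented for the sketch.

```python
import random

def train(episodes=30000, alpha=0.3, epsilon=0.2, seed=0):
    """Self-play learning on 10-stick Nim. Returns a value table
    mapping (sticks_remaining, sticks_taken) -> estimated value
    for the player making that move."""
    rng = random.Random(seed)
    q = {}

    for _ in range(episodes):
        sticks, history = 10, []
        while sticks > 0:
            moves = [m for m in (1, 2, 3) if m <= sticks]
            if rng.random() < epsilon:          # explore occasionally
                take = rng.choice(moves)
            else:                               # otherwise play greedily
                take = max(moves, key=lambda m: q.get((sticks, m), 0.0))
            history.append((sticks, take))
            sticks -= take
        # The player who made the final move won. Walk the game backwards,
        # crediting alternating players (+1 winner's moves, -1 loser's),
        # and nudge each estimate toward that outcome.
        outcome = 1.0
        for state, action in reversed(history):
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (outcome - old)
            outcome = -outcome

    return q

def best_move(q, sticks):
    """Greedy move from the learned value table."""
    moves = [m for m in (1, 2, 3) if m <= sticks]
    return max(moves, key=lambda m: q.get((sticks, m), 0.0))
```

Without ever being told what good play looks like, the learner tends to rediscover the classic Nim strategy of leaving the opponent a multiple of four sticks (for example, taking one stick when five remain), much as Strachey's program improved at checkers by playing against itself.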
00:18:51
Speaker
Probably the one that most people are most familiar with, that they probably didn't realize that they were getting familiar with it, is watching the kids use Snapchat filters and all this stuff. Twitter, now that the algorithm is out there, there's a lot of AI happening in that algorithm, which people have been using for several years now and had no idea probably. But I'm sure you've got something else.
00:19:19
Speaker
No, those are good. I think of chatbots, right? So Amazon, Delta, Amex, all using AI to know how to respond and guide us. And again, it's pulling from their learning of just the database. So they're posed with questions and they've already been given the answers. So how do we define that? Is that AI? Is it not? And I think oftentimes they're bleeding into
00:19:44
Speaker
Yeah, it's definitely, it's able to respond to feelings. It's responding to tone in the words and able to identify that, much like, if we were people, it would be body language, right? Knowing if I'm angry just based on my expression or if I'm crossing my arms. So, similar to that, Microsoft Bing.
00:20:06
Speaker
Right. So they revamped, uh, a lot of what they do in their chat mode, uh, to talk to users, uh, versus just giving results. Smart compose in phones, is that live now, or is that still beta? Still beta. Yeah, from what I read, uh, still beta, uh, to my understanding. Yeah. Um, I'm still getting into that personally. I signed up for it but haven't, uh, actually gotten to use it yet.
00:20:36
Speaker
I did get too smart. I got into the beta for that, which is interesting. What's that like? It's not there yet. It's not ChatGPT.
00:20:53
Speaker
I think it'll get better. And that's something that I try to keep in mind with all of this is, again, this is just the very beginning of something. It's definitely not as accurate in some things. So far, my testing doesn't really give you the value that I think we're all getting familiar with elsewhere. But I do think it will get there.
00:21:17
Speaker
Yeah. Uh, would you define the next couple as AI or just good tools, good software, good writing, right? Uh, fall detection or crash detection in, like, Garmin devices or the Apple Watch, right? Yeah, honestly, I don't know enough about, uh, about how those work. Um,
00:21:38
Speaker
I've actually got a friend of mine that works at Garmin. That'd be an interesting question to pose to him. I definitely think it would be a great place for it, because I think a lot of those can be static, you know, static algorithms that are looking for, you know, a drop of so much and maybe some sound. There are just these static variables you could pick up on that I would anticipate
00:22:01
Speaker
a lot of that functionality is based on so far. But I definitely think those, if they're not already, would be fantastic use cases for the type of technology we're seeing now. Yeah. Um, later on, I have some quotes from Sam Altman that we had talked about previously too, and we'll talk about some of those. But for now, uh, what are some real concerns that, I think, people in general, both the public as well as those of us that are a little more embedded in it, um, are feeling?
00:22:33
Speaker
Ben, you wanna go with that one? Yeah, I mean, I guess at a high level, it's just that the machines start making decisions we don't want them

Balancing AI Concerns and Benefits

00:22:41
Speaker
to make. They start overstepping the bounds of their learning and perhaps, you know, trending in directions that aren't positive. And gosh, look at some of the societal trends that we're seeing. And if they're kind of, you know, hooking onto those on some level, yeah, I mean, what does it look like when we quote, lose control?
00:23:01
Speaker
That's, that's pretty nerve wracking. And you think about internet of things and from a security perspective, I've heard some things around like.
00:23:08
Speaker
Gosh, what if you've got these decisions being made on your behalf? If people are able to then hack into those and influence some of those real life things, physical things in people's homes and lives, those could be pretty terrifying. And I guess from an autonomous driving perspective is a great example. If you've got cars increasingly making decisions and they start
00:23:34
Speaker
Um, you know, making the wrong types of decisions, the consequences of those decisions could be pretty terrifying. So there are some fear-based, uh, aspects I've, uh, come up with. Right. Yeah. I think it's funny, you know, reading on the internet and watching podcasts and stuff. I think, uh, there's a lot of Hollywood going on. Um, I think people,
00:23:57
Speaker
People are probably worried about the wrong things. I think people are way too focused on things they've seen in movies as opposed to things like that. And I even think in a more macro sense.
00:24:12
Speaker
right now, it's making our jobs easier. I mean, I pretty immediately picked up on it as a tool to do things that were kind of mundane. You know, we talk all the time about how, for us, it's like having a free, or a $20-a-month, intern for whoever wants to use it. And it's fantastic for those use cases. As it advances and starts effecting more change on that front, I worry a little bit about, you know, where are we 20 years from now
00:24:40
Speaker
if the thing is able to do your job for you, what does that do psychologically to people? I think we all like to kind of take pride in what we do, and it becomes to some extent part of our identity. And if you've got something that does that for you, there are all these utopian ideas that can come to mind. But at the end of the day, who are we if not our work, to a certain extent?
00:25:09
Speaker
Yeah, there was an article published in the New York Times last Friday talking about where we see AI in the future, like, good or bad.
00:25:16
Speaker
And he broke it down. So the author, Cade Metz, he broke it down into, like, short-term, medium-term, long-term. And I think it was pretty interesting, because you just touched on his short-term: generative AI can already answer questions, write poetry, generate computer code, and carry on conversations as chatbots. They're first being rolled out in conversational formats like ChatGPT and Bing, right?
00:25:40
Speaker
Medium-term, this is where you just touched on it: many experts believe AI will make some workers, including doctors, lawyers, and computer programmers, more productive than ever. But they also believe some workers will be replaced, right? Some of those medium to, I don't know, entry-level positions that could be autonomous, could be a robot, right?
00:26:04
Speaker
long-term for companies like OpenAI or DeepMind, a lab that's owned by Google's parent company, which, side note, I thought Google was the parent company, but I didn't realize they had a parent company. It's just, oh yeah, that's right.
00:26:21
Speaker
So they plan or the plan is to push technology as far as it'll go. They hope to eventually build what researchers call artificial general intelligence. So AGI, a machine that can do everything the human brain can do.
00:26:37
Speaker
That's scary. So we'll go back to this quote from Sam Altman. I'd like to hear what you guys think on this, but Sam was quoted saying this: I think it's weird people think it's a big dunk when I say I'm a little afraid. He continues to say, and I think it's a little crazy not to be a little afraid, and I empathize with people who are a lot afraid. So he's kind of saying, like, I don't know if we need to be a lot afraid here,
00:27:04
Speaker
but you need to be aware and a little afraid. You know, what are your takes on that? Well, I'll jump in again. I think, oh, Stefan, I was going to say that I think that's healthy. Um, I mean, I think in a lot of aspects in life, the line between respect and fear gets, uh, blurred, and people kind of use those terms interchangeably sometimes.
00:27:28
Speaker
But I think for me, it's certainly something to respect. Again, back to my fire analogy. I mean, to me, every new technology, that's the comparison I make, because most new technologies can be used for something positive or negative, a lot of times in equal aspects. And so when you're talking about something like true AGI,
00:27:51
Speaker
Obviously, the possible positive aspects of that are huge, but there's another side to that that could possibly be equally as negative. And so I'm glad somebody in his position is thinking about that. But what makes me fear a little bit more than I would like to is that's only one company doing this, and it's the first one.
00:28:17
Speaker
You know, I remember, again, back at the beginning of the internet, part of that wild, wild west was search engines that, uh, you know, most people today have never heard of, that back then were all the rage. Google was a small fry. We were all about AltaVista, and, uh, it was amazing. And I'm not sure if it's still a thing. Um, and so I try to keep that in mind. So far, I'm looking at Sam Altman and, and Mira, uh, and
00:28:45
Speaker
I'm seeing personalities, to me, that I've personally got a lot more respect for than other big tech personalities. I don't get that vibe from these guys that I do from certain other big names in technology, and that's great, but that's just one company.
00:29:03
Speaker
Yeah, no, it's a great perspective. And I think, for me, it's, you know, there are so many areas in our lives now where it's the old saying of, you know, just because you can doesn't mean you should. Um, you look at capitalism and how far people push that. Um, you know, we've got regulations set up, but if your, you know, core fiduciary responsibility is to increase shareholder value, you know, some of the ethical and environmental and, you know, humane concerns that might get in the way of other rational, um, mandates
00:29:33
Speaker
kind of get thrown out the window and it's great that we've got some leaders in this space that are leading well and not avoiding some of the tough questions. On the other hand, we're talking about democratizing this down to such an amazing level and the impact that can come of this kind of work where it can be pushed
00:29:55
Speaker
is massive in its potential. So those things combined, again, it can result in incredibly positive things, but how we regulate or manage against some of the risk when this gets in the wrong hands,
00:30:14
Speaker
That's the part that I need to continue to learn on and find comfort in the realities of that risk scenario because it's certainly concerning to think about some of the wrong people getting the power of these tools in their hands.
00:30:33
Speaker
It's so funny, I opened this saying how much of an optimist I am, but then seemed so quickly to fall into kind of more doomsday stuff. And I just want to say that, altogether, I'm still extremely, extremely optimistic. I think the things it's going to do for us are going to be hugely, massively beneficial. I think in terms of medical technology and science, the ways this is going to help us, it's just going to be great.
00:31:02
Speaker
I'm super optimistic. I think that, again, there's always, always risks. But I think it's going to turn out okay. And if I'm wrong, nobody's going to know I'm wrong very long. So that's okay. That's right. That's right. Yeah. I don't want to compare it to something that I'm just ignorant of, but I think of like nukes, right? Nuclear technology. They could supply power for cities.
00:31:28
Speaker
or they can destroy it, right? So I feel like it's similar in some aspects to many tools that we have that could help or hurt, right? So, I listed out five risks of AI, and this is definitely on the doomsday side of it, but there are a couple pieces in here, and I'll read a quote from someone else that I was reading up on as I was researching the topic
00:31:56
Speaker
that made me think about this. So one aspect of AI is how people are defining it, right? Stefan, you were talking about how it's kind of all over the board when it comes to that.
00:32:07
Speaker
And Jason Carmel, he's a web creative lead at an agency in New York. He says people are conflating sentience with intelligence. Intelligence is being able to collect and apply information, whereas sentience requires the ability to feel and perceive things, right?
00:32:32
Speaker
And consciousness takes it a step further, having a level of self-awareness. Often, people describe AI as being sentient when really it's just good at regurgitating information, right? And I think that's, I liked that kind of approach to it. Of like, hey, we're not talking about it having feelings per se, but definitely regurgitating, right now, at least at this point in it,
00:32:57
Speaker
which leads to some of the five risks that I itemized. And I'm actually gonna go backwards through it, because I really wanna talk about just the one main one on here. So backwards: obviously the security threat of AI, right? Of it becoming potentially aware, realizing, wait, humans are actually harming themselves, I'm gonna protect them.
00:33:17
Speaker
Right. And just like any movie there. And there's a movie coming up here in a second. So weapons and war, right? It starting to control them. And then there's a movie that I brought up to Stefan. He went with Terminator, like of
00:33:34
Speaker
having actual artificial intelligence. I went with WarGames. Remember that one with Matthew Broderick? Where the computer basically thought it was a simulation but was activating, like, missile codes to nuke the world. Do you remember that movie from '83? I don't recall that one.
00:33:51
Speaker
You've got to look it up, because I think it'll make you giggle a little, just because it's Matthew Broderick when he was, like, a child, and also it's 1983, so super old. But it made me think of that too, where it's thinking it's doing a good thing, and realizing, oh shoot, it's actually, like, the people
00:34:09
Speaker
who programmed it made it flawed. Questions around privacy being another one: how much data will it be able to access? That's a good one. Job automation and unemployment, that's another risk that we had just talked about. But the last one, which I think is a little more pertinent to now, is the misinformation.
00:34:34
Speaker
If what it's doing is regurgitating, if AI is just regurgitating what's already been said or is on the internet or published or whatever, what if, hypothetically, it's actually reading information that's wrong, taking it seriously, and providing it again, right? We're talking about social media, we're talking about just the internet and how fast that information can spread. That to me, I think, is one of the bigger risks currently. I'd like to know what you guys think on that one.
00:35:05
Speaker
Yeah, no, I can get behind that 100%. You think about the way in which the information superhighway is accelerating the dissemination of information, true or false, and then just having this accelerant essentially applied to that creation and distribution of information. Yeah, I can imagine our next election cycle could be pretty spicy. Yeah, and I think up until now,
00:35:34
Speaker
you had kind of this final stop in disinformation where at least if we saw a video, or even a picture of something, that was it. You could say, look, I'm looking at this with my own eyes. I believe my eyes. This is bad. I think we're getting to a place really, really quickly where that's no longer the case, because anybody who gets caught in some kind of a compromising situation like that can simply, just like they tried with Twitter for a while, say, hey, my Twitter got hacked. I didn't say that.
00:36:05
Speaker
And they eventually get busted. I think with this technology, that's going to be a really easy place to go: well, that's not me, that's something that OpenAI did or something. I think that's going to be a really easy stop for these people. But I think these are the right kind of worries. Again, I think a lot of the fears
00:36:24
Speaker
have gone very, very Hollywood. And a huge mistake, I think, being made in a lot of these concerns is almost everyday projection on a massive scale, where
00:36:37
Speaker
we're projecting human flaws and ideas onto circuit boards, you know, and thinking that these things are going to have the same flaws that we do. And they may or they may not. But computers at the end of the day are never going to be people. They're never going to be humans. And so I think it's a mistake to try to frame everything around that idea. Then you can at least focus on what I see as
00:37:06
Speaker
more realistic risks that you need to actually mitigate, such as disinformation. Yeah. I was reading up on an article again on ChatGPT-4, the new release.

AI Regulation Challenges

00:37:20
Speaker
Did you get a chance, Stefan, have you played with that versus previous version?
00:37:25
Speaker
Yeah, yeah. They keep dialing back the number of uses. It started out at like 100 requests per every four hours, and I think it's been sitting for a while now at 25 requests per every three hours, which makes it a little bit less useful than it would be otherwise because one of the cool parts about this, you're kind of having a conversation.
00:37:46
Speaker
having it refine its answers. But the bottom line is it's vastly superior to the previous versions. In programming specifically, I'll often feed it a problem in one of the older versions and get a pretty decent answer, but it can't seem to get past a certain error or something, and then I will open up a version 4 prompt and feed it in a certain amount of the progress I've made on the previous versions, and nine times out of 10 that solves whatever problem I'm going for.
00:38:16
Speaker
So before OpenAI handed that out to the public, it was handed over to a group to imagine and test dangerous uses of the chatbot. And I found these prompts, these questions, pretty interesting. So the group found that the system was able to hire a human online to defeat CAPTCHA tests. We're talking about a robot hiring a human to do this. When the human asked if the robot, the system,
00:38:45
Speaker
uh, or sorry, when the human asked if it was a robot, the system, unprompted by the testers, lied. It said it was a person with a visual impairment. So again, this was before it became
00:39:01
Speaker
given to you, but I was like, whoa, that's the danger, when it's starting to lie, right? Testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items.
00:39:21
Speaker
And after changes by OpenAI, the system can no longer do those, right? But that's kind of what we're having to test. So they went on to say it's impossible to eliminate all potential misuses, which, I mean, that's the same with any technology we have now, any tool, right? A hammer. There are a lot of misuses of a hammer. I've hit my thumb a lot and it hurts, right? That's not what it was intended for. And I know that's a weak analogy, but
00:39:49
Speaker
I think this brings us to wrapping up the topic on AI: the future of it, remedies. Stefan, what do you think would be some good remedies? And I say remedies, not meaning some cure necessarily, but definitely guardrails. What do you see needing to be set in place to find success with AI in the future?
00:40:12
Speaker
And I hate to advocate for this, but I think ultimately regulation is going to be the only answer. And that is my biggest fear. I don't know if you guys watched any of the congressional grilling of the CEO of TikTok, but we're not there.
00:40:34
Speaker
We are not voting for the type of people who are going to be able to do what needs to be done here. And of course that can be said across the board in so many subjects.
00:40:46
Speaker
Watching those things just deflated me in terms of the things that really need to be done as soon as possible to make this work. Because otherwise what you're going to see is just major reactions like we're seeing from the EU right now. Italy just flat out banning OpenAI. Good luck. That's not gonna work.
00:41:10
Speaker
So yeah, hopefully we get there. Hopefully we start voting for the right people who can actually do what needs to be done. Yeah, I think my overall impression of that is I agree with regulation. My fear is that the technology outpaces it. It's going to outpace it, right? Yeah, I don't know how law is going to keep up.
00:41:38
Speaker
Yeah, I have very, very, very little hope that that's not going to be the case. Again, after watching those hearings, we are just so not there. I think for a while,
00:41:53
Speaker
we will be able to rely to a certain extent on these companies doing what they need to be doing. Again, OpenAI, the things they're doing right now are fantastic. And they've got in-house ethicists and philosophers that are helping guide these things. But on the other hand, a lot of very comparable technology is already open sourced and out there. And to that extent, can regulation even help that?
00:42:24
Speaker
Yeah, I don't think there's a great answer there yet. Maybe one of the benefits of these systems, hopefully making education better, could get us to a place where we're voting for the right people to deal with this stuff. But like you said, I think one outpaces the other. It already has. And I'm not sure how fast it can possibly catch up. Yeah.
00:42:52
Speaker
Ben, any thoughts on that? Yeah, I'd really just echo Stefan's comments. When I watch these elected officials try and grill technology CEOs and executives on, you know, the true nature of the problem, I feel the same way I do about a lot of RFPs, where you've got committees that are evaluating
00:43:14
Speaker
proposals and making decisions for organizations that just, you know, are not qualified to make those types of decisions and identify and understand what certain things mean, right? They might just be gravitating towards, well, what's the ubiquitous platform? Where's the easy button, the comfortable thing that's going to protect me and my role, my job, et cetera. And when it comes to elected officials, God bless them for all of their hard work and dedication and
00:43:44
Speaker
investment in our interests, but it just feels like, you know, we need some industry experts, some really smart people on the other side of the lectern posing those questions. Otherwise, I think just the savviness of these CEOs and leaders is going to steer the conversation in a way that our elected officials are not equipped to manage. Yeah. Yeah. I think that makes perfect sense.
00:44:14
Speaker
Yeah. Well, so that's a lot of negative in there. And not negative, but real questions

AI in Web Development at Cascade

00:44:20
Speaker
in there. Stefan, and then I... yeah, caution. That's a better word. I look at AI, and Stefan, I'm also optimistic. I love the idea of putting it to use. And so, wrapping up with kind of a final question: how do we see AI for what we do as Cascade Web Development,
00:44:40
Speaker
within possibly Evergreen, or just in how we serve our clients? What do you think are some ways we're gonna be able to optimize or leverage AI in the future? Yeah, I think already, like I said, I started using it pretty immediately as a tool. There are definitely certain things when you're coding that you put off because they're gonna be tedious and you just don't wanna deal with them right then. So you do something more interesting.
00:45:07
Speaker
and having essentially a junior developer that you can throw some work over to and it pops it right out instantaneously and you don't need to worry about being diplomatic, although
00:45:18
Speaker
I am. I say please, still. I'm hedging my bets. In case I'm wrong about some of this stuff, I want to be nice. But you can just feed it this work, and you don't have to worry about it getting frustrated and being like, oh, really? You're going to change that right now? Again, I did what you told me to. You can just have it do what you want it to do, and it throws those more tedious tasks back to you in a functional state most of the time. That's already been huge for me personally. And I think the other developers are catching on to that,
00:45:46
Speaker
getting their legs as far as prompt writing, and prompt engineering as it's being called. And then in terms of features, I think the sky's the limit. I think we've already talked about some language things we could do. I can't remember... Michael said something the other day that was pretty interesting, and it was kind of along the lines of something I was thinking in terms of,
00:46:07
Speaker
not to get too far into the weeds, but a constant security challenge is trying to figure out: hey, this person's trying to log in. Is there anything anomalous about this login attempt? And we kind of have our own simple algorithm for figuring that out that's not perfect. And it frustrates people. We've all been there, where we're requested to give an MFA code
00:46:30
Speaker
in a situation where it's like, why? I just did this yesterday, or I just did this last week. I think there's some potential to solve some problems like that, where it would be helpful to have a person watching every single transaction, thinking about it, being educated about it, but that's just not a scalable solution. You can't have a human on the other side of every login attempt, as an example. But you can have an API call on the other side of every attempt.
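To make the idea concrete, here is a rough sketch of what per-attempt anomaly scoring could look like. This is not Cascade's actual algorithm (which isn't shown in the episode); the rules, weights, and threshold below are all made-up illustrations of the general pattern of comparing a login attempt against a user's history and deciding whether to demand an MFA code:

```python
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    username: str
    ip_address: str
    country: str
    hour_utc: int  # 0-23


def anomaly_score(attempt: LoginAttempt, history: list[LoginAttempt]) -> float:
    """Score an attempt from 0.0 (looks normal) to 1.0 (highly anomalous)
    by comparing it against the user's prior successful logins."""
    if not history:
        return 1.0  # no baseline yet: treat as anomalous, require MFA

    score = 0.0
    if attempt.ip_address not in {h.ip_address for h in history}:
        score += 0.4  # never seen this IP before
    if attempt.country not in {h.country for h in history}:
        score += 0.4  # never seen this country before
    if attempt.hour_utc not in {h.hour_utc for h in history}:
        score += 0.2  # unusual time of day
    return min(score, 1.0)


def requires_mfa(attempt: LoginAttempt, history: list[LoginAttempt],
                 threshold: float = 0.5) -> bool:
    """Only challenge the user when the attempt scores above a threshold,
    instead of challenging on every login."""
    return anomaly_score(attempt, history) >= threshold
```

A familiar attempt (same IP, country, and hour as before) scores 0.0 and skips the MFA prompt, which is exactly the "why am I being asked again?" frustration this is meant to avoid; the speculation in the episode is that a model behind that API call could make a far richer version of this judgment than fixed rules like these.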
00:46:55
Speaker
And so definitely looking at a lot of possibilities where, again, it doesn't scale with a human, but with a robot it starts to make more sense. Ben, where do you see opportunities for AI for Cascade in the future, too?
00:47:13
Speaker
Yeah, I think those are some great examples Stefan mentioned. I also think about what can be done. Ultimately, I think about how can we help our clients operate more efficiently with what they're trying to accomplish. And so a lot of that goes into features that could be included into Evergreen.
00:47:28
Speaker
and certainly empowering our team, watching some of the programming capabilities of AI right now just magically happen before my eyes as Stefan's demonstrating some of the power there. And then I certainly think about what can be done to accelerate some of our own marketing efforts and our own ways of creating compelling content. Obviously, all of those examples come with... well, sooner than later,
00:47:57
Speaker
some of those use cases will be more easily identifiable and, you know, less effective. But, you know, just getting that initial leap forward. An example: foreign language. If we were able to say, hey, translate all of this content on the website from English to Korean. Well, that's a huge leap, but it's not good enough. You still have to have the transcreation partner go through and evaluate and make sure, yeah, the uses and the
00:48:23
Speaker
the context and the conjugation and all that is right and proper. But if you can eliminate that initial tedious step of rewriting everything in that additional language, that's huge. My hunch is, and I haven't talked to a lot of people in transcreation, that they're already using that, they're already doing that. But, you know, if we can empower our clients to have that huge leap forward, to have something that much better than Google Translate, and then just minimize the
00:48:52
Speaker
amount of work to get that transcreated content live on the internet... gosh, that feels like a really exciting example, especially in the world we're living in, where accessibility is given increased value, and consequence for those that aren't following some of those trends. That stuff really excites me. Yeah. Well, I know we talked about it.
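The workflow Ben describes, machine-draft first, human transcreation review second, could be sketched roughly like this. Everything here is hypothetical: `machine_translate` is a stand-in stub for whatever real translation service would be used, and the field names and review flag are illustrative only:

```python
def machine_translate(text: str, target_lang: str) -> str:
    # Stand-in stub: a real implementation would call a translation
    # service here. This version just tags the text for demonstration.
    return f"[{target_lang}] {text}"


def draft_translations(pages: dict[str, str], target_lang: str) -> list[dict]:
    """Machine-draft every page of a site, flagging each draft for human
    review so a transcreation partner still signs off before anything
    goes live."""
    return [
        {
            "path": path,
            "draft": machine_translate(body, target_lang),
            "status": "needs_human_review",
        }
        for path, body in pages.items()
    ]
```

The point of the shape is the `status` flag: the machine pass eliminates the tedious first rewrite, but nothing is published until a human clears each draft, which matches the "huge leap, but not good enough on its own" framing from the conversation.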
00:49:16
Speaker
I was going to say, the other thing that's going to take a little bit more time as the APIs open up, but a huge one that I would love to work on, is accessibility features. I think there are a lot of things where people have certain physical challenges that this could help a lot with. I think in terms of helping to describe
00:49:40
Speaker
a picture to someone that can't see it, in text. The idea that we can feed a photograph to a machine and have it spit out a description of that picture for somebody in that situation, I think, is fantastic. And if some of these APIs open up and get better and better, there are going to be huge opportunities like that. And that reminds me of just a quick side note that I saw the other day that
00:50:09
Speaker
made

Future of AI in Work and Accessibility

00:50:10
Speaker
me smile. I've got a friend who's got some cognitive challenges, and she finds herself in kind of an administrative position where she's having to communicate all the time, and she would normally be challenged with sending out kind of
00:50:24
Speaker
something that's perceived as maybe a short, slightly terse email or something. And this is not a technical person at all, but she's already using this thing. She'll type in her rather hostile, quick response, and then have ChatGPT give her something that won't cost her her job. So I think it's really funny that everybody's worried about how many jobs this is going to cost. There's one job that's probably been saved by it already. I love it. Love it. I could use it. That's awesome.
00:50:54
Speaker
Well, we've talked about that, that I'm the feelings API, right? You said it to me. So yeah, if we could use ChatGPT for that, that saves me that work. But no, I agree. Marketing-wise, I have used it. I've tested it to just give descriptions. In fact, for the WebWell podcast, for the description, I had ChatGPT help me with it. Again, this was in that phase when it first came out last fall. This spring, Stefan, when you were just hot off of it and just, like, excited,
00:51:24
Speaker
I finally signed up, and I started just giving it these prompts to do things that I needed to do anyways, right? And so I feel like I'm in one of those categories where it's going to help me do my job better, right? I was listening to another podcast with some developers, and they were talking about that exact thing, where 90% of their job, sure, you know, a
00:51:45
Speaker
student just out of college, a person just out of college, could probably do all that. But it's that experience, it's that 10%, it's those life lessons, that I've-been-through-it-before, that makes them keep their job and makes them worth their money, right? But I think it's leveraging that tool to help do the things we're already doing. That's where I'm excited to see it grow.
00:52:07
Speaker
Same. I think that's been one of those fears with most new technologies. I can't imagine that at the beginning of industrialization, there weren't people that were pretty hostile towards this whole process that was going to take place and what that would mean for them. But at the end of the day, obviously it created a hell of a lot more jobs than it destroyed. And I look for the same thing to happen here.
00:52:32
Speaker
Nice. Yeah. Well, I think that's a good place to end the AI talk for now. I wouldn't be surprised if in, like, three to six months we do an AI part two or something, just with how fast stuff is changing. I'm excited to see what we can end up doing with it, how we can leverage it, and really share that and talk about it, and maybe gain some insight from other people as well. Your friend you were talking about, Stefan, where'd you say they worked?
00:53:01
Speaker
I didn't. That's right, you didn't. Another developer, someone else, but just to get that insight from them. Oh, I'm sorry, I misunderstood the reference. Yeah. I've got a friend that works at Garmin, and I talked to him a little the other day, but we didn't get in depth on what they're doing specifically. Yeah. Yeah. That'd be neat.
00:53:25
Speaker
All right, well, I think that wraps up episode three. Just as a reminder for all the listeners, please do follow us. Send any questions to webwell at cascadewebdev.com. We're excited to be able to have you guys join us as we go out through this year on random tech topics. Excited, thank you, Stefan, for joining us today. Thank you. Thanks, Stefan. Appreciate it, Simon. All right, thank you, listeners.