Best Of: What are we building? And the future of human flourishing...

S4 E33 · Bare Knuckles and Brass Tacks

We've spent the last several months talking to people who live at the intersection of technology and the humans on the receiving end of it.

A data privacy attorney. A corpus linguist. A clinical psychologist. A performance coach. An entrepreneur who built a business on failure.

They don't all agree with each other. But they're all pointing at the same thing: the gap between how technology gets built, deployed, and sold — and what it's actually doing to people.

This week's episode is our attempt to pull that thread.

  • Mike McLaughlin — The AI ecosystem is running on bad data, has no real mechanism to fix it, and the next wave of cybercrime will target the training data itself.
  • Kimberly Becker, PhD — AI-generated text is structurally overconfident, and a corpus linguist traced that pattern all the way back to how decontextualized certainty language helped fuel the opioid epidemic.
  • Dr. Marissa Alert — What organizations call employee resistance to AI is, clinically, a fear and identity threat response that most rollouts are spending millions to ignore.
  • Tychon Carter — Winning is often where the real crisis begins, and the goalpost never stops moving until you decide your value isn't determined by your output.
  • The "Bad Hombre" — A solopreneur who built a business on public failure makes the case that the willingness to fail more than most people even try is the only real competitive advantage.

Every one of these conversations eventually arrives at the same place: the distance between what we're building and who it's landing on.

Transcript

Introduction to Technology's Human Impact

00:00:08
Speaker
Hey listeners, George Kay here. We were on the road this week, so we weren't able to record a new episode, but we thought it was time to look back on some of the most recent conversations.
00:00:19
Speaker
Over the past several months, we've been having people on who are operating at the intersection of technology and the human beings on the receiving end of it. We've talked to lawyers, linguists, psychologists, coaches, entrepreneurs,
00:00:33
Speaker
And what we found cutting across all of those conversations is a single thread, the gap between how technology is being built, deployed, and sold, and what it's actually doing to people.
00:00:45
Speaker
So this episode is a best-of compilation, our attempt to pull that thread together. We've gone back into more recent episodes and cut a few moments from each of these five guests, because we think they belong in the same room together.
00:01:00
Speaker
They don't necessarily agree with each other, but they're all pointing to something real.

Cybersecurity Threats to AI

00:01:05
Speaker
So first, you'll hear from Mike McLaughlin, a cybersecurity and data privacy attorney we talked to about the AI ecosystem, how it's running on bad data and doesn't have a way to fix it yet,
00:01:17
Speaker
and how the next wave of cyber attacks may target the training data itself. Kimberly Becker is a corpus linguist, and we talked to her about how AI-generated text is structurally overconfident. She traced that pattern all the way back to the beginnings of the opioid epidemic to show us what happens when certainty language gets weaponized with false confidence at scale.

AI Adoption and Employee Resistance

00:01:44
Speaker
We also highlight Dr. Marissa Alert, a clinical psychologist who works with organizations on AI adoption. She told us what it looks like when employees resist, the fear and identity threat response that's actually behind that resistance, how companies spending millions on rollouts are largely ignoring it, and how successful companies are rolling AI out.

Navigating Success and Personal Crisis

00:02:08
Speaker
Then we highlight our conversation with Tychon Carter, performance coach and, not for nothing, the first Black winner of Big Brother Canada. He walked us through why winning is often where the real crisis begins and what it actually takes to build a life that you recognize as your own.

Building Success Through Failure

00:02:24
Speaker
And lastly, we talked to Chris, aka the Bad Hombre, about how he built a business doing $50,000 a month in recurring revenue without a formal degree, without a roadmap, and with more failures behind him than most people accumulate in a lifetime. And he had something to say about how to fail.
00:02:42
Speaker
These aren't five separate conversations. This is largely one kind of conversation that we have been having across many months, many cities, and to be fair, across many cups of bad hotel coffee.
00:02:55
Speaker
So this is Bare Knuckles and Brass Tacks. Let's get into it.
00:03:03
Speaker
I truly think the next wave of cybersecurity incidents is going to target AI. It's going to target AI training data. Think about what we are seeing right now when it comes to ransomware: targeting and encrypting files, making them inaccessible to companies, and then they have to pay the ransom or completely rebuild.
00:03:19
Speaker
Imagine if you've got a ransomware actor who gets into your training data and just inserts poisoned data in there such that you have no idea what your model is actually going to output. And if you've got an AI tool that's used in operational technology, or an AI tool that's used to make very significant decisions in healthcare, for instance, and you can't trust the training data, that's a huge problem. And that's not a problem you can easily fix simply by rebuilding your infrastructure. You basically have to completely rebuild the model. But I think that's where we're going to start to see cybercrime go. If you don't trust the provenance of your data, if you can't prove the provenance of your data, you're going to be in a very, very bad way. This is essentially where we found ourselves in cybersecurity when most companies moved to backups and started backing everything up every one to two weeks so that they could overcome this problem. This is something that nobody's thinking about right now.
00:04:09
Speaker
And this is going to be significant. And to your point, I think it's been shown that it doesn't actually take a lot to throw off the model, because you're dealing with statistical derivations. You could have gigabytes of data, but it only takes a small percentage deviation to throw everything else off, right? Because it cascades through the calculation.
00:04:31
Speaker
So Cornell University in October published a report on this very topic, and they looked at how many individual documents it would take to poison a large language model of any size. The number they came up with was 250.
00:04:44
Speaker
You can have a 13-billion-parameter model, and it takes 250 documents to destroy it, to poison it completely. And if you think about how many that is, 250 pages in an Encyclopedia Britannica volume, it's nothing.
00:04:58
Speaker
But that's what we're talking about. It's very small numbers that can have a huge, cascading, outsized effect. So the question becomes, if we're looking at this from a data perspective, how do you get quality data into the hands of AI developers that are really on the cutting edge, that are being very innovative, and that have very specific use cases for that data? Not just "give me an image of a cat sitting on George's head," although, yeah, I do use ChatGPT for that pretty frequently.
00:05:24
Speaker
But how do you do that? So now we get to the quality data aspect of it. And right now there is no Amazon for data. There is no Etsy, there is no eBay for data. There's nothing that a company or an AI developer can go on to and say, I'm looking for X because this is what I need for my model.
00:05:43
Speaker
More than that, a lot of AI developers don't necessarily understand how to effectively articulate their data requirements. They get model requirements and they say, I need my model to do this. I need it to distinguish between a naval combatant and a fishing vessel. I need it to be able to parse through this molecular compound so that we can create a cure for cancer.
00:06:04
Speaker
Great. And what data do you need in order to get to that point? Now you need data analysts. You need people who understand both the problem set and the different types of data to be able to translate that into data requirements. And then once you have those data requirements, you need to actually go and find that data and figure out who has it. That's a really long tail. And if you're just a small, plucky AI developer,
00:06:25
Speaker
Your AI developers, your actual engineers are the ones who are having to do that entire process. There doesn't exist a marketplace where an AI developer can simply say, I'm building this model. Here are my data requirements. Who's got the data?
00:06:37
Speaker
But then I have to look at it like this, right? And I see the issue is, first of all, that a lot of people in those client-side positions, or even in development positions, don't actually understand what a proper model dataset looks like.
00:06:51
Speaker
I think there is a massive void and gap in education in the market, because everyone can hop on GPT or something like that, and yeah, great, you can do really cool things with prompts, and then they think they're a prompt engineer. I have met so many people who are not in tech who come up to me because they know I work in tech, and they're like, oh, hey, I want to work in AI. And they set up some weird coaching business based on what they've done with an open source model.
00:07:22
Speaker
Well, if you back up even further from my realization that people were citing my research wrong, when I really started digging into the language of technology and AI-generated text, I started looking at citation chaining and what it means when you play telephone, sort of, with science. Like, well, so-and-so said, and then I tell you, or I write it and you cite me, but I didn't quite get it right, and so you don't quite get it right, and then someone cites you. And then it compounds over time.
00:08:00
Speaker
Exactly. I started looking into that, and I came across the Porter and Jick letter. I don't know if you're familiar with this. I am not. Okay, so the Porter and Jick letter was the original. It was actually a letter, a five-sentence letter from doctors, cited as one of the first studies about opioids, that said opioids were not addictive in institutional settings. It was not a research study.
00:08:32
Speaker
It was just a letter to a journal. And that is what pharmaceutical companies picked up and cited over and over and over again. Oh my God. To convince doctors that they needed to prescribe opioids. It wasn't even a study. It was a decontextualized five-sentence letter. You can look it up. And there's a great article from 2018, when some medical researchers realized this.
00:09:01
Speaker
Sorry, I'm just going to pause. That is, yeah, take a minute, absolutely horrendous to think about. Sorry. Okay. Yes, we'll come back to that. We'll come back to something about that.
00:09:12
Speaker
Sure. Yeah. And if you think about it, I mean, just take that and expand it out to what's happening now at the CDC with vaccines, with all of this science rhetoric that's in the news. It's very concerning that we don't have a sense of the hierarchy of evidence whatsoever.
00:09:33
Speaker
We don't know the difference between what's found in a randomized controlled trial versus a cohort study. You know, these are things that scientists know that the normal
00:09:46
Speaker
the normal media consumer doesn't know. And so that was where my concern began. And then all of a sudden, as a person who hasn't studied medical writing at all, I'm like, oh, wow, this is where we take that instance of someone playing telephone with science and we scale it with AI.
00:10:07
Speaker
Because people always ask me, can a human make that same problem? Sure, a human can make that same problem, but AI can make it at scale. And that's what terrified me. Yeah, machine speed and scale is really the bigger problem. And you can point to the same thing through social media, right? This has been called context collapse, where something just kind of lands in front of your attention span and you see a claim or you see some hyperbolic statement and you are encouraged through design decisions and UX choices to react to that or to comment on it.
00:10:40
Speaker
But you have no further context. And I think we saw... that problem accelerated through social media and now at sort of the human language level.
00:10:52
Speaker
Yes, your point is well taken, but I did not know that about that letter. And just as somebody who knows many people who have suffered through the opioid epidemic, that is heartbreaking. When we read AI text, it's missing humility and hedging language. Do you think we're processing it differently at a cognitive level? Are we being primed to accept more absolute statements because the linguistic cues for uncertainty are just absent? Because I know, with a lot of the prompting-type work that I do, I
00:11:25
Speaker
I'm really big on human in the loop. I take an answer that a prompt produces for me and I always make sure that, first of all, I rewrite it in my own words. I treat it as an initial draft, not as final.
00:11:36
Speaker
I think in a professional world, it is absolutely unethical to just produce AI slop and use that as your final product that you submit. But I do notice that the certainty of the model, regardless of whether or not the statement is correct,
00:11:55
Speaker
that is an accuracy issue that gets missed. And I fear that a lot of people have become so reliant on these model outputs that the critical thinking we typically exercise when we actually read and write our own material, or when we read entire books, not just a prompt that cites a random page,
00:12:18
Speaker
I feel like we are slowly eroding that. And that is a serious concern about the overuse of prompt-based models to produce academic, professional, or even, I hate to say it, self-therapeutic information.
00:12:36
Speaker
Yeah, I think our brains started to make that shift because we are so siloed by algorithms. I mean, my LinkedIn is a very specific audience, because I hear from people whose views align with mine, because that's what the algorithm gives me and suggests for me.
00:12:58
Speaker
So it started with this kind of siloing that happens in our social media threads. We only hear... what we agree with, essentially. We never get any pushback. We don't get any friction about our beliefs.
00:13:14
Speaker
And so there's this idea of certainty amplification, which occurs when something tentative gradually becomes presented as an established fact, and then we start hearing this boosted language, right?
00:13:30
Speaker
And that starts to seem normal. Confidently wrong is the best way that I can put it. I feel like the entire Internet is the Dunning-Kruger effect, right?
00:13:43
Speaker
There is very little written about this: are people ready not just to use AI, but to integrate it into their work? How do they feel about it, right?
00:13:54
Speaker
And the reason that stood out for me is that as a psychologist, one of the core things that I do is focus on behavior change. How do I help people not just recognize what needs to change, but support long-term change?
00:14:09
Speaker
And when we talk about change, there's a whole lot of resistance that comes up, for a number of reasons. Oh yeah. And so if AI is coming into the workplace, that resistance is often predictable, right? And yes, there are a few people who are excited, right? They're enthusiasts. But there are also the folks who might fall in the middle or might be scared to death.
00:14:29
Speaker
And what do you think happens when we're scared, right? We might fight, we might flee, we might freeze. And those things still show up when people are at work.
00:14:40
Speaker
And that freezing might look like, well, I'm not going to engage. I'm not going to use this tool that my employer spent who knows how much money on. And so as I noted that, it was like, all right, let's see what people are doing about this.
00:14:55
Speaker
I did some work around May last year, trying to have more conversations around this, and I don't think people were as ready to talk about it.
00:15:05
Speaker
And I think as more studies are coming out, I know there was an article in Harvard Business Review that talked about why adoption fails, and we started seeing more about, okay, this anxiety, this resistance piece, this fear piece really needs to be examined a lot more closely. Nice.
00:15:22
Speaker
Yeah, I'm seeing a lot of issues with it from a technology standpoint, and you're seeing forced adoption from a process standpoint where, and this is free consulting, everyone, if you've not figured out your manual process yet and you try to automate it, it's going to fail and you're going to waste a lot of money.
00:15:43
Speaker
I was actually particularly upset yesterday because I read the headline that Jack Dorsey had cut about half his workforce at Block, citing purely AI efficiency as the cause. And I had to fight every urge of my body not to just scream from the mountaintops on LinkedIn and be like, no, you piece of shit, it's because you didn't properly plan your organization and you're just profit-driven, because all these shareholders are trying to do is drive obscene profit and growth by ruining people's lives, because people's lives don't matter. So that's one point I find interesting in what you were saying there, that people are fearful of adoption, but I think organizations intentionally never considered people to begin with. And maybe that's a thread we can pick up a bit later. Have you encountered cases where speed works, or does it need to be deliberate, or is that a false dichotomy?
00:16:45
Speaker
That's an interesting question, and one that I'm still spending a lot of time reflecting on, because I think it really depends on the organization, right? I know there are smaller businesses where the sole purpose is to use AI, right? To bring products to market. And so, yeah, there's a lot of enthusiasm there. We're all on board. There's alignment there.
00:17:06
Speaker
And so, yes, let's proceed. But there are also instances where you are spending a lot of money up front and your strategy isn't very sound in terms of, all right, what happens if people don't actually use this? If we can't see the increase in efficiency and performance that we're hoping for, and we're shelling out hundreds of thousands, even millions of dollars, and we're not seeing those outcomes.
00:17:31
Speaker
Or they don't even know how to measure the outcomes. They're like, we're going to be more productive. And you're like, do you have current productivity metrics? It's funny that she's calling this out, because when I brought up Dorsey earlier, this is relevant to this.
00:17:44
Speaker
Analysts actually said that this move was made not because of known results, but because of predicted results. They think that they're going to save money and they think they're going to exponentially elevate their output
00:18:01
Speaker
by making this move. And so I'm like, that's a speculative bet on human lives. It's wild. I'm like, based on what are you guys making this massive change?
00:18:13
Speaker
And I mean, we see it, not to digress a little bit, but we talk about Claude Code, and how Claude Code is going to replace CrowdStrike and all this. And I'm like, do you not understand what enterprise software development is?
00:18:24
Speaker
Like, you can't just have some child vibe code a solution and suddenly it's... This is mad. You must look at this, Marissa, and think it's madness. I do. I really do. And I think you raise a critical point. There's a lot of guessing going on, right? A lot of guessing. And it's one thing if it's based on data, right? Because there are predictive analyses and indices that people can use. But if people are guessing, right,
00:18:49
Speaker
about outcomes and efficiency and what productivity is going to look like based on not really looking at the data, especially the data from the people who are going to be responsible for implementation. Right. Whether that means actually using the software, actually engaging with the automation tools. Yeah. I sometimes think we're just living through the executive's fear.
00:19:13
Speaker
Right. Like the decision is being made out of panic. Get it in, right? So we talked earlier about the psychology of the people at the bottom, trying to metabolize the new technology. But I also think there's the psychological component of the people just panic-ordering it, like, just get this thing and do it.
00:19:33
Speaker
Yeah, let's build the plane while we're flying. There are folks who are genuinely concerned about job loss because they've been affected by layoffs. They know family members who lost their jobs. It wasn't said that it was because of AI, but...
00:19:46
Speaker
If they see that there are mass layoffs, there's a concern. They often might put the puzzle together: okay, maybe this is because of AI. I think the other thing, too, is that people attach a lot of meaning to their job and the roles that they have within organizations. And so it's often a central part of their identity.
00:20:04
Speaker
And if AI is coming into the picture and it can do some of the things that they can do, it begs the question, okay, am I still valuable to this organization? And what would it mean if I can't do this work anymore?
00:20:17
Speaker
And so that's the potential identity threat. Who am I if I'm not able to do this work? If a machine or an AI tool can replace me? I think another aspect of that, and there was actually a study done by Writer last year that found, and don't quote me on these numbers, but I think it was about 31% of employees actively sabotage their organization's AI strategy. And that number was 41% for millennials.
00:20:45
Speaker
And identity threat was one of them. There's also a fear of, not just fear, but lack of trust, I should say, in AI tools. Not trusting the output and also lack of trust in terms of leadership, right?
00:20:59
Speaker
Do I trust that leaders actually want to keep employees at this organization? Do I trust that they find the work that I do valuable and irreplaceable?
00:21:11
Speaker
There is a human component to the work that I do that, at this stage right now, and perhaps ever, who knows, AI may never be able to replace. And so that's another concern that folks have as well.
00:21:28
Speaker
I think what you're pointing to is also that technology adoption has always been very uneven, right? And I think this one in particular, probably exacerbated by algorithmic media and a faster news cycle, is pointing out some other cultural trends, right? I think as a society, at least in the US, it has always been a problem that people identify their profession with their identity, right? Like, if I am not this, then what worth do I have in the economy? Which is a sort of neoliberal fever dream, but it's a real issue.
00:22:12
Speaker
And also, as you pointed out, lack of trust is kind of pervasive: political, social, and then even interpersonal. It's like the psychology is laying bare some of the cultural fissures that maybe we've never really dealt with. And then, plus, knowledge work was considered
00:22:32
Speaker
like the, oh, we can automate factories, and that's sort of the quote-unquote lower rung, the manual labor. And it's kind of a psychological shock that paralegals might be replaced, or financial analysts, right? These were supposed to be the quote-unquote safe jobs, the reason you went to college, the reason you did all this other prep. Anyway, that's my rant. Over to you, George.
00:22:55
Speaker
Yeah, I think you're bang on too, though, because I think it speaks to, first of all, interestingly enough, the one job that they've consistently found AI can't replace: nurses. But nursing in public health care, at least in Canada, is a massive debate.
00:23:14
Speaker
It's constantly being defunded and then criticized. And the stress on nurses is absolutely unbelievable. I've known a lot of nurses in my life and they are constantly
00:23:26
Speaker
beaten down by their employer and by the ministry and by funding. And I think in America it's probably the same thing, except you're working in a privatized system. You also speak to a really good point, George, and I think, Marissa, you'd probably agree with this.
00:23:40
Speaker
The hypergrowth that represents the economic bet on AI has created this massive widening of the wealth gap, right, between the working class and the ownership class, and I think a lot of people are now seeing the identities that often got tied to their roles broken.
00:24:00
Speaker
The foundation of the economic system in which most of us in the West grew up no longer applies. And I don't think people have psychologically found a way to exist in the paradigm of today's post-AI-revolution economy. It's the same thing as in the 1800s, after the Industrial Revolution: factories started appearing, people were moving off the farms into the cities, and a lot of confusion was occurring and a lot of abuse occurred,
00:24:29
Speaker
which led to a lot of labor conflicts and that kind of thing. I think we're going through that same sort of cycle. But that paradigm creates a real difficulty, right?
00:24:40
Speaker
And I think that kind of leads into, you know, how much of what stalls AI rollout is actually more of a burnout problem in disguise, where people are already running on empty and being handed one more transformation to absorb.
00:24:55
Speaker
And they're seeing this transformation as a thing that's going to push them out of a job. So now their security, their economic security, their food security, their ability to provide for their family, because of this AI investment that their bosses are so excited about and that they now have to use, it's going to take away their quality of life.
00:25:14
Speaker
And I think that's what's causing a lot of the burnout. You know, in the last three, four years, I went from urban planner to randomly getting onto Big Brother Canada, a show which is a social game full of manipulation, but also understanding people and building relationships and friendships.
00:25:32
Speaker
I ended up winning that series, not even expecting to, because I didn't watch the show. I wasn't a fan of it. I was literally just being myself. Won the series and came into a place of winning money, fame, and followers simultaneously, but not knowing what to do with it. So it was this idea of success coming at me before I was ready for it, or felt prepared for it. And immediately I felt this feeling of anxiety, depression, all these things started coming up for me, and that sort of helped me understand what it feels like to be successful yet still feel empty, right,
00:26:12
Speaker
where others are looking at you from the outside thinking you had it all figured out, and you didn't. That really brought me on a journey toward coaching. Not right away; first it was a healing journey. It was like, why do I feel so empty even though I have fame, followers, perceived success from the outside? And then when I started doing that work, I realized there was just this internal misalignment within me.
00:26:37
Speaker
I didn't believe I deserved what I had. I didn't believe I was great. I didn't feel like I loved myself, and I didn't know who I truly was at my core.
00:26:48
Speaker
And once I started going on that journey, it was a lot of therapy, a lot of coaching, and just a lot of internal work, like journaling, writing, reading, and figuring out who Tychon is at his core. Am I defined by the work that I do, or am I just good,
00:27:05
Speaker
regardless of my external successes? And once I started doing that work, it brought me on the journey of coaching, because I realized that through having a coach, I started to develop a level of confidence and understanding that I can attain whatever I want, but I'm not my achievements.
00:27:24
Speaker
I'm not my successes. I'm not my failures. These are things that just happen along the journey. So as I went on that journey, I started to become a coach and started coaching clients, because I noticed that a lot of people feel this way no matter what field they're in.
00:27:39
Speaker
They get a job, a senior-level job, and they don't feel like anybody understands them, or they don't feel seen, or they don't feel complete. You see it with celebrities all the time. They have a big hit song and they have all this pressure on them to keep it going.
00:27:53
Speaker
And they don't know how to manage that. So I realized that this is a common struggle, where even though you look successful on the outside, you can feel empty on the inside. And I just wanted to help people navigate that journey and start to realize that their value is not determined by their output; their value is determined by how they see themselves.
00:28:11
Speaker
What would you tell someone who is grinding towards a goal that they've put all of their stock in, like, if I can only get this, I will be happy? Because this is a very common trap across this neoliberal economic order that we're living in. I'm going to get to a little bit of that later, but it's very much like
00:28:38
Speaker
everything feels very linear, right? I mean, we even call it the corporate ladder. Everything runs in a straight line, right? Go to school, get good grades, get the grades, go to college, get this job. It's just always like they keep moving the goalposts with this idea that once you get the goal, you somehow have achieved nirvana. So what are you telling those people, especially the high-powered execs that you coach?
00:29:07
Speaker
That's a myth. It's a myth because you will always move the goalpost. There will always be a thing down the road that'll make you happy, no matter what. And if you look back on your life, that's all a lot of us have done our whole lives. Like you're saying, it's like, when I graduate high school, or when I move out of my parents' house, I'm going to be so happy because they can't tell me what to do.
00:29:30
Speaker
And now a lot of adults are wishing they were kids again, when everything was just taken care of. So it's like, when I'm an adult, I can do whatever I want, I'll be so happy. And now you're an adult and you're like, oh, wow, it was so nice when I didn't have to worry about bills or stress or kids. And then it's always a thing. It's when I graduate. It's when I find my person. Then you get married, then you hate that person, then you want to divorce them. And then it's when I have kids. And then you have kids and you're like, when the kids grow up. There's always a thing you're going to say: I will be happy when.
00:30:03
Speaker
And I think that's where it becomes a choice. Happiness is a choice. Gratitude is a choice, right? And a lot of times we look at all the things that we don't have rather than assessing the things that we do have. And I think, again, when I go back to reflection, all of that comes from reflection.
00:30:21
Speaker
A lot of times we don't tell ourselves the truth, because we're standing within ourselves. All we see is, oh, I wish I was taller, or I wish I was slimmer, or I wish I had more money, or I wish I had a better job.
00:30:33
Speaker
But when you're able to acknowledge that you've got the things that you've wanted for so long, and then still say to yourself, you know what, but I can do more,
00:30:46
Speaker
that is a different perspective than saying, you know what, I will be happy when. And I think a lot of us don't take time to reflect on the journey that we've taken, the wins that we've gotten along the way. And I think that's where sometimes our power seeps out, because we don't realize we've come so far.
00:31:05
Speaker
Like when I look at where I am today, I would have dreamed to be in this position, where it's like, you know what? I went on TV. I was able to help thousands of people. I was able to tell my story to them. When I reflect on what I've been able to do, it makes me realize, wow, I have come a long way.
00:31:22
Speaker
But it takes that reframe.
00:31:26
Speaker
It is true. I didn't have a formal education or formal training for any of the things that I'm doing now. But one of the things about me that I think is different is my willingness to fail more than most people. Like, I think I fail more than most people even try. And I think that's what really helps me.
00:31:47
Speaker
With my success, which is not a crazy success, I haven't had a crazy exit or anything like that. But I think having a business, especially in this economy, that is doing 50K MRR, you know, it's not crazy, but it's something. I mean, there's lessons; there's people that can learn from that.
00:32:06
Speaker
And I think the biggest thing is being willing to fail more, having your own back, and embracing the cringe. Because you're going to be very cringe when you start doing things. You're going to ask really dumb questions. You're going to look really dumb sometimes.
00:32:25
Speaker
But the faster you fail, the faster you succeed. And that's sort of what brought me into this world. And it was just by accident, to be honest with you. My dream was to be an actor. I'm a very creative guy. And 10 years ago, or sorry, 11 years ago now, when I was chasing that dream, I was a starving artist, living in a basement apartment, doing auditions, and I was like, I can't live like this. I need to do something with flexible hours that can support my dream.
00:32:57
Speaker
And I thought that was going to take a year or two. It took 11 years. But, you know, obviously after 11 years, your dreams change and now you have different priorities.
00:33:12
Speaker
I still want to do something related to arts and filmmaking. But this business, I came to embrace it. I neglected it at first. A lot of times you're good at something and you neglect it. I neglected being a salesman, a car salesman.
00:33:26
Speaker
Of all things, a car salesman, the worst reputation you can have. Every time somebody's describing some sneaky person or something, they use the car salesman. Used car salesman.
00:33:40
Speaker
But that's what landed in my lap. And the reason why that landed on me is because I was doing a commercial for a car dealership and I got paid, you know, $400 to be in that commercial. And I was like, if I was to do this for real, how much money could I make? And they told me, if you're really good at it, anywhere from 150 to 200K.
00:34:00
Speaker
And I thought, if you guys are paying me to pretend to be a car salesman, I can probably just do it in real life. And that was 10 years ago. Logical. That first year, I went from making under 40K a year to 80.
00:34:15
Speaker
First time ever. The second year after that, it was my first six-figure year. And the first thing I realized is, holy shit, I thought six figures was a lot of money, until you make it. You're like, yeah, no.
00:34:27
Speaker
But it's good. And then it goes up 50% every year. So now it's been five years since I started my business, and yeah, I've built it to 50K a month.
00:34:39
Speaker
I think one of the things that helped me out a lot is the change of environment, because how likely are you to take a risk around all the people that you grew up with, everybody that you know, everybody that you've seen day-to-day since you were a child? If I took you and put you in the middle of Oklahoma, where you don't know anybody, how likely are you to take a chance there,
00:35:04
Speaker
where nobody knows you? You're more willing to take chances. And me being an immigrant, I've gone through that. And the more you move, the more you realize the things you were fighting over. I'm from Costa Rica, so the things you guys criticize in Costa Rica are minuscule once you move to North America. And then I'm in Toronto, and then I moved to Ottawa.
00:35:28
Speaker
And I'm like, ah, it makes no sense not to take a chance. People forget very quickly. So why allow all these voices to hold you back? Because one of the things I developed by moving so much and having to adapt to new environments, I lived in British Columbia for a year,
00:35:52
Speaker
is this fear: huh, when I go back to Costa Rica, the people that didn't take those chances are having the same conversations they had when I left, and they're in the same shoes. They haven't experienced life the way I have.
00:36:10
Speaker
They haven't been in an Amex lounge in an airport or anything like that. But there was a lot of criticism when I was doing the things that I wanted to do. I mean, I wanted to be an actor. That was a big ambition.
00:36:21
Speaker
And so I think, look, if you're in a situation where you feel there's something in you that wants more out of life, but there are these voices holding you back, ask yourself if those voices are from the environment you're in, and ask yourself: what if I lived in the middle of a city where nobody knew who I was?
00:36:44
Speaker
Would I still be afraid to do this thing, to open this business? What if I made a new Instagram and started doing things and I didn't tell anybody here? Because I'll tell you one thing: the first instinct somebody has when they see something that is not familiar is to ridicule it.
00:37:04
Speaker
So they will ridicule it. And at first it's going to hurt. But again, for me, I just keep doing it and doing it and doing it until it becomes irrational for them to ridicule it.
00:37:19
Speaker
If you like this conversation, share it with friends and subscribe wherever you get your podcasts for a weekly ballistic payload of snark, insights, and laughs. New episodes of Bare Knuckles and Brass Tacks drop every Monday.
00:37:32
Speaker
If you're already subscribed, thank you for your support and your swagger. Please consider leaving a rating or a review. It helps others find the show. We'll catch you next week, but until then, stay real.