When Is It Worth Learning AI? A Conversation with Ashley Faus

AI-Driven Marketer: Master AI Marketing To Stand Out In 2025

In this episode of AI-Driven Marketer, Dan Sanchez sits down with Ashley Faus, the Head of Lifecycle Marketing at Atlassian, to dissect the current hype around AI and its future in the marketing world. They discuss whether AI will meet its lofty expectations, how it's being adopted in enterprise environments, and the balance between investing in AI and human resources. From using AI for daily tasks to addressing legal and ethical concerns in large organizations, Dan and Ashley cover a wide array of insightful topics. Expect a nuanced conversation about AI's role in job displacement, the necessity of human creativity, and the integration of AI into marketing strategies. Tune in to refine your understanding of AI’s potential impact in the marketing sector and beyond.

Timestamps:

00:00 Discussion on timing of enterprise AI adoption.

06:14 Early adoption crucial for competitive advantage, risky.

07:08 AI, digital natives and evolving computer interfaces.

12:58 Seeking a ChatGPT solution for homeschool challenges.

15:03 Education system reinforces gender stereotypes, impacts mindset.

19:25 AI automates ticket summaries, aids in decision-making.

21:23 Writer struggles with AI-generated content limitations.

25:22 Early journey in conversation, content strategy essentials.

29:38 Embracing AI technology but still values humanity.

30:17 15 years of experience surpasses robot efficiency.

36:19 New grads meeting, AI-based onboarding, content suggestions.

38:30 Evaluating time investment for process efficiency.

43:23 Humans may no longer be needed for work.

44:15 Mind wandering, creativity, rethinking value of work.

47:18 Humans struggle to work themselves out of job.

Transcript

Risks and Legal Issues in AI Adoption

00:00:05
Speaker
Ashley Faus. I was going to say welcome back to the show, but this is the first time I think you've been on this show, even though I've interviewed you a couple of times on various other shows. But I'm looking forward to this conversation because we've been going back and forth now on LinkedIn about AI and when it's appropriate to adopt AI as a team. All right. That's the conversation we had. I maybe even pulled up your last
00:00:26
Speaker
last post, where you screenshotted a comment you had on one of my posts and then made a post about it. And I think the conversation we're essentially having here, and this is more of a conversation than an interview, is around when should teams adopt AI? Because for enterprise teams, it's like, if you adopt it too early, you can get burned on it because the tech isn't quite ready. You lose trust, you break systems.
00:00:53
Speaker
Obviously, the small teams are doing it because they have less to lose, but enterprise teams, you got a lot to lose. You're talking about billions of dollars of revenue and trust and shareholders and government regulations. Shoot, we're seeing some enterprise companies lose legal battles right now because the chat bot made promises and it's part of their org.
00:01:13
Speaker
Courts are saying you've got to deliver the product for free. Well, and the copyright issues too, this is the other interesting thing, you know, the legal battles are still playing out with all the stuff with The New York Times, for example. So the legal landscape hasn't caught up to the tech landscape. And so it's not even an issue of a chatbot made promises. It's like, you don't know if you put stuff into some of these models.
00:01:39
Speaker
Like the underlying terms of service, who's reading that? Is your standard marketing coordinator reading that? They're definitely not. I don't know if you saw Chris Penn recently put something out. He's like, do you understand what you're signing away in the terms of service? And he's like highlighting it and saying, does anybody understand what they sign in the terms of service?
00:01:56
Speaker
No, we're all signing our baby and our life away in terms of service agreements with every single company we work with, right? I mean, I think they even put out a documentary called, like, terms, or sign away your life, or something like that. Somebody did a documentary going deep into that and how tech companies are essentially screwing us all over, even though a lot of it obviously won't hold up in court. But the things they put in there are so nuts. Like, what are we actually signing when we do these? People have done exposés on this, but
00:02:26
Speaker
Yeah, they're there. Yeah, yeah, it's crazy. So yeah, so all this to say, it is a timely conversation. And I think it's interesting.

Differences in AI Adoption: Enterprises vs. Agencies

00:02:36
Speaker
for us to have it with me kind of on the enterprise side, and then you, you're more of like the agency, solopreneur side, like you're building a really cool product that can do some cool stuff with AI. And so I think that spectrum, you know, we were joking before we hit the record button of like, okay, how do we make this spicy? Like, how spicy are we going to go? And I think acknowledging that there's a spectrum and, you know, it's nuanced and it depends on all of that stuff.
00:03:02
Speaker
There's always nuance. The other reality is, again, on the enterprise side, you don't always have time to have those nuanced conversations with 500 or a thousand or 10,000 employees. And so you end up starting to get really rough-grained policies and rules that, if you actually dig into them, that's maybe not what people meant to say, but it's the easiest way to not get in a legal battle or not
00:03:33
Speaker
break all the systems, right? So I do think that it's interesting to think about how do you, how do you take an incremental approach to this in a safe way, given that you might potentially put something out there that affects hundreds of thousands of customers or tens of thousands of employees. Cool. So let's set some precedents here. Like where do you feel personally like AI is going to be five, 10 years from now? Do you think it's going to like kind of take over everything or it's going to be in its place?
00:04:02
Speaker
I think it's going to be in its place. We saw this with marketing automation platforms. Like, I remember the first time I saw Marketo and I was like, this is the future of marketing, right? And everybody said that HubSpot and Marketo and marketing automation platforms were going to completely take over. It's going to eliminate the marketing jobs. And instead what happened is it spawned an entirely new class of jobs, like a Marketo specialist or a HubSpot specialist, right? A marketing automation platform specialist.
00:04:32
Speaker
From my perspective, AI is going to be in its place. Is it going to change how we do our jobs? Yes. Do we need to wait to see how it's going to change our jobs? Yes and no. Again, when the legal landscape kind of eventually catches up, that will have some implications, but I think it's going to be in its place. I don't think that we're going to end up with
00:04:55
Speaker
No marketers and no writers and no artists and no graphic designers. Like, come on guys, humans love to work. We love to create work for ourselves. Like we have a lot of rough, we've had robots join our everyday life for years. And yet we still work 40 to 60 hours a week. We like to create work for ourselves.
00:05:16
Speaker
Like we still have people working in factories, even though automation has been heavy in factories since the early eighties, right? So there's definitely a precedent there as far as automation and all that kind of stuff. We needed more people working the robots. And I guess a lot of the labor did go overseas, but even over there, where they're still fairly automated, making these iPhones, you still have a factory full of hundreds, thousands, tens of thousands of people working on iPhones, right? Because they just can't do everything.

Cautious Adoption of AI by Large Companies

00:05:41
Speaker
Exactly. There's definitely a precedent. Even Disney, as innovative as Disney is, will not introduce new tech into their ecosystem until it's at least eight years old. They're just like, we can't risk bringing in something new and improved and shiny until it's been around for a long time. It'll probably be a while before, and they've probably already been messing with AI for a long time, but you think they're going to introduce large language models into stuff they do?
00:06:06
Speaker
Heck no, it's gonna be a while before Disney actually does because they can't risk, they can't risk something going wrong. Exactly, exactly.
00:06:14
Speaker
So I get it. There's also, on the other end, a lot of people saying that, well, if you don't start now, then there's going to be a gap between you and the ones who do. Right? Because there's this adoption curve, especially when it comes to the data you have and the data you're compiling for it to train on; the sooner you get it training on it and improving and learning how to do whatever the heck you want it to do, the better it gets.
00:06:42
Speaker
And those are the kinds of conversations people are having. So, like, well, then there's this early adoption thing where if you kind of go and start moving it along, even if it's rough at first, then five years from now you have a five-year head start by the time it's actually mature or whatever. So that's a risky game too. But then the question is if it's going to be so much better with, and some of this, again, this is why in one of my posts I differentiated between the adoption cycle and the hype cycle. Yeah.
00:07:08
Speaker
A lot of people are kind of mixing that and primarily talking about the hype cycle and, you know, does AI live up to the hype? Well, if AI will live up to the hype in five years, is the learning curve actually going to be easier in five years? So there's some interesting things too where, you know, you talk about digital natives and
00:07:31
Speaker
why they are so much faster at picking up technology. And it's because they don't have any preconceived notions about how it should work. And so if you think about like with the way computers have evolved, right? Like you used to have to put a disc into a computer. And so that icon to save being a disc, the reason that that is the icon is because it used to be like when they first introduced it, that was the way that they could build the bridge between physically saving on a disc
00:08:01
Speaker
and saving it on your computer.
00:08:03
Speaker
If you ask digital natives about that, they have no idea why that icon is what it is, right? And now they've grown up in a world where everything just autosaves. You don't have to hit the save button, it just autosaves, right? So they never actually had to undo the knowledge of saving to a physical disc, translating that mentally into an icon on a screen to a button that you push, right? Like they didn't have to do any of that. And so there's an element of it where
00:08:33
Speaker
I almost wonder if it's almost like the digital natives thing of AI where my nieces and nephews are actually going to be so much better at it. The thought of me calling out to Siri or the G word, I don't want to say it because it's going to be like, hello, can I help you? No, you can't help me.
00:08:56
Speaker
I, that's not fluent for me. Like we don't have a bunch of smart home devices like that. And so, but for my nieces and nephews, they already know to call out

Generational Differences in Tech Adoption

00:09:06
Speaker
for the lights or for the music or whatever. So it's, I actually, I actually don't know if I totally buy into, you're going to be five years behind if you don't start now.
00:09:18
Speaker
I'm going to push back on that. So after working at a college, a lot of people had told me like, Oh, Gen Z, so tech savvy. And I worked with them. I'm like, no, they're not. They know how to consume it. They know how to receive it. They know how to go and engage with it, but they know nothing about how to actually use it for productivity's sake.
00:09:37
Speaker
I'm like, no, I had to train them on how to actually use social media, not for personal use, but like, okay, let me teach you how to do it for a business. Now, I will say if they were actually good at being a creator, that translated perfectly, even if it was them being a fashion
00:09:55
Speaker
guru or whatever on Twitter and Instagram, and they built a following with that, that knowledge did translate. So if they were a creator, because being a creator is almost like being a marketer, you have to reverse engineer what the audience wants and give it to them, and the cycles and so many marketing lessons. But most of them, yes, they know how to get around Facebook and Instagram or TikTok or whatever the heck the platform is better, but only from a consumer perspective, not from a work perspective.
00:10:23
Speaker
So the interesting thing that you're pointing out here, which is part of why I don't think AI is going to replace all of us. The underlying skill is not actually the tech. The underlying skill is the human connection or putting together an outfit or eliciting an emotion via song or via images or whatever it is, right? Like the thing that is actually transferable is still
00:10:48
Speaker
the core human connection. And so I think that is fair. That is a fair pushback because I've seen that as well, working with some early career folks, that sense of calendar management, time management, inbox management, Slack management. The thing that is hard about that is not actually pushing the buttons on Slack or organizing your channels on Slack or making folders in your inbox.
00:11:15
Speaker
The thing that's hard is managing your attention and prioritizing all of the information that's coming in from all of these different places. That's the actual skill. So that, that's a fair, that's a fair pushback. I still, I don't know. And we can get into this, right? Like I, maybe I'm just using AI the wrong way, but I've been trying to like, all right, I'm gonna, I'm gonna test it out. Right. And.

AI in Personal Life: Parenting Assistance

00:11:39
Speaker
I've tried to get ChatGPT to, like... everybody keeps saying, oh, I love it for planning trips, I love it for planning. And so I keep trying to get it to plan. I think it's terrible at that. Terrible. It plans the worst date night ever for me and my husband.
00:11:55
Speaker
I already had a date night in my head, right? And I had actually literally brought it up to him and I was like, hey, I have some ideas of what we could do, you know, whatever. And so then I was like, this is a great opportunity for me to test my ChatGPT skills. So I was like, okay, ChatGPT, you know, and I put in a little bit of info about who we are and what we like, and literally said, plan a date night for today that includes dinner and an activity. And it was like,
00:12:25
Speaker
here's three different restaurants you could try. And I'm like, I, that's not helpful. Like I'm not, I know restaurants. I can pick a restaurant, right? And it's like, you don't, I don't know. I don't know what these people are using it for. That's good at planning a trip or a date night, but I think it's terrible at that.
00:12:43
Speaker
I'm, like, looking for a conversation I had in ChatGPT recently. I'm like, but I have so many, it's hard to find them all. I'm like, oh gosh. I had one where I threw it a curveball. I didn't know where it would go, but I threw it a parenting question. I'm good at thinking about marketing stuff, but parenting, it's pretty tough. I had a very specific problem that I needed a solution for. And I could have gone to Google, but I'm like, someone might've written an article about this, but I think ChatGPT can do this.
00:13:13
Speaker
I have two older kids, a 10-year-old and an 11-year-old. She's about to turn 12.
00:13:18
Speaker
And we have these moments, because we homeschool, where they just get overwhelmed and they run into these growth or fixed mindset issues where they're like, I can't do this, I don't get it. And it's very fixed mindset. Like, clearly you can get it. You just have to keep working at it. You need to choose a growth mindset. But it's not enough for me to tell them, hey, you have a fixed mindset, let's choose the growth mindset. That's not a bad place to start. But I was like, okay, ChatGPT, here's the situation. I kind of break down like, hey, I'm a parent.
00:13:47
Speaker
I'm trying to teach my kids how to have a better growth mindset. They're running into a fixed mindset issue where they believe they can't do something. Not something I can just tell them or review with them: can you give me a process that I can run them through that becomes a go-to process every time they run into this specific limiting belief, this fixed mindset?
00:14:06
Speaker
And what it gave me, this five-step process, I was like, yeah, this is fantastic, actually. I don't know where it got it from. Now, maybe if I did it in Perplexity, where it actually cites all its sources, it would have told me where it got the idea from, but it was fantastic. And I need to actually pull it out and print it off and start working it into my parenting rhythm.
00:14:25
Speaker
Uh, I actually thought it was pretty good. I have seen people post videos about what you ran into with the date night, kind of like, hey, let's see what ChatGPT did for us. And then they go through a day in their vacation and it's like, this was horrible advice, this was a horrible day, here's what we could have done. So I have seen vacation or date night nightmares that ChatGPT has planned. Well, it's interesting, because as you were talking about the parenting thing.
00:14:50
Speaker
I thought the punchline was gonna be that like, oh, it just pulled out like step one, believe in yourself.
00:14:55
Speaker
Step two, say I believe in myself, right? Like these platitudes. That's why I'm looking for it, if I could pull it up. Yeah. And the interesting thing is I almost want to play a game of AI versus Ashley, because as you were talking about that, there's actually a really interesting article that Harvard Business Review wrote years ago at this point called "The Trouble with Bright Girls." And it actually talks about
00:15:21
Speaker
And you're probably actually combating some of this by homeschooling, but the way that the education system is set up tends to work very well in the early years for girls compared to boys because they have to sit down and they have to be orderly. And so they're constantly praised for being so smart versus boys, since they tend to be more unruly and high energy, they're constantly told if you would just sit down and focus, if you would just work harder, if you would just try,
00:15:51
Speaker
And so the girls are basically praised for something they're, you know, quote unquote, inherently good at. And the boys are constantly told to work harder, so that by the time they get into higher levels of math or science, you know, hard things, if the girls can't do it or it's hard, then they're like, well, I just can't because I'm inherently not smart enough, versus the boys are like, oh, I just have to work harder. Right. So it's this super interesting, even the language shift
00:16:17
Speaker
makes a huge difference in how you cultivate that growth mindset versus a fixed mindset. So it's interesting, because you got this process and I'm like, didn't it just read that article and pull it from that article? If so, good job, ChatGPT, but would that have been the top Google search result if you Googled it? So there's some really interesting things that I think
00:16:41
Speaker
The thing that it did, though, that Google will never be able to do until it incorporates AI (and it's working on that) is that it made it contextual to the exact specifications I had. Yeah, right. Whereas with Google articles, it's got to be kind of off a little bit, and then you have to reverse engineer it in your head to fit the context. And that's kind of exciting. But how does this translate into business and enterprise? Now, like Dave Ramsey, which is a big company local to me, they're embracing AI, but they've made a policy that nothing public-facing will be AI generated.

AI for Internal Efficiency in Companies

00:17:11
Speaker
because we're a trust brand, and we have too much at stake to generate AI stuff without it at least passing through a person 100%. Everything will be human created, because we can't afford it otherwise. But they're still playing with it on the back end. And they're using it for, you know, like a boss needs to come up with a growth plan for something that an employee is stuck on. So they're starting off with ChatGPT to kind of brainstorm and then be like, this is pretty good, copy, paste, good.
00:17:39
Speaker
So it's like they're using it for stuff like that all the time. I wonder, are you doing stuff like that in your job at Atlassian? Yeah. So we actually have Atlassian Intelligence, which runs across our platform. And so we have access to that internally. We've got our own little playground where it's a safe space, basically, where we can put stuff in, we can connect it into our internal systems. So there's actually some pretty funny things. So we, on one of our teams,
00:18:10
Speaker
you know, Slack just rolled out their AI summaries or whatever. And we have a bunch of Taylor Swift fans on our team. And so somebody jokingly, when, um, Tortured Poets Department dropped and nobody said anything, she popped in and she was like,
00:18:25
Speaker
doing a wellness check like, Dan, Ashley, are you okay? Like we've heard nothing about Tortured Poets Department yet. And so we were all kind of laughing. We're like, well, yeah, cause they're off listening to the album. They're not on here. They haven't formulated their deep thoughts about it yet. And so then the next day, one of our teammates who had been on vacation was like, yeah, I'm going to test out the new AI Slack summaries. And it pulled it up and it was like,
00:18:47
Speaker
you know, the teammate expressed concern for Ashley and Dan's wellbeing about Taylor Swift. The other team, you know, this other team may also express concern and she was just like, this is the best thing ever. So there's still some limitations. Like it's actually, it is handy. We have AI summaries at the top. So like if you write, you know, a big strategy plan and then you want to generate a summary for the top of like executives or somebody stumbles on the page,
00:19:12
Speaker
then you can adopt the AI summary. Obviously you can edit it, but you can hit, you know, summarize this page, and it'll put it at the top. We have stuff that's been rolled out externally. So it'll summarize, like, if there's an incident or support tickets. And so if you're the person who's coming on for your shift for support, it'll do an AI summary of all of the tickets. So you don't have to go through, you know, every single comment that's come up; you can get a sense of where the queue is. So we're definitely incorporating it in that way.
00:19:41
Speaker
I think the hard part is when we think about, and again, some of this is like, where does the human stop versus where does the human start? Like, we've also used it. We wanted to rename a newsletter. And so, you know, put in a little prompt and have it just spit out, like, 30 different names. And of those names, like, two or three were actually good and the rest you throw away, but you really only need one good name.
00:20:06
Speaker
You don't actually, and so there's part of it where I personally struggle with this, because I'm like, I mean, only one of these is even usable, right? But you only need one usable one, and a human is going to start to tap out at maybe 15. ChatGPT, you can just be like, another one, another one, another one, another one. Yeah, but its ability to get better... it doesn't get better the more you produce. Right. So there's some of that where it's like.
00:20:35
Speaker
I dealt with this the other day for myself. I was like, this is a perfect example: session titles for Inbound, right? I just ran a poll to say, help me crowdsource my problem, my topic. And I was already starting to think of titles for these sessions. And I was like,
00:20:50
Speaker
This is a perfect thing for ChatGPT to help me with, for me to practice my skills. So I went to it and I said, you know, here's who you are. And I told it, you're a copywriter at Nike and you used to work at Apple and Ogilvy, so you're a boss at this. And, you know, here's the audience. And I said, you've already come up with these three taglines; make the others. And so I gave it what I put in and then obviously hit do it again. And I was like, that's interesting, I noticed that you keep asking questions.
00:21:19
Speaker
Can you restate these as declarative sentences instead?" And it was like, yep. And in some cases it tweaked it, and in other cases it didn't, right? And I asked again, I was like, these are good, but why don't we focus on alliteration? Why don't we focus on rhyming? And so I was tweaking it. And so, yes, the quality of the prompts does matter and the quality of the input matters, but it's just hard because I'm actually
00:21:45
Speaker
really good at this stuff. And I have the full context of exactly what the topic is. And by the time I write out the full context that's in my head, I've already used that context to come up with five really good titles already. On one hand, it's not very fair.
00:22:06
Speaker
that I maybe write this 2000 word page and AI just has to write two sentences. And it's like, well, if the two sentences are bad, maybe your thousand words are bad. But at the same time, by the time I have to give it a thousand words to make it better, like I could have just taken a walk around the block and done it myself. Like it's a, there's still some of that hard nuance and same thing, even the support tickets, right?
00:22:29
Speaker
How often do you still have to potentially reach out to a teammate and say, hey, do you have five minutes to just give me a quick download on this? Writing is hard. Naming's hard. Naming's hard. So I think there's some of that too, where if you give it the wrong problem, the assumption that, oh, well, this is hard for humans, but it's easy for AI,
00:22:55
Speaker
is not right. There's a reason certain things are hard for humans. Like, the reason date night is so hard to plan is because humans are finicky. I normally like Mexican food, but tonight I want, you know, Chinese food, and it gave me a Mexican restaurant. Well, it doesn't know that you want Chinese food tonight. Like, that's not fair, you know? And it's the same thing in business. In some cases, like, another use case that we're doing is
00:23:19
Speaker
You know, and again, you're going to do it for this episode. You're going to put it through whatever the AI tool is, maybe the one you're building. It's going to generate 20 clips of whatever we talked about.

AI's Role in Content Generation and Limits

00:23:28
Speaker
You're going to pick the top five. You're going to send me three. You're going to keep two back for yourself. I'm going to listen to those three clips, pick one to promote this episode. Like that's how this is going to go. What that means is that 15 of the clips that it generates are useless. Is that because we didn't have anything useful to say?
00:23:45
Speaker
Or is it because it didn't pick up, right? Or is it because it couldn't pick up where the best bits are?
00:23:52
Speaker
It's a combination of both, because I do a lot of those. I mean, I'm generating clips multiple times a week for different clients and I'm having to go through and pick them out myself. And some episodes, because of the person being interviewed, have a higher clip rate. And it's very interesting, because you can usually tell just by listening to them, you're like, oh, this person knows what they're talking about. They're going to get more clips because they don't meander; they're more concise and give concise points. So you can tell how they communicate, and what they're saying is just more clippable.
00:24:20
Speaker
And it would have been the same for a human. Whether an AI did it or a human did it, they still would have had a higher clip rate than normal. Like, a human would also struggle to get good clips out of someone who just kind of meandered through and didn't make solid points, wasn't concise, didn't have anything new to say. So I have noticed that, but it is getting better at finding good clips. I don't know. Its ability to find good clips is better. And I do notice how they rank-stack them now from, like, most likely to be good to least likely to be good. That's generally right. Yeah.
00:24:50
Speaker
Yeah, whatever rating system. I'm like, I don't know how it knows which ones are gonna be better, but I get fewer clips at the bottom of the pile, that's for sure. Yeah. Yeah. Well, and so one thing that's interesting that we're seeing, and it is a combination of obviously the source material that you put in and the quality of what you get out, but sometimes it's off
00:25:14
Speaker
by just like 10 seconds. Yeah, that's right. That's pretty frequent that I'm pushing it 10 seconds one way or the other. Yeah. It's almost that sense of I'm going to begin this conversation. For example, I already know that this is not the smartest conversation I've had.
00:25:31
Speaker
precisely because I'm so early in my journey on it, compared to conversations we've had in the past about thought leadership or about content strategy, where I have good one-liners, right? So if we were to say, you know, we'll go back to something that I'm more familiar with, where I would say, the funnel is dead, use a playground instead. Okay, what do you mean by that, Ashley? What are the top three things that you need to think about? Well, you have to address content depth, you have to address intent-based content, and you have to use explicit CTAs.
00:26:00
Speaker
It's like it would clip it at just, the funnel is dead. What should you do? No, I gave you the one-liner: the funnel is dead, use a playground instead. And I gave you content depth, intent-based content, and explicit CTAs. But why did you cut it off at just the first part? Like clearly.
00:26:21
Speaker
The valuable piece is the second part. And so we've seen that a few times where, or it'll say, you know, we're doing it for our products, right? And it's like, we're going to announce a new feature. And so we put it out there and it's like, and now announcing, and then it like cuts the clip there. And I'm like, what are you doing? Like, clearly we need whatever the announcement is. If it's Jira dark mode, what tool are you using? So I don't want to.
00:26:48
Speaker
be rude. We're working on a couple of tools here. I'll DM you separately. And I know, of course, this is going to be the clip where you're going to be like, here are the tools Ashley says she's using. I'm going to stand behind the Atlassian policy of not naming tools that have not
00:27:05
Speaker
been fully vetted. Again, when a large brand like us says we're using something, it gets a little twitchy. So we're still experimenting. We have seen a variety of quality levels with a number of tools. And we can solve that; we could just go manually clip it and extend it. And so basically the way we're using it is, give me 10 clips, give me a sense for what's interesting. And I can tell within the first few seconds, like, what are you even talking about? No one cares.
00:27:34
Speaker
discard this or, Oh, this is super interesting. Let me listen to the end. And then when they clip, if they cut it short, then I can just be like, okay, I know exactly where to go and just say, extend it.
00:27:44
Speaker
Yeah. This could become an episode about clips really fast, but I'm going to try to steer it back to the enterprise, cause I'm like, there's so much I could say about clips, but we'll talk afterwards. You'll give me your best practices and I'll share some things I've learned about clips. But you talked a lot on LinkedIn about the opportunity cost of learning AI to maximize what you could be getting out of it versus what you could be getting out of what's currently available right now and working well.
00:28:10
Speaker
You know, what's funny about that is I feel like nothing's really working well right now. Like, there's not a killer, like, oh, LinkedIn ads are killing it. When I go to every single channel, it's like, everything's expensive. Everything takes a long time. Anything that's fast is not that great. There's no killer channel, and the economy is kind of a struggle. I don't see anybody being like, oh yeah, we're freaking killing it right now. Yeah.
00:28:33
Speaker
Which, right now, I'm like, well, I might as well dig into AI and make what I'm currently doing more efficient. That's kind of how I'm seeing it: leveraging AI to do better, faster. Yeah. How do you see the opportunity cost? Yeah. So there's the triangle of good, fast, cheap, pick two, right? If it's going to be good and fast, it's not going to be cheap. If it's cheap and fast, it's not going to be good. If it's good and cheap, it's not going to be fast, right? Like those things. The point I was actually trying to make was less about strategies and tactics and more about skills.

Skill Shifts and Human Oversight with AI

00:29:03
Speaker
So I have 15 years of experience in my craft. I'm very good at what I do. And because of that, trying to get someone else to do it is really hard, whether that's a human or a robot. And I know we've had this conversation a little bit too. You're like, yeah, AI is kind of like the interns. You have to break it down and you have to give them step by step. You have to do a playbook. I actually really loved, again, I know I keep going back to Chris Penn, but like,
00:29:31
Speaker
He's been doing, he's been in this world for like 10 years. You know, it's like, you were, what is it? It's like you merely adopted the darkness and I was born into it. I feel like that's like Chris Penn. Like you're the, you're the adopter and he's the, I was born into it kind of guy. But he was talking about templates and he said anything that you do with a template today is an excellent use case for AI.
00:29:51
Speaker
tomorrow, and you know, where tomorrow is very near term. I don't do a lot of things with templates, because at the point where I am the person doing it, you need me. It can't actually be done by a template by the time it gets to me, because otherwise you would have done it already. And so I think that's what I'm talking about with the opportunity cost. The work that I'm doing, to me, still requires a human, and
00:30:19
Speaker
If I'm the human required to do it, it probably means you need someone who's got 15 or more years of experience. And that's a different level of work. And so for me to figure out how to distill what I just know, because I've been doing this for so long, into a set of written instructions for someone else,
00:30:42
Speaker
to do what I do, that takes a lot of time. And then the question is, why would I invest that time in the robot instead of in the humans on my team that I'm supposed to be investing in? There's some feeling aspects to it too, where
00:31:04
Speaker
I actually had a conversation with someone else who was an early career person, and they were talking to me about some struggles with somebody who was more senior on their team who was supposed to be giving them feedback. And they kept asking for feedback, and the senior person said, well, why don't you just try it with ChatGPT? I don't have time. Just put it into ChatGPT. And they were, like, so offended. And I said, well, maybe the delivery was not
00:31:30
Speaker
Right. But is it possible that you could have done that task without the more senior person you were trying to do it with? Right. And I think it doesn't feel good if your manager tells you, basically, I don't have time for you. I'm going to go train a robot instead. Like how does that make you feel from a career growth standpoint or a job stability standpoint? It doesn't feel great.
00:32:00
Speaker
And then at some point I'm still going to have to teach you something. So am I going to teach you or the AI? And the answer might be yes. But as we talked about kind of at the beginning of the conversation with that learning curve, does it make sense for me, who's a bit blind, to lead you, who's a bit blind, instead of
00:32:21
Speaker
giving you who doesn't have as differentiated of a skill set yet because you don't have as much experience. Why don't you be the one to learn this and spread it to the organization, right? It's a good growth opportunity. It's a good visibility opportunity. It sets you up well from a longterm perspective and it minimizes the opportunity cost of having me do it when I already have like my, my old school skills are actually still better than
00:32:48
Speaker
the new school skills. It's not a straightforward conversation, especially when you're talking about humans on your team and what precedent does that set about where I invest my time if I only invest it in systems and process and tools and I don't invest in the humans.
00:33:10
Speaker
It seems pretty stark when you compare it to giving it to AI versus giving it to people, when it could just be a spectrum a little bit. I mean, obviously you don't compare AI to people; it's not like, oh, I'm only giving you 80% today, I'm giving 20% to the AI. It's just part of it, kind of like you spend time in Salesforce or HubSpot tuning up a dashboard so that you can get more out of it. The thing I'd like to know from you is, is there anything you do that's just pretty repetitive, where you walk through the same process every time you deal with it?
00:33:39
Speaker
Those are the opportunities where I find AI can help; usually it's in one of those. Yeah, I found that naming, naming is kind of a weird thing, because it is something you have to do. There is a process to it. Some people have different processes, but I finally found a book that I loved
00:33:54
Speaker
from Alexandra Watkins, and she had a fairly particular process that was very straightforward: using essentially the words defining a name and then finding idioms that those words were in, or no, it's like, find a word that rhymes with that word, find idioms with that rhyme in them, and then swap the original word back in. Do that a lot for the thing you're trying to name. It's one of her approaches. And that's how I came up with Mike Club for one of the podcasts we launched, from Fight Club. Oh, that's kind of a different feel. It takes on a little bit of that feel from
00:34:23
Speaker
Fight Club, but now it's Mike Club, right? So that's her naming process. ChatGPT is really good at running that process. And it'll get better as it gets more advanced and GPT-5 comes out and all that kind of stuff. But it's fully capable. Shoot, I mean, an algorithm could almost run that process. It's so simple. It's interesting. Yeah.
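As a rough sketch of how mechanical that rhyme-and-idiom swap is, it can be framed as a single prompt. The helper name, the prompt wording, and the "mic" example below are illustrative assumptions, not anything quoted from the episode or from Watkins' approach.

# A rough sketch of the naming exercise described above: take the word that
# defines the thing, find rhymes, find idioms containing those rhymes, then
# swap the defining word back in (the "Fight Club" -> "Mike Club" move).
def build_naming_prompt(thing_to_name: str, defining_word: str, n_ideas: int = 20) -> str:
    """Compose one prompt that walks an LLM through the rhyme/idiom swap."""
    return (
        f"I am naming: {thing_to_name}.\n"
        f"The word that best defines it is '{defining_word}'.\n"
        f"1. List words that rhyme with '{defining_word}'.\n"
        "2. For each rhyme, list well-known idioms, titles, or phrases containing it.\n"
        f"3. In each phrase, swap the rhyming word out for '{defining_word}'.\n"
        f"4. Return the {n_ideas} strongest resulting names, one per line, "
        "with the source phrase in parentheses."
    )

# Example: a podcast defined by the word "mic" (the rhyme path that lands on "Mike Club").
print(build_naming_prompt("an interview podcast for marketers", "mic"))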
00:34:42
Speaker
Yeah, the stuff that I find to be repetitive, for example... it's not exactly that, where you could say you just need to put in the one thing; you have to have the original idea of Fight Club, for example, and then you put that in and say, okay, you're going to run her process, and here it is. But onboarding documents, especially whenever I'm hiring in
00:35:04
Speaker
clusters, where if I'm hiring two or three people within a quarter of each other, the onboarding documents are pretty similar, and you would think, oh, just copy and paste the previous onboarding doc, and it's fine, right? But a perfect example: when I built the APMM program for Atlassian, it was a cohort of four, so I had a joint onboarding plan for all of them. That was, you know, here's the Atlassian business, here's some of the key teams, markets, et cetera. And then I had individual
00:35:34
Speaker
plans for each of them. And the most annoying part of those plans was all four of them were supposed to book a one-on-one with each other. And so basically like swapping out the headshots and the names of the people to be like, Dan, Ashley is your teammate. Ashley, Dan's your teammate, right? Like I wanted it to feel personalized to you. So I didn't want you to be
00:35:58
Speaker
a teammate on there. And again, this was a couple of years ago, so I didn't have access to AI. But that type of thing of being able to say, or say if I could put in this level of person, should have one-on-ones with this level of person from these teams. So you wouldn't have a VP meet with every new grad, right? But you might have all the new grads meet each other, trying to
00:36:23
Speaker
map those out for every single person. It's like, if I could just tell the AI, like make this on, like tweak this onboarding plan, you know, they're still going to have a certain, you know, nine out of 10 of the people I want them to meet are the same or they need to meet someone from every single team, match them with the appropriate level of person given their level. Like that would be something that would be so helpful. And then the other thing, which we do have this of like suggestions where
00:36:53
Speaker
it'll say at the bottom of a Confluence page, these pages are frequently read together, or, like, people who read this page often read this other page. So for an onboarding document, that would be super helpful, if it could look back at what I've been working on and see, okay, all the people who are on my team have been working on these 10 documents, and then basically do a little write-up of, like, recent projects that you should know about, and then it can pull in
00:37:20
Speaker
all of those, versus me having to be like, all right, which of these things are useful for a new person coming in? Cause, you know, I don't want to pull something from two years ago. That's not helpful. Like, I want to pull the last quarter of work to kind of get them up to speed. That would be a perfect example where I think there's probably ways to do most of this. But again, for me to find... probably if you're using Copilot or something. Do you guys use Microsoft?
00:37:47
Speaker
We, I think we do, I mean, you're kind of big enough that I'm like, you might have your own internal systems. Yeah, we have some internal systems and all this stuff, right? Like, that's the other question: there might be some data stuff there, and Microsoft's almost a competitor at this level, you know? Yeah. So, uh, there's some frenemy stuff going on. You know, we've got some AI stuff, they've got some AI stuff, sometimes our stuff works together, right? Yeah. But this is again where I have all the knowledge in my head
00:38:17
Speaker
of who they should talk to, what we've been working on, all of that. For me to go find the tools and prompt it to do that feels like it's going to take as long or longer than I just sit there and type it out, right? It really becomes this calculation of
00:38:32
Speaker
Is it worth building a process for? Is the amount of time I'm going to spend building a process worth the time I'm going to save later, having to do this over and over again? How much time does it take me? How often do I do it? Because if I can get back that time over and over, then you start doing a cost... like, what is that, a breakeven analysis on time, really? You're trying to figure out when the payback period is. It's like, if I'm going to get it back... like, one thing I use is a showrunner for pre-interviews,
00:39:00
Speaker
And it's already paid for itself well over. I invested time into that in December, and I use it every week for almost all my interviews. And it only saves me, like, 30 to 60 minutes, but it only takes five minutes to run. And so it's just so much faster that, over time, I've banked all that time I've saved back, using it over and over again now. So everybody's got to run their own analysis on that. I do it even if I don't save time on it, just to freaking learn sometimes, because I'm like, oh, I wonder if it's capable of doing this. Let's find out, and I'll just do it.
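To make that breakeven math concrete, here is a minimal sketch of the time-payback calculation being described. The up-front setup hours are a made-up assumption; the per-use numbers roughly mirror the showrunner example (saves about 30 to 60 minutes, takes about five minutes to run, used roughly weekly).

# Minimal sketch of the "breakeven analysis on time" discussed above.
def uses_to_break_even(setup_minutes: float, minutes_saved_per_use: float,
                       minutes_spent_per_use: float) -> float:
    """How many runs before the up-front automation time pays for itself."""
    net_saving = minutes_saved_per_use - minutes_spent_per_use
    if net_saving <= 0:
        raise ValueError("Never pays back: the process saves no net time per use.")
    return setup_minutes / net_saving

setup = 8 * 60   # assumed: roughly 8 hours invested building the pre-interview process
saved = 45       # about 30 to 60 minutes saved per interview
spent = 5        # about 5 minutes to run it each time
runs = uses_to_break_even(setup, saved, spent)
print(f"Breaks even after about {runs:.0f} uses, i.e. roughly {runs:.0f} weeks at one interview per week.")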
00:39:29
Speaker
I just had a client that I built a strategy consulting process where he was forecasting scenarios for
00:39:41
Speaker
businesses, based on different trends that were taking place in their industry. And he had a storytelling format that was really precise, based on two different trends with two different factors per trend, usually like that goes up or down or whatever, and then, hey, tell a story: what does ten years out look like economically, sociologically, politically, all the different things? What does the lead-up look like? And he had a bunch of different pieces. I'm like, it's a perfect use case for AI, because now he can... AI is really good, given all the right context,
00:40:10
Speaker
to then fill in the story of what could be, with all the different ways this can pan out. And that used to take him two weeks per client per scenario, but now he can do it rapid fire, live with the client. What used to take two weeks now takes, you know, 20 or 30 minutes, and he can sit there and play around with it live with the client. So I feel like AI is going to be filling in a lot of gaps like that. But what does that do? He probably replaced the person on his staff doing that now.
00:40:38
Speaker
It's interesting though, because again, going back to the marketing automation platforms and how we're just going to fire all the marketers, that's not what happened.

AI's Impact on Job Landscapes

00:40:48
Speaker
You either tilt them to running the robots, right, and refine it and whatever, or you give them a new job. And yes, as we have seen throughout history, every time there is a new innovation, there are some people for whom it doesn't work out very well, right?
00:41:06
Speaker
And that's not great. And we do need to have a conversation about that at a larger scale, about reskilling and upskilling, you know. Again, in the US, the fact that basically healthcare is tied to a job, and so if you don't work a traditional job, you now don't have access to healthcare. Like, AI, quote unquote, replacing people, and then now they're homeless or they don't have healthcare, that is not the fault of AI. There are some systemic things that
00:41:37
Speaker
we've got to have a conversation about, right? Like, and again, yeah, yeah, yeah. It's a scary thing. He's predicting 95% of marketers are going to lose their jobs, right? That's Sam Altman's prediction. I'm like, 95 is a lot, Sam.
00:41:49
Speaker
Yeah. Okay. To be fair, his prediction isn't that 95% are going to lose their jobs. He says the AI will be doing 95% of what marketers are currently doing, which is more specific than job loss. People equate that to job loss, but I'm like, well, it's obviously going to make new jobs. But even if he's only half right, then I'm like, that's still a lot of
00:42:10
Speaker
It's still like 50%. It's still 50% of what's currently being done being done by AI. It'll add 20% back, but there's still probably going to be a shortfall in there somewhere. And that's kind of what I'm expecting is that there will be a shortfall between jobs added and jobs now automated.
00:42:25
Speaker
Well, and the other piece of this, and again, it gets back to, you know, I've seen a couple of folks, Perry Hedrick talks about this from a PR standpoint. He's like, for all the agencies that are charging by the hour, you're screwed because AI makes that a lot faster. So when we start talking about value of work, and again, this gets back to my thing of like, by the time you have me doing something,
00:42:50
Speaker
You need me to do it. You didn't just accidentally be like, I don't know. I wish somebody could help. You don't pay someone with my experience, my salary to do a certain type of work. That's inefficient. And whether it's an AI doing it, whether it's an intern, whether it's an agency, whatever. There's a reason that more senior people are paid more.
00:43:13
Speaker
Yeah. Yeah. And if you find out that they're doing junior-level work all the time, that mismatch, you want to stop them from doing that, because
00:43:22
Speaker
You don't want to pay humans to do that work, right? So again, this question of, well, if 95% of the tasks that humans are doing, if the humans are only going to do 5% of the work, that doesn't necessarily mean you only need 5% of the humans. It means that the 5% of work that humans will still do
00:43:45
Speaker
is hard, and it's not going to work the same way to say you have to log in for eight hours a day, you have to sit in a chair for eight hours a day. By the time I put all this into a prompt for ChatGPT, I could just take a walk around the block and come up with it myself.
00:44:03
Speaker
Like, that's literally the real example, where I was walking around the block and my mind was kind of wandering, and I had this in my subconscious, and I started coming up with stuff, and I'm running back to the house, you know, because I was like, no, I'm not gonna take my phone or anything, and I'm repeating these things to myself over and over so I don't forget them. This happens to me all the time. I'm at the gym or I'm on the water, like I am somewhere else, and my brain
00:44:27
Speaker
is doing what it does because humans are going to human and our brain is smart and it likes to be creative and it likes to find patterns.
00:44:36
Speaker
And if you just let it percolate a little bit, but like, how do you, how do you account for that time out of the office where I did that work? You're getting a very good value from me for doing that work outside the office, right? So we also have to fundamentally rethink how we judge the value of human work and how we compensate the value of human work. Cause that's ultimately,
00:45:01
Speaker
the issue, again, going back to some of these systemic things. The reason everyone's so stressed about losing their jobs is because it means they can't eat and they can't go to the doctor. And that's the real pain point. They're not worried about being bored or looking stupid. They're worried that they can't fundamentally meet the lowest level of Maslow's hierarchy of needs. And so that's the other big shift as we think about this from a business standpoint, that's hard because
00:45:29
Speaker
you know, the point of business is to basically maximize profits, maximize shareholder value, and you're going to run into some of the same issues we ran into with manufacturing and the Industrial Revolution about, you know, the humans.
00:45:41
Speaker
Yep. The one thing that helps me sleep better at night when it comes to mass job loss is the fact that marketing is kind of a black hole and it will always take more. My boss told me that early in my career. He's like, I could feed 10 times the amount of budget, staff, and talent into that thing called marketing and it'll take all of it. Yep. So I'm like, well, since it's a black hole, can AI fill that black hole?
00:46:08
Speaker
Maybe, maybe not. The thing that will happen, and this is my prediction now, we'll see if it works out or you combat it really fast, but I feel like tech will... like, companies will make a certain amount of revenue.
00:46:22
Speaker
They'll want to invest a certain amount of that in order to make more revenue. Now, a percentage of that goes towards people, and it's usually a big percentage. I think tech as a general category will slowly eat away at that more and more as tech becomes more effective, but there'll always be a balance of, do we get this new tool or do we hire more people?
00:46:43
Speaker
which is a thing that people are already judging, but as tech becomes more effective, it'll probably eat up a bigger percentage of the pie as far as what they're able to invest and still have a healthy margin for profit.
00:46:56
Speaker
Yeah, the interesting piece of that is I think that the people, the skills, or again, the work that the people do, will shift. Somebody still has to implement the tech. Somebody still has to buy the tech. Somebody still has to review the legality of the tech. Somebody still has to review the compliance of the tech, right? Like, somebody still has to tell you the tech exists. I don't know. Again, I just feel like humans are really bad at working themselves out of a job. Like, we really are.
00:47:26
Speaker
And even if we work ourselves out of one job, we manage to find another job to do, you know? So I agree with you. I do think that there are still a lot of companies that have not even caught up to... I was on a call the other day, I did like an AMA, and somebody was asking me, they were talking about
00:47:51
Speaker
like a super old school CRM system. And I was like, I'm sorry, you're using which one now? Oh, I didn't realize they were still... They're still around. Yeah. Cool, right? Like, I think you and I, you, Dan, obviously on the AI side, like, you are a very early adopter, you know, both in the adoption cycle and the hype cycle, right? I would say I'm skeptical on the hype cycle because
00:48:19
Speaker
that's, you know, how I roll. From an adoption standpoint, I think at this point, anybody who's willing to play with it and has an account and is trying it is probably in the early adopter category or, like, early majority category of the adoption cycle, right? But, um, even my tech-savvy friends are only just now signing up for, like, ChatGPT Plus, like actually throwing a little bit of money at it. And those are, like, my heavy tech friends.
00:48:45
Speaker
Yeah. So I'm like, that means it's so early. It's so early. Exactly. So all this to say that you and I having this conversation are, like, five years ahead, and the reality is there's people who still are... The tech will probably grow fast, but the adoption of it will still be pretty slow. Exactly. All righty. It's early on. Ashley, thank you so much for coming on and having this fun back and forth. This conversation has been fun. I've learned a lot, even just from this back and forth.
00:49:14
Speaker
Yeah, same. Well, and I'm remembering my optimism a little bit, but I think that's probably a good thing. Well, and you've made me a bit less skeptical. Because there's so many people that aren't even really leveraging automation well yet across the board. I hardly ever see people use automation well; super basic drip sequences is usually what they're using it for. I'm like, guys, we could have done drip sequences 14 years ago. That was available then. So we're barely even cracking the surface on that, let alone what AI might be able to do.
00:49:43
Speaker
Exactly. Exactly. But yeah, this is fun. I will be curious. We'll see if you motivate me or shame me into being more optimistic and taking your views. I believe it might be a bit of both. Well, we'll see. Don't want to shame anybody. Hopefully just provoke you in good ways to get on the train. If I can prove the train's worth getting on, that's the goal. Love it.