Resetting Expectations and Embracing Failure
00:00:00
Speaker
The best way to behave as an experimenter is to set your range, set your expectations. Every time you're going to run an experiment, reset your expectations and go, okay, this could fail. We've got to be ready for the next thing and create a plan to do that.
Introducing Unite Voices Podcast
00:00:13
Speaker
Welcome to Unite Voices, hosted by Katie Green. Real stories from the people behind today's most innovative experimentation programs. No fluff, just wins, failures, and the lessons in between.
00:00:27
Speaker
Welcome to Unite Voices. We're so excited to have you on the show today. And we want to hear everything about you. This is Unite Voices. This is about you. So we're really excited to hear your voice and your story.
00:00:40
Speaker
Give us a little intro. Give us a little sprinkle. Who are
Celebrating NASA's Artemis Mission
00:00:43
Speaker
you? I'm really happy to be here. And as you can see, I'm kind of repping my NASA shirt, celebrating the Artemis mission, which just made it safely back from around the moon. You know, I'm a big science nerd myself. My dad was a science teacher. I come by it naturally.
00:00:56
Speaker
I mean, a couple of firsts we've got to call out. First of all, the first woman breaking the glass ceiling, smashing through low orbit, which I think is amazing. I'm a dad of two girls, so this is something we've been watching really closely.
00:01:07
Speaker
And if I may, very important to me as well: first Canadian. First Canadian out of low orbit, which is amazing for us. I grew up knowing Chris Hadfield, the most famous Canadian up until Ryan Gosling, I guess, and my hero for the longest time. So to see another Canadian break another record, go around the moon, come back safely, it's pretty amazing to us. That's actually two in a month, if you think about it: Ryan Gosling in Hail Mary, and now Captain Jeremy Hansen going around the moon. That's two Canadians in orbit, which is pretty good for us.
00:01:36
Speaker
I say that like I had anything to do with it, by the way, but that's what Canadians do. You probably know this about the Canadians you've met in your life: we take credit for each other's accomplishments. That's how we do it; it's how we stay relevant. That's what socialism is, by the way. We have socialized medicine and we all claim each other's achievements. It's really fun. It's giving community. Community is how we operate, right? Love it. It's the best. So I'm happy to see that come through. It's a great news story as well. I mean, I needed something like that fueling me. I said this to my wife and I'm going to use it: it's a booster rocket for my soul right now. Having that in the news cycle, watching it go up and down and come back safely, was just what I needed. So that's why I'm wearing this today. Very happy about
Explorers vs. Innovators: Embracing Being Wrong
00:02:15
Speaker
that. I think it's also very relevant to the industry of experimentation, right? I was telling you that I love Project Hail Mary so much. I'm doing anything to make it relevant in my LinkedIn content, because there is so much there, right? That's obviously very scientific experimentation, but there is something similar in the ego of people who are into this exploration. I actually went to a conference last Friday, and one of the quotes from one of the speakers was, "I don't want to be called an innovator, because innovators claim they know everything. I want to be an explorer." I like that. And that's giving the NASA piece here, the excitement around the Artemis mission. So yeah, I want to take that with me. I don't want to be an innovator. I don't want to be an expert. I want to be an explorer. Which I think ties back into your philosophy of being the wrongest in the room. We're in an era where everybody wants to be right. But being wrong is a huge competitive advantage.
00:03:10
Speaker
Tell us about that. Yeah, it's funny. Yes, "the wrongest in the room" is a phrase, provocatively titled, of course, but it's the counter to trying to be the smartest in the room, which isn't what we're about
Strategic Advantage of Being Wrong
00:03:22
Speaker
anymore, is it? And leaders, I think, are starting to recognize that. It wasn't a title I made up, by the way. It was given to me by the head of Canada at a company I worked at. I walked in on my third day, he said it, and I froze. The wrongest in the room? What do you mean?
00:03:33
Speaker
His point was: relax, it's a compliment. He said, you know, you were asking all these questions, you were willing to look green and figure it out, and in a sense to look wrong, and we need more of that. Please keep bringing that to the team. And I've really embodied that. That's really important to me. It takes some explaining, I think, especially to the leaders listening and those with teams who are trying to organize, especially in this new era with AI.
00:03:56
Speaker
It's really important you recognize how powerful and important it is to set the table. Do you know what I mean? So when I say set the table, I mean, first of all, being the wrongest in the room means going first. It means being the leader to share a raw idea, go to the whiteboard, say, hey, I've been thinking about this, what do we think? For teams that aren't really practiced at doing that yet, it's so important that a leader sets that table first. But that may not be part of your style. So I'm kind of here with a bit of a pitch to help leaders understand
Creating Psychologically Safe Environments
00:04:23
Speaker
why it's so important. The ladder goes like this.
00:04:26
Speaker
First, psychological safety and being the wrongest in the room go hand in hand. It's giving other people in the room the space to share their ideas, because that's where those ideas are. Growth teams of two to three people are gone. We're not doing that anymore. We're doing high participation. Everyone has a say. Everyone has the ability to share and bring up an idea of equal value. And then you test it, you give it merit, and you glom on.
00:04:49
Speaker
Creativity gives you stronger signals to begin with. And look, I'm sure you have a lineup of guests, Katie, who can talk about analytical models and statistical rigor far more than I could, and those are just as important. But even they would tell you, I'm sure they would agree, that you can't apply a statistical model and make a bad idea win. You need great, strong, awesome, team-generated ideas to get a signal in the first place.
00:05:12
Speaker
I think we're missing a piece of that lately. We're trying to automate too much. We're pushing things off to AI, or we're stopping the thinking in the places where we should be continuing to lean into thinking. So my model is a bit different. My model is: as a leader, here's how you can show up to get more people on your team to share, to bring creative ideas, to share their point of view, to take some risks together. What that does is give you stronger signals. You can measure faster. It gives you a bigger result, and bigger results resolve faster. You can even move faster, which is a competitive edge.
00:05:38
Speaker
On the participation piece as well, think about how powerful it is for those big tech brands to talk about how they're hiring the best talent, they're moving fast, they're agile, they're changing things constantly.
00:05:48
Speaker
The idea that experimentation is supposed to be so difficult is a moat for them, right? It's not. Every team can run experiments. Does the model look a bit different at a startup than at an enterprise? Yeah, of course it does. It's still experimentation. There's still rigor. And you still set the table for your team to
Experimentation in Decision-Making
00:06:03
Speaker
take those guesses and be willing to be wrong with new evidence. And the last piece I'll add as well: if that's not a compelling enough argument (moving faster, getting bigger signals), you'll argue less.
00:06:13
Speaker
You'll actually argue less as a team. When you're looking at the same data set and you're not sure what you're trying to do or how to interpret that result, experimentation is your tool to go: okay, look, we're stuck. Why don't we run a test? Let's go experiment and see what the market has to say about this. We'll come back in a week, reconvene, and make our minds up then.
00:06:29
Speaker
Setting the table, psychological safety, going first: it all goes hand in hand in setting the table for an experimentation program. Absolutely. I want to dig into the psychological safety and learning over winning. I think you have
Shifting Roles for Better Team Culture
00:06:41
Speaker
so much to say there. But before we move on from the piece of being the wrongest in the room: I see in the industry a lot that people want to be the decider, right? There's a lot of authority and gravitas that comes with being a decision-maker and saying, this is exactly how I want to go. I'm doing it with data. I'm doing these things. Yeah. But how do we shift from being so decision-focused to being more of a facilitator? Because that's what I'm hearing from you, right? The facilitation is important in building a culture, to make sure the decisions aren't impacting your bottom line in a negative way, or your culture in a negative way. Because then we get into retention, we get into all the other pieces that make a successful program. So can you talk a little bit about what it takes to go from being decision-first to being facilitation-first?
00:07:33
Speaker
Sure, totally. So as teams approach the midsize, they're funded, and the founders are finding their hands in a lot of different areas. I think what you're talking about, Katie, is the movement from founder-led decision-making into team-led, autonomous decision-making: we can make our own decisions, with some structure.
00:07:50
Speaker
Structured-autonomy decision-making is where the biggest
Encouraging Non-Judgmental Idea Sharing
00:07:54
Speaker
teams move the fastest. That's how we're operating now. Let's talk about it. I talked already about going first. It's important: like any culture change, like any transformation, leaders do have to go first. It means sharing raw ideas, getting them up on the whiteboard.
00:08:06
Speaker
I call it "to the boards," a hockey reference: from the whiteboards to the boardroom. That's how we bring ideas. So leaders have to go to the boards first, share their stuff, and show that it's safe to share a raw idea. One thing you mentioned earlier, Katie (I forgot how you phrased it), I call idea ownership.
00:08:21
Speaker
We want to get rid of idea ownership. How often do we find ourselves in a meeting saying, hey, what's that one we were going to run last week? That experiment, you know, Ted's idea. But attaching Ted to the idea, even if it was his idea,
00:08:32
Speaker
makes it difficult for the rest of the team to see themselves in that experiment. It puts a bit of pressure on Ted. Ted can maybe get a bit defensive and put himself on the back foot, because now he needs his test to win. That's not how we want to look at innovation. That's not how we want to run an experiment. So the first thing, as leaders: we've got to stop assigning people to ideas. It's the team's idea now, right? Someone came up with it. That's great. I could talk all day, by the way, another time, about how...
00:08:56
Speaker
Okay, one person came up with that idea, but with the right environment, the right creative rituals, and the right facilitation, your team, namelessly, will come up with that idea eventually. They will. Ted got there first, but you can remove the ownership of that idea.
00:09:09
Speaker
Second, we want to debate the premise behind the idea, and not the person. Right? How often are we saying, I don't agree with you. Katie, you're wrong. I don't agree with that point of view. Ouch, that hurts. We don't want to hear that we're wrong. But we're not wrong; our idea may be debated. So instead of saying, Katie, I don't agree with you and you're wrong, you say, Katie,
00:09:30
Speaker
okay, I don't think I agree with that premise. I don't agree that more users are doing this, or that the bigger revenue opportunity is over here. We can talk about the premise, and then suddenly we're on the same side. The same thing goes for arguing on principle. I mean, we can roll our eyes all we want about company values like "every part of us is welcome." If we think values are a bit woo-woo, okay. But I'm telling you, the difference between an org, especially in experiments, that argues on principle and values and a team that doesn't is stark.
00:09:59
Speaker
They move faster. They generate better results. They argue less. They agree with each other. I believe they have happier days. You need to be arguing on principle and saying, look, I'm thinking customer-first. I'm thinking about designing a solution for my customer that,
Principled Idea Discussions
00:10:11
Speaker
well, whatever your principle might be. "Operates well in the dark" was one we had, or "finds their way without any help." That's a principle we had. Oh, hey, I'm approaching the problem from the same point of view. Okay, now we're a team.
00:10:22
Speaker
Now we're talking about the same thing, and we're arguing on principle. Leaders need to get in the room and start setting that stage as well. I love that. Great phrases you can use for that, Katie, would be things like "how might we?" Great classic phrase. Hey, how might we move a customer over here? Other great phrases: what do we need to know? What do we already know? That's a great way of stating the obvious.
00:10:41
Speaker
And "what needs to be true" is kind of my favorite. What needs to be true right now that we can go experiment with, that will tell us we can go direction A, B, or C? One other thing we can teach is deciding first. We actually move the decision from the end of our process, which seems counterintuitive, to the front: we decide first, gather some evidence, and then go run an experiment on that premise, so that we don't have to get back together and debate the same results.
00:11:04
Speaker
And that's how we look at it, Katie. We look at how we want to rephrase and reframe the way we make decisions as a team. In fact, I'm writing a newsletter. I'm using this podcast as my soft launch for Room to Think, my brand-new newsletter coming out. Thank you, I hear your applause. I am absolutely daunted by the idea of writing on a frequency, on a cadence, but I will do it. And it's three things that are very important to me. One:
00:11:26
Speaker
one weekly thing you can do with your team: test something small and easy in market to get more information about your customer. Two, one creative power-up for you to facilitate great meetings with your teams, generate more ideas, and come out of the room feeling energized. Lots of little fun activities. You mentioned Taylor Swift earlier; there's actually an activity called What Would Taylor Swift Do?
00:11:45
Speaker
That'll be in my newsletter. Sign up at Room to Think. Okay, number three: one behavioral science thing. We have to apply the science here. Why are customers and buyers behaving in such and such a way? There are some amazing cognitive heuristics we can talk about. There's some neuroscience at play here.
00:12:01
Speaker
Some of those help explain the behaviors of our customers, and I want to tie that in to close the loop every week, weekishly, hopefully launching here on this pod. Thanks. Weekishly is a really good commitment.
00:12:12
Speaker
Thanks. I can do it. I believe in you. I really hope I can write weekishly. Yeah. If you promise people on the
AI's Role in Human Creativity
00:12:17
Speaker
internet you're going to do it, you have to do it now. It's actually a really fine thing. So I want to come back to ideas, because you said to take the assignment of the person out of the idea, because it's a team idea.
00:12:33
Speaker
I talk a lot about AI. Obviously, Chameleon is an AI-first platform. And having AI support your idea generation, I see a lot of success in that, right? You're able to separate it from the individual. Your team feels like they have ownership over it. They're all looking at the same thing, literally on the same page. And I know so many people will relate to this: I've been running programs for many, many years, and you kind of run out. You're looking at the same page, and you can get complete page blindness. You're like, I cannot look at this hero block one more time, because there is nothing else I can do to it. But that's not true, right? There are so many ideas you can come up with. Drake Somm is somebody I interviewed recently, and I loved his quote. I posted it on my LinkedIn. His quote was, AI should take the boring stuff out,
00:13:27
Speaker
not the creative stuff. So how do we balance that in a world where AI is generating ideas? I think AI democratizes the ownership of those ideas to the team, and it is able to surface things you might not have thought of. But how do you make sure you're keeping the fun, keeping the creativity that is so innately human?
00:13:47
Speaker
Amazing question. And that's the question I think leaders need to talk about. That's the landscape that's transforming right now, right? How are we making and surfacing the best ideas from our team, using AI to color in the edges, but not taking away, to your point, the creativity? My phrase is: don't drop the thinking. We still operate with the best machines in the business, our cognition stations. Those are still the best in the business. We've got to make those things top performers, supplemented by AI, 100%.
00:14:11
Speaker
I have a two-parter, Katie. I want to talk about AI first; I think it's an important place to start, and everybody wants to figure out what's next. But second, I want to talk about your concept of, hey, we're back here in this meeting room arguing over the same surface area, the same concept. I have an answer for that, and I'd love to share it. It's called a premise. We'll talk about premises in a bit. What I think is really fascinating in this space, part of the reason I love working in experiments,
00:14:33
Speaker
is that experimentation at the moment is probably the strongest discipline within marketing making the broadest use of AI, beyond just chats and windows and interfaces. It's using just about every flavor of AI tool right now. And that's really important to making sure we know what works.
00:14:47
Speaker
AI in experiments isn't just chats. I look at this chronologically, as we go through the innovation cycle, through our research and evidence gathering. First of all, bring a note taker right into your agile meetings, into your customer interviews, into all the things. Oh my gosh, Katie, I have ADHD; listeners to this pod have probably figured that out by now from the speed with which I speak. I can't tell you what an unlock it is having a virtual, pretty accurate, knows-my-stuff note taker in the room. Talk about Drake's point of getting rid of the minutiae and the admin.
00:15:19
Speaker
What an excellent use of AI. So get yourself a note taker when you're running interviews. Yes, you've got to be polite and tell people you're recording them, but that kind of comes with the territory in a moderated interview. Do all the things right: first, you're getting the interviews; second, you're getting permission to record.
00:15:32
Speaker
Record so you can focus. When I talk in my newsletter about facilitating great meetings and great listening, how to listen actively, how to use your empathy to really draw that out, a note taker is going to help you focus on your customer in that meeting. And you don't have to have a second person in the room taking notes for you; they can go run more interviews. Look at that: double the interviews.
00:15:49
Speaker
Obviously, take your transcript from the recording and dump that into your knowledge base, which of course you're maintaining with the help of AI as well. That becomes your knowledge base to reference: find patterns, find incongruencies, suss that out, and then run even more experiments, even more interviews.
00:16:08
Speaker
So that's your note taker. Next, getting prepared for those meetings. This is where chat comes in handy. Chat is the absolute best. When I say chat, I mean LLMs: a window, text typing.
00:16:19
Speaker
The chat interface is great for creating a script on topic. So what is it we want to solve? We can't seem to move customers past this payment page. We've got a weird drop-off we can't explain. Hey, reference our knowledge base, reference these interviews, and here are some transcripts I've just run with customers stuck on this page. Can you figure out what I'm missing? I believe these three areas are important. And it will return things that make you scratch your head and go, yeah, hadn't considered that.
00:16:41
Speaker
That's a great use of chat. And run a mock interview, by the way. I know we don't like pretending AI is a human, but debating Claude, asking it to be a sort of debate partner based on your body of knowledge, is incredible. I urge you to try it. It really helps hold a mirror up to you and the way you solve problems. You go, oh gosh, that's a blind spot of mine. I didn't touch that. Thank you for raising that. It makes you a better interviewer as well. So have a mock interview with AI before you have those interviews.
00:17:08
Speaker
Okay, now the meat of it. I'd say probably the strongest area AI is playing in right now, in terms of experimentation, is prototyping. And of course, there are different tiers of this, right? There's you guys: PBX, Chameleon, the AAA tier, making a clickable, resonant prototype in seconds that actually gets people to click and do the thing. That's engineered; that's high-level stuff.
00:17:29
Speaker
Not every team can necessarily get to that level yet. They should aspire to, but there are other ways to generate prototypes as well. On the lowest end, for some of my outbound, when I talk to clients and try to find new work, I'm sharing a prototype I made by just downloading the original HTML file of their pricing or product or homepage and asking Claude: hey, if their customer were more concerned about some of these things, can you write a better homepage or pricing page that reads like that? And I show up saying, hey,
00:17:55
Speaker
wouldn't it be nice if you could ask your customer to act with more urgency, deal with some FOMO, maybe trust your brand to feel a little more safe, that kind of thing? You can present that in like two minutes. That's a prototype too. So prototyping comes in endless varieties of stacks. There's PBX, there's low-key stuff. You can do all sorts of different formats of prototypes with that.
00:18:16
Speaker
The reason prototyping is so important, especially in this space, is that it's a provocation, right? Now that AI is here, what we're not doing is asking AI what it thinks about our content and our spot, because that's a way to look backwards. That's not what AI is for.
00:18:27
Speaker
What we're doing now is using AI to surface those things we haven't really seen, and to make a usable, practical, but quick provocation on the thing we're trying to test, to get a reaction out of the market, out of interviewees, that sort of thing. It's great at making something good enough to get that signal.
00:18:45
Speaker
It's not putting designers out of work. I worry about this a lot. It's not putting designers out of work. It's creating something that much faster that they can then wrap their hands around and do the best work of their lives on, because it's validated. That's the provocation piece.
00:18:58
Speaker
Takes two minutes. I also... what?
00:19:03
Speaker
I also promote the concept of staying within your lanes in terms of levels of fidelity. You know what I mean? We're still sketching, right? We're still drawing wireframes. We're still doing that. And here's my caution, my AI pitfall: don't generate a sparkly, looks-brand-new, lovable (I mean both the adjective and the app Lovable) prototype that you can use right away, because you'll have a different conversation.
00:19:32
Speaker
Right. When you're talking sketches and wires, you're talking about structure and flow. When you're talking about a static prototype, okay, you're talking about colors and placement and so on. When you're on a clickable prototype, you're talking about all the things. You miss some of that really important conversation if you move too quickly into a high-gloss clickable prototype. So stick to your lanes: ask it to generate a wire, ask it to generate Figma Make-level stuff, and then move yourself up into a clickable prototype. Stick to the system.
00:19:57
Speaker
I mentioned premise; I'm going to bring it back to premise. Katie, you said: hey, we're in a room, we're debating the same thing, why are we here again? We know what the problems are. We've interviewed ten people. And yet we're still here; we haven't solved the problem. Oh yeah, I've been there. Okay.
00:20:10
Speaker
The way we got around this (this was when I was at Intuit, by the way): we had to solve some really gnarly tax-filer problems that were highly competitive at that moment. There was a new entrant, so we were under the gun. And if you don't know, tax season has a deadline, so we were under the gun with timing as well. This team needed to solve, right?
00:20:28
Speaker
The way you can tie experiments and their success together is by following the underlying thing, call it a premise, that is motivating your buyer. So we've got the classics. You've got trust: do they trust in the brand and the security of the page? It's always your classic trust signals, but they're kind of foundational. Then there's classic usability: are we actually allowing the user to proceed from screen to screen, find the button, click the thing, toggle the thing? Are we hitting all the marks for usability? There's suitability: does the product actually solve their problem, and does the messaging say so? Then you get into the more marketer-y stuff. You know what I mean? There's FOMO, there's urgency, there's "do I see myself in this product," there's luxury, that kind of stuff.
00:21:09
Speaker
Those foundational buyer behaviors are what actually link together your
Linking Experiments to Buyer Behavior
00:21:13
Speaker
experiments. So if you solve something over here. One of the big solves at Intuit was helping mobile users compare products really easily. Comparability, that's a heuristic. If we know comparability is super important to the segment, keep pulling that thread. Keep finding other experiments (this is what AI is great for) that tangentially help solve that in other areas of the business, because it's demonstrated a win in the first place. So if you're stuck in a room and not sure how to proceed, but you know there's more juice in the fruit, so to speak, more grapes on the vine: follow the actual motivating factor behind why the customer is clicking. Don't worry so much about design. Think more about the effect on the customer. And I promise you, your success rates shoot through the roof. Your team's more motivated. You're winning more often. You'll eventually hit a ceiling, but that's when you go back into that good work and solve a fresh new problem. Make sense?
00:22:00
Speaker
Totally. But I have a big question about psychological safety, right? Since we're talking about AI: psychological safety and being the wrongest in the room are so important for culture. And what AI is exceptionally good at, for better or worse, is thinking that it's right.
Balancing AI and Human Creativity
00:22:19
Speaker
I love that. I also heard this one: AI is like a teenager.
00:22:25
Speaker
It's like 100% confident that it's right. Okay, I really liked that one. I really liked that quote from the conference I went to last week. So AI is really good at thinking it's the rightest in the room. How do you balance that on teams that are leveraging AI to speed up the places where we can automate and remove some of the quote-unquote boring stuff? How do we balance using a tool that thinks it's the rightest in the room with being the wrongest in the room and having psychological safety? Psychological safety is more than just being nice. I want to understand your perspective on leveraging the rightest-in-the-room tool while maintaining the wrongest-in-the-room culture.
00:23:10
Speaker
It's a great question. Yeah. How do we use a tool... I'm going to rephrase it the way I understand it on the daily: how do we use a tool that is confidently wrong? People of my generation would recognize: 60% of the time, it works every time. How do we leverage that kind of tool in a process that's supposed to be rigorous, accurate, fast, whatever? Okay, great question.
00:23:30
Speaker
The answer is leaning on our more human aspects. That's not a coy answer. I'd say there are a couple of skills I'm most concerned about. For the listener of the show who is a younger, more junior member of the marketing workforce, trying to figure out their place in the space: your cheat code is to lean into those soft skills, those things you can really bring from within that AI is never going to be able to take from you. And here's how you apply that to experiments specifically. It's not woo-woo. This is real work.
00:23:57
Speaker
Number one, the human skill that matters the most is facilitation: facilitating a great meeting, interview, room, or brainstorm so that you get results. You read the room. You never cause offense. You get quiet people to speak up.
00:24:10
Speaker
You get great ideas, and you get great results, out of great facilitation. That's a skill. You need to be empathetic. You need to read the room. You need to change your language, change your rapport. It's a tough skill, and you need practice to learn it.
00:24:22
Speaker
Number two: read into the emotions and not into the prompt. What I mean is, AI's current versions aren't good at really understanding what's behind the why. Yes, if you wanted to, you could find a semantics engine; those have existed for a while, trying to get the tone of things. It's not accurate. It's not enough to act on.
00:24:42
Speaker
You need a human to understand that your interviewee is actually hiding something. You need a human to ask politely and kindly if they'd elaborate a little more on a subject that might be tough for them. Say you're a SaaS product, or a product that deals with a tricky subject, or with pain or a feeling of inadequacy in your customers; there are tons of products that solve for those problems. You need to be really good at empathetically reading their emotions so the interview can continue and you can get great results and concepts from them. The same goes for copywriting, and for generating a prototype that is just provocative enough to get their attention and keep them moving down the page, but not so provocative that it puts them off. Reading the room and having that level of emotional awareness is very important, and it's something the next generation needs to lean on.
00:25:25
Speaker
How do we deal with hallucinations? The one skill that is really the most important in that space, for leaders and for the next generation of marketers, is the ability to self-correct with kindness, with gentleness to yourself and to your team.
00:25:35
Speaker
Self-correction looks like this. It's science in a nutshell. The simplest definition of an experiment is: am I willing to change my position based on new evidence?
Self-Correction and Experimentation
00:25:45
Speaker
Yes or no. That's it.
00:25:46
Speaker
Now you're a scientist, if we're willing to do that: willing to take new information, run a test, and go, gosh, that came out a bit different, didn't it? If you can confidently walk into that room, bring your test card, your test brief, and say, okay, guys, here's my update. It didn't go to plan. That's okay. But here's what we learned, and here's what we're doing next.
00:26:02
Speaker
That's it. That's self-correction. That's all you have to be willing to do. And Katie, you nailed it: AI is not great at that yet. Come on, you've done this. You have to tell it it's wrong, and then it comes back full force: you are so right, I am so sorry. We can't operate like that. We can't rely on AI to be prompted by us for its own self-correction. That's a human skill. So lean on that.
00:26:22
Speaker
And finally, for all the founders who want to move faster: speed is your most important weapon here. This isn't my idea, it's not my original thought; it's out there in the Twitterverse or wherever. But the idea is that building used to be the roadblock, the thing people were trying to accelerate, and it's now so quick with no-code and vibe-coding tools that building is no longer the biggest hurdle. Distribution, getting your signal, your marketing message, out into all that noise and all the extra content we're seeing now, becomes the hardest part.
00:26:49
Speaker
The way teams are going to succeed with this new AI companion is by using it to speed up their existing process: run more interviews faster, get a stronger signal with more creative ideas in the market faster, facilitate great meetings so those ideas come out of brains that much faster and you can get them into market. Speed becomes really important. And it's not that simple
00:27:07
Speaker
to get a team to collaborate on a calendar, to hit a sprint milestone. It takes a lot of leadership and a lot of empathy to help a team operate that much faster, but that's a skill you're going to need to win. That's it.
00:27:20
Speaker
You have really good advice for leadership. You've been in this industry, in experimentation, for quite some time. So I'm curious: say leaders are listening to this right now and saying, okay, we have a really solid AI strategy. I just posted the other day about AI mandates versus AI strategy, so I'm assuming this leader is already advanced and saying, we're implementing AI in a thoughtful way that is looking forward, not looking backward. I love that piece you said; I'm going to quote that for a long time. So I want to talk about leaders who are listening to this right now and have a really solid AI strategy, because we're seeing more and more teams do really well with that, with things like PBX. They're vibe coding, but there's still a human in the loop. The leader we're talking about in this persona runs a very successful program.
00:28:11
Speaker
They may be thinking about the psychological safety element of it. Can you quickly name a few traits for leaders who are listening to this and saying, gosh, I have a really solid AI strategy, we're learning really well, we're learning really fast, but do I have the psychological safety required to create a culture of experimentation that has longevity and legacy? Because that's how you're going to get the best product, the best outcomes. I'm curious if there are individual traits where you can say, hey, leaders, if you have these things on your team, you're probably looking pretty psychologically safe. Can you name any of those off the top of your head? I hope it's not stealing any of the thunder from your newsletter, but if you can name any of those, I think it's helpful to share.
00:28:56
Speaker
Yeah, thank you. If anything, I'll use some of these answers in the other weekly-ish editions. Perfect, thanks. So, what are the traits of a leader? And actually, I'm going to go a little further: what are the traits of their teams that need to be successful in this? Yeah.
00:29:10
Speaker
Let's start with the leaders. First, I'll say I didn't come by a lot of these skills, traits, whatever, naturally myself. So I want to make it really clear to your listeners: you can learn this stuff. It just takes some practice, and maybe a little coaching. Sometimes that helps too.
00:29:26
Speaker
We all have empathy; you're born with it. That's great. But actually using empathy in a way that gets your team to move is a skill. It means listening. It means being firm in your direction, being firm in what the result needs to be, and, forgive the metaphor, holding a bar on quality: the amount of evidence, the rigor needed to make your decision.
00:29:46
Speaker
Again, this is a whole other topic for your pod, Katie, but there's still so much subjectivity in decision-making at the leadership level. That's okay; it's not going anywhere. But an experimentation team's job is to give their leader the most evidence they possibly can, as precisely as they can, to help them make a really difficult decision quickly.
Empathy and Trust in Leadership
00:30:05
Speaker
It's hard to do. So first, it's using that empathy to set the bar and be really clear about what that is. You have to articulate it really, really clearly. Two, it's kind of an agreement. It's trust. It can change, and it will, as your team becomes more capable.
00:30:17
Speaker
But you as a leader need to say: this is what I'm demanding of you. I've got a big decision, it looks like this, and I need to make it by this date. I need your help going out into the market to find evidence that looks like this, this, and this, so I can make my decision. The handshake, the trade-off you need to give your team, is: I'm going to let you have some autonomy. I trust you. You can spend some budget, you can take some risks. I trust you to move quickly without my constant input, because you know the goal and you know where we're at.
00:30:42
Speaker
By the way, that divide gets way wider on teams that are great at testing. You have leaders up here looking ahead at what's next, and you have their teams, not scrambling, hustling is the word I want, really operating quickly and with rigor to come back with the evidence they can feed their leaders: hey, we found this, you can use this. That's what a great team looks like. Leaning on your empathy is really important as a leader. Now I'm going to turn to the team. What are the traits of a team that's really set up to run great experiments?
00:31:08
Speaker
One, you come into meetings with some level of agreement already. I think you're not ready to hit the gas if you're coming into a meeting still completely at odds with each other, locking horns, not agreeing on things. Foundationally, you need to be pointing in the same direction, and the team needs to understand where they're heading. A great way to do that, I write about this in my newsletter, is three separate types of goals. There's the strategic one, above my pay grade.
00:31:29
Speaker
Then there's the ops and team one: hey, we're looking to push new paid users. And then there's the work I see myself in every single day: hey, I can do that by getting more engagement on this content. Great. We all feed into that one thing. If you don't have that yet as a team, leaders, start there. People need to see themselves in the work and in the motions carrying it into market.
00:31:45
Speaker
However, if your team is already curious, if they're speaking up, if you don't have that 80-20 situation where 20% of the people provide 80% of the ideas, if you can smooth that curve and get way more participation, way more people contributing to experiments. That doesn't mean they have to own them. It can mean looking at the risks, doing QA. It can mean, you know,
00:32:03
Speaker
suggesting some better copy because they're a copywriter. That can be their role in an experiment; they don't have to own it to participate. If you can get a high level of participation, lots of ideas out, and get that team challenging the status quo, you are experimenting. You're in the right place to take on something like PBX, or like Figma Make, creating the craziest new idea. You can do that as long as you're aligned and pointing in the right direction.
Low Ego and Learning-Focused Experimentation
00:32:24
Speaker
Something I like to ask people, and there are a couple of final questions here, but I think being an experimenter naturally means you have a bit of a low ego. Can I say that for every single person who tests? Absolutely not. But the fundamental DNA of what we do is saying, I want to learn. I don't already know.
00:32:44
Speaker
And with that being said, I'm so curious if there is one failure you have used to propel yourself forward. So this is me asking you to blast your failure on this show, to the internet, published forever. But I think it's important, as practitioners, to say, yeah, I failed many times to get here. You kind of teased that in your last answer when you said it takes a while to learn these things. Tell us a little bit about the speed bumps you hit to get here. Gosh. Take a number, Katie. Okay. I didn't call my talk Wrongest in the Room on stage for nothing; I swing for the fences all the time. Well, I swing. I don't hit.
00:33:24
Speaker
Yeah, gosh. Statistically speaking, I think my ideas stink the most of anyone in any room, ever. I have a lot of ideas, and that's part of what I bring to a team, but it comes with a lot of stumbles as well. So...
00:33:36
Speaker
You mentioned ego. The experiment I'm going to bring out of my, oh gosh, buried-down-deep portfolio is one that was kind of ego-led.
Acknowledging Failure and Avoiding Ego-Driven Experiments
00:33:45
Speaker
And this is why I no longer involve my ego in experiments.
00:33:48
Speaker
I'll start in the middle. It was winning. It was working across the entire organization; this premise, this concept, was crushing it. It was a very deliberate toning-down and reduction, because, by the way, listeners, a lot of reductive experiments do very well. You don't always have to add; reductive stuff is very effective.
00:34:08
Speaker
We reduced the reading level on some of our body copy on product pages (use your Hemingway app). We reduced the number of features we show and tucked the rest away. We just put less in front of the customer so they didn't have cognitive overload trying to figure out what they wanted to buy.
00:34:22
Speaker
It was crushing, both in Canada and across the US teams who were trying it. It was a premise that was carrying us forward. And Katie, it fed my ego a little bit. Here I was, leading this thing, getting to walk into rooms with executives and show off our winning experiment and double-digit growth.
00:34:37
Speaker
Yeah. Okay, it fed the ego a little bit. So, experimenters, here's how not to do this next part. I call it the three-legged stool. If your context changes, like the place you're putting it, whether channel, medium, whatever; if your customer changes (uh-oh, this is what I did), if the person you're selling to changes fundamentally; or if the content itself, the thing, the mechanism you're testing, changes fundamentally, you've got to retest it. You can't rely on it being as stable as in places where that three-legged stool is intact. If one of those legs changes, there you go.
00:35:06
Speaker
Okay, so I took this winning formula, this recipe, to another business line that was fundamentally a very different business. It wasn't PLG; it was more sales-led, very hands-on, and it was a different kind of customer. But here I was saying: reduce, cut all your copy, lower the reading level, drop everything.
00:35:22
Speaker
Katie, how do you think it did? Yeah. It bombed. It was embarrassing. And of course, I had to walk into a room with executives having promised, hey, this is probably going to win, and now we're down a couple of points and we're going back to the original. You know what I mean? And I learned, first of all, the three-legged stool, but second,
00:35:40
Speaker
you can never oversell an experiment. You can't get ahead of it, because there are so many ways to embarrass yourself when it doesn't quite come out the way you expected. You've got to go, okay, the best way to behave as an experimenter is to set your range, set your expectations, set the table. Like we said at the start of this podcast: we set the table, state what we believe to be true, and it comes out yay or nay. Yeah, I had a pretty strong sense it would come out positive, but...
00:36:03
Speaker
You know, you live and learn. So try very hard not to do what I did. Every time you're going to run an experiment, reset your expectations and go, okay, this could fail. We've got to be ready for the next thing, and create a plan to do that.
00:36:17
Speaker
I oversold hard. Yeah. I think we've all done that. I've certainly done that, and then I'm like, oh no, I take it back, don't listen to anything I've ever said. It can also undermine your authority. You're like, no, I swear I've been doing this for 10 years and it's never happened like this, I swear. So it's very funny. No one's taking my title, wrongest guy in the room. I've got that on lock, don't worry.
00:36:38
Speaker
See, hold on to that. Hold on to that dearly. Thank you for sharing that. It is helpful. You know, some of the biggest names in experimentation, how do you think they got there? They failed, right? I remember being very junior, coming up with ideas and thinking all these ideas have to be winners. And it's just nice to end this little show with: no, they don't. Never forget that. Even for the people who are more technical, doing the engineering and the data analytics and all of those pieces, it's so much more black and white on that side. I come from more of the marketing persona. So it's just a good reminder that, from every single angle, the only way we excel is by failing forward. So thank you for sharing that. But my last question for our show, the final question I will ask you: someone's listening to this, and if you've made it this far,
00:37:37
Speaker
We love you. But somebody is listening to this: what are they doing tomorrow to take action on the things you've said in today's show? Well, gosh, Katie, watch me go. First of all, you should sign up for my newsletter, Room to Think. You'll find that at formantor.ca; that's my consulting practice, helping teams build world-class experimentation organizations. I'm writing to a new person on Room to Think. It's, again, my weekly-ish attempt at sharing three important things: one, great facilitation power-ups for meetings; two, something small and easy you can test simply in your market, like tomorrow, to get a signal; and three, how do we actually know this is working? What's the neuroscience, the behavioral science,
00:38:13
Speaker
the cognitive heuristics our customers are using to make those buying decisions today, and what's working? If we can explain all that, we're on to something. Maybe you won't make the same mistakes I did; you'll actually launch a winner. That's what they can start doing. Second, Katie: just get testing. Remove yourself from the experiment, remove your name from the idea, go first. Put that shaky, not-well-thought-out, rough idea on the wall first and see how your team takes to it. And then let them go.
Fostering a Culture of Trying and Learning
00:38:39
Speaker
Let them try. Give them a little bit of rope, a little bit of experimentation runway, to try and fail, and be gracious when they do. But always keep asking, keep that bar high: okay, that's great, we've learned something, but what are we doing next? What are we doing with that?
00:38:54
Speaker
That's what you can do next. Thank you so much for your time. This has been incredible. I can't wait for folks to see this. Thank you for being on Unite Voices. Thank you so much for having me. What an honor. It's been very nice.
00:39:06
Speaker
Thanks, Katie. Thank you. Thank you.