Introduction to Efficient Spend Podcast
00:00:00
Speaker
Welcome to the Efficient Spend podcast where we help marketers turn media spend into revenue.
Guest Introduction: John Lorenzini
00:00:05
Speaker
My guest today is John Lorenzini. John, thank you so much for being here. Thanks for having me. I really appreciate the invite and connecting on LinkedIn.
00:00:14
Speaker
Yeah, we're going to have a really fun conversation today. I know that you have a ton of experience with incrementality and lift testing.
John's Career Journey in Media and Tech
00:00:20
Speaker
I think it would be helpful for the audience just to give a brief background into your experience with optimizing media spend. Sure. Kind of all over the place. I was at media agencies, started out in print doing Bristol Myers Squibb ads.
00:00:36
Speaker
then hopped over to Mediacom where I worked on Dell, optimizing their search accounts, then became a digital analyst, then was at Zenith for General Mills doing all digital analytics, and then moved over to Big Tech.
Role at Lift Lab: Optimizing Media Spend
00:00:53
Speaker
I was at Google for four and a half years, mostly focused on food, beverage, and restaurants, and then eventually
00:00:58
Speaker
working on all reach-based products, so your GRPs, reach and frequency, those kinds of optimizations within Google, then Facebook, global CPG, and then Snapchat, retail, restaurants, travel, and energy. So kind of all over the map of Big Tech, and now I'm here at Lift Lab, VP of Marketing Science, helping clients optimize their media spend.
00:01:22
Speaker
Yeah, I'm excited to dig into Lift Lab. I'm really curious about this first, though.
Experiences in Smaller Companies
00:01:29
Speaker
You might be one of the few people I know of with a marketing science background at several of the big tech companies: Facebook, Google, and Snap, right? That's a pretty interesting combination. I don't know if folks hop around at those companies or not, but what was your experience across all of them?
00:01:49
Speaker
As you think about the lift testing and incrementality landscape, how do you think these folks, the big ad channels, are doing? What was really interesting through the career progression was that I always went smaller. I think some people work the other way around. They start at startups and then they go to big tech.
00:02:10
Speaker
But I started at Google, then Facebook, then Snap. And what I realized there is you have to wear a lot more hats as you go smaller. So you have to learn how to build the engine as opposed to just drive the car. So I think that, in terms of the growth I experienced with a machine that hasn't been built yet, I really understood the inner workings, as opposed to pushing a button and getting a well-optimized report.
Handling Incrementality in Ad Networks
00:02:40
Speaker
The smaller companies, and this is kind of why I jumped to startups, were where I wanted to learn more. And I think that you learn more at smaller, but it has to be scrappier and it has to be a little bit more, oh, you want to solve this problem? Okay, go for it. I have no resources for you. So I think the difference is you could do more at the big companies, but there's a lot more bureaucracy. There's a lot more silos that happen and hoops you've got to jump through just to get access to that data.
00:03:04
Speaker
and then sharing it out with the market.
00:03:11
Speaker
Sure. Being inside of those companies, do you feel like some are doing it better than others? In terms of, I guess, the big ad networks, who do you think have their minds wrapped around incrementality and lift testing?
Approaches to Incrementality Measurement
00:03:30
Speaker
Who's the top tier?
00:03:32
Speaker
Yeah, I think it comes down a little bit to the ideology that aligns with each one's product goals or what it's good at, right? So you saw early internet last-click attribution, and, you know, I don't want to go into a whole diatribe against last click, but it works well for lower funnel products. For upper funnel products, it's a little bit more about view-through attribution and all these other pieces.
00:03:56
Speaker
And in jumping through these companies, you kind of had to drink and undrink the Kool-Aid. And I think through that, I started realizing that incrementality-based measurement is a reflection of their ideological stance on where
00:04:13
Speaker
their products do well, but also what they think makes the most sense for the advertisers using their products, I guess. So it's kind of like, I would say they all do it pretty well. I would say Google with ghost ads is really great at showcasing the value of the ad unit and creative. Whereas, you know, if you opportunity-log a little bit earlier, before the algo ranking, it shows the value of the algorithm as well as the ad unit as well as the creative.
00:04:42
Speaker
So I don't know if it's absolute, like one is better than the other; it's more that they're telling you different things. And I think the concerning piece with that is, if one's doing frequentist and one's doing Bayesian, one's opportunity logging early, one's opportunity logging later, you know, you're getting very different answers to the same question, or you're getting the same answer to very different questions that you're asking.
00:05:04
Speaker
And I think the understanding under the hood of all these questions is really saying, like, this is what this lift is actually representing, as opposed to just, here's your lift value, run with this. Right, which leads us to Lift Lab.
Lift Lab's Experimentation Techniques
00:05:21
Speaker
And, you know, what is Lift Lab at a high level? And what are you responsible for there?
00:05:28
Speaker
Yeah. So, Lift Lab is a company that helps marketers become smarter by understanding the difference between their growth and their profitability.
00:05:38
Speaker
We have two major stacks. One is an experimentation stack, which uses geo experiments. And then the other is an agile mix model, which ingests their data and gives the holistic full picture. But as we know, anything that's correlative is not causal. And because of that, you can sometimes get some wacky results. And I think where Lift Lab is different is we don't try to average or just make the results look believable to the advertisers. We say, this is what the data is telling us. We both know it's wrong.
00:06:05
Speaker
Let's run a test and actually refine our assumptions with causal data to inform the model further so that we can say, hey, you don't have to trust this whole model yet. Let's pick out these pieces, run the tests, put it in there. And then we could both feel like we're not just averaging or putting the art in our hands, but rather in the client's hands.
00:06:26
Speaker
Sure. I like that you differentiate by calling it an agile mix model. Just to give a sense of the scale that Lift Lab is at, do you have any kind of indication of how much annual or monthly spend is being optimized or analyzed on the platform, and also what verticals you tend to skew into?
Clientele of Lift Lab
00:06:54
Speaker
Yeah. I mean, I could give you a list of some of the clients and I think you can infer from that. I, you know, I don't know.
00:07:00
Speaker
the exact number, so I don't want to be inaccurate. But we have TurboTax with Intuit, Skims, Pandora Jewelry, Sephora, Tory Burch, Express, Leslie's Pools. So we're kind of all over the map. And we have a lot of different clients. And those are just off the top of my head. We have quite a few more than that.
00:07:25
Speaker
So you put that all together. I mean, just the first client I mentioned is a huge amount of spend. So it's quite large. Yeah.
00:07:35
Speaker
Yeah, sure. And those you mentioned are some larger clients. Are there solutions for smaller startups, folks that are spending maybe 100K a month on a few channels? Does it make sense for somebody
Suitability of Lift Lab's Services
00:07:56
Speaker
like that? Or is it really more the enterprise clients?
00:07:59
Speaker
I would say it's somewhere in between. You know, we definitely want to make sure that the optimization decisions we're helping them make pay for the cost of Lift Lab. So as you get larger scales and larger clients, you know, a 5% improvement
00:08:14
Speaker
pays your Lift Lab bill three times over with your first test. So to us, it's a very easy value prop, which is saying, you know, if you're spending probably 5 million plus annually, which is mid-tier, we could do kind of a check to see whether this is the right fit or not, because sometimes clients are just too small, or sometimes clients are, you know, working through their growth strategy and they're not, you know, completely optimized.
00:08:44
Speaker
So, well, not completely optimized, but they're working towards optimization while making such big changes in their account. We measure the best play that's on the field. So if you switch everything up, then it's not going to have the historical data or information to say, how do you improve off of a very shaky base? So there's a couple of clients that were a little too soon, a little too small, and then eventually they come back around and become our clients.
00:09:06
Speaker
So it's definitely a timing thing on the growth. But 5 million plus in spend or revenue is usually a good rule of thumb to start. OK. Yeah, that's very helpful. And I wonder, too, of course, you kind of run the gamut. One of the things, as you scale a media mix and get more complex, is you're running multiple channels, multiple attribution methodologies.
00:09:33
Speaker
You kind of come across like, how are we measuring and setting the right acquisition goals for our media mix? A lot of folks might say, okay, we're going to try to optimize towards a blended CAC goal, and that's kind of what we're doing. Then we go into our weekly meetings. Oh, CAC is up. We have to reduce.
00:09:51
Speaker
I know we were chatting before talking about like, hey, is CAC to LTV the best framework to look at?
Critique of CAC to LTV Framework
00:10:00
Speaker
You know, when you look at clients and folks using Lift Lab and think about goal setting, do you have any kind of framework or principles that you think through that make more sense?
00:10:11
Speaker
Yeah, and I think it's different for every client. I think the CAC to lifetime value is really great for contracts like credit cards, banks, phone carriers, life insurance. All those pieces, you don't have to put more ads in front of them to purchase again. Maybe it'll reduce churn. Maybe it'll allow them to
00:10:33
Speaker
I don't know, upgrade, cross-sell, upsell, whatnot. But for the most part, it's that first purchase that they're looking at. So the CAC-to-lifetime-value ratio, I think, is really good for subscription-based businesses that don't change. However, there's been a lot of startups that I've seen applying CAC to lifetime value because investors like to see it, despite it not being the reality of how marketing works.
00:10:54
Speaker
You know, if someone has a company that sells a product, you can use Fader's buy-till-you-die model to say, is this person still alive? What's their average order value? How many orders do they make before they fall out of the funnel? But that still requires advertising.
00:11:10
Speaker
Your remarketing funnels and all these other pieces. You're only counting the costs of the initial acquisition, but it still requires investment to keep them alive. And I think it undervalues prospecting on customers that have already converted,
00:11:26
Speaker
which is already the more likely person to convert because they've already been through the process, they have experience with your product. And you're undervaluing the amount of advertising that you're putting to get your easiest people in the door again. So for me, the CAC to lifetime ratio is a good investor metric, good for subscriptions to say the overall health of the business, but it's not a very good marketing metric because of all the reasons why you undervalue that.
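As a rough, back-of-the-envelope sketch of the point being made here (all numbers invented, with years_alive standing in for the survival estimate a Fader-style buy-till-you-die model would produce), ignoring the post-acquisition media needed to keep customers alive inflates the LTV:CAC ratio:

```python
# Hypothetical illustration: a minimal LTV sketch where ignoring
# post-acquisition ad spend (remarketing, retention) inflates LTV:CAC.
# Every number below is made up for the example.

aov = 60.0               # average order value
margin = 0.40            # contribution margin per order
orders_per_year = 4      # expected repeat orders while the customer is "alive"
years_alive = 2.5        # stand-in for a buy-till-you-die survival estimate
cac = 120.0              # acquisition cost from prospecting media

gross_ltv = aov * margin * orders_per_year * years_alive

# The cost CAC:LTV usually forgets: media needed to keep the customer buying.
remarketing_per_year = 25.0
net_ltv = gross_ltv - remarketing_per_year * years_alive

print(f"naive LTV:CAC = {gross_ltv / cac:.2f}")  # looks great to investors
print(f"net   LTV:CAC = {net_ltv / cac:.2f}")    # closer to marketing reality
```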
00:11:53
Speaker
Right. And I don't want to put words in your mouth, but from my understanding, you're more in kind of the marginal iROAS, iROI kind
Understanding Incremental ROAS
00:12:02
Speaker
of camp. Can you explain a little bit about that? Sure. So iROAS, that's incremental return on ad spend. And that's what more happens when you spend on advertising. So the counterfactual is: I spend nothing, here's my revenue; I spend something, here's the extra. So that's the incremental.
00:12:20
Speaker
Now, with experiments on the publisher side, I saw pretty much across the board that the first couple of dollars you spend are really efficient. They do really well on a publisher. But not those last couple of dollars, because publishers have a propensity score, a likelihood that you're going to convert or purchase a good.
00:12:38
Speaker
Those in the beginning are really, really likely. And then as you continue to scale and increase the size of your campaign, you get weaker and weaker propensity scores. So what happens is you end up with this diminishing returns curve, where your first couple of dollars are super efficient and your last ones are not. So iROAS, to me, is an average metric. It is: what did your average dollar do in terms of driving the average amount of incremental revenue?
00:13:03
Speaker
So this is why I prefer marginal iROAS as opposed to iROAS or any sort of plain ROAS. And a really good example that I use to explain it to advertisers is: if I'm a publisher and you give me a dollar and I give you $100 back, that's an iROAS of 100 and a marginal iROAS of 100. Now you give me a second dollar and I give you nothing back.
00:13:25
Speaker
Your iROAS is 50, but your marginal is zero. So why should you keep spending bad money until the average goes below your KPI? You should stop as soon as it's not working for you. And that's really what we try to measure, which is: when does your spend become inefficient?
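John's dollar-for-dollar example works out like this; a minimal sketch using his numbers:

```python
# Average iROAS vs. marginal iROAS on a two-dollar example.
# Dollar 1 returns $100 of incremental revenue; dollar 2 returns $0.

spend = [0, 1, 2]                    # cumulative dollars given to the publisher
incremental_revenue = [0, 100, 100]  # cumulative incremental revenue observed

for i in range(1, len(spend)):
    avg_iroas = incremental_revenue[i] / spend[i]
    marginal_iroas = (incremental_revenue[i] - incremental_revenue[i - 1]) / (
        spend[i] - spend[i - 1]
    )
    print(f"at ${spend[i]} spend: average iROAS = {avg_iroas:.0f}, "
          f"marginal iROAS = {marginal_iroas:.0f}")

# at $1 spend: average iROAS = 100, marginal iROAS = 100
# at $2 spend: average iROAS = 50,  marginal iROAS = 0
```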
00:13:43
Speaker
Let me ask, so it makes sense to me with folks like e-comm, where it's like, okay, I spent this thing on Facebook, and then, you know, someone had an average order value of X, and I can calculate the iROAS, and you can calculate the marginal as well. But for folks with different revenue models, right? You know, maybe
00:14:13
Speaker
a subscription model or B2B where you might not be seeing that revenue for six months, 12 months down the line. How do you think about utilizing a metric like marginal IROAS with some of those brands where it's not as clear when you're getting that revenue?
00:14:30
Speaker
Yeah. And that's an excellent question that we get asked a lot. A high-value, high-consideration purchase that happens infrequently: there's a lot of things that move you towards that decision, but ultimately you shouldn't be giving credit just to the spend that happens at the end there. So there's a couple of things that we look at. One is ad stock effects. So what you could do is run an autocorrelation function or a cross-correlation function and take a look to see
00:14:58
Speaker
the decay, right? If you do a big spike, how much does the day before investment impact the conversions the next day and the next day and the next day? So they do this in television. Procter & Gamble is really good at doing this pulsing where they pulse on, they let it decay, they pulse, they let it decay.
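A minimal sketch of the lagged-correlation idea John describes, run on simulated data with a known geometric adstock decay so the output is interpretable (the decay rate, response scale, and noise are all invented):

```python
import numpy as np

# How strongly does spend on day t relate to conversions on day t+lag?
# Simulate daily spend, apply a geometric adstock with decay 0.6, and
# measure the lagged correlations; their fall-off traces the decay.

rng = np.random.default_rng(0)
days = 365
spend = rng.gamma(shape=2.0, scale=500.0, size=days)  # pulsy daily spend

decay = 0.6
adstock = np.zeros(days)
for t in range(days):
    adstock[t] = spend[t] + (decay * adstock[t - 1] if t > 0 else 0.0)
conversions = 0.02 * adstock + rng.normal(0, 5, size=days)

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t+lag]."""
    return np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]

for lag in range(5):
    print(f"lag {lag}: corr = {lagged_corr(spend, conversions, lag):.2f}")
# Correlations shrink as the lag grows; the shape of that decay is the
# ad stock effect you would feed into the model.
```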
00:15:12
Speaker
And that's one way, and that's more the ad effectiveness going down over time. The other piece is the consideration of zooming out and saying, what is your investment over longer or wider periods of time? Our agile mix models run daily. So with daily spend we get daily results, and we see a lot of day-of-week effects, saying, oh, more people happen to be in market.
00:15:35
Speaker
You're not necessarily using Canva, another one of our clients, on weekends or purchasing because you don't want to suddenly do a PowerPoint or a Canva presentation on a weekend. Same thing with tax season, with your pools. It's all very seasonal and very heavied up where
00:15:52
Speaker
how early do you have to spend before that considered purchase? I guess this is a long-winded way of saying it really depends on the business unit, and it really depends on what their seasonal trends are, how they're qualified, and helping them make those decisions of when do you make your investments heavy and when do you make them lighter, when do you make them super targeted and when do you make them super broad. That's the feedback that we get as we do those quarter-over-quarter remodels or our weekly re-scorings.
00:16:22
Speaker
Yeah, one of the things that I've been thinking about on the paid team is that we want to optimize towards metrics that we can directly impact, and sometimes the lower funnel metrics
00:16:38
Speaker
are more heavily impacted by things like product, life cycle.
Optimizing Metrics vs. Product Lifecycle
00:16:42
Speaker
If we optimize towards those metrics on a weekly basis and there's a lot of volatility, it could be that there was a product bug or something that happened with return customers.
00:16:56
Speaker
While the ideal is that we're trying to say, hey, let's get as many customers as possible, it might make sense for the paid team to say, well, we're really trying to optimize towards like net new email submits because we know that we can impact that and then let LifeCycle do the job. Any thoughts on that type of approach?
00:17:17
Speaker
Yeah, no, I do like having campaigns that are targeted to a specific business unit or goal, or being narrow with your audiences. The thing, though, is the targeting is not always as narrow as your conversions. So you're going to end up with conversions on other types of customers and other pieces of your business units. I remember when I was on Dell, Katie on my team, she plugged in all
00:17:45
Speaker
the scanner codes for a mouse, like a consumer mouse. And someone came in on the consumer mouse and then bought, I think, a $300,000 or something server solution for the company. It was an IT person reordering. So it's kind of like, okay, you're marketing a mouse to consumers, but then they end up buying a server solution, which is a completely different business unit of Dell. So this was something where it's like,
00:18:08
Speaker
you can put your advertising out there, but you can't tell them to buy in retail stores or to buy online, you know, whether they're a new customer or an existing customer, and maybe you might heavy up with them. And we see sometimes that new-customer promotions for certain subscription models might, you know, pop more. You might have more of a lift or more of
00:18:27
Speaker
an impact when you do a pricing change for a new customer. Thrive Market, another one of our customers, would probably see that kind of new-versus-existing effect when they're targeting the new. But there's always halos to everything. So why wouldn't you count the halo if it helps your business?
00:18:44
Speaker
Right, right, right. Yeah, that makes sense. Cool.
Attribution Models Challenges
00:18:49
Speaker
I want to talk a little bit about the attribution landscape today. I know that you are in the top-down Bayesian camp, right?
00:19:01
Speaker
There's a framework that you had shared; you shared a deck with me from Lift Lab before the call, talking about different attribution methodologies being kind of top-down versus bottom-up. And I hadn't seen that framework before, and it
00:19:18
Speaker
kind of made sense to me, you know, even thinking about something like last click. It's like, yeah, it's bottom-up, and if you sum all of the attributed conversions, it's not actually going to add up to how many conversions you got. And there's a lot of double counting there. Do you want to kind of talk through that at a high level?
00:19:39
Speaker
Sure, yeah. So bottom up is generally click-based, user-based, super granular. You take those as your fact, and then you try to project up. And with these projections, when we had 100% match rate or a very high match rate, those projections up were very good. But with iOS 14.5,
00:20:02
Speaker
GDPR, CCPA, IP relay, all these things coming out, the match rates from these publishers are getting smaller and smaller. And because of that, your projections have to cover a larger and larger percentage of your sample.
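A toy sketch of why shrinking match rates make those projections fragile (all numbers invented): the same matched count gets divided by an assumed match rate, so any error in that assumption scales the whole answer.

```python
# Bottom-up projection: total conversions estimated as matched / match_rate.
# The lower the true match rate, the bigger the multiplier, and the more a
# wrong assumption distorts the projection.

matched_conversions = 400          # what the platform can still observe
true_match_rate = 0.40             # unknown in practice
assumed_match_rates = [0.50, 0.40, 0.30]

true_total = matched_conversions / true_match_rate  # 1,000 actual conversions
for r in assumed_match_rates:
    projected = matched_conversions / r
    print(f"assumed match rate {r:.0%}: projected {projected:.0f} "
          f"({projected / true_total - 1:+.0%} vs truth)")
```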
00:20:16
Speaker
So with that, you're starting with a fact that's a much smaller percentage of your foundation and then trying to extrapolate up to what you think it's actually doing for the business. Now you have a shakier foundation and higher levels of assumptions: you know, Android is doing this, we know the ratio of Android to Apple is that,
00:20:35
Speaker
so therefore we think Apple's doing this, or all those kinds of fuzzy-math-logic things where you're no longer trying to understand why something happened, and more what happened, through your forecasting. So when you start bottom-up, you sometimes get really wonky numbers, because between, you know, your truth and what you're saying,
00:20:56
Speaker
there's a lot of assumptions in between. Now, a top-down approach is, let's start with the revenue and the spend. This is what happened. This is what the shareholders care about. This is what your stock price is. Well, it's not based off your stock price. But you understand. And now it's decomposing. So it's splitting it up and saying, we have the size of the pie. Now let's figure out each wedge. And because of that,
00:21:18
Speaker
there are fewer assumptions in what you're projecting down. And if you're doing a multi-layered or a hierarchical model, where you say, okay, let's get it right at the channel level, let's get it right at the tactic level, let's go down to the campaign level, your bands for error become a lot smaller, right? Because if something's out of whack, it's only going to be out of whack up to the level above it.
00:21:39
Speaker
And that's kind of where it limits the amount of error that you can make overall with bad data. Now, clients come to us with a variety of good and bad data. And, you know, if you have flat, even spend, it tells us nothing. We don't know what your baseline is or what that spend is doing for you.
00:21:58
Speaker
So you have to create some better data, because, you know, I'm of the opinion that better data is better than any model. You could throw any model at great data and get roughly the same results. And if you have bad data, all the models will be off, and they'll require, like, a PhD making a whole bunch of assumptions, and those wouldn't be any better than the assumptions of our clients, who have the market context. So this is kind of where,
00:22:23
Speaker
if you're bringing in bad data, or if you have data that doesn't tell us much, it's, you know, how do we create better data? But I think I've lost the thread of how I got here.
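A rough sketch of the top-down, hierarchical idea (not Lift Lab's actual model): fit channel-level contributions against total revenue first, then split one channel's fitted contribution across its tactics so the pieces always sum back to the level above, which is what bounds the error. Data and functional form here are invented:

```python
import numpy as np

# Level 1: decompose total revenue into baseline + channel contributions.
# Level 2: split one channel's contribution across tactics, constrained to
# sum back to the channel total, so tactic-level error can't leak upward.

rng = np.random.default_rng(1)
weeks = 104
search = rng.gamma(2, 1000, weeks)
social = rng.gamma(2, 800, weeks)
baseline = 50_000
revenue = baseline + 3.0 * search + 1.5 * social + rng.normal(0, 2000, weeks)

# Level 1: ordinary least squares on channel-level spend.
X = np.column_stack([np.ones(weeks), search, social])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
social_contrib = coef[2] * social          # fitted weekly social contribution

# Level 2: split social's contribution across two hypothetical tactics
# in proportion to their spend share.
prospecting = 0.7 * social
remarketing = 0.3 * social
shares = np.column_stack([prospecting, remarketing])
shares = shares / shares.sum(axis=1, keepdims=True)
tactic_contrib = shares * social_contrib[:, None]  # sums back to the channel

print(f"channel coefs (baseline, search, social): {np.round(coef, 2)}")
print(f"tactic pieces sum to channel contribution: "
      f"{np.allclose(tactic_contrib.sum(axis=1), social_contrib)}")
```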
00:22:33
Speaker
Right. Yeah. And I think there's a lot of chatter around the negatives of last click, but for certain analyses it is the best source that you have. So, for example, if you're running a very clean A/B test on creative
00:22:56
Speaker
in Facebook, you're going to use the pixel to determine which did better. And I guess maybe you might be using click-through rate or something else. But there are places where you're using that. And there are times where maybe media mix modeling is too high-level and you don't want to use that.
00:23:20
Speaker
I like the approach and it's very interesting how you talk about this kind of combination of running lift tests to inform your model and being kind of cyclical in that way.
00:23:35
Speaker
Can you kind of go into a little bit more detail on tactically what that looks like? So if a client's onboarding to Lift Lab and they're like, hey, listen, we just want to test a bunch of things, right? Like we want to understand where we're wasting spend, where we can optimize. Can you help us create a budget testing framework to do that? How do you kind of approach that?
Designing Testing Frameworks
00:23:59
Speaker
So I think the first thing is like with last click, I like it for one thing, which is determining how qualified an audience is. That's it. You know, it doesn't tell you how well the ads work. It doesn't tell you anything except for how qualified is this audience. If you're going broad mass reach, probably less qualified, lower click-through rate. If you're doing your branded, you know, exact match keywords on Google search,
00:24:22
Speaker
probably highly, highly qualified. So last click is great for that. Now, if you have equally qualified audiences, which you can get on platform because they do their randomization through ghost ads or through intent-to-treat, you can use, you know, click-through rates, provided that the A and B groups are matched pretty well, to make those optimization decisions. Now,
00:24:44
Speaker
with Bayesian models and geo tests, they're a little bit more of a sledgehammer, in a good way, right? Because you want to know what impacts your revenue the most, so you want to move this to be able to see what happens. And you want to do so in a fair and comparable way.
00:24:59
Speaker
With these different platforms testing in different ways, as I mentioned earlier, the outcomes of a lift test are apples and oranges. If I have an incrementality test on Google and an incrementality test on Facebook, the methodologies that get to that lift value are vastly different. So you can't just say, I got higher lift on this than that, therefore I should shift my budget. So the first thing is consistency between channels at a macro level.
00:25:23
Speaker
And this is where agile mix model, marketing mix models, all these things are kind of those big sledgehammer type measurements to say, if I move this, I don't care about all the details, what happens? Now, if you do care about the details, because you're working on a specific publisher, you're a digital marketer, where you're focusing solely on social, solely on search, whatnot, then you care all about the details. And that's how do you optimize once you get your budget for your
00:25:49
Speaker
book, how do you make it as efficient as possible? Now, with Lift Lab, we're assuming that your play on the field is the best one you have, right? So we draw the curve, we figure out the diminishing returns curve, and we say, what is the efficiency of your investment given all your settings?
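A minimal sketch of drawing that diminishing-returns curve and finding where marginal iROAS crosses a breakeven KPI; the log response form and the test readings are assumptions for illustration, not Lift Lab's methodology:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(spend, a, b):
    """Concave saturating response: incremental revenue as a function of spend."""
    return a * np.log1p(b * spend)

# Pretend these readings came from geo tests at a few spend levels.
test_spend = np.array([0, 10_000, 25_000, 50_000, 100_000], dtype=float)
test_revenue = np.array([0, 52_000, 90_000, 125_000, 160_000], dtype=float)

(a, b), _ = curve_fit(response, test_spend, test_revenue, p0=[50_000, 1e-4])

# Marginal iROAS is the derivative: d/ds of a*log(1 + b*s) = a*b / (1 + b*s).
# Solve a*b / (1 + b*s) = target for the spend where the next dollar breaks even.
target = 1.0
s_star = (a * b / target - 1) / b

for s in [10_000, 50_000, s_star]:
    marginal = a * b / (1 + b * s)
    print(f"spend {s:>10,.0f}: marginal iROAS = {marginal:.2f}")
```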
00:26:06
Speaker
Now, you're going to want to optimize those settings and make them better. And a lot of our clients figure out ways on platform to say, hey, since all the measurement on this one platform is even and the audience could be even, we can test super granularly and use identity level signals.
00:26:21
Speaker
because their match rates are the same in both groups, the A/B groups, the control and exposed groups, all that, which is great, because we're not comparing it to someone else and the assumptions for both are the same. So for comparative things on platform and below, I do love platform testing, because, you know, identity-level stuff will allow you to have larger samples than geos.
00:26:43
Speaker
And because of that, you can have more resolution. So I think really my cutoff is a tactic or above, and a tactic could be like Facebook prospecting versus Facebook remarketing. Those are fundamentally different audiences. And because of that, that's the level where marketing mix models and above sing.
00:27:06
Speaker
Below that, we do have campaign-level reporting, and with enough historical data, enough variance, et cetera, we do get reads on that. But if you want quick, you know, creative-level tests, you should do that on platform, because running a geo test of creative A versus creative B seems to be a little bit of overkill for what you're trying to solve for. And you'd rather have rapid iterations. Right. At the same time, I think
Optimizing Media Mix and Funnel Investment
00:27:33
Speaker
making those big swings is also going to be what makes the big difference. The larger the change you can make that has a positive result, the larger the positive result for your business. As you think about an optimal media mix,
00:27:54
Speaker
and the way that I think about this is that these individual areas have diminishing returns. Like you said, a Facebook retargeting audience is on a diminishing returns curve, Facebook prospecting is too, then you have paid search, you have upper funnel media. What do you think the composition of a healthy media mix looks like? And I know that runs the gamut, but I would just love your hot take on this, you know.
00:28:25
Speaker
Yeah, I mean, you're basically priming for the typical analyst answer, which is it depends. I think that every circumstance is different, every product is different. So maybe like sub-categorizing things, you know, if you're doing pricing and promotional, not even then, every time it's like, it's really custom to the business and I'm not going to pretend that I know a business more than my clients do.
00:28:55
Speaker
For me, I don't want to tell a marketer how they should market because they know their audiences, they know what's efficient, they know upper versus lower funnel. And what we do is provide evidence to that. What I've noticed is pretty common is they over-invest in lower funnel activities because it's the most trackable, especially with all the tracking concerns now. Because it's easy to measure, it's a streetlight problem. Are you familiar with the streetlight problem?
00:29:24
Speaker
There's a drunk guy on his hands and knees, and he's looking for his keys underneath the streetlight. And a cop comes up to him and goes, hey, what are you doing? He's like, I'm looking for my keys. And then the cop goes, well, how do you know it's under the streetlight? He's like, I don't know that it's under the streetlight. This is just the only place I can see.
00:29:41
Speaker
So the measurement of lower funnel is the streetlight. It's the spot that they can see. So they over-optimize into the places they can see and overlook the rest, which means that they undervalue the upper funnel, the
00:30:00
Speaker
less-easy-to-track things. And this is something that we typically see when clients onboard with us: a shift of credit from the lower funnel to the upper funnel, because they know intuitively as marketers, when I turn off Facebook prospecting, all of a sudden my bottom-of-funnel remarketing drops. And the marketing team knows that well. But then you look at the finance team, and they're like, well, this is super efficient and this is super inefficient, so I need to shift more here. And it's a much more complicated and nuanced discussion, which is why
00:30:30
Speaker
a lot of the tools that we're coming out with are about forecasting, being able to say, you're not going to hit your quota or your goals based off of this level of spend, or at any level of spend; your goals are just too crazy. Diminishing returns, like, we're never going to grow like that. No marketing continues to grow like that indefinitely.
00:30:49
Speaker
So a lot of what we're doing is helping CMOs speak CFO. And I think that that's kind of a really helpful thing, where we've heard from a couple of our clients that our work goes into the boardrooms and says, yeah, we're going to have to adjust these, because this is not going to work. And I think that that's kind of where this is going, which is, how do we educate the finance team on the marketing team's world, which they know as their reality?
00:31:14
Speaker
Yeah, let me ask more granularly then, if you take a traditional media mix and you say, well, we're spending the majority of our money on performance and this is what we can measure the best. Oh, we actually think that we should be investing more on upper funnel TV or this offline channel, what have you.
00:31:43
Speaker
If you run an incrementality test on that, it may look like the marginal incremental ROI is lower, because with increasing spend in TV,
00:32:01
Speaker
if you only look at it in a one-week period or a shorter timeframe, you're not going to see that it's actually more impactful. You have to look farther out. So when you think about training the CMO to think like a CFO, how do you think about the patience required to invest in these things that have a longer time horizon?
Balancing Brand Equity and Promotions
00:32:25
Speaker
That's a great question. And I was talking internally with a couple of colleagues about this, about the prisoner's dilemma of CMOs. And this kind of comes to brand equity versus promotions. CFOs love promotions. And I'm talking about generalizations. If there's any CFOs,
00:32:44
Speaker
I know not all of you; especially if you're the CFO of Apple or Nike, you don't do very many discounts. But there's a cost to overdosing on promotions. My favorite example: when I was on food and beverage, Olive Garden was one of my clients. And I'd categorize their campaigns into two: the soup, salad, and breadsticks $5.99 lunch special, or the "when you're here, you're family." Now, I'm Italian.
00:33:14
Speaker
Olive Garden Italian also, but, you know, say what you will. So with the $5.99, it lowers the price, it makes a spike in revenue, and the CFOs see that it works. You can take that to the bank. The problem is, if you run too many promotions, you're now eroding your brand equity, because why would I pay $11 for the soup, salad, and breadsticks when $5.99 has been drilled into my head for so long?
00:33:38
Speaker
So this is where you have to strike that balance of the long-term brand equity building versus the short-term promotional driven revenue driving kind of piece. And the prisoner's dilemma is if a competitor brand is lowering their prices, you all of a sudden get less sales. But if you both don't lower your prices, you can maintain that brand equity.
00:33:59
Speaker
If you both lower your prices, now you're competing and now the ceiling of what you can charge from a margin is far too low. So this is something that, you know, CMOs and CFOs, I think, have these conversations and they're aware of this, but quantifying it in a way that's useful is something that we definitely support and kind of want to make a little bit more aware of.
00:34:26
Speaker
Yeah, so without going too far into it, what we'll do is we'll have promotional groups and flag individual days, and then we'll look at the error in our model and say: these days had extra, relative to what we projected from the investments and pieces there, and that extra error is due to the promotion, because you flagged that day. And if you run the same promotion several days, you can get an average impact of that promotion.
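A simplified sketch of that residual-based promo read (column names and numbers here are hypothetical, not Lift Lab's implementation):

```python
import pandas as pd

# Compare model error on flagged promo days vs. clean days; the average
# extra residual on promo days is a rough estimate of the promotion's lift.

df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=10),
    "actual_revenue":    [100, 98, 103, 140, 145, 99, 101, 138, 97, 102],
    "predicted_revenue": [101, 99, 102, 104, 103, 100, 100, 101, 98, 101],
    "promo_flag":        [0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
})

df["residual"] = df["actual_revenue"] - df["predicted_revenue"]

# Average residual on promo days, net of the model's baseline error elsewhere.
promo_lift = (df.loc[df.promo_flag == 1, "residual"].mean()
              - df.loc[df.promo_flag == 0, "residual"].mean())
print(f"estimated average promo impact: {promo_lift:.1f} per promo day")
```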
00:34:54
Speaker
Now what's really interesting about that is you could see how some promotions, the same promotion might perform differently over different days. So that's already creating a level of insight to advertisers to say, Hey, I think that this did better, you know, because of X, Y, and Z. What do you guys think? And these conversations really help. So we group those promotions and then we say, okay, based off of these outcomes and these errors and whatnot, um,
00:35:21
Speaker
do we think that we're helping or are we cannibalizing what we're doing, right?
Impact of Promotions on Business Performance
00:35:25
Speaker
So this is kind of where we have those discussions and we look at average order values. Well, they look at average order values, but we call out the specific times and what it did to their aggregate business, because I think a lot of people get, you know, into the details and into their silo, where it's like, what happened to our total revenue when this happened?
00:35:41
Speaker
And was this promotion impactful? Yes, but for 2% of the business, it was 50% off. So that was 1% of your business. So you have a ceiling of how well this promotion can help your business, as opposed to 50% off site wide, very different promotion. So I think there's a lot that we learn through the promotional piece that helps educate how that impacts our bottom funnel conversions.
00:36:07
Speaker
Cool. If you take a step back and you look at the kind of paid media landscape more broadly,
00:36:17
Speaker
When I joined my current company in June 2020, a lot of brands were scaling, VC money, widely available. Everyone's talking about scale. Then COVID hit, and then it became about CAC reduction at the same time as we're losing more attribution.
00:36:40
Speaker
But folks are much, much more performance focused at this point. And I think it's also why things like media mix modeling are now more popularized and in vogue, I would say.
00:36:56
Speaker
However, if you talk about the prisoner's dilemma and you play that out and you see more and more brands, you know, playing this game of like, we need to optimize to performance. We have to reduce price, like these type of things to compete. And then you're missing out on brand. It feels like
00:37:15
Speaker
the brands that invest more in the upper funnel, into the "when you're here, you're family," might actually do well. You mentioned that this is like a generalization you see within your client base, the fact that folks are over-indexed on performance. What are your thoughts on that? And do you have any tactics, like specific examples of folks that had gone the other way and tested more into these other areas and saw success from that?
00:37:42
Speaker
Yeah, and I would say that's more of an onboarding, like, I-was-last-click-but-now-I'm-reformed kind of scenario. Not all my clients; some of my clients come in, you know, very well optimized in terms of bottom versus upper funnel, and we provide optimizations within that.
00:38:00
Speaker
I'm going to say not all clients. But yeah, we've had some clients run tests on upper funnel activities, specifically new audiences, new reach-driving vehicles that maybe they haven't explored before. And what we found, which was very surprising, was that very non-endemic publishers did really, really well for some clients, where they ended up quadrupling spend and blowing out quota.
00:38:28
Speaker
That was one that we had there. And it was, I wasn't expecting those results, but the test showed that. Then we plugged it in the model.
00:38:38
Speaker
refined our estimates. We then said, you need to spend up by this much. And they're like, that much? I'm like, I mean, yeah, but let's work our way up there. Let's not jump immediately there. But they saw it proven out. And now that's like their new normal, their new BAU is four times higher than what they had before. So it's those kind of things that surprised me that stick out. And
00:39:02
Speaker
More often than not, it's the undervaluing of upper funnel, almost across the board, because it's more trackable at the bottom funnel.
00:39:11
Speaker
Yeah, and it makes sense too. If we play it out again, if all of the competitors are basically on Facebook and Google, and that's where the majority of their spend are, and you test these maybe smaller publishers, you're gonna see lower CPMs, which is gonna lead to more impressions, which is gonna probably lead to more incrementality. I have thoughts on some of the networks.
Benefits of Goal-Based Bidding Strategies
00:39:33
Speaker
When you see really low CPMs, what's the quality of the impression? It depends: if it's an app network, certain ones I've seen,
00:39:39
Speaker
certain programmatic vendors, just do some odd things. But I think it's kind of similar with the upper funnel. If you're going into these areas that your competitors are not, you're going to be finding wins from that.
00:39:53
Speaker
Yeah, what I find most interesting is, when you're doing incrementality testing and you're using a goal-based bid, a GBB, of purchase, your pixel sits alongside everyone else's pixels at a publisher. So now you're just raising the cost of that same individual person across the board. And the ones that come out most incremental are the ones that got the most dollars thrown at them.
00:40:18
Speaker
The one who happened to get the last impression before the last click is the one that gets attributed the credit, 100%, if you're doing last click. So there's an impression-share game that gets played over a finite group of people, based off the pixel firing for purchase. And maybe they'll do some lookalikes to expand the audience. And that's unique to a platform, which is great. But what I found was really interesting
00:40:43
Speaker
through some of the tests is the goal-based bids that no other publisher has access to are the ones that tend to be more incremental or like cheaper and incremental, a good deal because they have different surfaces. So what I mean by that is if you're watching an ad on YouTube, the surface is your YouTube engagement.
00:41:03
Speaker
If you're, and it could be a like, a comment, a share, a subscribe, those are different signals that show intent of a given category, which then can be used. So if you're, you know, taking action on an ad, lingering longer, having a swipe up function for more information, those kinds of actions provide signal to the algorithms that this person is more interested. And those signals are on publisher platform, as opposed to on the
00:41:32
Speaker
on the client side. So because of that, those differences create larger reach when used in aggregate across all the different platforms. So I always like to look at goal-based bids that have different signal sources or different types of intent to qualify people further and move them down funnel. So that's kind of like one thing that I have noticed is an undervaluing of
00:41:59
Speaker
There are vanity executions. Remember when Facebook was cost-per-like and things like that? I don't agree with that, because then you end up with just likey people. But I think there are things where, if they're expressing a level of intent, targeting the people that do that could further qualify them for further down funnel. And I've seen some of those smaller publishers have those unique goal-based bids and really good performance, which surprised all of us too.
00:42:29
Speaker
So just to play that back to make sure that I'm understanding you properly, what you're saying is to bid to something less obvious or maybe more upper funnel and doing that will lead to more incrementality because you're not fighting the competitors that are all bidding for purchase. Is that kind of what you're saying? When the pixel fires, everyone fights for that guy or that gal, right? If you are lingering on an ad, only the publisher knows that, that you lingered on that.
00:42:59
Speaker
So those kinds of signals, to me, are unique to the publisher. So if you can, use a percentage of your spend there: don't put all your spend in bottom funnel, don't put all your spend in impression-based. I have my opinions on impression-based. But those middle ones, you know, are providing you, yes, clicky people, yes, likey people, yes, whatever kind of outcome, but it's also a matter of providing signal to the algorithm through the surfaces that they have.
00:43:28
Speaker
You know, if you're going mass reach anyway, you might as well capture some other unique audiences instead of just having a 15 frequency on purchase. Maybe have, you know, a two frequency on a clicky person or a likey person, because if they're engaging with it longer, that's a more important, higher-value impression.
00:43:46
Speaker
For sure, and you can still measure the downstream effect of that too. Yeah, exactly. Yeah, that's one of my big thoughts, just that there's a difference between, hey, we want to bid on this specific event, versus, here's the business goal we're optimizing towards. You could bid towards, like, a click, but you can still say, hey, it's going to drive a lot of leads. So that's really interesting. Cool.
Efficient vs. Inefficient Ad Spends
00:44:12
Speaker
I know we have a couple of minutes left,
00:44:14
Speaker
I ask kind of all my podcast guests this question. This is the Efficient Spend podcast. In your experience, what is the most efficient spend that you've done in paid ads and the most inefficient?
00:44:28
Speaker
Uh, the most efficient spend was probably that mouse click that Katie uploaded. I mean, that was like an 11-cent click or something. Now, is it the click that did that? That's the last-click attribution speaking. He probably was already pre-qualified for that. But in terms of most efficient spend,
00:44:55
Speaker
I would say we're seeing some interesting results panning out for some B2B leveraging LinkedIn. I find that to be pretty interesting, because if you're doing something targeting another business, that's kind of the center of your bullseye. And I think it's a little underdeveloped in terms of the features, the tracking, and all those, relative to your Googles and Facebooks, but that's, you know, understood. But the
00:45:24
Speaker
qualification of the audience and how you can tailor the ads to that, like if you build it on your side and then upload it and target those specific people, I think is really undervalued. So I think that would probably be one of the more efficient spends. My least efficient spend was when Google Consumer Surveys was coming out. I wanted to settle a bar bet with another analyst, Sarah. She thought that Beyonce was a better dancer than Michael Jackson.
00:45:54
Speaker
I disagreed with her highly. So we ran a Google Consumer Survey, because we had the credits to run it. And it came back, I think it was like 90-10. But the Google Consumer Surveys people were like, you're wasting our money. And I'm like, no, I'm going to use it in pitches. So when we went down to Frito-Lay, I was explaining the product, being like, oh, see, more women in Houston think Beyonce is a better dancer than Michael Jackson.
00:46:20
Speaker
And they ended up signing up for a contract. So I got a pass on that one. But that was definitely my least efficient spend, which was publisher-side Google Consumer Surveys. And Sarah still owes me two Michael Jackson karaoke songs for losing, and she hasn't paid up yet.
00:46:37
Speaker
Sarah, get on that. Yeah, you gotta love added value. I love a good added value. Anytime, like any other reps, TikTok, Facebook, whatever, they're like, hey, we have this thing and we'll give you money for it. I'm like, I'm listening. So cool. Thank you so much, John. And finally, where can people find you?
Connecting with John Online
00:46:58
Speaker
Find me on LinkedIn: Jonathan Lorenzini, John Lorenzini. I have a very Googleable name, so you can pretty much find me anywhere. And tomorrow I host Beers and Data. Well, this is going to come out a month later, but it's the first Tuesday of every month in Manhattan.
00:47:13
Speaker
I have a meetup group that has around 15,000 members. It's called Advertising and Marketing Analysts. I've been doing it for like eight years now, and it's grown quite a bit. So, Beers and Data, first Tuesday of the month, from six to nine, generally at Mustang Harry's. Otherwise, online, you can reach me. Just Google my name. Cool. Thank you so much, Sean. Awesome. Thank you very much. Appreciate the time and the conversation.