Oh hi there, Haveaclue! It’s me, Barry. If this is your first newsletter from me, WELCOME!!! Sorry for the ~3 month hiatus. I hope you’re doing well and had a killer Q4!

I’ve been buried with work and fatherhood, and have found I don’t always have time to manage a weekly newsletter with the quality I want to put out. So, I’m going to keep using this for sharing some longer form thoughts I think you’ll enjoy (when I have them).

If you read my last newsletter, you might remember I mentioned spending a bunch of time with my friend and badass TikTok/YouTube star Dara Denney recording a little (~2 hour) podcast thing in studio. That thing is now live and you should definitely watch it right here:

Btw get Dara’s incredible Performance Creative Master Course here and use code UGLYADS (duh) for $100 off! After you order through my link, please reply “DARA” to this email to let me know and I’ll send you a free Make Ugly Ads hatt of your choice.

Ok, let’s get into today’s topic:

Cost Caps vs. Lowest Cost: The Ultimate Media Buying Showdown

I came across this interesting post about lowest cost vs bid/cost caps from Olivia Kory of Haus and I had to respond:

I started writing, and kept finding more and more caveats and nuance that I had to dig into more. I suddenly found what I was saying was too long and too nuanced for Twitter (I really don’t want to call it X), so here we are.

As a guy who famously loves to wear many (ugly) caps, you’d maybe think I’m more of a cost/bid cap kind of guy, but if you follow me on Twitter, you probably already know I tend to prefer the lowest cost/highest volume side of things.

ChatGPT made this on my first prompt

For the sake of simplicity, for the rest of this newsletter, I’m going to try to refer to lowest cost (now called “highest volume”) as “lowest cost” and bid/cost caps (or “cost per result goal”) as “caps” even though bid caps and cost per result goal are different. (We won’t be talking about ROAS goal here, sorry, maybe another time, buuut probably not)

Before We Dig In

Everything you’re about to read is based on my 15+ years of experience with buying Facebook ads and mostly from my last ~5 years in the DTC ecom space working with a wide variety of brands in a variety of capacities.

Much of what I like to talk about is theoretical or philosophical and would be difficult to prove with (unbiased) data. I also acknowledge that while I try to be unbiased and impartial, I am human and have my biases from my experiences and these may be very different from your experiences.

We all fall into social bubbles (especially on Twitter) and it’s easy to get into a cycle of self-fulfilling prophecy.

I’ve worked very hard throughout my career to challenge myself and my own beliefs, and I recommend you do the same.

Ok, now that that’s out of the way, let’s dig in:

I generally agree with Olivia’s concerns about caps scraping up the least incremental sales, the ones that were already bound to happen, especially for larger brands like Jones Road Beauty here (even more so now that JRB has recently expanded their advertising channels).

My Experience With Lowest Cost and Caps

I haven’t always been on team lowest cost. I used to swear by bids ~4 years ago and then I switched to mostly using lowest cost when I was doing lots of creative testing at large scale.

I’ve found that caps cause the system to be too conservative and restricting when testing net new creative that doesn’t resemble other current ads. On the other hand, lowest cost buying allows the system to take more risks and to find new winners that don’t work like other previous winners.

For example: if my recent best ads have been doing well for men aged 18-34 on Instagram Feed, but I launch a new batch that would work well for women 35+ on Facebook Feed (without changing the actual targeting):

  • Lowest cost would likely keep trying the new ads on different users in different placements until it finds something that works.

  • A cap could prevent the system from ever finding the right audience because it’s trying to quickly get conversions below the set cap. If it doesn’t get one in time, it will stop spending, never having had enough time/space to find that correct audience.

    • If the CTR or CPC (or some other metric Meta uses) is worse than existing winners then it might stop spending since the system is predicting that spending more on it won’t lead to more conversions (even though the potential CVR might be higher due to higher relevance).

This alone is super nuanced and impossible to know for certain, but that’s been my experience with caps vs lowest cost across many accounts.

Thanks to the persistence, patience, and help of Andrew Faris and Dave Rekuc on Twitter, I have recently opened back up to bid caps for lower budget, more short-term cost-conscious accounts. I see it working well for keeping costs down at small scale, and I would probably recommend it for most smaller, struggling brands, but with some of the above caveats.

No Spendy, No Learny

While I’m happy that I’m able to better manage and keep costs down with bids (I prefer bids over cost caps where possible) for the clients that really need that, I still find it incredibly frustrating when new ad sets are extremely slow to spend.

I launched new ad sets yesterday and each ad set has gotten less than $2 of spend.

The system cannot know if an ad is going to do well or poorly, but it can make predictions based on preliminary performance metrics. To be fair, the system is often accurate, far more accurate than any human could be, but that doesn’t make it infallible. It’s doing the best it can with the data it has and while it might be able to figure out relevance, it doesn’t have the additional understanding of the content or context of the ad.

If they’re not spending, I’m not able to learn what’s working or not for them.

Cap fans would say “that’s good that it’s not spending because the ads aren’t good. The system will only spend on what’s working”. But that’s not quite accurate or helpful if you’re trying to learn anything about why the ads aren’t good and aren’t spending.

If the system stops spending on it due to a low bid, that doesn’t mean it’s not a viable ad; it just means the bid was low enough that the system gave up trying.

For me, a human marketer who was involved in the creation of that ad, to learn how to improve that ad, avoid making bad ads in the future, and make better ads in general, I need to see it spend and see how and why it fails. Then I can go back and better learn which components might have worked and which might not have.

If you’re testing with caps, I’d strongly recommend setting very high caps for new concepts, see how much it takes to get at least a few purchases, and then pull the caps down as necessary.
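To make the "start high, then ratchet down" idea concrete, here’s a small sketch of building ad set parameters for Meta’s Marketing API with a deliberately generous bid cap for a new concept. The field names follow the Marketing API’s ad set spec as I understand it; the helper function, the 3x headroom multiplier, and the placeholder IDs are my own illustration, so double-check the current API docs before using anything like this.

```python
# Sketch (not Barry's actual setup): parameters for a bid-capped ad set that
# gives new creative room to spend before you tighten the cap.

def bid_cap_test_adset(name, campaign_id, target_cpa_dollars, headroom=3.0):
    """Start new concepts with a cap well above target CPA, then pull it down."""
    bid_cents = int(target_cpa_dollars * 100 * headroom)  # bid_amount is in minor units (cents)
    return {
        "name": name,
        "campaign_id": campaign_id,
        "optimization_goal": "OFFSITE_CONVERSIONS",
        "billing_event": "IMPRESSIONS",
        "bid_strategy": "LOWEST_COST_WITH_BID_CAP",
        "bid_amount": bid_cents,  # e.g. 3x target CPA so the system can explore
        "status": "PAUSED",       # review before setting live
    }

params = bid_cap_test_adset("new-concept-batch", "<CAMPAIGN_ID>", target_cpa_dollars=40)
# You'd POST these to /act_<AD_ACCOUNT_ID>/adsets, then lower bid_amount
# once you've seen what a few purchases actually cost.
```

Once the new concept has a few purchases at the loose cap, you know roughly what it really costs and can tighten from there.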

Lift Testing Lowest Cost vs Caps

If you’re doing any sort of test like this, you need to prevent overlap between the two. If you run both lowest cost and caps at the same time, you won’t be able to see it, but the system will prioritize one over the other. The only “fair” way to try this would be holdout testing.

If you wanted to test which of these tactics is “best”, it would be complicated and nearly impossible to get a clear answer.

It's not just about lowest cost vs cost caps or bid caps:

Multiple overlapping and conflating factors will make testing this at scale difficult and unclear. All of it is due to the brand/marketer’s preconceived notions of how to run ads that impact things like:

1. The targeting/exclusions (or more importantly who you don't exclude 😬)

2. How much other traffic is being generated from other sources

3. The attribution setting used in-platform

4. The creative types

5. Where the traffic is being driven

For example:

Brand A:

-Not excluding existing customers or recent visitors

-Has multiple large traffic sources

-Using 7dc1dv (7-day click, 1-day view) attribution

-Using mostly super-branded product-focused content

-Sending traffic to a PDP

Brand B:

-Excluding all existing customers and visitors from the last 7 days

-Mostly focused on Meta ads

-Using 1dc-only (1-day click) attribution

-Using mostly less-branded, more problem-focused content

-Sending traffic to a "5 reasons why" listicle

Obviously there's more nuance than just these two options in reality, but I’m using these examples since each of them is realistic and will perform differently at scale when using lowest cost vs bid/cost caps.

Why does this impact performance? Because you've already made decisions based on assumptions about what can/will work.

How does this impact the performance of a holdout test? I can’t be exactly sure, but I know that some ads, web experiences, etc. perform better for warmer audiences than colder ones. How the system optimizes and prioritizes delivery to warmer/colder audiences with warmer/colder relevant ads varies depending on all of these variables. And the more you need to spend, the more new users the system needs to find to convert, so one approach might do much better than the other at larger scale.

This isn't even taking into account how the system can be lazy and/or cheat by delivering the cheapest possible impressions to the warmest possible users to get the most possible credit for conversions it didn't actually drive. With enough existing warm traffic, 7dc1dv, and no exclusions, the system has little incentive to get you net new incremental conversions.

My Other Problem With Caps

Another issue I see with lowest cost vs caps is that when something goes wrong I find it to be harder to catch and diagnose with caps than with lowest cost.

From my experience, if you're using caps, you might not notice the impact something like a website change has on performance as well as you would if you were using lowest cost.

When using caps, spend naturally fluctuates higher or lower from day to day, so you just chalk it up to the caps doing their job and think “I can’t possibly know what’s going on or why”.

On the other hand, if you’re using lowest cost and using stable, moderate budget control over a longer period of time, you’d know an acceptable range of costs that you expect to see and you can probably make more sense of why a day is suddenly worse than expected. You’re still spending the same, but efficiency has taken a dive, now you can investigate (and/or make a manual adjustment to budget) quickly.
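That “acceptable range of costs” idea can be made mechanical. Here’s a minimal sketch, with made-up numbers, of flagging a day whose CPA falls outside a trailing window’s normal band; the function name and the 2-sigma threshold are my own illustration, not any platform feature.

```python
# Sketch: with stable lowest-cost budgets, daily CPA sits in a predictable
# band, so a simple trailing-window check can flag the day a website bug
# (or anything else) tanks efficiency.
from statistics import mean, stdev

def cpa_alert(daily_cpas, today_cpa, z=2.0):
    """Flag today's CPA if it falls outside mean +/- z standard deviations."""
    mu, sigma = mean(daily_cpas), stdev(daily_cpas)
    return abs(today_cpa - mu) > z * sigma

history = [38.0, 41.5, 40.2, 39.1, 42.3, 40.8, 39.6]  # last week's CPAs, stable budget

cpa_alert(history, 41.0)  # normal day -> False
cpa_alert(history, 65.0)  # efficiency dive -> True: go check the site
```

The point isn’t the math; it’s that a stable budget gives you a baseline worth alerting on, which capped spend never quite does.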

I have found countless unintended website bugs and errors because of this, and I find that it’s much easier to see and understand changes in performance and scale caused by controllable factors (like website changes) when using lowest cost.

The counter argument would be that “if there’s a bug on the site, I want the system to spend less and caps do that for me” which might be correct for a lot of marketers, but it’s also much harder to notice and figure out the root cause.

With lowest cost, it’s pretty obvious when something is off; with caps, it’s like a frog boiling in a pot.

The Bigger Issue

The biggest issue I have with caps is that they’re not based in reality. (Neither is the performance reported in-platform with lowest cost for that matter 😅)

I often see advertisers getting out their compass and protractor to chart out exactly what their in-platform performance needs to be, but it’s not a science and it’s going to keep changing as you keep spending, especially as you’re scaling.

You’re advertising to people. People change, evolve, warm up over time. The more you advertise to more people, the better you will do over time and the easier things will get overall. The better your ads are, the larger this halo effect will be, and the longer term impact they will have for your brand. (I’m not getting into what makes an ad “better” than another right now, but let’s agree that some ads are more relevant, engaging, and memorable than others)

If you had 10 sales today, your ads probably caused 50 others to seriously consider your product, but maybe they’re not ready to buy yet. Give yourself a buffer to account for this halo, but also do not spend more than you can afford to lose.

Your FB ads are going to overlap with your TikTok ads, your Google ads, your Pinterest ads, etc. You can’t control for that and you shouldn’t try to.

Depending on how you target/exclude (or don’t), the more you advertise, the more users will see more of your ads and suddenly you’re in this weird blend of hot/warm/cold users and can’t really tell where sales are coming from. (And you also can’t tell if the system starts to favor warmer users or not)

If you’re bidding super low to keep your in-platform numbers low, then you might also be holding back your scale potential over time.

I’m not saying you need to switch to lowest cost, but maybe consider loosening the grip around your daily FB ad performance. Look at weeks instead of days, months instead of weeks, quarters instead of months, or years instead of quarters.
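Zooming out really does change the picture. Here’s a toy example, with entirely made-up spend and purchase numbers, showing how the same data that looks chaotic day-to-day looks boringly stable rolled up by week:

```python
# Sketch: daily CPA swings wildly, weekly CPA barely moves.
def cpa(spend, purchases):
    return spend / purchases

daily = [  # (spend, purchases) over two weeks, $500/day
    (500, 14), (500, 9), (500, 13), (500, 8), (500, 15), (500, 10), (500, 12),
    (500, 11), (500, 16), (500, 7), (500, 13), (500, 12), (500, 9), (500, 14),
]

daily_cpas = [round(cpa(s, p), 2) for s, p in daily]  # swings from ~$33 to ~$71
weekly_cpas = [
    round(cpa(sum(s for s, _ in wk), sum(p for _, p in wk)), 2)
    for wk in (daily[:7], daily[7:])
]
# weekly view: two numbers within a dollar of each other
```

A day at $71 CPA looks like a crisis; the week it sits in looks totally fine.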

An occasional day of bad performance should not sink your business. Failing to focus on making better, more relevant, and more scalable ads will sink your business.

There Is No Right Answer Here

I don’t care if you use cost caps, bid caps, or lowest cost. There’s a time and a place for them. You should use the one that best suits your business needs, financial situation, and risk tolerance. Or just use the one your favorite Twitter guru tells you is best 😅

Welcome To The “Deep End”

If you’ve made it this far, please respond to this email and say “deep end” (and anything else you want to say) and maybe I’ll pick someone to get a free hatt or 30 minute consulting call with me. Yes, even if you simply skipped all the way down here, you cheater.

Oh, and here’s a coupon code for $100 off my ad account audit template, knocking the price down to just $97: use code HAVEACLUES100.

Finally…

If you want to quickly and easily make a bunch of ugly ads using my 50 favorite ugly ad templates, you need to get my Creative OS Expert Volume.

You’ll be able to use these ugly ad templates immediately and get them live in your account in minutes. They also now have a monthly membership to get access to all of their existing templates and 10 new ones that get added every week.

That’s all for now! Thanks again for subscribing. Have a high-performing week!

Hott regards,
Barry Hott
