The Situation
Better Place Forests operates in a high-consideration memorial category. The account was running at significant scale, with monthly Meta spend reaching approximately $500,000.
At that budget, the account was not stable. CPL was drifting upward, decision-making was reactive, and the team could not confidently say which creative, audience, or landing page was driving results. Spend was moving. Insight wasn’t.
Key Outcomes
- 30% reduction in cost per lead compared to the prior account structure
- Rebuilt testing framework that isolates one variable at a time
- Automated spend controls that intervene on underperforming ads in real time
The Primary Challenge
This was not a spend problem or a volume problem. It was a signal problem.
Multiple variables were being changed simultaneously across campaigns. A winning ad might have worked because of the creative, the audience, or the landing page. The structure made it impossible to tell. Meanwhile, underperforming ads were allowed to run past the point where they should have been paused, quietly bleeding budget at a rate that only becomes visible at half a million a month.
At this spend level, small inefficiencies stop being small. A 10% drag on CPL means roughly 9% fewer leads on a $500K budget, because the same spend is divided by a higher cost per lead. That kind of inefficiency only becomes visible at scale, and only becomes fixable when the account structure lets you see it.
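The arithmetic is worth making concrete. A minimal sketch, assuming an illustrative baseline CPL of $50 (the real figure is not stated in this case study):

```python
budget = 500_000                    # monthly spend in dollars
baseline_cpl = 50.0                 # illustrative baseline cost per lead
dragged_cpl = baseline_cpl * 1.10   # a 10% drag on CPL

baseline_leads = budget / baseline_cpl   # 10,000 leads
dragged_leads = budget / dragged_cpl     # ~9,091 leads

lost_share = 1 - dragged_leads / baseline_leads
print(f"{lost_share:.1%} fewer leads")   # 9.1% fewer leads
```

On a fixed budget, lead volume scales with 1/CPL, so a 10% CPL increase costs about 9.1% of volume, month after month.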
The Goal
Restructure the account so that every dollar of spend produced a readable result, and protect the budget from the kind of unchecked underperformance that compounds at scale.
Our Approach
Isolate One Variable at a Time
The reflexive move in a high-spend account is to test aggressively and trust volume to surface the winners. We went the other direction. We rebuilt the testing framework so that each test evaluated a single dimension: creative, copy, landing page, or audience. Testing this way produces fewer raw data points, but every one of them is usable. A clean signal on one variable is more valuable than a muddy signal on four.
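The structure of that kind of test plan can be sketched in a few lines. Everything here is illustrative, not the actual framework: each challenger differs from a fixed control in exactly one dimension, so any performance delta maps to a single variable.

```python
# Hypothetical sketch: generate tests that each change exactly one
# dimension from a fixed control configuration.

CONTROL = {
    "creative": "creative_v1",
    "copy": "headline_v1",
    "landing_page": "/page-a",
    "audience": "audience_1",
}

# Candidate challengers, keyed by the single dimension they vary.
CHALLENGERS = {
    "creative": ["creative_v2"],
    "landing_page": ["/page-b"],
}

def build_tests(control, challengers):
    """Each test differs from control in exactly one dimension."""
    tests = []
    for dimension, variants in challengers.items():
        for variant in variants:
            config = dict(control)
            config[dimension] = variant
            tests.append({"varies": dimension, "config": config})
    return tests

for t in build_tests(CONTROL, CHALLENGERS):
    print(t["varies"], "->", t["config"][t["varies"]])
```

The payoff is in the read-out: when a test wins, the `varies` field names the one variable responsible, with no disentangling required.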
Automate the Kill Switch
At half a million a month, waiting a day to pause a losing ad is not a workflow issue. It is a risk management issue. A single underperforming ad left running overnight can burn through thousands of dollars before anyone sees it. We implemented automated rules that pulled ads the moment they crossed defined performance thresholds. Losses were capped by the system instead of by whoever happened to check the account that morning. At this spend level, delay is the single most expensive thing an account can tolerate.
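The shape of such a rule is simple. This is a hedged sketch: the thresholds, ad data, and `pause_ad` hook are hypothetical stand-ins, and in practice this logic would live in Meta's automated rules or a script against the Marketing API:

```python
# Hypothetical kill-switch sketch: pause any ad whose CPL crosses a
# defined threshold once it has spent enough to be judged.

CPL_THRESHOLD = 150.0   # illustrative max acceptable cost per lead
MIN_SPEND = 500.0       # don't judge an ad before it has spent this much

def pause_ad(ad_id):
    # Placeholder for the real pause call (e.g. via the Marketing API).
    print(f"paused {ad_id}")

def enforce_kill_switch(ads):
    """Return ids of ads paused for breaching the CPL threshold."""
    paused = []
    for ad in ads:
        if ad["spend"] < MIN_SPEND:
            continue  # not enough spend yet to produce a reliable signal
        cpl = ad["spend"] / ad["leads"] if ad["leads"] else float("inf")
        if cpl > CPL_THRESHOLD:
            pause_ad(ad["id"])
            paused.append(ad["id"])
    return paused

ads = [
    {"id": "ad_a", "spend": 1200.0, "leads": 10},  # CPL 120 -> keep
    {"id": "ad_b", "spend": 900.0, "leads": 4},    # CPL 225 -> pause
    {"id": "ad_c", "spend": 300.0, "leads": 0},    # too early to judge
]
print(enforce_kill_switch(ads))  # ['ad_b']
```

The minimum-spend gate matters as much as the threshold itself: it keeps the rule from killing ads on noise before they have produced a readable signal, which is the same principle the testing framework is built on.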
Build the Feedback Loop
With variables isolated and spend protected, the account started producing insights the team could actually act on. Instead of debating which of four changes drove a shift, they could point at a single variable and move. Winners scaled sooner. Losers died faster. Budget concentrated on what worked, and the savings compounded into a 30% CPL reduction.
Why This Worked
High spend does not create clarity. It amplifies whatever system is already in place.
Before the restructure: overlapping variables, untraceable wins, losers running too long, budget paying for all of it. After the restructure: clean tests, clear answers, capped losses, savings compounding month over month.
A 30% CPL reduction at $500,000 a month is not a creative win. It is a structural one.
Strategic Takeaway
At scale, performance is less about finding winners and more about eliminating uncertainty.