Your branded search is probably stealing credit. Here’s how to find out.

Branded SEA looks like easy money. Cheap clicks, high conversion rates, great ROI. But a lot of those conversions were going to happen anyway, and your attribution tool is almost certainly taking credit for them.

Most marketing teams never question branded search. The cost-per-acquisition looks great. Leadership is happy. The budget stays, or grows.

But there is a question worth asking. How many of those conversions happened because of branded search, and how many just happened through it?

Picture this. Someone already knows your brand. They already decided to buy. They Google your name, click your ad, and convert. Your attribution tool reports a branded search conversion. But that person was coming regardless. You paid for a click you did not need.

The gap between what attribution reports and what actually happened is called incrementality. Measuring it shifts the conversation from “branded search performs well” to “branded search earns its budget.” That is a very different standard, and it matters when you are defending spend.

The core question

Not “how many conversions came through branded search?” Ask instead: “how many conversions happened because of branded search that would not have happened otherwise?”

Why attribution cannot answer this, and what can

Attribution models track the path to conversion. Last click, data-driven, even multi-touch attribution all assign credit to touchpoints along that path. They do that job reasonably well. What they cannot do is answer counterfactual questions.

A counterfactual question sounds like this: what would have happened if this touchpoint had never existed? That is precisely what you need to know about branded search. And attribution has no mechanism to answer it.

So you need a different method. Difference-in-Differences, or DiD, is the right tool here. Economists have used it for decades to study minimum wage effects, tax policy changes, and public health interventions. It works in messy real-world settings where running a clean controlled experiment is impossible. Marketing fits that description perfectly.

The logic runs like this. You split your markets into two groups. In the test group, you reduce or pause branded search spend. In the control group, everything stays the same. You measure what happens to your key business metrics in both groups before and after the change. Then you compare the two changes. That comparison gives you your causal estimate of what branded search was actually contributing.
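
A hypothetical worked example: suppose test markets average 1,000 conversions a week before the change and 940 after (a drop of 60), while control markets slip from 1,000 to 990 (a drop of 10). The difference in differences is (940 − 1,000) − (990 − 1,000) = −50: roughly 50 weekly conversions that branded search was genuinely adding.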

Think of it as a before-and-after picture with a built-in counterfactual.

If the test markets barely notice the change and sales hold steady, branded search was not driving incremental demand. If test markets drop meaningfully against the control, it was genuinely pulling in conversions you would otherwise have lost.

How to design the test without disrupting your business

A geo-split is the cleanest setup. You divide your markets geographically, cut branded spend in the test group, and leave everything else untouched in the control. You can run this without touching national campaigns or triggering conversations you are not ready to have yet.

Picking your control group

Most DiD experiments break here. Your control group needs to closely mirror your test group. Same seasonality, similar competitive environment, comparable revenue baseline. If Amsterdam spikes every Q4 and Rotterdam does not, they make a poor pair.

When no single region matches cleanly, analysts build a synthetic control. You take a weighted blend of multiple regions that, together, track your test region’s historical behaviour. It takes more work upfront. It produces more reliable results.
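
As a rough sketch of that weighting step, assuming weekly pre-period revenue per region sits in a pandas DataFrame (the region names and figures here are illustrative): non-negative least squares finds a blend of candidate regions that tracks the test region. Full synthetic control methods also constrain the weights to sum to one; this is a simplification of that idea.

```python
import numpy as np
import pandas as pd
from scipy.optimize import nnls

# Hypothetical input: weekly pre-period revenue, one column per region.
# "amsterdam" is the test region; the others are candidate controls.
pre = pd.DataFrame({
    "amsterdam": [102, 98, 110, 95, 120, 105],
    "rotterdam": [100, 97, 104, 96, 112, 101],
    "utrecht":   [90, 92, 101, 88, 109, 95],
    "eindhoven": [95, 94, 100, 91, 108, 99],
})
candidates = ["rotterdam", "utrecht", "eindhoven"]

# Non-negative least squares: find weights so the blended candidates
# track the test region's pre-period as closely as possible.
weights, _ = nnls(pre[candidates].to_numpy(), pre["amsterdam"].to_numpy())
synthetic = pre[candidates].to_numpy() @ weights

# Check the fit before trusting the control: a small pre-period error
# is what makes the synthetic control a credible counterfactual.
fit_error = np.abs(synthetic - pre["amsterdam"]).mean()
print(dict(zip(candidates, weights.round(2))), f"mean abs error: {fit_error:.2f}")
```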

What to measure

Do not measure branded clicks or branded conversions. Those will obviously drop in the test markets. That drop tells you nothing useful. Measure downstream business outcomes instead: total revenue, total conversions across all channels, new customer acquisitions. If those hold steady after you cut branded spend, the ads were capturing demand that would have found you through organic search or direct traffic anyway.
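
As a sketch of that rollup, assuming a channel-level export with hypothetical column names: collapse the channel dimension before doing any DiD work, so the KPI is the total across all channels rather than the branded slice that will mechanically drop.

```python
import pandas as pd

# Hypothetical channel-level export: one row per region, week, channel.
raw = pd.read_csv("channel_performance.csv")  # illustrative file name

# Collapse the channel dimension: the KPI is the total across all
# channels, not the branded slice that will mechanically drop.
kpi = (
    raw.groupby(["region", "week"], as_index=False)
       .agg(revenue=("revenue", "sum"), conversions=("conversions", "sum"))
)
```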

Running the analysis in three steps

Step 1: Measure performance in both groups before and after.
Track your chosen KPI in the weeks before and after the branded spend reduction. Do this for both test and control markets. You need a clean pre-period and a clean post-period.

Step 2: Calculate the difference in differences.
Subtract the control group’s change from the test group’s change. This removes the noise that hit both groups equally: seasonality, macro trends, competitor moves. What remains is the isolated effect of your branded search change.

Step 3: Interpret the gap.
A meaningful drop in the test group versus control means branded search was driving incremental demand. Little to no gap means those conversions were arriving regardless. For more rigour, run a regression to get a confidence interval; a sketch of steps two and three follows.
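
A minimal sketch of that calculation, assuming a weekly KPI table with hypothetical column names (market, week, kpi, plus 0/1 flags treated and post): the group-means arithmetic gives the point estimate, and a simple regression in statsmodels reproduces it with a confidence interval.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly KPI table: one row per market-week, with 0/1
# flags for test vs. control ("treated") and pre vs. post ("post").
df = pd.read_csv("weekly_kpi.csv")  # illustrative file name

# Step 2 by hand: the difference in differences of group means.
means = df.groupby(["treated", "post"])["kpi"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"DiD estimate: {did:.1f}")

# Step 3 with rigour: the same estimate from a regression. The
# treated:post interaction coefficient is the DiD effect.
model = smf.ols("kpi ~ treated * post", data=df).fit()
print(model.params["treated:post"])
print(model.conf_int().loc["treated:post"])  # 95% confidence interval
```

In a real analysis you would also cluster the standard errors by market; this sketch only shows the shape of the calculation.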

Four things that can break your results

Parallel trends do not hold
DiD assumes test and control would have moved together if nothing changed. If they were already on different trajectories, your results will mislead you. Always plot pre-period trends before drawing conclusions; a sketch of this check follows the list.
Spillover between markets
A national TV campaign, a PR moment, or geographic bleed can influence both groups simultaneously. Stay alert to anything that lands in market during the test period.
The test runs too small or too short
Testing in one city for two weeks gives random noise room to swamp your signal. More markets, longer duration, and higher baseline volume all increase statistical power.
Other channels absorb the drop
When you pause branded search, organic and direct traffic often pick up the slack. Always evaluate total channel performance, not branded channel performance in isolation.
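
A minimal sketch of the parallel trends check from the first pitfall, using the same hypothetical table as above: plot pre-period weekly averages for both groups and look for trajectories that diverge before the change.

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("weekly_kpi.csv")  # same hypothetical table as above

# Average the KPI per week within each group, pre-period only.
pre = df[df["post"] == 0]
trends = pre.groupby(["week", "treated"])["kpi"].mean().unstack("treated")

# Roughly parallel lines support the DiD assumption; trajectories
# that already diverge before the change mean the test will mislead.
trends.plot(marker="o")
plt.legend(["control", "test"])
plt.title("Pre-period trends: test vs. control")
plt.xlabel("week")
plt.show()
```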

What to do with the answer

You run the test. You get a number. Now use it.

If incrementality is high, say 60 to 70 percent of branded conversions are genuinely additional, you have a strong and defensible case to protect that spend. Bring the number into your next budget conversation. Let the evidence speak.

If incrementality is low, around 20 to 30 percent, you are looking at a meaningful pool of budget that could move to channels that actually generate demand. Shifting even 30 percent of branded spend into prospecting or upper-funnel activity could materially improve your overall return on investment.

Either result is a good result. You replace a gut feeling with a number you can act on. That is the whole point.

The bottom line

If branded search earns its budget, this test proves it. If it does not, you just found money to reallocate.

Difference-in-Differences gives you a practical, defensible way to answer a question most teams never ask. Pair it with marketing mix modelling (MMM) or lift studies for additional lenses on the same problem. But for branded search specifically, a well-run geo-split DiD gives you the clearest answer available.

Run the test. Know your number. Make the call.

Jelle Casper van Santen
Marketing data analyst with an MSc in Marketing & Business Analytics. Interested in all things related to attribution, marketing mix modelling, and experimentation.