Incrementality Testing

TL;DR. Incrementality testing measures the conversions an ad actually caused versus the conversions that would have happened without it. The test holds back a randomized control group that never sees the ad and reads the gap between the groups as lift.

What is Incrementality Testing?

Also known as: Incrementality, Lift testing, Geo lift

Incrementality testing is a measurement method that proves how many conversions an ad actually caused, by comparing a treatment group that saw the ad against a randomized control group that did not.

The control group is the counterfactual. It shows what would have happened anyway. The gap between the two groups is the lift. Lift is the only number that proves an ad earned its budget.

Last-click and multi-touch attribution both count conversions the ad touched. Incrementality counts conversions the ad caused. Those are different numbers, and the gap is often huge.

A 2023 Meta study on lift measurement found that conversions reported by last-click attribution overstated true lift by 30 to 80 percent across tested campaigns. Most of that gap came from buyers who would have purchased without ever seeing the ad.

Most marketers track ROAS as the headline number. Incrementality reframes it. The honest question is not what ROAS the dashboard shows. It is what ROAS would have been if the ad never ran.

Why incrementality matters: last-click vs MTA vs incrementality

Each measurement model answers a different question. Picking the wrong one funds the wrong channels.

| Method | Question it answers | Best for |
| --- | --- | --- |
| Last-click | Which ad was last before the sale? | Tactical bid pacing |
| Multi-touch attribution | Which ads did the buyer touch? | Daily campaign optimization |
| Incrementality testing | Which conversions did the ad actually cause? | Quarterly budget allocation |

Brand search is the textbook example. Last-click hands brand-keyword campaigns ROAS of 8 to 15. Incrementality tests routinely show that 70 to 90 percent of those buyers would have converted via organic search anyway. Real iROAS lands closer to 1.0 to 2.0.

A 2022 Google Ads experiment study on brand search holdouts reported that pausing brand campaigns recovered 50 to 80 percent of the lost clicks through organic listings. The "lost" revenue was mostly cannibalized, not new.

The lesson lands hard. A channel can post a great reported ROAS and still be unprofitable in true lift terms. Only a holdout test can tell the difference.

Common incrementality test designs

Four designs cover most real-world tests. Each fits a different account size, channel mix, and tracking setup.

| Design | How it works | Best fit | Volume needed |
| --- | --- | --- | --- |
| Geo holdout | Half the metro areas see ads, half do not. Compare conversion rates. | Cross-channel, especially TV and OOH | 20+ geos, $50k+ monthly spend |
| Audience holdout | Random user-level split inside a single platform. | Meta, TikTok, Google with high volume | 500+ conversions per cell |
| Ghost ads | Control users are flagged in the ad server but never served the ad. | Display, programmatic, YouTube | Mid to enterprise DSP volume |
| Conversion lift (platform) | Built-in user-level test inside Meta or Google Ads. | Single-channel measurement | Meta or Google as primary spend |

Geo holdout

Pick 30 metro areas. Match them on population, baseline conversion rate, and seasonality. Randomly assign half to treatment and half to control. Run ads only in treatment geos for 14 to 28 days. Read the difference. Haus and Meta's open-source GeoLift library automate the matching and the statistical readout.
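
The matched-pair assignment can be sketched in a few lines. This is a minimal illustration with hypothetical metro names and rates, matching on baseline conversion rate only; tools like GeoLift match on multiple covariates and add a synthetic-control readout.

```python
import random

# Hypothetical metros with baseline weekly conversion rates.
geos = {
    "metro_a": 0.021, "metro_b": 0.020, "metro_c": 0.034, "metro_d": 0.033,
    "metro_e": 0.012, "metro_f": 0.013, "metro_g": 0.027, "metro_h": 0.026,
}

def matched_split(geo_rates, seed=42):
    """Rank geos by baseline rate, pair neighbours, then flip a coin
    inside each pair so both cells stay balanced on the baseline."""
    rng = random.Random(seed)
    ranked = sorted(geo_rates, key=geo_rates.get)
    treatment, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

treatment, control = matched_split(geos)
```

Pairing before randomizing keeps a high-volume metro from landing in one cell by chance and skewing the baseline.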

Audience holdout and ghost ads

Inside a platform, randomly hold out 5 to 10 percent of the target audience. They never see the campaign. The platform reports lift between exposed and held-out users. Ghost ads do the same in a DSP. The bidder still wins impressions for control users, then logs them without serving. That removes selection bias better than geo splits.
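
The holdout logic behind ghost ads can be sketched as follows. The function names and hashing scheme are illustrative, not any DSP's actual API: the point is that control users are bucketed deterministically and logged at the moment the bid wins, not merely excluded from targeting.

```python
import hashlib

HOLDOUT_PCT = 10  # hold out 10% of the audience as control

def in_holdout(user_id: str) -> bool:
    """Deterministic bucketing: hash the user id into buckets 0-99 and
    reserve the bottom HOLDOUT_PCT buckets for the control group."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < HOLDOUT_PCT

def handle_auction_win(user_id, log_impression, serve_ad):
    """Ghost-ads pattern: the bidder wins the impression either way, but
    control users get a logged 'ghost' impression instead of the creative."""
    if in_holdout(user_id):
        log_impression(user_id, ghost=True)   # counted for lift, never shown
    else:
        log_impression(user_id, ghost=False)
        serve_ad(user_id)
```

Because both groups passed the same auction, exposed and ghost users are comparable in a way that geo splits cannot guarantee.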

Conversion lift (platform)

Meta and Google offer self-serve lift tests inside the ad platforms. Setup takes 10 minutes. The platforms run the holdout, calculate significance, and report iROAS.

How to run an incrementality test

A clean test follows six steps. Skip any of them and the result loses defensibility.

  1. Pick one channel and one outcome. Test Meta prospecting against purchases. Don't bundle channels.
  2. Calculate required sample size. Use a power calculator. Most tests need 80 percent power to detect a 5 to 10 percent lift.
  3. Define the holdout. Decide between geo, audience, or platform. Lock the split before launch.
  4. Run for a full business cycle. Minimum 14 days. Cover at least two weekends.
  5. Freeze the test. No bid changes, no creative swaps, no audience tweaks during the window.
  6. Read at the end, not during. Peeking inflates false positives.
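
Step two can be sketched with the standard two-proportion sample-size formula. The function name and defaults are illustrative; dedicated power calculators and platform tools handle variance reduction and multi-cell designs.

```python
import math
from statistics import NormalDist

def sample_size_per_cell(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Approximate users needed per cell for a two-sided two-proportion
    z-test. base_rate is the control conversion rate; rel_lift is the
    smallest relative lift the test should detect."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 at alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 at 80% power
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 2% baseline takes roughly 80k users per cell.
n = sample_size_per_cell(base_rate=0.02, rel_lift=0.10)
```

Small lifts on small base rates get expensive fast, which is why low-volume accounts fall back to geo tests or marketing mix modeling.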

Teams running their first test almost always violate step five. A campaign manager sees a soft week and tweaks bids on day eight. The test is dead. Treat the holdout window as a frozen experiment, not a live optimization.

Reading the results: lift and statistical significance

Three numbers matter at readout. Lift, confidence interval, and p-value.

Lift is the percent difference in conversion rate between treatment and control. A 12 percent lift means the ad drove 12 percent more conversions than would have happened without it.

Confidence interval is the range the true lift probably falls in. A reported 12 percent lift with a 95 percent confidence interval of 4 to 20 percent is meaningful. The same point estimate with an interval of -3 to 27 percent is not.

P-value below 0.05 is the standard threshold for significance. Above 0.10 means the test is inconclusive and the budget question stays open.

Translate lift into iROAS by dividing incremental revenue by ad spend. If the test cost $40,000 and drove $90,000 in incremental revenue, iROAS is 2.25.
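
The readout arithmetic above can be wrapped in one helper. This is a sketch assuming a simple two-proportion z-test and a known average order value; the input numbers below are hypothetical, and platform tools apply more sophisticated estimators.

```python
import math
from statistics import NormalDist

def read_lift(conv_t, n_t, conv_c, n_c, spend, aov):
    """Readout for a user-split test: relative lift, 95% CI on the rate
    difference, two-sided p-value, and iROAS at a given average order value."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = diff / se
    incremental = diff * n_t                     # conversions the ad caused
    return {
        "lift": diff / p_c,
        "ci_diff": (diff - 1.96 * se, diff + 1.96 * se),
        "p_value": 2 * (1 - NormalDist().cdf(abs(z))),
        "iroas": incremental * aov / spend,
    }

# Hypothetical readout: 2,800 vs 2,100 conversions on 100k users per cell.
result = read_lift(2800, 100_000, 2100, 100_000, spend=40_000, aov=110)
```

If the confidence interval on the rate difference straddles zero, treat the iROAS number as noise, not a budget signal.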

Real-world example with numbers

A mid-market apparel brand spent $180,000 a month on Meta. Reported ROAS sat at 4.1 on last-click. The CFO wanted proof.

The team ran a 21-day Meta Conversion Lift test. Audience split was 90 percent treatment, 10 percent holdout. Total spend over the window was $126,000.

Results:

| Metric | Treatment group | Control group |
| --- | --- | --- |
| Users | 2.1M | 234k |
| Purchase rate | 2.8 percent | 2.1 percent |
| Lift | 33 percent | - |
| 95 percent CI | 22 to 44 percent | - |
| Incremental purchases | 1,640 | - |
| Incremental revenue | $182,000 | - |

Reported ROAS: 4.1. Incremental ROAS: 1.44. Confidence interval was tight enough to act on.

The brand kept Meta running but moved 20 percent of the budget to a YouTube prospecting test. The CFO got the proof. The growth team got a sharper budget. Both came from one $126,000 holdout.

Tools and platforms

Five tools cover most of the incrementality market in 2026.

  • Meta Conversion Lift. Free, built into Ads Manager for accounts above a spend threshold. Best in class for Meta-only readouts.
  • Google Ads experiments. Free, runs A/B tests on campaigns and ad groups. Drafts and experiments support holdout splits across Search, Shopping, and YouTube.
  • Haus. Geo-based incrementality platform. Strong for cross-channel testing including TV, OOH, podcast, and influencer.
  • INCRMNTAL. Always-on incrementality with no fixed holdout windows. Uses synthetic controls and Bayesian models. Popular with mobile and gaming advertisers.
  • GeoLift (Meta open source). R package for DIY geo tests. Free, requires analytical chops.

For brands under $30,000 monthly spend, platform-native tools (Meta Conversion Lift, Google Ads experiments) cover most needs. Above that, a dedicated platform like Haus or INCRMNTAL pays for itself within two test cycles by exposing the channels that are present in the path but not actually causing sales. For long-cycle B2B accounts, pair incrementality with marketing mix modeling for a fuller picture.

The pattern repeats across every account that runs a real test. Reported ROAS flatters the channels that close. Incremental ROAS rewards the channels that prospect. Both numbers are real. Only one of them is the truth.

Frequently asked questions

What is the difference between incrementality testing and attribution?

Attribution assigns credit to ads a converter touched. Incrementality measures conversions the ad actually caused. A buyer who would have purchased anyway gets credit in attribution but adds zero incremental value. Incrementality is the only method that can prove a channel is profitable, not just present.

How long should an incrementality test run?

Most geo and conversion lift tests run 14 to 28 days. Meta Conversion Lift recommends a minimum of 7 days and at least 100 incremental conversions for stable results (Meta Business Help). Shorter tests miss weekly seasonality. Longer tests burn opportunity cost on the holdout.

What is a good incremental ROAS?

Incremental ROAS (iROAS) above 1.0 means the channel is profitable on a true-lift basis. Mature DTC brands target iROAS of 1.5 to 3.0 on prospecting channels. Brand search and retargeting often score below 1.0 on incrementality even when reported ROAS looks strong, because those buyers were converting anyway.

Do you need a lot of volume to run incrementality tests?

Yes. Geo holdouts need at least 20 to 30 metro areas to find a matched control. Conversion lift tools usually require 500 plus conversions during the test window for statistical significance. Below that volume, marketing mix modeling or simple before-after comparisons are more honest.

Can incrementality replace multi-touch attribution?

Not entirely. Incrementality is the truth metric for channel-level decisions every quarter. Multi-touch attribution is the daily dashboard that keeps campaigns running between tests. Most mature teams run both. They use MTA for tactical optimization and incrementality to recalibrate channel budgets.

Stop defining. Start launching.

Turn Incrementality Testing into live campaigns.

Coinis AI Marketing Platform builds ad creatives. Launches to Meta. Tracks ROAS. Free to try. No credit card.

  • AI image and video ads from any product link.
  • One-click launch to Meta Ads.
  • Real-time ROAS tracking.