Attribution Models

Incrementality testing in CTV: the gold standard for measuring true impact

Incrementality testing answers the only question that matters for advertising effectiveness: did this campaign cause more conversions than would have happened without it? Not correlation. Not shared credit. Causation. A properly designed incrementality test creates a control group — people who would have been eligible to see your ad but were deliberately excluded — and compares their conversion rate to the group that was exposed. The difference is the incremental lift. It is the closest thing to a controlled experiment that advertising measurement has.

For CTV specifically, incrementality testing has become the gold standard because it does not require cross-device identity resolution — the biggest technical barrier to other attribution approaches. This article explains how holdout tests work, how to design one for a CTV campaign, what the results tell you, and how to navigate the specific constraints of running incrementality tests in India.

What is incrementality and why does it matter?

Incrementality is the causal impact of an ad — how many additional conversions occurred because of the advertising, versus how many would have occurred anyway from organic behaviour. Every conversion has a baseline: people who were going to buy regardless of whether they saw your ad. Incrementality strips out the baseline and counts only what the ad actually caused.

Why this matters: most attribution systems measure correlation, not causation. Last-touch attribution gives credit to the search click that captured demand the TV ad created. Multi-touch attribution (MTA) distributes credit across the path without knowing which touchpoints actually influenced the decision. Incrementality sidesteps all of this by measuring the conversion behaviour of people who did not see the ad — and comparing it to people who did.

How holdout tests work in CTV

The mechanics of an incrementality test in CTV:

  1. Define the target audience: Identify the audience pool you plan to target with your CTV campaign. This should be a clearly defined segment — a custom audience, a behavioural segment, or a demographic target.
  2. Randomly split the audience: Before the campaign runs, randomly assign the audience into two groups. The test group will receive CTV ads as planned. The holdout group (typically 10–20% of the total audience) will be excluded from all targeting for this campaign.
  3. Run the campaign: The test group sees your CTV ads. The holdout group sees nothing from this campaign (though they may see other unrelated advertising).
  4. Measure conversion rates in both groups: Track the conversion event (app installs, purchases, registrations, store visits) in both groups over the campaign period and a defined post-exposure window.
  5. Calculate lift: Incremental lift = (conversion rate in test group) - (conversion rate in holdout group). Express as a percentage uplift or as incremental conversions and incremental cost per acquisition.
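Steps 4 and 5 reduce to simple arithmetic. A minimal sketch in Python — the group sizes, conversion counts, and spend below are hypothetical, chosen only to illustrate the calculation:

```python
def incremental_lift(test_conv, test_size, holdout_conv, holdout_size, spend):
    """Compute lift metrics from raw conversion counts in each group."""
    test_rate = test_conv / test_size
    holdout_rate = holdout_conv / holdout_size
    absolute_lift = test_rate - holdout_rate      # percentage points
    relative_lift = absolute_lift / holdout_rate  # uplift vs the organic baseline
    incremental = absolute_lift * test_size       # conversions the campaign caused
    incremental_cpa = spend / incremental         # spend per incremental conversion
    return absolute_lift, relative_lift, incremental, incremental_cpa

# Hypothetical campaign: 200,000 exposed, 40,000 held out, Rs 20 lakh spend
abs_lift, rel_lift, inc, icpa = incremental_lift(5_000, 200_000, 800, 40_000, 2_000_000)
# abs_lift ≈ 0.005 (0.5 pp), rel_lift ≈ 0.25, inc ≈ 1000, icpa ≈ Rs 2000
```

Note that the incremental CPA here (₹2,000) is computed against only the 1,000 conversions the campaign caused, not the 5,000 total conversions in the test group — which is why it runs higher than a platform-reported CPA.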

Ghost bidding: the programmatic implementation

In programmatic CTV buying, holdout groups are often implemented via ghost bidding. The DSP identifies the auctions it would have won for the holdout group but withholds the test campaign's creative — either by logging the would-be win as a "ghost bid" without serving anything, or by serving a PSA or house ad in its place. Either way, the holdout group is exposed to the same media environment as the test group (same daypart, same content, same competitive auction conditions) without seeing the test campaign's ad. Ghost bidding is available through some DSPs and measurement vendors — confirm this capability before designing your test.

Why CTV incrementality testing does not require cross-device identity

The key advantage of incrementality for CTV: it works at the group level, not the individual level. You do not need to know that a specific person who watched your TV ad then purchased on their mobile phone. You only need to compare aggregate conversion rates between two randomly constructed groups. If the groups were constructed properly (random assignment, sufficient size), any difference in conversion rates is attributable to the campaign, not to pre-existing differences between the groups. This makes incrementality testing robust to the identity fragmentation that breaks MTA in the CTV environment.
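One common way to build such groups without any identity graph is a salted hash of whatever stable ID the platform already holds (device ID, subscriber ID). A sketch — the salt string and 15% holdout share are illustrative, and in practice the split happens inside the DSP or measurement platform:

```python
import hashlib

def assign_group(audience_id: str, holdout_share: float = 0.15,
                 salt: str = "ctv-campaign-2024") -> str:
    """Deterministically bucket an ID into 'holdout' or 'test'.

    Hashing a stable platform ID with a per-campaign salt yields a
    reproducible, effectively random split -- no cross-device identity
    resolution required, only a consistent ID within one platform.
    """
    digest = hashlib.sha256(f"{salt}:{audience_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to uniform [0, 1]
    return "holdout" if bucket < holdout_share else "test"

groups = [assign_group(f"device-{i}") for i in range(100_000)]
share = groups.count("holdout") / len(groups)  # lands close to 0.15
```

Because the hash is deterministic, the same device always falls in the same group for the life of the campaign, which keeps the exclusion stable across ad requests without storing any assignment table.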

Designing an incrementality test for CTV: key decisions

Holdout size

A larger holdout gives more statistical power but reduces the number of impressions available for the test group, which may increase effective CPM or reduce campaign reach. A typical holdout is 10–20% of the total audience. For small audiences, 20% holdout may be required to achieve statistical significance; for large audiences, 10% is usually sufficient.

Minimum detectable effect (MDE)

Before running the test, calculate the minimum lift you need to detect for the test to be meaningful. If your baseline conversion rate is 2% and you need to detect a 10% relative uplift (to 2.2%), you need a much larger sample than if you are trying to detect a 50% uplift (to 3%). Most CTV incrementality tests require at least 50,000 people in each group to detect a meaningful effect at standard statistical significance levels (p < 0.05).
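That sample-size arithmetic can be checked with the classical normal-approximation formula for a two-proportion test. A stdlib-only sketch — the two-sided α = 0.05 and 80% power settings are conventional defaults, not requirements from any specific vendor:

```python
from statistics import NormalDist

def n_per_group(p_base: float, rel_uplift: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-proportion z-test."""
    p_test = p_base * (1 + rel_uplift)
    p_bar = (p_base + p_test) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_base * (1 - p_base) + p_test * (1 - p_test)) ** 0.5) ** 2
    return round(num / (p_test - p_base) ** 2)

# Detecting a 10% uplift on a 2% baseline needs ~20x the sample of a 50% uplift
n_hard = n_per_group(0.02, 0.10)  # roughly 80,000 per group
n_easy = n_per_group(0.02, 0.50)  # roughly 3,800 per group
```

The quadratic dependence on the rate difference in the denominator is what drives the gap: halving the detectable lift roughly quadruples the required sample.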

Conversion window

Define how long after ad exposure you will track conversions. For fast-moving consumer goods or app installs, a 7-day window may be appropriate. For high-consideration purchases (insurance, vehicles, home appliances), the window should be 30–60 days. The longer the window, the more organic conversions accumulate in both groups, raising the shared baseline and shrinking the relative lift — which can make the signal harder to detect in short campaigns.

Campaign duration

Short campaigns produce noisy results. A campaign running for less than two weeks typically does not generate enough exposure and conversion events to produce statistically stable incrementality numbers. Four to six weeks is a more reliable test duration for most categories.

Reading the results: what incrementality tells you

An incrementality test produces a set of numbers:

  • Incremental conversion rate: The difference in conversion rates between exposed and holdout groups. If the test group converted at 3.5% and the holdout at 2.8%, incremental lift is 0.7 percentage points, or a 25% relative uplift.
  • Incremental conversions: Total conversions attributable to the campaign — not all conversions in the test group, only the incremental ones above the holdout baseline.
  • Incremental CPA: Campaign spend divided by incremental conversions. This is the true cost per acquisition from the campaign — typically higher than the platform-reported CPA because not all platform-attributed conversions are incremental.
  • Statistical significance: The p-value is the probability of observing a lift at least this large if the campaign truly had no effect. A p-value below 0.05 is the standard threshold; below 0.10 is sometimes accepted for CTV tests with limited scale.
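The significance check is a standard two-proportion z-test, which needs only the raw counts from each group. A sketch using the pooled-variance form (one common choice); the group sizes below are hypothetical, while the 3.5% and 2.8% rates match the example in the first bullet:

```python
from math import sqrt
from statistics import NormalDist

def lift_p_value(test_conv, test_size, holdout_conv, holdout_size):
    """Two-sided p-value for the difference in conversion rates."""
    p_test = test_conv / test_size
    p_hold = holdout_conv / holdout_size
    # Pool the rates under the null hypothesis of "no lift"
    p_pool = (test_conv + holdout_conv) / (test_size + holdout_size)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_size + 1 / holdout_size))
    z = (p_test - p_hold) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 3.5% of 400,000 exposed vs 2.8% of 100,000 held out
p = lift_p_value(14_000, 400_000, 2_800, 100_000)  # well below 0.05
```

At these scales the lift is unambiguous; the test matters most near the scale thresholds discussed below, where a lift that looks real can still fail the 0.05 bar.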

Cost and scale requirements

Incrementality testing is not free. The holdout group represents impressions you are not serving to potential buyers — there is an opportunity cost. At a 15% holdout on a ₹50 lakh campaign, you are effectively allocating ₹7.5 lakh of audience coverage to the control condition rather than the campaign. This is the cost of measurement. For large brands, this is a small price for reliable data. For smaller budgets, the cost of the holdout becomes a more significant fraction of total spend.

Most measurement vendors that offer managed incrementality testing charge either a percentage of media spend or a flat platform fee. Expect vendor costs ranging from ₹3–10 lakh for a managed incrementality study in India, depending on scope and vendor.

India-specific considerations for CTV incrementality testing

Platform coverage

Holdout groups can only be constructed within platforms that support audience exclusion at the campaign level. Most programmatic CTV buying in India runs through DSPs that offer this capability. Direct IO buys with individual publishers may not support holdout construction, as it requires the publisher or ad server to implement the exclusion logic.

Conversion tracking in India

Incrementality requires robust conversion tracking. E-commerce conversions (Flipkart, Myntra, Amazon India) are generally well-tracked. App installs are tracked through AppsFlyer or Adjust. Offline conversions — store visits, dealership appointments — require third-party footfall data or survey-based measurement, which is less reliable in India than in Western markets.

Scale thresholds

India CTV campaigns often run at smaller scales than equivalent US campaigns, which can make it harder to achieve statistical significance. A brand spending ₹20 lakh on CTV over four weeks may not generate enough conversion events in each group for a statistically valid test. In these cases, extend the campaign duration, broaden the target audience, or use a less precise directional measure (branded search uplift) instead of a formal incrementality test.