
CTV Measurement Audit

Score your campaign measurement maturity across five domains. Find the gaps that are costing you insight — and the fixes that will close them.

20 questions
5 measurement domains
~6 min to complete
Free · no signup needed
What this audit covers

Five domains of CTV measurement

CTV measurement breaks down across five distinct layers. A gap in any one domain distorts your read of campaign performance. This audit identifies where your stack holds up and where it doesn't.

📡
Delivery & Tracking
5 questions · 20 pts
🎯
Attribution & Outcomes
4 questions · 16 pts
👁️
Viewability & Completion
4 questions · 16 pts
🛡️
Brand Safety & Suitability
4 questions · 16 pts
🇮🇳
India-Specific Readiness
3 questions · 12 pts

Measurement Audit — Self-Assessment

Answer for a specific campaign or your standard measurement setup. Be honest — this is for your eyes only.

0 / 20 answered
📡
Delivery & Tracking
5 questions · 20 points maximum
— / 20
How is impression tracking implemented on your CTV buy?
The method determines accuracy. Server-side pixels are far more reliable than client-side on TV hardware.
1
No impression tracking
2
Client-side pixel only
3
Server-side VAST pixel
4
SSAI + 3rd-party verification
Can you reconcile delivered impressions between your DSP, the publisher, and a third-party ad server?
Reconciliation gaps above 15% indicate a measurement problem, not just a reporting discrepancy. Three-way reconciliation is the industry standard.
Yes — three-way reconciliation in place
No — relying on one source only
What is your average campaign-level impression discrepancy between DSP and publisher reports?
Under 10% is acceptable. Over 20% means a significant portion of your reported delivery is unverified.
1
Over 25%
2
15–25%
3
5–15%
4
Under 5%
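The discrepancy bands above can be computed with a one-line formula. A minimal sketch — the choice of denominator is an assumption here (some teams divide by the DSP count or the average of the two; this version treats the publisher log as the reference):

```python
def impression_discrepancy(dsp_impressions: int, publisher_impressions: int) -> float:
    """Percentage gap between DSP-reported and publisher-reported delivery.

    Denominator convention (an assumption): the publisher count is the
    reference figure. Under 10% is the comfort zone cited in this audit.
    """
    if publisher_impressions == 0:
        raise ValueError("publisher count must be non-zero")
    return abs(dsp_impressions - publisher_impressions) / publisher_impressions * 100

print(round(impression_discrepancy(9_200_000, 10_000_000), 1))  # 8.0 — within the 10% band
```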
Are frequency caps enforced and verifiable at the household level across publishers?
Without cross-publisher frequency management, heavy users see the same ad 20+ times. This is one of the most common CTV waste drivers in India.
1
No frequency cap
2
Cap set, not verified
3
Verified per publisher
4
Verified cross-publisher
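Verifying a household-level cap means checking the exposure distribution, not just the cap setting. A simplified sketch of that check, assuming you can export serve logs keyed by a household identifier (the cap of 6 and the `hh-…` keys are illustrative, not values from this audit):

```python
from collections import Counter

def over_cap_share(household_keys, cap: int = 6) -> float:
    """Share of households whose exposure count exceeds `cap`.

    household_keys: one household identifier per ad served.
    A non-trivial share over cap means the cap is set but not enforced.
    """
    counts = Counter(household_keys)
    over = sum(1 for n in counts.values() if n > cap)
    return over / len(counts)

serves = ["hh-A"] * 22 + ["hh-B"] * 3 + ["hh-C"] * 5
print(over_cap_share(serves))  # one of three households is over cap
```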
Can you deduplicate reach across CTV, mobile, and desktop within a single campaign?
Cross-device deduplication is the foundation of true unique reach reporting. Without it, you are likely double-counting the same households.
Yes — cross-device deduplication is active
No — devices are measured independently
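The double-counting problem above is easy to see in miniature. A sketch of household-level deduplication, assuming some key joins devices to a home (hashed IP, identity-graph ID — the field names here are illustrative, not a vendor API):

```python
from collections import defaultdict

def unique_household_reach(impressions):
    """impressions: iterable of (device_id, household_key) pairs.

    Returns (raw device reach, deduplicated household reach).
    Measuring devices independently reports the first number as 'reach'.
    """
    devices_per_household = defaultdict(set)
    for device_id, household_key in impressions:
        devices_per_household[household_key].add(device_id)
    raw_device_reach = sum(len(d) for d in devices_per_household.values())
    return raw_device_reach, len(devices_per_household)

logs = [("ctv-1", "hh-A"), ("phone-1", "hh-A"), ("laptop-1", "hh-A"),
        ("ctv-2", "hh-B"), ("phone-2", "hh-B")]
print(unique_household_reach(logs))  # (5, 2): five devices, two unique households
```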
🎯
Attribution & Outcomes
4 questions · 16 points maximum
— / 16
What outcome measurement is connected to your CTV campaign?
CTV sits at the top of the funnel — connecting it to business outcomes requires specific tooling beyond standard video metrics.
1
Video metrics only (VCR, CPM)
2
Site visits / app installs
3
Online conversions tracked
4
Full funnel incl. offline lift
Have you run an incrementality or brand lift study specifically for CTV in the last 12 months?
Without a holdout or matched-market test, you cannot separate CTV-driven outcomes from organic baseline activity. Last-click attribution massively undervalues CTV.
Yes — lift study conducted
No — not yet
How are you attributing CTV-driven web or app traffic when a user converts on a different device?
The TV → phone conversion path is the most common CTV attribution challenge. IP-based matching is standard; ACR-based is more accurate but less common in India.
1
Not tracking this
2
Last-touch / heuristic
3
IP-based household match
4
ACR or deterministic match
Can you measure the marginal reach contribution of CTV vs. linear TV within the same campaign?
For advertisers still running linear, this is the headline measurement question for CTV investment justification — especially relevant for FMCG and auto categories in India.
Yes — incremental reach vs. linear is measured
No — CTV and linear are measured separately
👁️
Viewability & Completion
4 questions · 16 points maximum
— / 16
What viewability standard are you applying to your CTV buy?
MRC's CTV standard (100% pixels in view, 2 consecutive seconds) differs from desktop. Many India publishers still report against desktop standards — creating inflated viewability scores.
1
No standard applied
2
Desktop MRC standard
3
CTV MRC standard
4
CTV MRC + 3rd-party verified
Are video completion rate (VCR) quartile events (25/50/75/100%) tracked by a third party independent of the publisher?
Publisher-reported quartiles without independent verification are unauditable. Independent tracking is required for any post-campaign report that will be shared with a client.
Yes — third-party quartile tracking active
No — publisher-reported only
What is the typical video completion rate on your CTV campaigns?
Industry benchmark for non-skippable CTV in India: 85–95%. Below 80% usually signals IVT, poor content adjacency, or skip-rate measurement confusion.
1
Under 70%
2
70–80%
3
80–90%
4
90%+
Do you have invalid traffic (IVT) detection and filtering in place for your CTV inventory?
CTV IVT rates in India are higher than global benchmarks due to emulated device traffic. GIVT filtering is a minimum; SIVT detection requires specialist vendor support.
1
No IVT filtering
2
GIVT filtering only
3
GIVT + SIVT detection
4
Full IVT + MRC accredited vendor
🛡️
Brand Safety & Suitability
4 questions · 16 points maximum
— / 16
What content-adjacency controls are in place for your CTV buys?
Content adjacency on CTV is harder to verify than on web or mobile inventory — many Indian AVOD publishers cannot provide show-level or scene-level adjacency data.
1
None — run-of-network
2
Genre-level blocklists
3
Show-level controls
4
Scene-level / real-time verification
Have you verified that your ads did not appear in UGC (user-generated content) or pirated streams in the last campaign?
Pirated content delivery is a known risk in India's CTV ecosystem. Several DSP-served campaigns have been found serving against unmonitored UGC on grey-area AVOD platforms.
Yes — verified clean post-campaign
No — not checked or unknown
Are you using a GARM-aligned (or equivalent) brand suitability framework for CTV?
GARM's brand suitability framework provides a structured floor/ceiling approach to content adjacency. It's increasingly a client reporting requirement for agency teams in India.
Yes — GARM or equivalent framework applied
No — ad hoc blocklist approach only
Can you produce a post-campaign brand safety report with publisher-level and content-level detail?
Brand safety reporting at placement level is a basic accountability requirement. Without it, you cannot demonstrate responsible media buying to a client or internal compliance team.
1
No safety reporting
2
Summary flag only
3
Publisher-level report
4
Publisher + content-level report
🇮🇳
India-Specific Readiness
3 questions · 12 points maximum
— / 12
Does your measurement partner have India-specific CTV panel or calibration data?
Global MRC-accredited vendors calibrate against US and EU panels. Without India-specific calibration, reach and frequency estimates for Indian audiences can be off by 30–60%.
1
Global panel, no India data
2
India mobile panel only
3
India CTV panel (limited)
4
India CTV + BARC-calibrated
Is your campaign measured against BARC EKAM (or equivalent co-viewing-adjusted) audience data?
CTV in India is predominantly co-viewed — 2.4 viewers per screen on average. BARC's EKAM panel accounts for co-viewing. Not adjusting for this inflates unique reach by 1.5–2×.
Yes — co-viewing-adjusted audience data used
No — device-level impressions only
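The co-viewing adjustment above is a simple multiplier in its most basic form. A deliberately simplified sketch — in practice a panel such as BARC EKAM supplies the factor, which varies by genre, daypart, and region rather than being a flat 2.4:

```python
def coviewing_adjusted_audience(unique_devices_reached: int,
                                viewers_per_screen: float = 2.4) -> int:
    """Translate device-level reach into estimated people reached.

    2.4 viewers/screen is the average cited for Indian CTV; a real
    adjustment would apply panel-supplied factors per content segment.
    """
    return round(unique_devices_reached * viewers_per_screen)

print(coviewing_adjusted_audience(1_000_000))  # 2400000 people from 1M household screens
```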
Can you break out measurement by language (Hindi vs. regional) and geography (metro vs. Tier 2/3) for your CTV campaign?
India's CTV audience is not homogeneous. Without language and geo segmentation in measurement, you cannot optimise spend allocation or prove regional market penetration to clients.
1
No segmentation in reporting
2
Metro vs. non-metro only
3
Language or geo — not both
4
Language + geo + tier segmentation
Your Measurement Audit Score
— / 80
Measurement Maturity
01
Foundational
0 – 24 pts
02
Developing
25 – 44 pts
03
Mature
45 – 62 pts
04
Advanced
63 – 80 pts
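The four bands above map directly from the total score. A minimal sketch of that mapping, using the thresholds as listed:

```python
def maturity_tier(score: int) -> str:
    """Map a 0–80 audit score to the maturity bands shown above."""
    if not 0 <= score <= 80:
        raise ValueError("score must be between 0 and 80")
    if score <= 24:
        return "Foundational"
    if score <= 44:
        return "Developing"
    if score <= 62:
        return "Mature"
    return "Advanced"

print(maturity_tier(51))  # Mature
```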
Score by domain
India Readiness Score
Priority gaps

Where to focus next

Based on your answers, ordered by impact on measurement accuracy.

Capability readiness

What your stack can support

These are the measurement outputs your current setup enables — and what's still out of reach.