Here’s something that keeps me up at night: we’ve spent half a decade pouring billions into Connected TV advertising while using a measurement playbook that fundamentally misunderstands what we’re actually buying.
The industry migration from linear to streaming promised us the best of both worlds: the impact of television with the accountability of digital. Instead, we’ve gotten something far messier: digital metrics awkwardly stapled onto a brand-building medium, creating the illusion of precision while obscuring actual insight.
And the worst part? Almost everyone in the room knows it, but we keep playing along anyway.
The Performance We’re All Putting On
Picture the last CTV campaign review you sat through. The deck probably showcased completion rates north of 95%, viewability numbers that would make any display buyer jealous, and attribution paths showing clear lines from ad exposure to website visits.
Impressive stuff. Also mostly theater.
These numbers aren’t technically wrong; they’re just answering questions nobody should be asking. We’ve built an entire measurement ecosystem designed to justify budget allocation to finance teams, not to make campaigns actually work better.
Think about what we’re really claiming: we know that 94.7% of your ad played on someone’s screen, but we have no clue if anyone was in the room. We can tell you someone hit your website two days after seeing your spot, but we can’t tell you if they remember a single frame of your creative. We’ve traded meaningful insight for measurable certainty.
How We Imported the Wrong Playbook
The root problem is simple: we took measurement frameworks built for performance channels and jammed them onto a medium that works nothing like performance channels.
Attribution Theater
CTV attribution modeling is basically creative writing with better data visualization. When someone sees your ad Thursday night during a streaming binge and converts on Saturday morning, drawing a straight line between those events requires some aggressive assumptions about causation.
CTV works like its linear TV predecessor: building brand awareness, creating mental shortcuts, shifting perceptions gradually over time. But because we can technically track cookies and device IDs, we’ve convinced ourselves we’re measuring direct cause-and-effect when we’re really just documenting coincidence with precise timestamps.
The Completion Rate Mirage
Let’s be honest about what a 97% completion rate actually means: 97% of people didn’t grab their phone fast enough to bail on your ad.
Completion rates measure:
- Whether your ad ran before, during, or after the content
- If the platform allows skipping
- How badly viewers want to get back to their show
- How close the remote is to the couch
Completion rates don’t measure attention, engagement, memorability, or impact. We’ve confused “the ad finished playing” with “the ad actually worked.”
Cross-Device Attribution Wishes
View-through attribution on CTV asks us to believe in magic: that we can reliably connect what a household watches on their TV to what an individual does on their phone three days later.
Making this work requires believing that household viewing maps neatly to individual behavior, that probabilistic device matching works reliably in a world without cookies, and that IP addresses accurately connect big screens to small ones. We’ve built an elaborate credit-claiming system and called it measurement.
What Smart Measurement Actually Looks Like
CTV is a lean-back, high-attention environment where people watch on big screens from their couches. It builds brands. If we’re going to measure it properly, we need metrics that acknowledge this reality.
Attention Quality, Not Just Exposure Quantity
Stop celebrating that 95% of people saw your whole ad. Start measuring what was probably happening when it ran:
- Context matters: An impression during prime-time appointment viewing with multiple people in the room is worth dramatically more than background noise at 2 AM
- Engagement signals tell stories: Did viewers pause after your spot? Rewind the content? These behaviors indicate you actually broke through
- Subsequent behavior reveals attention: When households keep watching related content instead of bouncing, you know you had their focus
The technology to measure these signals exists. We’ve just been too busy measuring the wrong things.
Brand Velocity Over Attribution Fantasy
CTV excels at building awareness and consideration efficiently. Instead of forcing the data to confess to driving conversions, measure the things CTV actually moves:
- Search lift patterns: How fast does branded search climb when campaigns run, and how long does the lift sustain? This shows real brand-building impact
- Consideration momentum: How quickly do consideration scores shift during flight periods versus when you’re dark?
- Competitive dynamics: Are you gaining share-of-search specifically against competitors, or just capturing existing demand?
These metrics accept that CTV’s job is making your brand mentally available when purchase decisions happen, not generating immediate clicks.
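To make search lift concrete, here’s a minimal Python sketch of the flight-versus-baseline comparison. The weekly counts are invented placeholders, and a real read would control for seasonality and paid-search changes:

```python
# Sketch: branded-search lift during a CTV flight vs. a pre-flight baseline.
# Weekly search volumes below are illustrative, not real data.

def search_lift(baseline_weeks, flight_weeks):
    """Percent lift of mean weekly branded-search volume during the
    flight over the pre-flight baseline."""
    base = sum(baseline_weeks) / len(baseline_weeks)
    flight = sum(flight_weeks) / len(flight_weeks)
    return (flight - base) / base * 100

baseline = [10_200, 9_800, 10_500, 10_100]  # four weeks before launch
flight = [11_900, 12_400, 12_100]           # weeks the campaign ran

print(f"branded-search lift: {search_lift(baseline, flight):.1f}%")
```

Tracking how quickly that lift decays after the flight ends is the “how long does it sustain” half of the question.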
Creative Intelligence That Actually Teaches You Something
Here’s where CTV gets genuinely exciting: it enables creative testing that linear TV could never support. But most advertisers barely scratch the surface.
Proper creative measurement includes:
- Hook testing: Which opening five seconds stop people from mentally checking out?
- Emotional mapping: Which moments in your creative correlate with downstream behavior shifts?
- Format matching: Does your :15 work better than your :30 in specific content environments?
The goal isn’t just knowing what works; it’s understanding why, so you can systematically get better.
Why Incrementality Testing Changes Everything
The most sophisticated advertisers have abandoned traditional attribution entirely in favor of geo-based incrementality testing. Instead of asking “did this person convert after seeing our ad?” they ask “did markets with CTV outperform matched markets without it?”
This requires:
- Holding out markets: Finding matched DMAs and intentionally going dark in some while maintaining spend in others
- Building synthetic controls: Creating statistical twins of test markets to isolate CTV’s actual impact
- Understanding interactions: Measuring how CTV works with other channels, not trying to isolate its solo contribution
It’s harder and more expensive than attribution modeling. It’s also the only approach that tells you if CTV is genuinely worth the investment.
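The core arithmetic of a matched-market read is a difference-in-differences: compare the change in test markets against the change in control markets over the same window. A minimal sketch, with all conversion figures invented; a real test needs many matched pairs and a proper significance model:

```python
# Sketch: difference-in-differences across matched DMAs.
# Conversion totals are illustrative placeholders.

def did_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    """Incremental conversions in test markets, net of the
    control-market trend over the same period."""
    test_delta = test_post - test_pre
    ctrl_delta = ctrl_post - ctrl_pre
    return test_delta - ctrl_delta

# Weekly conversions summed over matched markets, before vs. during flight
incremental = did_lift(test_pre=4_100, test_post=4_900,
                       ctrl_pre=3_950, ctrl_post=4_050)
print(f"incremental conversions attributable to CTV: {incremental}")
```

The control delta is what makes this honest: any conversions the market would have produced anyway get subtracted out instead of credited to the campaign.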
The Measurement Stack You Should Be Building
If I were building a CTV measurement system from scratch today, it would look nothing like the dashboards most agencies provide. Here’s the architecture:
Layer 1: Attention Modeling
Combine content type, daypart, household patterns, and device to score the attention probability of every impression. A spot shown while a family watches a season finale deserves a different quality score than one playing while someone falls asleep to reality TV.
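A toy version of that scoring logic, to show the shape of it. The features and weights here are invented for illustration; a production model would be trained on panel or ACR data, not hand-tuned:

```python
# Sketch: a toy attention-probability score per impression.
# Feature weights are invented; a real model would be fit to data.

ATTENTION_WEIGHTS = {
    "prime_time": 0.25,           # 8-11 PM viewing window
    "co_viewing": 0.20,           # signals of multiple people in the room
    "appointment_content": 0.30,  # live or new-episode content
    "big_screen": 0.15,           # TV glass vs. casting from a phone
}
BASELINE = 0.10  # floor for any served impression

def attention_score(impression: dict) -> float:
    """Probability-style attention score in [0, 1] for one impression."""
    score = BASELINE + sum(weight for feature, weight in ATTENTION_WEIGHTS.items()
                           if impression.get(feature))
    return min(score, 1.0)

finale = {"prime_time": True, "co_viewing": True,
          "appointment_content": True, "big_screen": True}
late_night = {"big_screen": True}

print(attention_score(finale))      # high-attention context
print(attention_score(late_night))  # low-attention context
```

The point isn’t the specific weights; it’s that two impressions with identical completion rates can carry wildly different attention value.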
Layer 2: Continuous Brand Tracking
Weekly or bi-weekly brand lift studies with large enough samples to catch small movements. Track how awareness, consideration, and perception shift over time instead of relying on clunky pre/post studies that miss the story between the bookends.
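“Large enough samples” is doing real work in that sentence. A rough two-proportion power calculation shows why, assuming you want to catch a two-point awareness move off a 30% baseline at roughly 95% confidence and 80% power (standard z-approximation; the figures are illustrative):

```python
# Sketch: per-wave sample size to detect a small shift in a tracked
# proportion (two-proportion z-test approximation, illustrative inputs).
from math import ceil

def sample_size(p_base, delta, z_alpha=1.96, z_beta=0.84):
    """Respondents per wave to detect a shift of `delta` at ~95%
    confidence and ~80% power."""
    p_bar = p_base + delta / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / delta ** 2
    return ceil(n)

# Detecting a 2-point move off a 30% awareness baseline:
print(sample_size(p_base=0.30, delta=0.02))
```

Thousands of respondents per wave, not hundreds, which is exactly why underpowered pre/post studies keep reporting noise as insight.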
Layer 3: Always-On Incrementality
Dedicate 10-20% of budget to rotating holdout markets so you’re constantly generating incrementality reads instead of running one-off tests twice a year.
Layer 4: Granular Creative Analysis
Frame-level analysis of what’s working, combining completion curves with downstream behavior to understand creative effectiveness beyond “people watched it.”
Layer 5: Ecosystem Modeling
Marketing mix models that treat CTV as part of a channel ecosystem, measuring how it amplifies and is amplified by everything else you’re doing.
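The “amplifies and is amplified by” idea has a concrete statistical form: an interaction term. A toy response function with invented coefficients shows how an MMM lets CTV’s contribution depend on what else is running; real models are fit on years of weekly data, not asserted like this:

```python
# Sketch: an MMM-style response function with a CTV x search interaction.
# All coefficients are invented for illustration, not fitted values.

def predicted_sales(ctv_spend, search_spend,
                    base=100_000, b_ctv=0.8, b_search=1.5,
                    b_interact=0.00002):
    """Weekly sales: baseline + main effects + a CTV-search synergy term."""
    return (base
            + b_ctv * ctv_spend
            + b_search * search_spend
            + b_interact * ctv_spend * search_spend)

# CTV's marginal contribution with and without search running:
solo_ctv = predicted_sales(50_000, 0) - predicted_sales(0, 0)
with_search = predicted_sales(50_000, 20_000) - predicted_sales(0, 20_000)
print(solo_ctv, with_search)
```

When the interaction coefficient is positive, the same CTV dollars return more with search live, which is the ecosystem effect that channel-by-channel attribution structurally cannot see.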
Why This Actually Matters
This isn’t academic navel-gazing. The measurement crisis in CTV is actively making campaigns less effective.
When you measure with the wrong metrics, you optimize for the wrong outcomes. Advertisers chase completion rates instead of building distinctive assets. They run performance creative designed for mobile feeds on 65-inch screens. They treat CTV like bottom-funnel search when it should be doing the heavy lifting of brand building.
The tragic irony: we finally have a TV-like medium with real measurement potential, and we’re squandering it by measuring things that don’t matter while ignoring things that do.
The Audit You Need to Run
If you’re running CTV right now, take an honest look at your measurement stack and ask:
- Are we measuring these things because they matter or because they’re easy to measure?
- Would we make different decisions if we focused on brand lift velocity instead of attribution paths?
- Are we optimizing for proving performance or improving performance?
- If we designed measurement to make campaigns better instead of to justify budgets, what would change?
The answers will probably make you uncomfortable. That’s the entire point.
What Winning Looks Like
The advertisers who dominate CTV over the next five years won’t have the fanciest attribution models. They’ll be the ones who recognized that CTV demands a fundamentally different measurement philosophy.
They’ll measure attention over exposure. Brand building over conversion tracking. Incrementality over attribution. They’ll stop measuring everything and start measuring what matters.
We’ve built a measurement industrial complex optimized for telling executives what they want to hear instead of telling marketers what they need to know. Until we acknowledge that CTV is a brand-building medium requiring brand-building metrics, we’ll keep generating gorgeous dashboards that optimize for exactly the wrong things.
The data exists. The technology is ready. What’s missing is the guts to admit we’ve been measuring CTV like it’s something it isn’t, and the discipline to start over with metrics that reflect reality.
CTV works. The question is whether we’re brave enough to measure it honestly.