I had coffee with a CMO last month who said something that’s been stuck in my head ever since: “We’ve gotten so good at measuring marketing that we’ve forgotten how to actually do it.”
She laughed when she said it, but I could see the frustration behind her eyes. Her team had implemented a sophisticated multi-touch attribution model. They had dashboards that tracked every click, view, and interaction. They could tell you the exact customer journey from first impression to final purchase.
And yet, their marketing had become… cautious. Conservative. Predictable.
Here’s what I’ve come to believe after spending the last decade managing millions in ad spend: Our obsession with perfect attribution is killing the most effective marketing strategies before they ever get a chance to work.
How We Got Here
The promise of multi-touch attribution was beautiful in its simplicity. Customers don’t convert in a straight line anymore: they see an Instagram ad, visit your website three weeks later, watch a YouTube video, get retargeted on Facebook, and finally convert through a Google search. Attribution models would help us understand which touchpoints actually mattered.
So we evolved our measurement. First-click was too simplistic. Last-click gave all the credit to bottom-funnel tactics. We built linear models, then time-decay models, then U-shaped and W-shaped models. Now we have algorithmic attribution powered by machine learning that promises to finally reveal the truth.
But somewhere along the way, measurement stopped being a tool and became a religion.
The Real Problem With Perfect Measurement
Let me tell you about a brand I worked with last year. Their attribution model showed their podcast sponsorships were generating a 0.8x return, losing money on every dollar spent. Meanwhile, Facebook retargeting was crushing it at 4.2x.
The finance team’s recommendation was obvious: kill the podcasts, pour everything into retargeting.
Except there was something the numbers weren’t showing. When we interviewed customers, almost every single one mentioned the podcast as the reason they trusted the brand enough to make that first purchase. The company’s customer acquisition cost (CAC) had dropped 23% since launching the podcast strategy. Average order value was up 18%. Customer lifetime value had climbed 31%.
None of this registered in the attribution model because attribution tracks the last thing that happened before conversion, not the first thing that made conversion possible.
The finance team saw a line item losing money. They couldn’t see the gravitational field the podcast created that made everything else work better.
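A quick back-of-envelope comparison makes the gap concrete. The percentage changes below come from the story above; the spend and baseline figures are hypothetical, invented purely to make the arithmetic visible:

```python
# Sketch: the attribution view vs. the blended view of the same podcast.
# The 0.8x, -23%, and +31% come from the anecdote; everything else
# (spend, baseline CAC, baseline LTV) is hypothetical.

podcast_spend = 50_000                     # hypothetical monthly spend
attributed_revenue = 0.8 * podcast_spend   # the model's "losing" 0.8x view

# Blended view: business-wide economics before vs. after the podcast.
cac_before, ltv_before = 120.0, 310.0      # hypothetical baselines
cac_after = cac_before * (1 - 0.23)        # CAC dropped 23%
ltv_after = ltv_before * (1 + 0.31)        # LTV climbed 31%

print(f"Attributed podcast ROI: {attributed_revenue / podcast_spend:.1f}x")
print(f"LTV:CAC before podcast: {ltv_before / cac_before:.2f}")
print(f"LTV:CAC after podcast:  {ltv_after / cac_after:.2f}")
```

Both calculations describe the same podcast; only the second one captures the business-wide lift the line item was creating.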
Three Things Attribution Will Never Capture
Market Creation
When you’re doing genuinely innovative marketing (creating new categories, reframing problems, introducing value propositions people didn’t know they needed), there’s no baseline to measure against. Attribution models don’t know what to do with this. They assign zero value or file it under “brand building,” which in most organizations is code for “unmeasurable and therefore suspicious.”
But market creation is often where the highest returns hide. You just can’t see them through a last-click lens.
Threshold Effects
Marketing doesn’t work on a smooth curve. It works in jumps and tipping points.
Run Pinterest ads at $5K/month and you’ll see underwhelming results. Bump it to $30K/month and suddenly you’re everywhere in your category’s consideration set. Your effective CPM drops by 40% because you’ve crossed the frequency threshold where the channel actually works.
Attribution models evaluate each incremental dollar, but they can’t tell you about the threshold you need to cross. This is why so many brands “test” channels at insufficient spend, see poor results, and conclude the channel doesn’t work-when really, they just never reached the point where it would have.
Long Time Horizons
Most attribution models use a 7-30 day lookback window because that’s what the tracking allows. But what about the customer who saw your YouTube ad in January, remembered your brand when the need arose in March, and googled you to convert?
Your attribution model gives 100% credit to branded search and 0% to the YouTube campaign that planted the seed. Over time, this creates systematic underinvestment in the top-of-funnel awareness that makes bottom-funnel conversion possible.
The Innovator’s Tax
Here’s where things get really problematic: The more innovative your marketing approach, the worse it performs in attribution models.
Attribution models are backwards-looking. They’re trained on historical data about how customers converted in the past. They literally cannot account for new behaviors, new pathways, or new touchpoint combinations that don’t exist in the training data.
Want to be the first brand in your category on TikTok? Your attribution model will show terrible performance for months because you’re building pathways that don’t exist in the model yet. By the time those pathways are established and the model catches up, your competitors have already copied you.
Want to invest in creating genuinely useful content that takes 18 months to compound? Your attribution model will punish you every single quarter until it suddenly doesn’t, and you’ll probably have killed the program by then.
The brands that win are the ones willing to pay what I call the Innovator’s Tax. But if you’re managing strictly to attribution, you’ll never write that check.
Data-Driven or Just Defensible?
Let’s be honest about what “data-driven marketing” has become in most organizations.
It’s not about using data to make better decisions. It’s about using data to make defensible decisions.
Nobody walks into the quarterly business review and asks, “Did we take smart risks that might pay off next year?” They ask, “What’s the ROI by channel and how do you know?”
Attribution models give you an answer. They give you a number. And that number provides air cover.
But there’s a huge difference between a decision you can defend with data and a decision that’s actually right.
Some of the best marketing investments I’ve seen looked completely insane from an attribution perspective:
- A B2B company spent $200K sponsoring a niche conference that “generated” only 3 deals according to attribution, but those customers had a combined lifetime value of $2.1M and became their flagship case studies
- A DTC brand invested in long-form content that showed a 0.3x ROI in the model, until 14 months later when it started ranking and became their most efficient acquisition channel
- An e-commerce company paused their “best” channel for a month and discovered 70% of that revenue just moved to branded search; the real driver was their Amazon presence, which wasn’t even in the attribution model
In every case, strict adherence to attribution would have led to the wrong call.
Attribution Theater
When we audit new clients at Sagum, we often find what I call “attribution theater”: the performance of data-driven decision making without the substance.
Here’s the pattern:
- Company implements a sophisticated attribution model at significant cost
- Data quality turns out to be compromised by iOS changes, cookie restrictions, cross-device issues, and incomplete tracking
- Team starts making “adjustments” to compensate for known gaps, essentially adding subjective judgment disguised as algorithmic output
- Decisions get made based on a model everyone privately knows is broken, but nobody wants to question it because so much has been invested and “data-driven” is in the company values
- The company feels sophisticated about their measurement while systematically starving anything the model can’t track
I’ve seen this at companies spending $50K/month and companies spending $5M/month. The budget changes, the pattern doesn’t.
Which Channels Get Hurt Most
Attribution models systematically undervalue channels with these characteristics: long consideration cycles, indirect conversion paths, and brand-building components.
Pinterest Gets No Respect
Someone discovers a product while planning a wedding or home renovation, saves it to a board, comes back to it three months later, and googles the brand name to buy.
Attribution sees: Google search drove the conversion.
Reality: Pinterest created the demand that search captured.
We see this constantly. Because customers go dark in the tracking data during that long middle period, attribution often understates Pinterest’s real ROI by 1.5-2.5x.
YouTube Takes the Hit
YouTube pre-roll excels at reaching people before they even know they need what you sell. The impact is awareness and consideration: planting seeds that sprout weeks later.
But attribution models are terrible at valuing awareness. They look for direct paths from impression to conversion.
What we’ve discovered through testing: brands that kill YouTube campaigns based on attribution often see branded search volume drop 10-25% over the next two to three months. That’s the real value: creating demand that other tactics convert. But it never shows up in the model.
TikTok Breaks Everything
Someone sees your TikTok ad, doesn’t click, but shares it with friends. One friend makes a duet. That duet gets 50K views. Someone from that audience converts via Google two weeks later.
Attribution sees: Google drove the conversion.
Reality: TikTok created a viral cascade that built awareness at scale.
TikTok’s distribution model is designed to break traditional tracking. That doesn’t mean it’s not working-it means your measurement approach is wrong for the platform.
A Better Way: Strategic Attribution
If perfect attribution is impossible and chasing it is counterproductive, what’s the alternative?
I call it Strategic Attribution-using multiple lenses instead of pretending one model reveals truth:
Lens One: Algorithmic Attribution
Still use multi-touch attribution. It’s not worthless, just incomplete. Treat it as one input among several. It’s useful for tactical optimization and spotting performance shifts.
Good for: Campaign optimization, directional insights, catching major changes
Bad for: Strategic decisions, valuing top-of-funnel, capturing untrackable impact
Lens Two: Incrementality Testing
This is the gold standard nobody wants to do because it’s hard: actually turn things off and measure what happens.
Run geo-holdout tests. Pause channels and track total revenue impact, not just attributed revenue. This reveals which channels generate incremental demand versus which ones just capture existing demand.
Good for: Understanding true incrementality, validating assumptions, finding channel interactions
Bad for: Continuous optimization, day-to-day tactical guidance
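As a sketch of what a geo-holdout readout can look like (all revenue and spend figures below are hypothetical, invented for illustration):

```python
# Sketch of a geo-holdout incrementality readout.
# "Holdout" geos had the channel paused; "control" geos kept it running.
# All figures are hypothetical weekly totals for comparable geo groups.

control_revenue = [102_000, 98_500, 101_200, 99_800]  # channel on
holdout_revenue = [88_400, 86_900, 90_100, 87_600]    # channel off

avg_control = sum(control_revenue) / len(control_revenue)
avg_holdout = sum(holdout_revenue) / len(holdout_revenue)

# Incremental revenue the channel actually drives per week --
# total business impact, not attributed revenue.
incremental = avg_control - avg_holdout
channel_spend_per_week = 8_000  # hypothetical

print(f"Incremental weekly revenue: {incremental:,.0f}")
print(f"Incremental ROAS: {incremental / channel_spend_per_week:.2f}x")
```

A channel with a glossy attributed ROAS can come out of a test like this near zero; one the model hates can come out strongly positive. That is the point of turning things off.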
Lens Three: Contribution Margin by Cohort
Stop looking at just revenue. Look at fully-loaded profit after accounting for:
- Product costs
- Fulfillment and shipping
- Returns and customer service
- Payment processing
- Lifetime value over 12-36 months
Channels that look mediocre on ROAS often deliver much better customers. They cost more to acquire but buy more, return less, and stay longer.
Good for: Customer quality assessment, strategic investment decisions, profit optimization
Bad for: Quick decisions, new channels without LTV data yet
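Here’s a minimal sketch of the comparison this lens enables. The per-channel figures are hypothetical; the cost categories mirror the list above:

```python
# Sketch: contribution margin vs. ROAS, with hypothetical channel figures.

def contribution_margin(revenue, product_cost, fulfillment,
                        returns_cs, processing, marketing_spend):
    """Fully-loaded profit after the cost categories listed above."""
    return (revenue - product_cost - fulfillment
            - returns_cs - processing - marketing_spend)

# Channel A: great ROAS (4.0x), weak customers (high returns and service cost).
a = contribution_margin(revenue=40_000, product_cost=14_000, fulfillment=6_000,
                        returns_cs=7_000, processing=1_200, marketing_spend=10_000)

# Channel B: mediocre ROAS (3.0x), strong customers (12-month LTV included).
b = contribution_margin(revenue=36_000, product_cost=12_600, fulfillment=5_000,
                        returns_cs=1_800, processing=1_100, marketing_spend=12_000)

print(f"Channel A margin: {a:,}")
print(f"Channel B margin: {b:,}")
```

On ROAS alone, Channel A wins; on fully-loaded profit, Channel B does. That inversion is exactly what this lens is for.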
Lens Four: Leading Indicators
Track signals that predict future performance:
- Branded search volume trends
- Direct traffic patterns
- Email signup rates by source
- Engagement metrics
- Category share of voice
- Traffic spikes after campaign launches
These indicators often signal effectiveness weeks or months before it shows up in conversions, especially for awareness channels where attribution struggles.
Good for: Early warnings, measuring top-of-funnel impact, catching threshold effects
Bad for: Satisfying finance teams, definitive profitability statements
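A leading-indicator check can be as simple as comparing branded search volume before and after a campaign launch. The weekly volumes and the 10% threshold below are hypothetical; real numbers would come from a source like Google Search Console:

```python
# Sketch: flagging a leading-indicator shift in branded search volume.
# All weekly volumes and the threshold are hypothetical.

def pct_change(before, after):
    """Relative change between the averages of two periods."""
    return (sum(after) / len(after)) / (sum(before) / len(before)) - 1

# Four weeks before vs. four weeks after a campaign launch.
before = [4_800, 5_100, 4_950, 5_000]
after = [5_600, 5_900, 6_300, 6_500]

change = pct_change(before, after)
print(f"Branded search volume change: {change:+.1%}")
if change > 0.10:  # hypothetical alert threshold
    print("Leading indicator: campaign is likely building demand")
```

It won’t satisfy the finance team, but it will tell you months earlier than conversions whether an awareness channel is doing its job.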
Lens Five: Customer Conversations
Talk to your customers. Ask them:
- “How did you first hear about us?”
- “What made you decide to buy?”
- “Where do you typically see our brand?”
The answers won’t be statistically perfect. But you’ll discover touchpoints that never show up in tracking because they can’t be measured.
We had a client discover through interviews that YouTube creators organically mentioning their product was their best acquisition source-something showing zero value in attribution. That insight completely changed their creator strategy.
Good for: Finding unmeasured channels, understanding perception, validating assumptions
Bad for: Precise measurement, continuous monitoring, convincing data purists
What I Actually Believe
After a decade in this industry, here’s my conviction: The brands winning right now aren’t the ones with the best attribution models. They’re the ones with the courage to invest in things their attribution can’t fully measure.
They’re running podcasts that “don’t convert.” They’re investing in TikTok before the ROI is clear. They’re creating tools and content that won’t pay off for 18 months. They’re building brand equity that attribution categorizes as “unmeasurable.”
And then, seemingly overnight, they hit an inflection point. CAC drops across all channels. Customer quality improves. Pricing power appears. The moat gets wider.
But if you demanded attribution justification for every dollar along the way, you’d never get there.
Five Questions Worth Asking
1. Are We Measuring to Learn or Measuring to Justify?
Measuring to learn means being open to surprises. Measuring to justify means looking for data that supports decisions you’ve already made or proves you’re being “scientific.”
2. What Would We Do With Zero Attribution Data?
Run this thought experiment. If you only had total revenue and total marketing spend-no channel breakdowns-how would you allocate budget? The answer often reveals strategies you’re suppressing because they can’t be perfectly measured.
3. What’s Our Innovation Budget?
Every organization should allocate 10-20% of its marketing budget to things that will look bad in attribution for the first six months. This is your innovation budget-testing new channels and formats before they’re legible to measurement systems.
If you don’t formally ring-fence this money, attribution will kill it every time.
4. Which Channels Are We Undervaluing?
Look for channels where consideration cycles are long, conversion paths are indirect, impact is primarily awareness-driven, platforms are new, or competitors aren’t invested yet. These are where attribution is most misleading.
5. How Much Weight Does Attribution Actually Get?
Be explicit. Maybe channels get evaluated 70% on attribution and 30% on strategic factors like reaching new audiences or building brand equity. The key is acknowledging attribution shouldn’t be the only lens.
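The 70/30 split can be written down as an explicit scoring rule. The weights mirror the example above; the per-channel ratings are hypothetical 0-10 scores a team might assign:

```python
# Sketch: making the attribution weight explicit in channel evaluation.
# Weights mirror the 70/30 example; all channel ratings are hypothetical.

ATTRIBUTION_WEIGHT = 0.7
STRATEGIC_WEIGHT = 0.3

def channel_score(attribution_score, strategic_score):
    """Blend the attribution view with strategic factors (both rated 0-10)."""
    return (ATTRIBUTION_WEIGHT * attribution_score
            + STRATEGIC_WEIGHT * strategic_score)

# Retargeting: strong in the model, adds few new customers.
retargeting = channel_score(attribution_score=9, strategic_score=3)
# Podcast: weak in the model, strong brand and audience-building value.
podcast = channel_score(attribution_score=4, strategic_score=9)

print(f"Retargeting: {retargeting:.1f}, Podcast: {podcast:.1f}")
```

The exact weights matter less than the act of writing them down: once they’re explicit, the team can argue about the weights instead of pretending the model decides.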
How We Actually Work
At Sagum, we practice what I call “strategic rigor without attribution tyranny.”
We track everything trackable. Our BI dashboards give clients real-time visibility into performance. We use algorithmic attribution. We optimize based on data.
But we also acknowledge what we can’t track. We use multiple frameworks to triangulate truth. We design incrementality tests. We track leading indicators. We talk to customers about unmeasured touchpoints.
And we reserve the right to recommend strategies that look bad in attribution if they’re strategically sound, because we understand that the highest returns often hide where attribution can’t see.
This isn’t choosing between data and strategy. It’s being sophisticated enough to recognize that some truths can’t be reduced to a dashboard.
The Final Irony
The platforms themselves don’t use the attribution logic they force on advertisers.
Facebook didn’t launch Reels because attributed ROI was clear. TikTok didn’t build their algorithm to maximize immediate conversions. YouTube didn’t invest billions in creator tools because of attribution models.
The platforms understand something crucial: the most valuable things can’t be measured while you’re building them.
Yet they’ve created ad systems demanding marketers justify every dollar with immediate returns.
Don’t fall for it.
Where This Leaves Us
Multi-touch attribution models are powerful tools we should absolutely use. But the moment we let them become the arbiter of all marketing truth, we’ve lost the thread.
The goal isn’t perfect attribution. The goal is profitable growth.
And the path to growth often runs through territory your attribution model can’t map.
The brands that will dominate the next five years won’t be the ones with the best measurement. They’ll be the ones brave enough to invest in building real value even when the model can’t see it yet.
They’ll understand that perfect measurement is the enemy of meaningful innovation.
They’ll remember that attribution models are maps, not territory, and the map always leaves out the most interesting terrain.
Have the courage to go there anyway.