Marketing Attribution Models: What They Are, How They Work, and How to Choose
Marketing attribution models help you understand which channels drive conversions. Learn how first-touch, last-touch, linear, time-decay, position-based, and data-driven models work, where they fall short, and what to use instead.
A customer clicks a paid search ad on Monday. Sees a retargeting banner on Wednesday. Opens an email on Thursday. Converts on Friday through a direct visit. Which channel gets the credit?
That question sits at the center of marketing attribution, and the answer depends entirely on which attribution model you use. Pick the wrong one and you end up pouring budget into channels that look effective on paper but aren't actually driving results.
This guide breaks down the six most common marketing attribution models, explains when each one makes sense, and covers the growing limitations that have pushed many teams toward alternative measurement approaches.
What Is a Marketing Attribution Model?
A marketing attribution model is a set of rules that determines how credit for a conversion gets distributed across the touchpoints a customer interacted with before buying. Some models give all credit to a single interaction. Others spread it across every touchpoint in the journey.
Attribution matters because marketing budgets are finite. If you don't understand which channels actually influence purchasing decisions, you can't allocate spend effectively. According to Ruler Analytics, 77% of marketers believe they're either using the wrong attribution model or don't know if their current model is right.
Attribution models generally fall into two categories: single-touch and multi-touch.
Single-Touch Attribution Models
Single-touch models assign 100% of the conversion credit to one interaction. They're the simplest to implement and understand, but they ignore everything else that happened along the customer journey.
First-Touch Attribution
First-touch attribution gives all conversion credit to the very first interaction a customer had with your brand. If someone first discovered you through a Facebook ad and later converted through a Google search, Facebook gets 100% of the credit.
When it's useful: Evaluating which channels are best at generating initial awareness and filling the top of the funnel. If you're trying to understand what brings new audiences to your brand in the first place, first-touch gives you a clear (if incomplete) signal.
Where it falls short: It completely ignores the nurturing and closing stages. A channel might be great at generating first impressions but terrible at driving actual revenue. Teams that rely solely on first-touch attribution tend to over-invest in awareness campaigns and under-invest in the channels that move people toward a purchase.
Last-Touch Attribution
Last-touch attribution is the mirror image: 100% of the credit goes to the final interaction before conversion. This is still the default model in many analytics setups, including several ad platforms.
When it's useful: Understanding which channels close deals. For businesses with short sales cycles and simple customer journeys (one or two touchpoints), last-touch can be a reasonable approximation.
Where it falls short: Last-touch systematically over-credits bottom-funnel channels. Branded search and retargeting always look like the heroes because they're the last thing people interact with before converting. But those channels are often capturing demand that was created by something else. As a result, teams using last-touch tend to cut upper-funnel spend that is quietly doing the heavy lifting.
Platform-level ROAS reporting typically defaults to last-touch or last-click logic, which is one reason those numbers tend to overstate performance.
Multi-Touch Attribution Models
Multi-touch attribution models distribute credit across multiple interactions in the customer journey. They're more realistic than single-touch models because modern buyers rarely convert after a single exposure. According to research from Salesforce, the average customer interacts with a brand across multiple touchpoints before making a purchase decision, and that number continues to climb.
Linear Attribution Model
The linear attribution model divides credit equally among every touchpoint. If there were four interactions before a conversion, each one gets 25%.
When it's useful: As a starting point when you have no strong hypothesis about which touchpoints matter most. Linear attribution at least acknowledges that the full journey matters, not just one moment.
Where it falls short: It treats every interaction as equally important. A casual display ad impression gets the same credit as a high-intent product demo request. That creates a lot of noise, especially in longer sales cycles where customers accumulate dozens of touchpoints. Linear attribution can make it difficult to separate signal from background activity.
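The three rule-based models covered so far reduce to a few lines of code. Here is a minimal sketch in Python; the journey data and channel names are hypothetical, chosen to mirror the Monday-to-Friday example from the introduction:

```python
def assign_credit(touchpoints, model):
    """Distribute 100% of conversion credit across an ordered list of
    touchpoints according to a simple rule-based attribution model."""
    if not touchpoints:
        return {}
    if model == "first_touch":
        return {touchpoints[0]: 1.0}
    if model == "last_touch":
        return {touchpoints[-1]: 1.0}
    if model == "linear":
        share = 1.0 / len(touchpoints)
        credit = {}
        for channel in touchpoints:  # a channel can appear more than once
            credit[channel] = credit.get(channel, 0.0) + share
        return credit
    raise ValueError(f"unknown model: {model}")

# The paid search -> retargeting -> email -> direct journey:
journey = ["paid_search", "retargeting", "email", "direct"]
print(assign_credit(journey, "first_touch"))  # {'paid_search': 1.0}
print(assign_credit(journey, "last_touch"))   # {'direct': 1.0}
print(assign_credit(journey, "linear"))       # each channel gets 0.25
```

The same journey produces three different answers depending on the model, which is exactly why model choice matters.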
Time-Decay Attribution Model
The time-decay attribution model gives progressively more credit to touchpoints that occurred closer to the conversion event. A touchpoint from two weeks ago gets less credit than one from yesterday.
Most time-decay implementations use a half-life approach. If the half-life is set to seven days, a touchpoint from a week before conversion receives half the credit of one that happened on the day of conversion. A touchpoint from two weeks out receives a quarter. Matomo's analysis of time-decay attribution explains this decay math in more detail.
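The half-life math above can be sketched in a few lines. This is one common way to implement it (the channel names and timings are hypothetical), weighting each touchpoint by `0.5 ** (days_out / half_life)` and normalizing so the weights sum to 1:

```python
def time_decay_credit(touchpoints, half_life_days=7.0):
    """Assign credit with exponential decay: a touchpoint loses half
    its weight for every `half_life_days` it sits before conversion.
    `touchpoints` is a list of (channel, days_before_conversion) pairs."""
    raw = {}
    for channel, days_out in touchpoints:
        weight = 0.5 ** (days_out / half_life_days)
        raw[channel] = raw.get(channel, 0.0) + weight
    total = sum(raw.values())
    return {channel: w / total for channel, w in raw.items()}

journey = [("conference", 14), ("email", 7), ("retargeting", 0)]
credit = time_decay_credit(journey)
# Raw weights 0.25 : 0.5 : 1.0 normalize to roughly 14% / 29% / 57%,
# so the conference touch two weeks out gets the smallest share.
```

Note how the conference booth that started the relationship ends up with about 14% of the credit, which previews the weakness discussed below.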
When it's useful: B2B companies with long sales cycles, where the most recent interactions are generally more indicative of buying intent. Also useful for businesses running sequential campaigns where later stages are specifically designed to close.
Where it falls short: It systematically devalues the interactions that started the relationship. If a conference booth visit introduced a prospect who later converted through a nurture sequence, time-decay gives almost no credit to that conference, even though the conversion might never have happened without it.
Position-Based (U-Shaped) Attribution Model
Position-based attribution, sometimes called U-shaped, assigns 40% of credit to the first touchpoint, 40% to the last touchpoint, and splits the remaining 20% evenly among everything in between.
There's also a W-shaped variant that adds a third anchor point (typically the lead creation or opportunity creation moment), distributing credit in a 30/30/30/10 pattern.
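A U-shaped split is straightforward to express in code. This sketch implements the 40/40/20 pattern with hypothetical channel names; the handling of one- and two-touch journeys is a common convention, not a standard:

```python
def position_based_credit(touchpoints, first_weight=0.40, last_weight=0.40):
    """U-shaped attribution: anchor weights on the first and last touch,
    remainder split evenly among the middle touches."""
    n = len(touchpoints)
    credit = {}

    def add(channel, share):
        credit[channel] = credit.get(channel, 0.0) + share

    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        # No middle touches: split everything between the two anchors.
        add(touchpoints[0], 0.5)
        add(touchpoints[-1], 0.5)
        return credit
    middle_share = (1.0 - first_weight - last_weight) / (n - 2)
    add(touchpoints[0], first_weight)
    for channel in touchpoints[1:-1]:
        add(channel, middle_share)
    add(touchpoints[-1], last_weight)
    return credit

journey = ["facebook_ad", "blog_post", "webinar", "branded_search"]
credit = position_based_credit(journey)
# facebook_ad 40%, blog_post 10%, webinar 10%, branded_search 40%
```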
When it's useful: When you believe both the introduction and the close are the most important moments in the journey, but you still want to give some acknowledgment to the middle touches. Many B2B teams gravitate toward position-based models because they align with how pipeline stages work.
Where it falls short: The 40/40/20 split is completely arbitrary. There's no empirical basis for those weights. They might be directionally right for your business, or they might be wildly off. You won't know unless you validate them against actual incremental results, which brings us back to the core problem with all rule-based models.
Data-Driven Attribution Model
Data-driven attribution (DDA) uses machine learning to assign credit based on statistical analysis of your actual conversion data. Instead of applying fixed rules, it compares converting paths against non-converting paths to identify which touchpoints genuinely influenced outcomes.
Google's DDA model, now the default in Google Ads and GA4, uses a counterfactual approach: it estimates what would have happened without a specific touchpoint and assigns credit accordingly. Google recommends having at least 15,000 clicks and 600 conversions over a 30-day period for the model to produce reliable results.
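To build intuition for the counterfactual idea, here is a toy "removal effect" sketch: for each channel, it asks what fraction of conversions would be lost if paths through that channel could no longer convert, then normalizes those losses into credit shares. The paths are invented, and this is a drastic simplification of Google's production model, which operates on far richer data:

```python
def removal_effect_credit(paths):
    """Toy counterfactual attribution. `paths` is a list of
    (touchpoint_list, converted_bool) pairs. A channel's credit is
    proportional to the conversions that disappear if every path
    containing that channel is assumed unable to convert."""
    total_conversions = sum(1 for _, converted in paths if converted)
    channels = {ch for path, _ in paths for ch in path}
    loss = {}
    for channel in channels:
        surviving = sum(
            1 for path, converted in paths
            if converted and channel not in path
        )
        loss[channel] = total_conversions - surviving
    total_loss = sum(loss.values())
    return {ch: l / total_loss for ch, l in loss.items()}

paths = [
    (["search", "email"], True),
    (["search"], True),
    (["display", "email"], True),
    (["display"], False),
]
credit = removal_effect_credit(paths)
# search and email each appear in 2 of 3 converting paths (credit 0.4
# each); display appears in only 1 (credit 0.2) and also in a
# non-converting path, so the data, not a fixed rule, sets the weights.
```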
When it's useful: If you have large volumes of digital conversion data and want something more sophisticated than rule-based models. DDA adapts to your specific data rather than imposing assumptions, which is a genuine improvement.
Where it falls short: Despite the name, data-driven attribution still has major blind spots:
- It's limited to digital. DDA can't credit TV, radio, out-of-home, direct mail, or any offline channel. If a meaningful portion of your budget is offline, the model is working with incomplete information.
- It's platform-scoped. Google's DDA only sees interactions within Google's ecosystem. Meta's attribution only sees Meta. Neither sees the full picture, and both are incentivized to take maximum credit for conversions.
- It degrades with privacy changes. DDA depends on user-level tracking, which Apple's App Tracking Transparency and browser cookie restrictions have significantly weakened. According to reporting on iOS privacy changes, the majority of iOS users now opt out of cross-app tracking, leaving large gaps in the data these models rely on.
For a deeper look at how data-driven attribution compares to media mix modeling, see our breakdown of multi-touch attribution vs. media mix modeling.
Comparing Attribution Models Side by Side
| Model | Credit Distribution | Best For | Biggest Weakness |
|---|---|---|---|
| First-touch | 100% to first interaction | Top-of-funnel analysis | Ignores closing channels |
| Last-touch | 100% to last interaction | Short, simple journeys | Over-credits bottom funnel |
| Linear | Equal across all touchpoints | No prior assumptions | Treats all touches equally |
| Time-decay | Weighted toward recent touches | Long B2B sales cycles | Devalues early interactions |
| Position-based | 40% first, 40% last, 20% middle | Pipeline-stage alignment | Arbitrary weight splits |
| Data-driven | ML-weighted based on conversion data | High-volume digital campaigns | Digital-only, platform-scoped |
Why Attribution Models Are Losing Ground
Attribution models were built for a world where you could follow a user from first click to conversion. That world is shrinking.
The Privacy Problem
Apple's introduction of App Tracking Transparency in 2021 gave iOS users an explicit opt-out from cross-app tracking, and most of them took it. Safari's Intelligent Tracking Prevention limits first-party cookies set via JavaScript to just seven days. Firefox and other browsers have implemented similar protections. According to Direct Agents' analysis of the attribution landscape, these changes have made traditional attribution increasingly unreliable.
The result: the individual-level tracking data that powers every attribution model described above is getting thinner every quarter. Models that depend on complete journey data produce misleading results when that data has gaps.
The Offline Blind Spot
None of the attribution models above can measure offline channels. If you're spending on TV, radio, events, sponsorships, direct mail, or out-of-home, those dollars are invisible to click-based attribution. For many businesses, offline represents 30-50% or more of total marketing spend.
Ignoring half your spend when making allocation decisions is not a measurement strategy. It's a guess.
The Self-Reporting Problem
When you use a platform's built-in attribution (Google Ads conversion tracking, Meta's attribution window, TikTok's pixel), you're letting each platform grade its own homework. Every platform is incentivized to claim as much credit as possible, which is why adding up conversions reported across all your platforms will almost always exceed your actual total conversions.
Cross-channel attribution attempts to solve this deduplication problem, but it still relies on tracking infrastructure that is deteriorating.
What to Use Instead (or Alongside)
Attribution models aren't useless. They provide directional signals about digital campaign performance, and they're available at a level of granularity that other approaches can't match. But they shouldn't be the only tool in your measurement stack.
Media Mix Modeling
Media mix modeling (MMM) takes a fundamentally different approach. Instead of tracking individual users, it analyzes aggregate spend and outcome data at the channel level over time. By using statistical regression, it isolates the contribution of each channel while controlling for external factors like seasonality, holidays, and market trends.
MMM solves the three biggest attribution problems:
- It works without cookies or user-level tracking. Since it runs on aggregate data, privacy changes don't affect it. As Funnel.io's comparison of MMM and MTA notes, MMM is "cookie-independent and does not require user-level data at all."
- It measures offline and online together. TV, radio, direct mail, and digital all go into the same model, giving you a unified view of marketing effectiveness across your entire budget.
- It captures diminishing returns. MMM shows you not just average performance but marginal performance at your current spend level. Your first $50,000 on a channel and your last $50,000 don't perform the same, and understanding those saturation curves is critical for budget optimization.
The tradeoff is granularity. MMM works at the channel level, not the campaign or creative level. For in-flight creative optimization, you still need platform analytics.
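To show the shape of the approach, here is a deliberately minimal MMM sketch: a least-squares fit of weekly conversions against aggregate channel spend. All numbers are synthetic (generated from a known formula so the regression can recover it), and a real MMM would add adstock transforms, saturation curves, and seasonality controls that this omits entirely:

```python
import numpy as np

# Hypothetical weekly aggregates: spend per channel in $ thousands
# (columns: TV, paid search, direct mail) and total conversions.
# Conversions are synthetic: 300 baseline + 20/15/10 per $1k of spend.
spend = np.array([
    [10.0, 5.0, 2.0],
    [12.0, 4.0, 2.0],
    [ 8.0, 6.0, 3.0],
    [15.0, 5.0, 1.0],
    [ 9.0, 7.0, 2.0],
    [11.0, 6.0, 3.0],
])
conversions = np.array([595.0, 620.0, 580.0, 685.0, 605.0, 640.0])

# Fit: conversions ~ baseline + sum(coef_i * spend_i), least squares.
X = np.column_stack([np.ones(len(spend)), spend])
coefs, *_ = np.linalg.lstsq(X, conversions, rcond=None)
baseline, tv, search, mail = coefs
# Each coefficient estimates incremental conversions per extra $1k on
# that channel: offline (TV, mail) and online (search) in one model,
# with no user-level tracking anywhere in the input table.
```

The key point is the input shape: channel-level totals per period, not individual click paths, which is why privacy changes don't touch it.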
Incrementality Testing
Incrementality testing is the gold standard for proving that a channel actually causes conversions rather than just being present when they happen. By running controlled experiments, such as geo-lift tests where you turn off a channel in certain markets and compare results, you get causal evidence that no attribution model can provide.
Incrementality tests are especially valuable for validating what your attribution models (or your MMM) are telling you. If your data-driven attribution says Meta is your top performer but an incrementality test shows minimal lift when you pause Meta spend, you've uncovered a serious misattribution.
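The readout from a geo-lift test can be sketched as a simple comparison. The numbers below are hypothetical, and this computes only the point estimate; a real test would use matched markets and a significance check:

```python
def geo_lift(test_conversions, control_conversions, scale=1.0):
    """Naive geo-lift readout: compare conversions in test markets
    (where the channel was paused) against control markets, with
    `scale` adjusting for differing market sizes. Returns the
    absolute and relative lift versus the control-based expectation."""
    expected = control_conversions * scale
    lift = test_conversions - expected
    return lift, lift / expected

# Hypothetical: channel paused in test geos. Control geos are twice
# the size of test geos, so expected test volume is control * 0.5.
lift, pct = geo_lift(test_conversions=460, control_conversions=1000, scale=0.5)
# lift = -40 conversions, pct = -0.08: pausing the channel appears to
# have cost ~8% of conversions, i.e. the channel is incremental.
```

If that same channel's attributed numbers implied it drove 40% of conversions, the gap between 40% claimed and 8% measured is the misattribution the test uncovers.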
The Measurement Stack Approach
The most effective teams aren't choosing one method. They're building a measurement stack:
- MMM for strategic allocation. Understand which channels drive the most incremental ROI and where to shift budget at the macro level.
- Attribution for tactical optimization. Use platform-level attribution and data-driven models for in-flight campaign and creative decisions where granularity matters.
- Incrementality testing for validation. Run periodic experiments to verify that your models reflect reality and calibrate them when they don't.
This layered approach gives you both the big-picture view and the tactical detail, without over-relying on any single method's weaknesses.
How to Choose an Attribution Model
If you're still deciding which attribution model to implement (or whether to stick with your current one), here's a practical framework:
If you have a simple, mostly digital customer journey with one or two touchpoints and a short sales cycle, last-touch attribution might be good enough. Don't over-engineer it.
If you have a multi-channel digital journey with three or more touchpoints, move to a multi-touch model. Time-decay is a reasonable default for B2B. Position-based works well if you have clearly defined pipeline stages.
If you have enough conversion volume (thousands of conversions per month), use data-driven attribution in Google Ads and GA4. It will outperform any rule-based model you could configure manually.
If you spend on offline channels or care about understanding your full marketing mix, attribution models alone aren't enough. Layer in media mix modeling to capture what click-based tracking can't see.
If you want to know the truth about whether a channel is truly driving results, run incrementality tests. No model, no matter how sophisticated, can replace controlled experiments for establishing causation.
The trend in data-driven marketing is clear: the teams that combine multiple measurement approaches make better decisions than those relying on any single model. Attribution models are one piece of that puzzle. They're just not the whole picture.