Attribution models for lifecycle: which one to defend in which room
Every lifecycle team has the attribution conversation at least once a quarter. Which model is 'right' depends entirely on what question you're answering and who's in the room. This isn't a debate where one side wins; it's a toolbox where different tools fit different jobs. Here's the operator's map.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The four models, in plain operator terms
Last-touch: credit goes to the most recent touchpoint before conversion. Simple, easy to report, and almost always wrong for lifecycle: an email clicked two minutes before purchase gets all the credit, while the onboarding sequence from three months ago gets none.
First-touch: credit to the first interaction. Better for acquisition attribution (which channel brought them in) than for lifecycle.
Multi-touch: credit is distributed across multiple touchpoints. Linear (equal weight), time-decay (recent touches weighted more), position-based (first + last weighted more). More accurate than last-touch but introduces arbitrary weighting choices.
Incrementality: compare the revenue of users who received messages to users who didn't (holdout group). Answers the causal question — would this conversion have happened anyway? The holdout guide covers the design.
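To make the weighting differences concrete, here's a minimal Python sketch of how the first three heuristic models (incrementality isn't a credit-splitting rule) would divide credit over a single three-touch journey. The channel names, half-life, and 40% endpoint share are illustrative assumptions, not defaults from any attribution tool.

```python
def last_touch(path):
    """All credit to the final touchpoint before conversion."""
    return {t: (1.0 if i == len(path) - 1 else 0.0) for i, t in enumerate(path)}

def first_touch(path):
    """All credit to the first touchpoint."""
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(path)}

def linear(path):
    """Equal credit to every touchpoint."""
    return {t: 1.0 / len(path) for t in path}

def time_decay(path, half_life=2.0):
    """More credit to recent touches; weight halves every
    `half_life` steps back from the conversion."""
    raw = [0.5 ** ((len(path) - 1 - i) / half_life) for i in range(len(path))]
    total = sum(raw)
    return {t: w / total for t, w in zip(path, raw)}

def position_based(path, endpoint_share=0.4):
    """First and last touch each get `endpoint_share`; middle touches
    split the remainder. Two-touch paths split 50/50."""
    if len(path) == 1:
        return {path[0]: 1.0}
    credit = dict.fromkeys(path, 0.0)
    middle = path[1:-1]
    share = endpoint_share if middle else 0.5
    credit[path[0]] += share
    credit[path[-1]] += share
    for t in middle:
        credit[t] += (1.0 - 2 * endpoint_share) / len(middle)
    return credit

# A three-touch journey ending in a purchase (channel names are made up).
path = ["onboarding_email", "paid_search", "promo_email"]
print(last_touch(path))    # promo_email gets 1.0; the onboarding email gets 0.0
print(time_decay(path))    # weights rise toward the most recent touch
```

Run the same path through each function and you can see the political stakes: the identical journey credits the onboarding email anywhere from 0% to 40% depending purely on model choice.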
Which model for which question
| Question | Best model | Why |
|---|---|---|
| Which campaign generated this sale? | Last-touch | Operational attribution. Fine for daily dashboards. |
| Which program is most valuable? | Multi-touch | Credits the journey, not just the final click. |
| Is lifecycle worth running at all? | Incrementality | Only model that answers the causal question. |
| Should we kill this program? | Incrementality | Attribution models can't tell you what would happen without the program. |
| How are acquisition channels performing? | First-touch + multi-touch | Lifecycle isn't in this debate; it's an acquisition question. |
| What's the monthly revenue contribution of email? | Incrementality (quarterly) | Attribution inflates the number; leadership eventually notices. |
The political layer
Attribution debates aren't really about correctness. They're about whose team gets credit and whose budget survives the next cycle.
Every attribution conversation has two layers: the epistemological one (which model is most accurate?) and the political one (whose team gets credit?). Lifecycle marketing usually loses the political layer under last-touch because most conversions have an intervening paid or organic touchpoint that gets the last-click. Multi-touch gives lifecycle some credit; incrementality gives lifecycle the credit it actually deserves.
The operator tactic: report in whichever model best reflects the lifecycle program's actual contribution, and keep a second number ready for when leadership questions it. Last-touch for operational reports, incrementality for budget conversations. The Attribution Audit skill covers how to structure dual-model reporting without it looking like attribution-shopping.
The credit problem with multi-touch
Multi-touch models sound fair but introduce a different problem: arbitrary weighting. Linear gives equal credit to all touchpoints — but is the 16th email really as valuable as the first? Time-decay gives more credit to recent touches — but that privileges channels that touch near the conversion (which is often paid, not lifecycle).
The cleanest multi-touch approach: assign weights based on holdout-measured incrementality per touch type. If a holdout says your email program produces 40% of tested revenue and your ads produce 35% and your content 25%, the weighting is evidence-based rather than arbitrary. Most programs don't do this because it requires running holdouts across multiple channels simultaneously.
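Under that approach, the weighting step itself is trivial; the expensive part is the multi-channel holdouts that produce the inputs. A minimal sketch, with hypothetical revenue figures chosen to match the 40/35/25 split above:

```python
# Hypothetical holdout-measured incremental revenue per channel. Real numbers
# would come from running a holdout in each channel over the same period.
incremental_revenue = {"email": 40_000, "ads": 35_000, "content": 25_000}

total = sum(incremental_revenue.values())
weights = {ch: rev / total for ch, rev in incremental_revenue.items()}
# weights -> {"email": 0.40, "ads": 0.35, "content": 0.25}

def credit_conversion(path, weights):
    """Split one conversion's credit across the channels in its path,
    proportional to each channel's holdout-measured weight. Assumes each
    channel appears at most once in the path."""
    path_weight = sum(weights[ch] for ch in path)
    return {ch: weights[ch] / path_weight for ch in path}

# A conversion touched by email and ads: email gets 0.40 / 0.75, about 53%.
print(credit_conversion(["email", "ads"], weights))
```

The point of the sketch: once the weights are evidence-based, the per-conversion split is just renormalisation, and the "is the 16th email as valuable as the first?" debate moves to where it belongs, the holdout design.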
When to use incrementality vs attribution
Run attribution in your regular operational dashboards — daily, weekly, monthly. It tells you where conversions happened and which touchpoints were in the path. Useful for spotting broken programs, evaluating message creative, and tactical decisions.
Run incrementality for budget and strategic questions — quarterly or annually. It tells you whether the program is worth running and how much revenue it's actually producing. Useful for allocation decisions, program kill calls, and defending lifecycle marketing to finance.
The mistake: using attribution for budget conversations. Attribution-based lifecycle revenue numbers are almost always optimistic (they count conversions that would have happened anyway). When leadership eventually notices the gap between attribution-claimed revenue and bottom-line revenue, the credibility hit damages the whole lifecycle program. Better to under-promise with incrementality numbers and deliver consistently.
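For the budget conversation, the incrementality arithmetic itself is simple; the hard part is the random assignment and the patience. A back-of-envelope sketch with made-up figures:

```python
# Compare revenue per user between the treated group and a randomly assigned
# holdout that received no lifecycle messages. All figures are illustrative.
treated_users, treated_revenue = 90_000, 450_000.0  # received lifecycle messages
holdout_users, holdout_revenue = 10_000, 44_000.0   # randomly withheld

rev_per_treated = treated_revenue / treated_users   # $5.00 per user
rev_per_holdout = holdout_revenue / holdout_users   # $4.40 per user

# The per-user delta is causal (random assignment removes selection bias);
# scale it up to get the revenue the program actually produced.
incremental_per_user = rev_per_treated - rev_per_holdout
incremental_total = incremental_per_user * treated_users
lift = incremental_per_user / rev_per_holdout

print(f"Incremental revenue: ${incremental_total:,.0f} ({lift:.0%} lift)")
# -> Incremental revenue: $54,000 (14% lift)
```

Note the gap this exposes: last-touch attribution would have claimed every email-touched conversion, while the holdout shows the program caused $54,000, not the full $450,000 the treated group spent. That smaller, causal number is the one that survives a finance review.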
The retention economics guide covers how to frame incrementality numbers in CFO conversations specifically.
Frequently asked questions
- What's the best attribution model for lifecycle marketing?
- Depends on the question. Last-touch for daily operational reporting. Multi-touch for program-level valuation. Incrementality (holdouts) for budget decisions and strategic claims. No single model is 'best' — match the model to the question.
- Is last-touch attribution wrong?
- For lifecycle ROI specifically, yes — it systematically under-credits lifecycle touches in favour of the last click. For operational attribution (which campaign did this user click just before buying), it's fine. Don't use last-touch for budget conversations.
- Why not just use multi-touch?
- Multi-touch introduces arbitrary weighting. Linear, time-decay, position-based — each model produces a different number with no causal grounding. Multi-touch feels fair but isn't necessarily accurate. Use it for operational work; don't rely on it for existential program questions.
- What's incrementality and why is it different?
- Comparing users who received a message to users who didn't. Random assignment removes selection bias; the revenue delta is causal. Attribution models can't answer 'would this have happened anyway' — incrementality can. It's the most defensible measurement and the slowest to run (you need a quarter or two for statistical power).
- How often should I run a holdout study?
- Annually at minimum. Quarterly if your program is large enough to absorb the lost revenue and the measurement window. A holdout study for budget season and a second for mid-year review is the cadence that keeps lifecycle's financial case current.
- Can I use both attribution and incrementality?
- Yes — most mature programs do. Attribution in weekly dashboards, incrementality in the quarterly review. The two answer different questions and complement each other. The risk is attribution numbers getting cited in budget conversations; keep the separation deliberate.