Updated · 10 min read
Attribution models for lifecycle: which one to defend in which room
Every lifecycle team has the attribution conversation at least once a quarter. Which model is 'right' depends entirely on the question you're answering and who's in the room. This isn't a debate where one side wins; it's a toolbox where different tools fit different jobs. Here's the operator's map, and the political layer nobody wants to name out loud.
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The four models, in plain operator terms
Last-touch: credit to the most recent touchpoint before conversion. Simple. Easy to report. Almost always wrong for lifecycle. An email clicked two minutes before purchase gets everything. The onboarding sequence from three months ago gets nothing.
First-touch: credit to the first interaction. Useful for acquisition attribution — which channel brought this user in — and largely beside the point for lifecycle.
Multi-touch: credit is distributed across touchpoints. Linear gives equal weight. Time-decay weights recent touches more. Position-based weights first and last. More accurate than last-touch and still built on arbitrary weighting choices nobody will defend when pressed.
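To make the weighting choices concrete, here's a minimal sketch of how the three common multi-touch schemes split credit across one conversion path. The touchpoint names, the 7-day half-life, and the 40% endpoint share are illustrative assumptions, not a standard.

```python
# Three multi-touch weightings applied to a single conversion path.
# All values are illustrative; the half-life and endpoint share are
# exactly the kind of arbitrary choice the text is warning about.

def linear(touches):
    # Equal credit to every touchpoint.
    w = 1 / len(touches)
    return {t: w for t in touches}

def time_decay(touches, days_before_conversion, half_life=7):
    # Credit halves for every `half_life` days of distance from conversion.
    raw = [0.5 ** (d / half_life) for d in days_before_conversion]
    total = sum(raw)
    return {t: r / total for t, r in zip(touches, raw)}

def position_based(touches, endpoint_share=0.4):
    # First and last touch each get `endpoint_share`; the middle splits the rest.
    n = len(touches)
    if n == 1:
        return {touches[0]: 1.0}
    credits = {t: 0.0 for t in touches}
    share = endpoint_share if n > 2 else 0.5
    credits[touches[0]] += share
    credits[touches[-1]] += share
    if n > 2:
        mid = (1 - 2 * endpoint_share) / (n - 2)
        for t in touches[1:-1]:
            credits[t] += mid
    return credits

path = ["onboarding_email", "paid_retargeting", "promo_email"]
days = [90, 10, 0]  # days before conversion

print(linear(path))            # every touch weighted equally
print(time_decay(path, days))  # the 90-day-old onboarding touch nearly vanishes
print(position_based(path))    # endpoints dominate regardless of channel
```

Run the three on the same path and you see the political problem immediately: the onboarding email's credit swings from a third to near zero depending on which scheme you picked.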
Incrementality: compare the revenue of users who received messages to a matched group who didn't. Holdouts. Answers the causal question — would this conversion have happened anyway? — which is the only question leadership actually cares about even when they don't know to ask it that way. The holdout guide covers the design.
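The core incrementality arithmetic is simple enough to fit in a few lines. This sketch uses invented numbers to show why the holdout answer and the attribution answer diverge.

```python
# Incrementality in miniature: per-user revenue in the messaged group
# minus per-user revenue in a matched holdout, scaled back up.
# All figures are invented for illustration.

messaged_users, messaged_revenue = 50_000, 210_000.0
holdout_users, holdout_revenue = 5_000, 18_500.0

rev_per_treated = messaged_revenue / messaged_users  # 4.20 per user
rev_per_holdout = holdout_revenue / holdout_users    # 3.70 per user

# Would this revenue have happened anyway? The holdout says most of it would.
lift_per_user = rev_per_treated - rev_per_holdout     # 0.50
incremental_revenue = lift_per_user * messaged_users  # 25,000

print(f"attribution-style claim: ${messaged_revenue:,.0f}")
print(f"incremental revenue:     ${incremental_revenue:,.0f}")
```

In this toy example a last-touch dashboard would claim $210,000 for the program; the holdout says the program actually moved $25,000. That gap is the credibility risk the budget-conversation sections below are about.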
Which model for which question
| Question | Best model | Why |
|---|---|---|
| Which campaign generated this sale? | Last-touch | Operational attribution. Fine for daily dashboards. |
| Which program is most valuable? | Multi-touch | Credits the journey, not just the final click. |
| Is lifecycle worth running at all? | Incrementality | Only model that answers the causal question. |
| Should we kill this program? | Incrementality | Attribution models can't tell you what happens without the program. |
| How are acquisition channels performing? | First-touch + multi-touch | Lifecycle isn't in this debate; it's an acquisition question. |
| What's the monthly revenue from email? | Incrementality (quarterly) | Attribution inflates the number; leadership eventually notices. |
Different questions, different tools. The instinct to pick one model and apply it everywhere is the source of most bad attribution conversations — it forces one method to answer questions it wasn't built for, and then people argue about the method when they should be arguing about the question.
The political layer nobody wants to name
Attribution debates aren't really about correctness. They're about whose team gets credit and whose budget survives the next cycle.
Every attribution conversation runs on two layers. The epistemological one: which model is most accurate? The political one: whose team gets credit? Lifecycle marketing usually loses the political layer under last-touch because most conversions have an intervening paid or organic touchpoint that steals the last click. Multi-touch gives lifecycle some credit. Incrementality gives it the credit it actually deserves. Which is why incrementality conversations are the slowest to get organisational buy-in.
The operator move: publish in whichever model fairly represents the program's contribution, and have a second number ready for when leadership challenges the first. Last-touch for operational reports. Incrementality for budget conversations. The Attribution Audit skill covers how to structure the dual-model reporting without it looking like attribution-shopping — because the second it looks like that, you've lost the room.
Why multi-touch feels fair but isn't
Multi-touch sounds like the adult answer — spread the credit, acknowledge the journey, move on. It introduces its own problem: arbitrary weighting. Linear gives equal credit to all touches, but is the sixteenth email really as valuable as the first? Time-decay privileges recent touches, which means channels that touch near the conversion (often paid, not lifecycle) get more credit purely for timing.
The cleanest multi-touch approach is to weight based on holdout-measured incrementality per touch type. If a holdout tells you email produces 40% of tested revenue, ads 35%, content 25%, then the weighting is evidence-based rather than chosen by vibes. Most programs don't do this because it requires running holdouts across multiple channels at once. Which is why most multi-touch numbers, however official they look, are guesses in a spreadsheet.
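A sketch of what that evidence-based weighting looks like in practice, using the 40/35/25 split from the paragraph above. The channel shares are the article's illustrative figures, and the path and key naming are assumptions for the example.

```python
# Turn per-channel holdout-measured revenue shares into multi-touch
# weights, then normalise across one user's actual path.
# The 40/35/25 split is illustrative, not measured data.

holdout_share = {"email": 0.40, "ads": 0.35, "content": 0.25}

def evidence_weighted_credit(path, shares):
    # Each touch is weighted by its channel's holdout-measured share
    # of tested revenue; weights are normalised to sum to 1 per path.
    raw = [shares[channel] for channel in path]
    total = sum(raw)
    return {f"{c}_{i}": r / total for i, (c, r) in enumerate(zip(path, raw))}

path = ["email", "ads", "email", "content"]
print(evidence_weighted_credit(path, holdout_share))
# Each email touch carries 0.40/1.40 of the credit, ads 0.35/1.40,
# content 0.25/1.40 -- the weights trace back to holdouts, not vibes.
```

The expensive part isn't this arithmetic; it's earning the `holdout_share` inputs by running holdouts per channel, which is exactly why most teams never get here.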
When to use incrementality vs attribution
Run attribution in the operational dashboards — daily, weekly, monthly. It tells you where conversions happened and which touchpoints were in the path. Useful for spotting broken programs, evaluating creative, and making tactical decisions. Is last-touch wrong? For lifecycle ROI specifically, yes — it systematically under-credits lifecycle touches. For operational reporting, it's fine. Don't use it for budget conversations and you'll avoid most of the trouble.
Run incrementality for budget and strategic questions — quarterly at minimum, annually for smaller programs. It tells you whether the program is worth running and how much revenue it actually produces. Useful for allocation, kill calls, and defending lifecycle marketing to finance. A holdout for budget season and a second for mid-year review is the cadence that keeps the financial case current.
The mistake is using attribution for budget conversations. Attribution-based lifecycle revenue numbers are almost always optimistic — they count conversions that would have happened anyway. When leadership eventually notices the gap between attribution-claimed revenue and bottom-line revenue, the credibility hit damages the whole program. Better to under-promise with incrementality numbers and deliver consistently than the other way around.
Can you use both? Yes, and most mature programs do. Attribution in weekly dashboards, incrementality in the quarterly review. The two answer different questions and complement each other. The risk is attribution numbers leaking into budget conversations; keep the separation deliberate and name it out loud. The retention economics guide covers how to frame incrementality numbers in CFO conversations specifically.
This guide is backed by an Orbit skill
Related guides
Holdout group design: the incrementality tool most lifecycle programs skip
Without a holdout, lifecycle ROI is attribution-model guesswork with a spreadsheet. With one, you get a defensible number you can actually put in front of finance. Here's how to size, run, and read a holdout — and the three mistakes that quietly invalidate the result.
Price-testing through email: what's testable, what isn't
Email is the fastest place to try a new price, and the easiest place to learn the wrong lesson. What you can test cleanly, what you can't, and the measurement traps that quietly turn price tests into expensive false positives.
Retention economics: proving lifecycle ROI to finance
Lifecycle programs get deprioritised when they can't defend their impact in dollars. The four models that keep the budget — LTV, payback, cohort retention, incrementality — and the four-slide pattern that wins a CFO room.
A/B testing in email: sample size, novelty, and what to report
Most email A/B tests produce winners that don't reproduce. Three reasons keep showing up: under-powered samples, the novelty effect, and weak readout discipline. This guide is about designing tests that actually drive decisions instead of theatre.
Sample size: the calculation everyone gets wrong in email A/B tests
Most email A/B tests are powered to detect effects far larger than the test could actually produce. The result: false positives and false nulls, with confident conclusions in both directions. Sample size calculation fixes this before you send — here's the five-minute version.
Send-time optimisation: what it really moves, and what it doesn't
Every ESP markets an STO feature and every vendor deck shows lift. The honest version: STO moves open rate 3–8%, rarely revenue, and only for certain program types. Here's when it's worth turning on.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 54 lifecycle methodologies, 55 MCP tools, native Braze integration. Pay what it's worth.