Updated · 9 min read
Churn cohort analysis: the one chart that tells you if retention is actually improving
Retention rate as a single number is almost useless. "We retain 60% of users at 30 days" — compared to what? Is it improving, stable, or declining? Is it driven by new signups that haven't churned yet, or by genuine lifecycle impact? The cohort retention curve answers those questions. If you only get one chart to put on the wall, this is it. Here's how to build it properly and what it's actually telling you.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
What a cohort retention chart actually is
Group your users by the week (or month) they signed up. That's the cohort. Then plot, for each cohort, the percentage of users still active 1 week, 2 weeks, 4 weeks, 12 weeks, 26 weeks, and 52 weeks after signup. Each cohort becomes a line on the chart; time-since-signup is the x-axis.
The result: a set of overlapping curves that drop off fastest in the first few weeks and flatten over time. Each cohort can be compared to others at the same cohort age — "at week 4, the January cohort was at 42% retained; the April cohort was at 48%".
Cohort curves are the only view that lets you see program improvement as it happens. Every other metric compresses time and confuses new-user effects with lifecycle effects.
How to read the curve
The first week drop. Where most attrition happens. A cohort at 100% on day 0 and 40% by day 7 has lost 60% in the first week — the activation window. Improvements to onboarding and welcome flows show up here first.
The 30-day curve shape. Continued steep decline through 30 days means activation stuck but engagement didn't. Flattening by day 14 means users who survived the first week are largely sticking. Two cohorts with the same 30-day retention can have very different shapes — the shape tells you where the program is working and where it isn't.
The asymptote. The flat line that cohort retention approaches over months. If cohorts flatten at 15%, you have a 15% "committed user" base that rarely churns. If they keep declining past month 6, you have ongoing attrition even among established users — which points at product value or re-engagement, not onboarding.
Cohort-to-cohort comparison. Stack recent cohorts against older ones at the same age. Newer cohorts above older cohorts means improvement; below means regression. This is the single most valuable cross-cohort comparison.
The dimensions that matter beyond time
A cohort chart by week of signup is the default. Richer analysis stratifies the cohorts by:
Acquisition channel. Paid social cohorts typically retain worse than organic; referred users typically retain best. If blended retention is stable but channel mix shifted toward paid, your real retention is declining and the blend is hiding it (see the sketch after this list).
First-action experience. Users whose first meaningful action was X vs Y often have wildly different retention curves. This is where product-led retention wins come from — find the "aha moment" action and shape the first-week flow around reaching it.
Geography or plan tier. If your product has meaningful variation by region or by plan, cohorts by those dimensions reveal where to invest. The "just fine" blended retention may hide one high-retention segment funding a low-retention one.
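As an example of the channel split mentioned above, here's a minimal sketch in Postgres-flavored SQL. The tables users(user_id, signup_at, acquisition_channel) and events(user_id, event_name, occurred_at), the event name, and the week-4 window are all illustrative assumptions, not a prescribed schema:

```sql
-- Week-4 retention split by acquisition channel, to catch a mix shift
-- hiding behind a stable blended number. Table and column names are
-- illustrative; date arithmetic is Postgres-flavored.
WITH cohorts AS (
    SELECT user_id, acquisition_channel,
           DATE_TRUNC('week', signup_at)::date AS cohort_week
    FROM users
),
week4_active AS (
    SELECT DISTINCT c.user_id
    FROM cohorts c
    JOIN events e ON e.user_id = c.user_id
    WHERE e.event_name = 'key_product_action'        -- your "active" event
      AND e.occurred_at::date - c.cohort_week BETWEEN 28 AND 34  -- days 28-34 = week 4
)
SELECT c.cohort_week,
       c.acquisition_channel,
       COUNT(*) AS cohort_size,
       ROUND(100.0 * COUNT(w.user_id) / COUNT(*), 1) AS pct_retained_week4
FROM cohorts c
LEFT JOIN week4_active w ON w.user_id = c.user_id
GROUP BY 1, 2
ORDER BY 1, 2;
```

If the per-channel numbers are flat while the blended number moves, the change is mix shift, not retention.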
The aha moment guide covers how to identify the first-action that drives retention, which becomes the primary cohort stratification for most programs.
What the curve doesn't tell you
The curve doesn't tell you about users who returned after churning. A user who signed up in January, went dormant in February, and re-engaged in April is missing from the January cohort's week-12 retention but present in the quarterly "active users" count. If your win-back or sunset work is driving meaningful resurrection, a separate "resurrection cohort" chart captures that.
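If you want to measure resurrection directly, here's a hedged sketch: count users active in a given week after a multi-week gap. The tables, the event name, and the four-week dormancy threshold are all illustrative assumptions:

```sql
-- Resurrected users per week: active this week after 4+ fully inactive
-- weeks. Dormancy threshold and table/column names are illustrative.
WITH activity AS (
    SELECT DISTINCT user_id,
           DATE_TRUNC('week', occurred_at)::date AS active_week
    FROM events
    WHERE event_name = 'key_product_action'
),
gaps AS (
    SELECT user_id, active_week,
           active_week - LAG(active_week) OVER (
               PARTITION BY user_id ORDER BY active_week) AS gap_days
    FROM activity
)
SELECT active_week,
       COUNT(*) AS resurrected_users
FROM gaps
WHERE gap_days >= 35   -- last activity 5+ weeks prior: 4 full dormant weeks
GROUP BY 1
ORDER BY 1;
```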
Building the chart
Most modern analytics stacks support cohort analysis natively — Amplitude, Mixpanel, and Heap all have built-in views. For SQL-native teams, the query is ~30 lines: one CTE for signups grouped by cohort week, another for activity events, and a JOIN on user_id with a time-since-cohort calculation.
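Here's a hedged sketch of that query in Postgres-flavored SQL. The table names (users, events), the 'key_product_action' event, and the date arithmetic are illustrative assumptions; adjust for your schema and warehouse:

```sql
-- Weekly cohort retention: % of each signup cohort still active N weeks later.
-- Assumes illustrative tables users(user_id, signup_at) and
-- events(user_id, event_name, occurred_at); Postgres date arithmetic.
WITH cohorts AS (
    SELECT user_id,
           DATE_TRUNC('week', signup_at)::date AS cohort_week
    FROM users
),
activity AS (
    SELECT DISTINCT user_id,
           DATE_TRUNC('week', occurred_at)::date AS active_week
    FROM events
    WHERE event_name = 'key_product_action'   -- the single "active" definition
),
retained AS (
    SELECT c.cohort_week,
           (a.active_week - c.cohort_week) / 7 AS weeks_since_signup,
           COUNT(DISTINCT a.user_id) AS active_users
    FROM cohorts c
    JOIN activity a USING (user_id)
    GROUP BY 1, 2
),
cohort_sizes AS (
    SELECT cohort_week, COUNT(*) AS cohort_size
    FROM cohorts
    GROUP BY 1
)
SELECT r.cohort_week,
       r.weeks_since_signup,
       ROUND(100.0 * r.active_users / s.cohort_size, 1) AS pct_retained
FROM retained r
JOIN cohort_sizes s USING (cohort_week)
WHERE r.weeks_since_signup >= 0    -- guard against pre-signup events
ORDER BY r.cohort_week, r.weeks_since_signup;
```

Each output row is one point on one cohort's curve; pivoting cohort_week into columns gives the classic retention triangle.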
The important thing is to pick one active event definition and stick with it for all cohorts. "Active" should be consistent — e.g., "user performed [key product action] in the week ending [date]". Shifting the definition between cohort runs produces curves that can't be compared.
The quarterly retention review
The quarterly business review is where cohort curves earn their keep. Structure:
1. Current quarter cohorts stacked against prior four quarters, at the same cohort ages.
2. Highlight any cohort that's meaningfully above or below the trend. Discuss what changed.
3. For cohorts above trend: what did we do that we should keep doing?
4. For cohorts below trend: what happened, and what's the remediation?
5. Decide next quarter's priority lifecycle work based on where the curves say leverage lives.
Make cohort curves the foundation of quarterly roadmap decisions. Programs that don't look at cohorts drift into reactive, campaign-level thinking.
Frequently asked questions
- How granular should my cohorts be?
- Weekly for fast-moving consumer products, monthly for slower-moving or B2B. Weekly cohorts are noisier but catch changes faster; monthly cohorts are smoother but may miss a regression that happened three weeks ago. Most programs find monthly is enough for executive review, weekly for the lifecycle team's working view.
- What's the right 'active' definition for retention?
- Pick an event that represents meaningful product use — for a marketplace: 'viewed a product'; for SaaS: 'performed core action X'; for content: 'read an article'. Avoid broad definitions like 'logged in' (too loose) or narrow ones like 'made a purchase' (too conservative for early-stage retention). The definition should be stable across all cohort comparisons.
- How long does it take for retention changes to show up?
- Depends on the lever. Onboarding improvements show up in week-1 retention within 4 weeks of launch. Win-back flow changes show up in 90–180 day retention within 3–6 months. Systemic retention improvements (product value, re-engagement programs) take 6–12 months of cohort data to confirm.
- Why do newer cohorts sometimes look worse than older ones?
- Three common reasons: (1) acquisition channel shift toward lower-quality traffic, (2) a product regression between cohorts, (3) seasonal effects. Stratify the chart by channel to rule out (1); timeline-annotate product changes to identify (2); compare year-over-year to isolate (3).
- How do I know if my retention is 'good' in absolute terms?
- Industry benchmarks exist but vary wildly by category. More useful: compare to your own trajectory. Is your 30-day retention higher this quarter than last quarter at the same cohort age? That's real improvement. Absolute benchmarks tempt you to compare to public companies with different products; the cohort-to-cohort comparison is always a fair one.
- Can I use cohort analysis for revenue, not just active users?
- Yes — cumulative revenue per cohort is a standard variant. Instead of 'percent still active at week N', plot 'cumulative revenue per user at week N'. A curve that flattens tells you revenue saturates; one that keeps climbing tells you users keep spending. Combine it with user retention to see whether revenue is coming from a stable base or from growing spend per user. A minimal sketch follows below.
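Here's that revenue variant as a hedged sketch, reusing the illustrative users table from the earlier query plus an assumed orders(user_id, amount, ordered_at) table:

```sql
-- Cumulative revenue per user at each cohort age. The orders table and
-- its columns are illustrative; Postgres date arithmetic.
WITH cohorts AS (
    SELECT user_id,
           DATE_TRUNC('week', signup_at)::date AS cohort_week
    FROM users
),
weekly_revenue AS (
    SELECT c.cohort_week,
           (o.ordered_at::date - c.cohort_week) / 7 AS weeks_since_signup,
           SUM(o.amount) AS revenue
    FROM orders o
    JOIN cohorts c USING (user_id)
    GROUP BY 1, 2
),
sizes AS (
    SELECT cohort_week, COUNT(*) AS cohort_size
    FROM cohorts
    GROUP BY 1
)
SELECT w.cohort_week,
       w.weeks_since_signup,
       ROUND(SUM(w.revenue) OVER (PARTITION BY w.cohort_week
                                  ORDER BY w.weeks_since_signup)
             / s.cohort_size, 2) AS cum_revenue_per_user
FROM weekly_revenue w
JOIN sizes s USING (cohort_week)
ORDER BY 1, 2;
```

Read it next to the user-retention curve: flat retention with climbing revenue per user means a stable base spending more; both flattening means the cohort's value has capped out.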