Churn cohort analysis: the one chart that tells you if retention is actually improving
Retention as a single number is almost useless. "We retain 60% of users at 30 days" — compared to what? Improving, stable, declining? Driven by new signups that haven't had time to churn yet, or by actual lifecycle impact? The cohort retention curve answers those questions. If you only get to put one chart on the wall, this is the chart. Here's how to build it properly, and what it actually tells you versus what it can't.
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
What a cohort retention chart actually is
Group your users by the week (or month) they signed up. That's the cohort. Plot, for each cohort, the percentage of users still active 1 week, 2 weeks, 4 weeks, 12 weeks, 26 weeks, and 52 weeks after signup. Each cohort becomes a line on the chart; time-since-signup is the x-axis. Simple shape, enormous information density.
You end up with a set of overlapping curves that drop off sharply in the first weeks and flatten over time. Each cohort can be compared to others at the same cohort age — "at week 4, the January cohort was at 42% retained; the April cohort was at 48%." That comparison is the entire point. Everything else on the chart is detail.
Cohort curves are the only view that lets you see program improvement as it happens. Every other metric compresses time and confuses new-user effects with lifecycle effects.
A note on granularity: weekly cohorts for fast-moving consumer products, monthly for slower-moving or B2B. Weekly is noisier but catches changes faster. Monthly is smoother but can hide a regression that happened three weeks ago. Most programs settle on monthly for executive review and weekly for the team's working view.
How to read the curve
The first-week drop. Where most attrition happens. A cohort at 100% on day 0 and 40% by day 7 has lost 60% in the first week — that's the activation window. Improvements to onboarding and welcome flows show up here first, usually within four weeks of launch.
The 30-day curve shape. Continued steep decline through 30 days means activation stuck but engagement didn't. Flattening by day 14 means users who survived the first week are largely sticking. Two cohorts with the same 30-day retention can have very different shapes, and the shape is what tells you where the program is working and where it isn't.
The asymptote. The flat line cohort retention approaches over months. Cohorts flattening at 15% means you have a 15% "committed user" base that rarely churns. Cohorts that keep declining past month 6 mean ongoing attrition even among established users — which points at product value or re-engagement, not onboarding.
Cohort-to-cohort comparison. Stack recent cohorts against older ones at the same age. Newer above older means improvement. Newer below older means regression. This is the single most valuable comparison on the chart and the one most programs forget to make explicit.
Worth saying plainly: industry benchmarks for retention are nearly useless because they vary wildly by category, and comparing your product to a public company with a totally different business is a recipe for bad decisions. The honest comparison is always cohort-to-cohort within your own program.
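The cohort-to-cohort comparison is easy to make explicit in code. A minimal Python sketch, assuming retention has already been computed into a mapping of cohort label to {weeks-since-signup: percent retained}; that shape is hypothetical, so adapt it to whatever your analytics export actually produces:

```python
def compare_at_age(table, age_weeks):
    """Line up every cohort's retention at the same cohort age, oldest first.

    `table` maps cohort label -> {weeks_since_signup: pct_retained}. This is
    an assumed shape, not a fixed schema.
    """
    rows = [(cohort, ages[age_weeks]) for cohort, ages in sorted(table.items())
            if age_weeks in ages]
    # Positive delta = the newer cohort improved on the one before it.
    return [(cohort, pct, None if i == 0 else round(pct - rows[i - 1][1], 3))
            for i, (cohort, pct) in enumerate(rows)]

# Invented numbers: three monthly cohorts, each measured at week 4.
table = {"2024-01": {4: 0.42}, "2024-02": {4: 0.45}, "2024-03": {4: 0.48}}
```

A run of positive deltas means each newer cohort beat the one before it at the same age; a run of negative deltas is the regression signal worth pulling into the quarterly review.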
The dimensions that matter beyond time
A cohort chart by week of signup is the default. Richer analysis stratifies the cohorts by:
Acquisition channel. Paid social cohorts typically retain worse than organic. Referred users typically retain best. If blended retention is stable but channel mix shifted toward paid, your real retention is declining and the blend is politely hiding it. That's the quietest way a lifecycle program can lose ground without anyone noticing.
First-action experience. Users whose first meaningful action was X versus Y often show wildly different retention curves. This is where product-led retention wins come from — find the "aha moment" action and shape the first-week flow around reaching it.
Geography or plan tier. If your product varies meaningfully by region or plan, cohorts along those dimensions show where to invest. Blended "just fine" retention often hides one high-retention segment funding a low-retention one.
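A quick numeric sketch of how a blend can hide per-channel decline. The numbers are invented; in this example the mix shifts toward the better-retaining channel, but the masking mechanism works in either direction:

```python
def blended_retention(channels):
    # channels: {name: (share_of_signups, day30_retention)} — illustrative shape.
    return sum(share * ret for share, ret in channels.values())

# Q1: even split. Q2: both channels retain worse, but mix tilts organic.
q1 = {"organic": (0.50, 0.60), "paid": (0.50, 0.42)}
q2 = {"organic": (0.80, 0.55), "paid": (0.20, 0.35)}
```

Organic dropped five points and paid dropped seven, yet `blended_retention` reads an identical 0.51 in both quarters. This is why channel-stratified cohorts belong on the chart, not just the blend.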
The aha moment guide covers how to identify the first-action that drives retention. Once you find it, that's usually the primary cohort stratification for the rest of the program's life.
What the curve doesn't tell you
The biggest blind spot is resurrections. A user who signed up in January, went dormant in February, and re-engaged in April is absent from the January cohort's week-12 retention but present in the quarterly active-user count. If your win-back or sunset work is producing meaningful resurrection, a separate "resurrection cohort" chart captures it cleanly; stacking it alongside the retention curve is the honest picture.
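A resurrection cohort can be pulled from the same event stream as the retention chart. A sketch, assuming a flat list of (user_id, event_date) activity events and an arbitrary four-week dormancy threshold; tune the threshold to your product's natural cadence:

```python
from datetime import date, timedelta

def resurrected_users(events, gap_weeks=4):
    # Group each user's activity dates in chronological order.
    by_user = {}
    for user, d in sorted(events, key=lambda e: e[1]):
        by_user.setdefault(user, []).append(d)
    gap = timedelta(weeks=gap_weeks)
    out = {}
    for user, dates in by_user.items():
        for prev, cur in zip(dates, dates[1:]):
            if cur - prev >= gap:
                out[user] = cur  # first event after a dormant stretch
                break
    return out

# The example from the text: active in January, dormant through February
# and March, back in April. u2 never goes dormant.
events = [("u1", date(2024, 1, 15)), ("u1", date(2024, 4, 10)),
          ("u2", date(2024, 1, 20)), ("u2", date(2024, 1, 27))]
```

Bucket the returned dates by week or month and you have the resurrection cohort chart to stack alongside the retention curve.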
Common anxiety worth pre-empting: "why are newer cohorts sometimes worse than older ones?" Three suspects in rough priority order — acquisition-channel shift toward lower-quality traffic, a product regression between cohorts, and seasonal effects. Stratify by channel to rule out the first, annotate the timeline with product changes to identify the second, and compare year-over-year to isolate the third.
Building the chart
Most modern analytics stacks support cohort analysis natively — Amplitude, Mixpanel, Heap all have the view built in. For SQL-native teams, the query is about 30 lines: one CTE for signups grouped by cohort week, another for activity events, a JOIN on user_id with a time-since-cohort calculation. Not complicated, just finicky to get right the first time.
Pick one "active" event definition and stick with it across every cohort. "Active" should mean one specific thing — e.g., "user performed [key product action] in the week ending [date]". Something meaningful, not a login. For a marketplace that might be "viewed a product". For SaaS, "performed core action X". For content, "read an article". Logged-in is too loose; purchased is usually too conservative for early-stage retention. And shifting the definition between runs produces curves that cannot be compared, which defeats the chart's only purpose.
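The query logic described above (signups bucketed by cohort week, joined to activity events with a time-since-cohort calculation) translates directly. A Python sketch against toy in-memory data in place of warehouse tables; every name and data shape here is illustrative:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical inputs: signup date per user, plus (user_id, event_date)
# rows for the chosen "active" event.
signups = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 3), "u3": date(2024, 1, 10)}
events = [("u1", date(2024, 1, 2)), ("u1", date(2024, 1, 16)),
          ("u2", date(2024, 1, 4)), ("u3", date(2024, 1, 12))]

def cohort_week(d):
    # Bucket a date into the Monday of its week (the cohort key).
    return d - timedelta(days=d.weekday())

def retention_table(signups, events):
    sizes = defaultdict(set)          # cohort -> all users in it
    for user, d in signups.items():
        sizes[cohort_week(d)].add(user)
    active = defaultdict(lambda: defaultdict(set))  # cohort -> weeks-since -> users
    for user, d in events:
        if user not in signups:
            continue  # activity from unknown users is ignored
        cohort = cohort_week(signups[user])
        offset = (cohort_week(d) - cohort).days // 7
        active[cohort][offset].add(user)
    # Convert to percent retained at each weekly offset.
    return {c: {w: len(users) / len(sizes[c]) for w, users in sorted(offs.items())}
            for c, offs in active.items()}
```

Each inner mapping is one line on the chart: percent of the cohort active at each weekly offset since signup.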
Worth noting a useful variant: cumulative revenue per cohort. Instead of "percent still active at week N", plot "cumulative revenue per user at week N". A flattening curve tells you revenue has saturated; a continued steep climb tells you users keep spending. Pair it with user retention to separate a stable-revenue base from a growing spend-per-user dynamic. For subscription businesses, these two charts together are basically the whole board report.
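The revenue variant is the same computation with purchase amounts in place of an active flag. A sketch for a single cohort, assuming purchase rows of (user_id, weeks_since_signup, amount); the shape is hypothetical, so swap in however your warehouse exposes order data:

```python
from collections import defaultdict

def cumulative_revenue_per_user(cohort_users, purchases, max_week):
    # Sum this cohort's revenue per weekly offset since signup.
    weekly = defaultdict(float)
    for user, week, amount in purchases:
        if user in cohort_users:
            weekly[week] += amount
    # Accumulate into a running per-user curve, one value per week.
    curve, running = [], 0.0
    for week in range(max_week + 1):
        running += weekly[week]
        curve.append(round(running / len(cohort_users), 2))
    return curve  # a flattening curve means spend has saturated
```

Plot one curve per cohort exactly as with retention: newer cohorts climbing faster at the same age means spend per user is improving.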
The quarterly retention review
The quarterly business review is where cohort curves earn their keep. The structure that works:
1. Current quarter cohorts stacked against the prior four quarters, at the same cohort ages.
2. Highlight any cohort meaningfully above or below trend. Discuss what changed.
3. For cohorts above trend: what did we do that we should keep doing?
4. For cohorts below trend: what happened, and what's the remediation?
5. Decide next quarter's priority lifecycle work based on where the curves say leverage lives.
One more timing consideration worth calling out because it catches people: different levers show up at different cohort ages. Onboarding improvements read in week-1 retention within four weeks. Win-back changes read in 90–180-day retention within 3–6 months. Systemic retention work — product value, re-engagement programs — needs 6–12 months of cohort data before you can confidently say the curve moved. Build the review cadence around that reality, not around the quarterly calendar.
Cohort curves should be the foundation of quarterly roadmap decisions. Programs that don't look at cohorts drift into reactive, campaign-level thinking within two quarters. Every time.
Frequently asked questions
- What is cohort analysis?
- Cohort analysis groups customers by a shared entry event (sign-up month, first purchase) and tracks their behaviour over time. Unlike aggregate retention, cohort analysis exposes when customers churn (month 1 is usually the worst), whether retention is improving cohort-over-cohort, and where lifecycle programs deliver real lift. The canonical cohort chart is a triangle: rows = cohorts, columns = months since start, cells = percent still active.
- How do I build a cohort retention curve?
- Pick the entry event (usually sign-up or first-purchase date), bucket users by the month of that event, then for each cohort compute the percentage still active at month 1, 2, 3, etc. A user is "still active" if they performed the retention-defining behaviour in that month — for subscription, paid; for e-commerce, purchased; for engagement, logged in. Most ESPs and BI tools have cohort templates but computing manually in SQL with GROUP BY cohort_month + months_since_start works too.
- What's a healthy cohort retention rate?
- Depends heavily on category. For subscription SaaS, m3 retention above 85% is healthy; below 70% signals product-market-fit issues. E-commerce is harder to benchmark because purchase frequency varies. Consumer apps see steep early drop-off with long plateaus — an Instagram-style app can hit 30% at m1, stabilise there, and still be healthy. The shape of the curve matters more than any single number — a flat 60% is healthier than a steep 80% falling to 30%.
- How does cohort analysis differ from aggregate retention?
- Aggregate retention averages across all cohorts, hiding whether the signal is improving or degrading. If last month's new cohort has 50% worse day-30 retention than the cohort from a year ago, the aggregate retention number barely moves (it's still averaged with all the older healthier cohorts), but the trajectory is collapsing. Cohort views make this immediately visible. Every serious lifecycle program reports cohort-first, aggregate-second.
- How often should I review cohort data?
- Monthly at minimum. Weekly for programs in active iteration. The important check is whether each new cohort's early-month retention is beating the previous cohort's — that's the leading indicator of whether your onboarding / activation / retention work is compounding or flat. If six months of cohorts all show identical m3 retention, your lifecycle programs aren't moving the needle.
Related guides
Browse abandonment: the program that sits between ads and cart
Browse abandonment catches the users who viewed a product and left without adding to cart. Smaller per-user lift than cart abandonment. Ten to twenty times the trigger volume. For most programs it's the biggest revenue lever you haven't shipped yet.
Referral program emails — the three flows that make it work
Referral programs live or die on the lifecycle messaging wrapped around them. Three flows matter: inviter prompt, invitee welcome, reward confirmation. Get the timing and copy right on each and you double conversion without touching the offer.
Trial-to-paid: the seven-email sequence that converts 20%+ of free users
Trial conversion is the most financially leveraged flow in SaaS — every percentage point compounds directly against CAC. Here's the seven-email sequence that reliably moves trial conversion from 5% to 20%+.
Replenishment emails: the lifecycle flow that buys itself
Replenishment emails remind users to re-order a consumable before they run out. Done right, they generate the highest revenue-per-send in any lifecycle program because purchase intent is already established. Here's the timing, data, and copy.
The monthly newsletter still works — here's the structure
Email newsletters have been declared dead every year since 2015. They're not. A well-run monthly newsletter does real work for a lifecycle program — brand equity, re-engagement, the non-promotional relationship that makes every other send land. Here's what separates the newsletters worth sending from the ones that feel mandatory.
Price increase emails: how to raise prices without a churn spike
A price increase is one of the highest-risk lifecycle moments your program will ever run. Done wrong, it triggers churn, public complaints, and a reputation dent that outlasts the extra revenue. Done right, most users accept the change without friction. Here's the sequence that works.