The lifecycle metrics dashboard: what to track, what to ignore
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Ask a lifecycle team what they track and you'll get a list of 30 metrics. Ask what they actually act on and the list shrinks to four. That gap is why most lifecycle dashboards are ornamental — they exist to reassure stakeholders, not to inform decisions. A good dashboard is the opposite: eight metrics that trigger real actions, and everything else lives in ad-hoc reports where it belongs.
The test for whether a metric belongs
A metric belongs on the dashboard if, and only if, a change in it would trigger a specific action. Otherwise it's context — useful in an ad-hoc report, noise on a weekly review.
Most dashboards fail this test and nobody notices because everyone nods at the screen anyway. Open rate sits there for months; nobody ever says "open rate dropped three points, so we're changing X". If no one would act on the number, the number is decoration.
Good dashboards are ruthless about the filter. Why eight? Because every metric on a dashboard is a claim on attention, and past eight the team stops really looking at any of them. Eight is already generous.
The eight metrics that pass the test
1. Active audience (weekly). Count of users who received at least one marketing email this week AND engaged (open or click). The base that every downstream metric draws on. If it drops, you have an engagement or sending problem; investigate that first. (A computation sketch for the weekly metrics follows this list.)
2. Revenue per send (rolling 7-day). Total attributed revenue ÷ total sends for the week. Keeps the comparison apples-to-apples across volume changes. Action trigger: a 20%+ drop week-over-week without a volume explanation means something has gone wrong in the audience or the content.
3. Spam complaint rate (30-day rolling). Complaints ÷ delivered, 30-day window. Action trigger: at or above 0.3% requires immediate intervention. The complaints playbook has the rest.
4. Unsubscribe rate (per-send). Unsubs ÷ delivered for the last broadcast. Action trigger: a specific send above 0.5% signals audience/content mismatch on that campaign — not the whole program.
5. Net list growth (weekly). Subscribes minus unsubscribes minus hard bounces. Negative weeks are fine when you're pruning. Sustained negative months mean an acquisition or retention gap you need to name.
6. Activation rate (new cohort). Percent of users hitting the activation event within seven days of signup. The program's leading indicator. Action trigger: a three-point drop in a single cohort means you investigate the onboarding flow today, not next sprint.
7. Thirty-day retention (cohort). Percent of users active at day 30. The trailing indicator that retention work is paying off. Moves slowly; check monthly, not weekly.
8. Gmail domain reputation. From Postmaster Tools. Action trigger: any movement to Low or below, regardless of what the other metrics say.
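To make the definitions concrete, here is a minimal sketch of the weekly computations, assuming a pandas DataFrame of send-level events. The column names (user_id, delivered, opened, and so on) are hypothetical, not tied to any specific ESP export.

```python
import pandas as pd

def weekly_metrics(events: pd.DataFrame, week_start: pd.Timestamp) -> dict:
    """Compute the fast-moving dashboard metrics for one week.

    `events` has one row per sent message. Columns are illustrative
    booleans/values: user_id, sent_at, delivered, opened, clicked,
    complained, unsubscribed, hard_bounced, attributed_revenue.
    """
    week = events[(events["sent_at"] >= week_start) &
                  (events["sent_at"] < week_start + pd.Timedelta(days=7))]
    delivered = week[week["delivered"]]

    # Metric 3 uses a trailing 30-day window that ends with this week.
    window = events[(events["sent_at"] >= week_start - pd.Timedelta(days=23)) &
                    (events["sent_at"] < week_start + pd.Timedelta(days=7)) &
                    events["delivered"]]

    # Metric 1: received at least one email this week AND engaged.
    engaged = delivered[delivered["opened"] | delivered["clicked"]]

    return {
        "active_audience": engaged["user_id"].nunique(),
        # Metric 2: attributed revenue / total sends, volume-neutral.
        "revenue_per_send": week["attributed_revenue"].sum() / max(len(week), 1),
        # Metric 3: complaints / delivered; act immediately at >= 0.3%.
        "complaint_rate_30d": window["complained"].mean(),
        # Metric 4: unsubs / delivered (also tracked per broadcast).
        "unsub_rate": delivered["unsubscribed"].mean(),
        # Metric 5 needs subscribes from the signup stream; send events
        # only show the losses.
        "list_losses": int(delivered["unsubscribed"].sum() +
                           week["hard_bounced"].sum()),
    }
```

Metrics 6 and 7 come from product analytics cohorts rather than send events, and metric 8 is read straight out of Postmaster Tools.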
What to kick off the dashboard (and why)
The principle: if you'd need to ask "what do I do with that?" after reading the number, the number doesn't belong on the dashboard. Put it in a campaign-specific report or an ad-hoc analysis where context does the work.
Open rate in particular has been broken since 2021 — Apple Mail Privacy Protection inflates opens by auto-loading images for Apple users, and the ratio of real opens to machine opens varies by audience. The Apple MPP guide covers why it's still a fine A/B test proxy (inflation is equal across arms) and a poor primary dashboard metric.
Revenue attribution raises a similar question: pick one model, stick with it, and keep the dashboard consistent. Last-click in a 7-day window is the default for email. Measure true incrementality separately via holdout tests. Don't try to build "real" revenue into a daily dashboard; consistency beats precision when you're tracking trends.
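Since last-click in a 7-day window is doing the work here, it is worth seeing how little machinery it needs. A minimal sketch with hypothetical event shapes; any real version would read from your ESP and order systems.

```python
from datetime import timedelta

def attribute_last_click(orders, clicks, window_days=7):
    """Credit each order to the same user's most recent email click
    within the window, or to no campaign if nothing qualifies.

    `orders` and `clicks` are hypothetical lists of dicts, e.g.
    {"order_id": 1, "user_id": "u1", "placed_at": dt, "revenue": 42.0}
    and {"user_id": "u1", "campaign": "welcome-3", "clicked_at": dt}.
    """
    results = []
    for order in orders:
        cutoff = order["placed_at"] - timedelta(days=window_days)
        candidates = [c for c in clicks
                      if c["user_id"] == order["user_id"]
                      and cutoff <= c["clicked_at"] <= order["placed_at"]]
        last = max(candidates, key=lambda c: c["clicked_at"], default=None)
        results.append({"order_id": order["order_id"],
                        "campaign": last["campaign"] if last else None,
                        "revenue": order["revenue"]})
    return results
```

Whatever model you pick, the discipline is the same: the function never changes mid-quarter, so trends stay comparable.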
Cadence: weekly, monthly, quarterly
One dashboard view, toggling timeframes, beats three separate dashboards. Same eight metrics, different granularity.
Weekly: metrics 1–5 (active audience, revenue per send, complaint rate, unsub rate, list growth). Fast-moving numbers that respond to campaign changes within days.
Monthly: metrics 6–8 (activation, retention, domain reputation). Slower-moving; week-to-week noise would drown out the signal.
Quarterly: cohort analysis, retention curves, revenue attribution readouts, holdout results. The work to understand why the dashboard metrics moved at all.
The leading/lagging split matters here. Leading indicators — activation, complaint rate, unsubscribe rate, revenue per send, domain reputation — move fast and predict downstream impact. Lagging indicators — 30-day retention, net list growth, total revenue — confirm what the leading ones suggested 30–90 days ago. Weight your current decisions toward the leading ones.
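One way to keep the action test honest is to store each metric's cadence, trigger, and action next to the number itself. A declarative sketch, with hypothetical names and the thresholds from the list above:

```python
DASHBOARD = [
    # Every entry answers the action test up front. Thresholds mirror
    # the article; metric names and actions are illustrative.
    {"metric": "active_audience", "cadence": "weekly",
     "trigger": lambda cur, prev: cur < prev,
     "action": "investigate engagement or sending first"},
    {"metric": "revenue_per_send", "cadence": "weekly",
     "trigger": lambda cur, prev: cur < 0.8 * prev,    # 20%+ WoW drop
     "action": "check for audience or content changes"},
    {"metric": "complaint_rate_30d", "cadence": "weekly",
     "trigger": lambda cur, prev: cur >= 0.003,        # 0.3%
     "action": "run the complaints playbook immediately"},
    {"metric": "activation_rate", "cadence": "monthly",
     "trigger": lambda cur, prev: prev - cur >= 0.03,  # three-point drop
     "action": "investigate the onboarding flow today"},
    # ...the remaining metrics follow the same shape.
]

def fire_triggers(current: dict, previous: dict) -> list[str]:
    """Return the actions whose triggers fired for this review."""
    return [m["action"] for m in DASHBOARD
            if m["trigger"](current[m["metric"]], previous[m["metric"]])]
```

A weekly review then reduces to running fire_triggers on this week's and last week's numbers and assigning whatever comes back.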
The operating rhythm that makes the dashboard real
A dashboard is only useful if someone looks at it on a schedule. Build the review cadence into the team's operating rhythm and it becomes a tool. Skip it and it becomes wallpaper.
Monday 15-minute standup. Review the weekly dashboard. One person presents changes in the five weekly metrics; if any triggered actions, assign and move on. No discussion theatre.
Monthly review, first Monday of the month. Forty-five minutes on the slower metrics plus last month's action items. Output is a list of two or three priorities for the month. Not ten.
Quarterly business review. Cohort analysis, retention curves, experiment readouts, retrospective on what moved. The output shapes next quarter's roadmap, which is the entire point.
A few questions come up on this topic constantly. Should you separate dashboards by channel — email versus push versus SMS? Only if the audiences or goals genuinely differ. For most programs, rolling channels into the same eight metrics with a channel filter beats maintaining parallel dashboards; the team should be thinking about "the program", not "the email program versus the push program".
And if your dashboard currently has 30 metrics and the team is asking for more, audit each one against the action test: if it moved 20% this week, what specifically would we do? Remove everything where the honest answer is "we'd look into it" or "nothing specific"; you'll cut 70% or more. Demote what you cut into a monthly analytical report where context can carry the weight.
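A minimal sketch of that audit, assuming a hypothetical metric inventory where each entry records the team's honest answer:

```python
# Keep a metric only if someone has written down a specific action for
# a 20% move. Inventory entries and answers are hypothetical.
inventory = [
    {"metric": "open_rate",          "on_20pct_move": "we'd look into it"},
    {"metric": "revenue_per_send",   "on_20pct_move": "check audience/content"},
    {"metric": "emoji_subject_rate", "on_20pct_move": None},
]

VAGUE = {None, "", "we'd look into it", "nothing specific"}
dashboard = [m for m in inventory if m["on_20pct_move"] not in VAGUE]
demoted = [m for m in inventory if m["on_20pct_move"] in VAGUE]  # -> monthly report
```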
For the quarterly review itself, the quarterly planning guide covers the format and how to structure readouts that influence prioritisation, not merely inform it.
Related guides
Building a lifecycle team — the roles, the order, the size
Lifecycle is a craft, an ops function, and a strategic lever all at once. Most teams accidentally end up with three people holding overlapping halves of the role. Here's the deliberate version: who to hire first, what triggers the next one, and when CRM stops belonging in brand marketing.
B2B lifecycle marketing: what changes when the buyer isn't the user
B2B lifecycle looks like B2C on the surface — emails, flows, segmentation — but the mechanics underneath are different. Buying committees, account-level intent, sales hand-offs, and product-led overlaps all change the playbook. Here's what's actually different.
Quarterly planning for lifecycle: what actually goes in the plan
Most lifecycle roadmaps are calendar lists of campaigns. A real quarterly plan is different — priorities tied to metrics, with tests, investments, and explicit trade-offs. Here's the format that produces decisions, not lists.
Lifecycle for startups: the three flows to build before anything else
Early-stage programs waste months building the wrong lifecycle flows. Here are the three that compound value at every stage — welcome, trial-to-paid (or first-repeat), and winback — and why everything else can wait.
Reporting lifecycle to executives: the monthly update that actually lands
Most lifecycle reporting to execs is a 20-slide deck of campaign-level charts that nobody remembers a week later. The fix isn't more data — it's a different structure. Three numbers, two decisions, one ask. Here's how to build the report that produces ongoing investment instead of polite nods.
CRM vs CDP: which tool do you actually need?
Vendors sell CRM, CDP, marketing automation, and ESP as if they're four shapes of the same box. They aren't. Here's what each one actually does, where it falls over, and the decision rule for picking one first.