8 min read
The lifecycle metrics dashboard: what to track, what to ignore
Ask a lifecycle team what they track and you'll get a list of 30 metrics. Ask what they act on and the list shrinks to four. That gap is why most lifecycle dashboards are ornamental — they exist to satisfy stakeholders rather than inform decisions. A good dashboard is the opposite: it shows the eight metrics that trigger actions and leaves everything else to ad-hoc reports. Here's the shortlist.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The test for whether a metric belongs
A metric belongs on the dashboard if, and only if, a change in it would trigger a specific action. Otherwise it's context — useful in an ad-hoc report, noise on a weekly review.
Most dashboards fail this test. Open rate sits there; nobody says "open rate dropped 3 points, so we're changing X." If nobody would act, the metric is decoration. Good dashboards are ruthless about this filter.
The eight metrics that pass the test
1. Active audience (weekly). Count of users who received at least one marketing email this week AND engaged (open or click). This is the base that feeds everything downstream. If it drops, you have an engagement or sending problem; investigate it before anything else.
2. Revenue per send (rolling 7-day). Total attributed revenue ÷ total sends over the window. Compares apples to apples across volume changes. Action trigger: a 20%+ week-over-week drop without a volume-shift explanation means something is wrong in the audience or the content (a trigger-check sketch follows this list).
3. Spam complaint rate (30-day rolling). Complaints ÷ delivered, 30-day window. Action trigger: crossing 0.3% requires immediate intervention (see the complaints playbook).
4. Unsubscribe rate (per-send). Unsubs ÷ delivered for the last broadcast. Action trigger: a specific send above 0.5% indicates audience/content mismatch on that campaign.
5. List growth (net, weekly). New subscribes − unsubscribes − hard bounces. Negative weeks are acceptable (hygiene pruning); sustained negative months indicate an acquisition or retention gap.
6. Activation rate (new cohort). Percent of users who hit your activation event within 7 days of signup. The program's leading indicator. Action trigger: 3-point drop in a single cohort = investigate onboarding flow.
7. 30-day retention (cohort). Percent of users active at day 30. The trailing indicator that retention work is paying off. Moves slowly — check it monthly, not weekly.
8. Gmail domain reputation. From Postmaster Tools. Action trigger: any movement to Low or below, regardless of other metrics.
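To make the weekly triggers concrete, here's a minimal sketch of the Monday check in Python. The WeeklyMetrics fields, thresholds, and function names are illustrative assumptions, not a specific tool's API; swap in whatever your ESP or warehouse exports.

```python
from dataclasses import dataclass

@dataclass
class WeeklyMetrics:
    attributed_revenue: float      # rolling 7-day attributed revenue
    sends: int                     # total sends over the same window
    complaints_30d: int            # spam complaints, 30-day window
    delivered_30d: int             # delivered, 30-day window
    last_broadcast_unsubs: int     # unsubscribes from the last broadcast
    last_broadcast_delivered: int  # delivered for the last broadcast

def check_triggers(this_week: WeeklyMetrics, last_week: WeeklyMetrics) -> list[str]:
    """Return the actions the dashboard should trigger this week."""
    actions = []

    # Metric 2: revenue per send — flag a 20%+ week-over-week drop.
    rps_now = this_week.attributed_revenue / this_week.sends
    rps_prev = last_week.attributed_revenue / last_week.sends
    if rps_prev > 0 and rps_now < 0.8 * rps_prev:
        actions.append(f"Revenue per send down {1 - rps_now / rps_prev:.0%} WoW: audit audience and content")

    # Metric 3: spam complaint rate — 0.3% on a 30-day window means immediate intervention.
    complaint_rate = this_week.complaints_30d / this_week.delivered_30d
    if complaint_rate >= 0.003:
        actions.append(f"Complaint rate {complaint_rate:.2%}: run the complaints playbook")

    # Metric 4: unsubscribe rate on the last broadcast — 0.5% means audience/content mismatch.
    unsub_rate = this_week.last_broadcast_unsubs / this_week.last_broadcast_delivered
    if unsub_rate >= 0.005:
        actions.append(f"Unsub rate {unsub_rate:.2%} on the last broadcast: review that campaign's targeting")

    return actions
```

If the function returns an empty list, the Monday standup is a two-minute confirmation; anything it returns becomes an assigned action item.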
The metrics that should not be on the dashboard
The principle: if you would need to ask "what do I do with that?" after reading the number, it doesn't belong on the dashboard. Put it in a campaign-specific report or an ad-hoc analysis where it has context.
The Apple MPP guide covers why open rate specifically has lost value as a primary metric since 2021.
Cadence: weekly, monthly, quarterly
Weekly: metrics 1–5 (active audience, revenue per send, complaint rate, unsub rate, list growth). Fast-moving metrics that respond to campaign changes.
Monthly: metrics 6–8 (activation, retention, domain reputation). Slower-moving; weekly noise outweighs weekly signal.
Quarterly: cohort analysis, retention curves, revenue attribution, holdout readouts. The work to understand why the dashboard metrics moved.
One dashboard view toggling between timeframes beats three separate dashboards. The same eight metrics at weekly / monthly / quarterly granularity is enough for most programs.
The operating rhythm
Dashboards are only useful if someone looks at them on a schedule. Build the review cadence into team operating rhythm:
Monday 15-minute standup: review the weekly dashboard. One person presents changes in the five weekly metrics; if any triggered actions, assign them and move on.
Monthly review (first Monday of the month): 45 minutes on the slower metrics plus the previous month's action items. The outcome is a list of 2–3 priorities for the month.
Quarterly business review: cohort analysis, retention curves, experiment readouts, and a retrospective on what moved. Output shapes the next quarter's roadmap.
The quarterly planning guide covers the quarterly review format and how to structure readouts that influence prioritisation rather than just inform.
Frequently asked questions
- Why only 8 metrics?
- Because each metric on a dashboard is a demand on attention. More than 8 and the team stops really looking at any of them. The discipline is: only add a metric if you can articulate the action it would trigger. Everything else belongs in ad-hoc reports, not the dashboard.
- Why isn't open rate on the dashboard?
- Apple Mail Privacy Protection (2021+) inflates opens by auto-loading images for Apple users. Open rates are now a mix of real opens and machine opens, with the ratio varying by audience. It's still a useful signal in A/B tests (where the inflation is equal across arms), but it's a poor primary metric for a dashboard.
- How do I handle revenue attribution across multiple channels?
- Pick one attribution model and stick with it for the dashboard — usually last-click within a 7-day window for email. Use holdout tests to measure true incrementality separately; don't try to build the 'real' revenue number into the daily dashboard. Consistency beats precision for tracking trends (a minimal sketch of the last-click rule follows these FAQs).
- My dashboard has 30 metrics and the team wants more — how do I cut?
- Audit each one against the action test: 'If this metric moved 20% this week, what would we do differently?' Remove any where the answer is 'we'd look into it' or 'nothing specific'. You'll cut 70%+. Keep the survivors; archive the rest into a monthly analytical report where context can carry the weight.
- Should I separate dashboards by channel (email vs push vs SMS)?
- Only if the channels have meaningfully different audiences or goals. For most programs, rolling channels into the same eight metrics (with a channel filter) is better than maintaining parallel dashboards. The team should think about 'the program', not 'the email program vs the push program'.
- What's a leading vs lagging indicator in this dashboard?
- Leading: activation rate, complaint rate, unsubscribe rate, revenue per send, domain reputation. These move fast and predict downstream impact. Lagging: 30-day retention, net list growth, total revenue. These confirm what the leading indicators suggested 30–90 days ago. Weight recent decisions toward leading indicators.
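For the attribution question above, here's a minimal sketch of last-click attribution with a 7-day lookback. The touch and order shapes are hypothetical; the point is applying one rule consistently, not the precision of the resulting number.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)  # lookback window for the last-click model

def last_click_channel(order_time: datetime, touches: list[tuple[datetime, str]]) -> str | None:
    """touches: (timestamp, channel) pairs across all channels for one user.
    Returns the channel credited under last-click with a 7-day lookback,
    or None if no touch landed inside the window."""
    in_window = [(t, channel) for t, channel in touches
                 if timedelta(0) <= order_time - t <= WINDOW]
    if not in_window:
        return None
    # Credit the most recent touch before the order.
    return max(in_window, key=lambda pair: pair[0])[1]
```

For example, a user who clicked an email on Monday and a paid ad on Wednesday, then ordered on Thursday, credits the paid ad; email's revenue per send only counts orders where email was the last touch in the window.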
Related guides
Building a lifecycle team: the roles, the order, the size
Lifecycle marketing is a craft, an ops function, and a strategic lever all at once — so it's hard to staff. Here's the progression: which role to hire first, when to add the next one, and how to know if you need a CRM manager, a lifecycle strategist, or a marketing ops engineer.
B2B lifecycle marketing: what changes when the buyer isn't the user
B2B lifecycle looks like B2C on the surface — emails, flows, segmentation — but the mechanics underneath are different. Buying committees, account-level intent, sales hand-offs, and product-led overlaps all change the playbook. Here's what's actually different.
Quarterly planning for lifecycle: what actually goes in the plan
Most lifecycle roadmaps are calendar lists of campaigns. A good quarterly plan is different — it's a set of priorities tied to the metrics you want to move, with tests, investments, and explicit trade-offs. Here's the format that produces decisions, not lists.
Lifecycle for startups: the three flows to build before anything else
Early-stage programs waste months building the wrong lifecycle flows. Here are the three that compound value at every stage — welcome, trial-to-paid (or first-repeat), and winback — and why everything else can wait.
Reporting lifecycle to executives: the monthly update that actually lands
Most lifecycle reporting to execs is a deck of campaign-level charts that nobody remembers a week later. Here's the format that actually lands — three numbers, two decisions, one ask — and produces ongoing investment.
CRM vs CDP: which tool do you actually need?
CRM, CDP, marketing automation, ESP — vendors market all four with overlapping feature lists. Here's what each one actually does, what it's bad at, and how to decide which one your program needs first.