The lifecycle audit — a 30-point checklist
The first question I ask any lifecycle team I work with is: when was your last audit? For most programs the honest answer is 'never' or 'the last time someone noticed something was broken'. Audits catch the quiet degradation — the segment that's 80% dormant, the trigger that stopped firing six weeks ago, the naming drift that's quietly making reporting unreliable. Here's the 30-point checklist, organised by severity.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
How to run it without burning three days
An audit that takes three days doesn't get done. An audit that takes three hours, done every quarter, is the one that actually catches drift before it becomes damage.
The 30 points below cover deliverability, data integrity, program health, and operational hygiene. Run all 30 on a quarterly cadence; the first time takes 3–4 hours, each subsequent pass takes 60–90 minutes because you already know where to look. Grade each on a traffic-light system: green (healthy), amber (watch), red (fix this quarter).
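The traffic-light grading can be sketched as a small helper. This is an illustrative implementation, not the Orbit skill's internals; the thresholds shown are the bounce-rate bands from the deliverability section, and any metric where higher is worse grades the same way.

```python
from enum import Enum

class Grade(Enum):
    GREEN = "green"   # healthy
    AMBER = "amber"   # watch
    RED = "red"       # fix this quarter

def grade(value: float, amber_at: float, red_at: float) -> Grade:
    """Grade a metric where higher is worse (e.g. bounce rate %)."""
    if value >= red_at:
        return Grade.RED
    if value >= amber_at:
        return Grade.AMBER
    return Grade.GREEN

# Bounce rate: under 2% green, 2-5% amber, 5%+ red
print(grade(1.4, amber_at=2.0, red_at=5.0))  # Grade.GREEN
```

Thirty of these grades, keyed by check number, are the whole output of the audit, which is what makes quarter-over-quarter comparison trivial.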
The Orbit Lifecycle Audit skill automates the bulk of this — pulls the current state from Braze, flags anomalies against baselines, and produces the structured report. Use this list as the spec for what to look at, whether you're running manually or using the skill.
Deliverability (8 points)
1. Bounce rate over the last 30 days: under 2% green, 2–5% amber, 5%+ red.
2. Spam complaint rate: under 0.1% green, 0.1–0.3% amber, 0.3%+ red. Covered in the complaints playbook.
3. Unsubscribe rate by program: flag any program over 0.5% per send.
4. SPF, DKIM, DMARC configuration intact across every sending subdomain. Covered in the authentication guide.
5. DMARC reports reviewed in the last 30 days — are there unauthorised services signing as your domain?
6. Google Postmaster Tools reputation: High or Medium is healthy. Low or Bad is red.
7. Microsoft SNDS status across all sending IPs.
8. Dormant user suppression active: users who haven't engaged in 90+ days excluded from marketing broadcasts.
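Point 8's dormancy cut can be expressed as a simple split. The field names here are illustrative stand-ins for whatever your ESP exposes as last-engagement data; the 90-day window is the one from the checklist.

```python
from datetime import datetime, timedelta

DORMANCY_DAYS = 90

def marketing_audience(users, today=None):
    """Split users into sendable vs suppressed by last engagement.

    `users` is a list of dicts with a 'last_engaged' datetime —
    an illustrative shape, not a specific platform's schema.
    """
    today = today or datetime.utcnow()
    cutoff = today - timedelta(days=DORMANCY_DAYS)
    sendable = [u for u in users if u["last_engaged"] >= cutoff]
    suppressed = [u for u in users if u["last_engaged"] < cutoff]
    return sendable, suppressed
```

The audit check is simply: does a suppression like this exist, and is it actually attached to every marketing broadcast?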
Data integrity (7 points)
9. Lifecycle stage values populated on every user profile — flag any user stuck in null or an undefined stage.
10. Activation event is still firing at expected volume per week vs baseline.
11. Every segment used in an active program has been updated in the last 7 days (for rolling segments) or matches its spec (for static).
12. Every custom attribute referenced by a live campaign actually exists on user profiles. Use the Braze Data Model Validation skill to catch drift.
13. Event taxonomy hasn't drifted: events that were one per user session are still one per user session (analytics teams change instrumentation without telling lifecycle teams).
14. RBN (Random Bucket Number) distribution is uniform across the expected range — skewed RBNs mean your sampling is broken.
15. Test user suppression is in place for internal emails and employee accounts.
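Point 14's uniformity check doesn't need heavy statistics: comparing every bucket's count to the expected even share catches the skew that breaks sampling. This is a minimal sketch assuming 100 integer buckets and a 50% tolerance band, both of which you'd tune to your setup.

```python
from collections import Counter

def rbn_is_uniform(rbns, buckets=100, tolerance=0.5):
    """Flag skewed Random Bucket Number distributions.

    Compares each bucket's count to the expected even share;
    any bucket more than `tolerance` (50%) away fails the check.
    Assumes RBNs are integers in [0, buckets).
    """
    counts = Counter(rbns)
    expected = len(rbns) / buckets
    for b in range(buckets):
        if abs(counts.get(b, 0) - expected) > tolerance * expected:
            return False
    return True
```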
Program health (9 points)
16. Every active canvas has had a send in the last 30 days. Canvases that haven't fired in 30+ days are either broken or orphaned — either way, pause them until validated.
17. Every scheduled broadcast has a named owner.
18. Onboarding sequence open rate on email #1 is above 40% for Apple-Mail-excluded audiences.
19. Winback program converts at above your baseline reactivation-rate floor (program-specific).
20. Abandoned cart trigger fires within 60 minutes of the trigger event (latency above that is a broken pipeline).
21. Frequency cap is configured on every marketing broadcast and respected by triggered programs.
22. Post-activation sequences fire for newly-activated users — check that transition isn't silently failing.
23. Churned/sunset users are excluded from all marketing sends, including broadcasts run outside the normal program flow.
24. Every program has a defined success metric recorded somewhere — not just "open rate" but the business metric it's intended to move.
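Point 20's latency check is worth scripting, since one slow pipeline hides inside an otherwise healthy average. A p95 over the trigger-to-send gap catches it; the timestamp pairing below is illustrative, pulled however your platform exposes event and send times.

```python
from datetime import datetime, timedelta

SLA_MINUTES = 60  # point 20: abandoned cart must fire within 60 minutes

def latency_p95_minutes(pairs):
    """p95 latency in minutes across (triggered_at, sent_at) pairs.

    `pairs` is a list of (cart_abandoned_at, message_sent_at)
    datetimes — an assumed shape for illustration.
    """
    latencies = sorted(
        (sent - abandoned).total_seconds() / 60
        for abandoned, sent in pairs
    )
    idx = max(0, int(round(0.95 * len(latencies))) - 1)
    return latencies[idx]
```

Grading against the SLA is then `latency_p95_minutes(pairs) < SLA_MINUTES`; anything over indicates a broken pipeline rather than normal queue jitter.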
Operational hygiene (6 points)
25. Naming convention compliance: off-convention campaigns under 10% of the live portfolio. See the naming guide.
26. Content Blocks: no duplicates, no stale blocks (unused in 90+ days), every block has a named owner.
27. Segments: no unused segments in the workspace older than 90 days. Archive them.
28. Templates: every active template renders cleanly in Gmail, Outlook, Apple Mail, and dark mode.
29. Every program has documentation — a program brief that explains purpose, triggers, audience, success criteria. Use the Program Brief skill to backfill any that are missing.
30. Handoff documents exist for any program launched by someone who has since left the company. Knowledge in one person's head is a program at risk.
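Point 25's compliance rate is a one-regex check once you've written your convention down. The pattern below assumes a hypothetical `team_program_channel_yyyymm` convention purely for illustration; substitute your own from the naming guide.

```python
import re

# Illustrative convention: team_program_channel_yyyymm,
# e.g. "lc_onboarding_email_202501" — replace with your own
CONVENTION = re.compile(r"^[a-z]+_[a-z0-9]+_[a-z]+_\d{6}$")

def off_convention_rate(campaign_names):
    """Share of live campaigns whose names miss the convention.

    Point 25 flags the portfolio when this exceeds 0.10.
    """
    misses = [n for n in campaign_names if not CONVENTION.match(n)]
    return len(misses) / len(campaign_names)
```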
What to do with the output
The audit produces 30 traffic-light grades. Three rules for what happens next: any red is a same-week fix (deliverability and data-integrity reds are blocking); ambers get a plan by the end of the quarter; greens become the following quarter's baseline. The audit compounds in value over time because you're tracking deltas, not absolute state.
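Tracking deltas rather than absolute state can be sketched as a diff over two quarters of grades, keyed by check number. A minimal version, assuming grades are stored as the plain strings used above:

```python
def grade_deltas(previous, current):
    """Diff two quarters of traffic-light grades by check number.

    Returns (regressed, improved): checks that moved toward red,
    and checks that moved toward green. Grades are the strings
    'green' < 'amber' < 'red' in severity.
    """
    severity = {"green": 0, "amber": 1, "red": 2}
    regressed = {
        k: (previous[k], current[k]) for k in current
        if severity[current[k]] > severity[previous.get(k, current[k])]
    }
    improved = {
        k: (previous[k], current[k]) for k in current
        if severity[current[k]] < severity[previous.get(k, current[k])]
    }
    return regressed, improved
```

The regressed set is the quarter's work queue; the improved set is what you report upward.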
Share the audit result with stakeholders in whichever format works (a summary report, a Looker dashboard, a Notion page). The purpose isn't the report — it's the visibility of drift. A program where leadership sees the grade every quarter runs tighter than one where it's opaque.
Frequently asked questions
- How often should I run a full lifecycle audit?
- Quarterly for mature programs; monthly for programs still stabilising or that have had a recent deliverability incident. The first audit takes 3–4 hours; subsequent ones run in 60–90 minutes because you know where to look.
- Who should own the audit?
- The most senior lifecycle person, even if they delegate individual checks. An audit signed off by a junior operator doesn't carry the weight needed to unblock the cross-functional issues (data engineering, customer-service changes) that audits typically surface.
- What's the single most-often-missed audit item?
- Paused/orphaned canvases. Programs get built, launched, and forgotten. Six months later they're either silently firing with broken logic or contributing to reputation drag. Check that every active canvas has actually fired in the last 30 days.
- Should I automate the audit?
- Yes — most of the data-integrity and deliverability checks are scriptable against the Braze API. The Orbit Lifecycle Audit skill produces the structured report. The program-health and operational-hygiene checks still need human judgement for the 'is this program serving its purpose?' question.
- What does a red grade actually mean?
- Same-week fix. Reds are blocking — deliverability reds threaten sender reputation, data-integrity reds mean segments or triggers may be broken, program-health reds mean a program is failing silently. If a red can't be fixed in a week, it gets escalated.
- How do I get leadership to care about audit results?
- Translate each grade into a dollar number. '3 red deliverability items at current sending volume puts ~$X monthly revenue at risk' lands differently than 'we have a bounce rate problem'. The Retention Economics skill covers the translation layer.
Related guides
Email deliverability — the practitioner's guide
Deliverability is the cumulative result of every send decision over the lifetime of a domain. This guide covers the four pillars — authentication, reputation, engagement, and list hygiene — and how to recover when one breaks.
IP warm-up for Braze — the practitioner's playbook
A dedicated IP has no sending reputation on day one. This guide shows how to ramp to full volume in 14–30 days without triggering spam filters — including the Random Bucket Number methodology most teams miss.
Apple Mail Privacy Protection, four years in
In 2021, Apple broke the email open rate. Four years later, the dust has settled — and the lifecycle programs that adapted are outperforming the ones still measuring like it's 2020. Here's what actually changed.