Updated · 11 min read
Choosing which lifecycle programs to build first
Every program-specific guide in this library tells you how to build that program. None of them tell you which one to build first. That decision is the single biggest lever a lifecycle lead pulls in their first 90 days, and it's usually made by accident — the team builds the program the CEO saw in a competitor's email, or whatever the agency pitched, or whichever one the ESP has a template for. This guide is the selection framework I wish I'd had when I joined my first CRM role.
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The selection frame in one sentence
Build the program that captures the most revenue you're currently leaving on the table, that your data and team can actually execute well, and whose success you can prove.
Three factors — revenue leak, executability, provability. The build order is the one that optimises all three. In practice that's rarely the program you were planning to build.
Revenue leak — how much money walks past the current lifecycle stack every week that a new program would capture. Higher leak = higher priority.
Executability — does your ESP, your data, your content team, and your stakeholder bandwidth actually support shipping this well? A program you can't execute is worse than no program.
Provability — when you ship it, can you run a holdout and know if it worked? Programs you can't measure become political footballs the first time a CFO asks "what did this do?"
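The three factors multiply rather than add: a zero on any one of them should zero the program's priority. A quick back-of-envelope version of the ranking can be sketched like this — the programs, leak estimates, and factor scores below are illustrative placeholders, not benchmarks; substitute your own:

```python
# Illustrative sketch of the revenue-leak x executability x provability ranking.
# All numbers are made up -- plug in your own estimates.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    monthly_leak: float   # revenue walking past the current stack, per month
    executability: float  # 0-1: can ESP, data, and team ship this well today?
    provability: float    # 0-1: can you run a clean holdout on it?

    @property
    def score(self) -> float:
        # Multiplicative on purpose: a zero on any factor zeroes the program.
        return self.monthly_leak * self.executability * self.provability

programs = [
    Program("cart abandonment", monthly_leak=40_000, executability=0.8, provability=0.9),
    Program("welcome",          monthly_leak=8_000,  executability=1.0, provability=0.7),
    Program("win-back",         monthly_leak=25_000, executability=0.4, provability=0.8),
]

for p in sorted(programs, key=lambda p: p.score, reverse=True):
    print(f"{p.name}: {p.score:,.0f}")
```

Note how win-back's large leak is discounted by poor executability — which is exactly the point of the frame: the program you were planning to build rarely wins on all three factors at once.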
Highest-leverage programs by business type
Different businesses have different revenue-leak shapes. The program ranked #1 for an ecom retailer is often not even top-three for a SaaS product. Start from your actual business model, not from someone else's program inventory.
Adjusting for team size
Solo operator (lifecycle of one). You can't build and run five programs in parallel. Build one. Make it great. Ship the next one when the first is actually generating measurable revenue and is resilient to a week of you being away. The temptation is to stand up thin versions of three programs; the result is three programs that underperform compared to one done well.
Two-to-three-person team. You can run two programs in parallel and build a third in the background. The limit isn't sending capacity — ESPs handle that fine — it's QA, iteration, and post-launch measurement. Every program you ship adds a maintenance tax; underestimate that tax and your "launched" programs quietly decay.
Four-plus team with a specialist. Now you can afford to build programs in parallel and invest in infrastructure (templates, Liquid libraries, testing harnesses). But the frame doesn't change — revenue leak × executability × provability still picks the order. A larger team just means the "executability" constraint loosens. You still shouldn't build a program whose success you can't prove.
The Lifecycle Audit skill exists specifically for this situation: walk into a new company and quickly size up what's in place, what's leaking, and what the ROI-ordered build queue should look like.
Data maturity gates
A program is only buildable if your data can support it. The common version of this mistake: shipping a program that's technically live but firing on the wrong signals because the underlying data model isn't right.
Cart abandonment. Needs clean cart events flowing into your ESP/CDP with item-level data (name, image, price, URL). Needs a reliable "purchased" event to stop the flow. Without both, your cart program will fire at people who already bought — the single worst experience in lifecycle — and you'll burn credibility fast.
Win-back. Needs a reliable "last meaningful activity" timestamp. Most ESPs have "last email open" but that's corrupted by Apple MPP and cached images. You need a behavioural signal — last purchase, last app open, last active session — at a minimum. Running win-back off email-open-only will sunset engaged users who happen to read on iOS.
Post-purchase / replenishment. Needs order data in the ESP with items, amounts, and dates. Replenishment specifically needs repeat-purchase interval data — if you don't know the typical cycle, the replenishment send will feel random.
Welcome. The lowest data bar. You only need the signup event. This is one reason welcome gets built first everywhere — it's the program with the least data dependency. Which is fine, but don't confuse "easiest to build" with "highest priority". For ecom, welcome often isn't in the top three. Custom attribute design →
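The replenishment interval gate above is worth making concrete: before the program can send on time, you need the typical gap between repeat orders. A minimal sketch, assuming order rows of `(customer_id, order_date)` — field names and data are illustrative, map them to your own schema:

```python
# Hedged sketch: estimate the typical repurchase interval from raw order rows.
# This is the number a replenishment program needs before it can send on time.
from collections import defaultdict
from datetime import date
from statistics import median

orders = [  # (customer_id, order_date) -- illustrative data
    ("c1", date(2024, 1, 5)), ("c1", date(2024, 2, 9)), ("c1", date(2024, 3, 12)),
    ("c2", date(2024, 1, 20)), ("c2", date(2024, 3, 1)),
]

by_customer = defaultdict(list)
for cid, d in orders:
    by_customer[cid].append(d)

# Collect day-gaps between consecutive orders per customer.
gaps = []
for dates in by_customer.values():
    dates.sort()
    gaps += [(later - earlier).days for earlier, later in zip(dates, dates[1:])]

# Median is more robust to one-off bulk buys than the mean.
typical_interval = median(gaps)
print(typical_interval)  # schedule the replenishment nudge a few days before this
```

If `gaps` is empty or tiny — not enough repeat purchasers yet — that's the data gate telling you replenishment isn't buildable for this catalogue yet.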
Programs I'd actively wait on
The best lifecycle build queues have programs on them that never ship. That's not a failure — it's the queue doing its job.
A few programs that operators are constantly asked to build but where waiting is usually the right call.
Birthday / anniversary emails. Beloved by CEOs, barely move the needle in measurable terms. They require data (a birthday field) that most signups don't provide willingly, and daily send volume is too thin to meaningfully impact revenue. Build these last, if at all, and certainly not before your cart program. Honest assessment →
SMS before your email program is solid. SMS is a powerful channel, but it compounds whatever email-channel habits you've built. Bad cadence in email becomes worse cadence in SMS. Adding SMS before your email lifecycle is measurable is adding complexity you can't diagnose. Solid email first, then layer SMS into the highest-leverage moments. SMS playbook →
Loyalty / referral programs pre-scale. Loyalty and referral are scale plays — they work when you have enough volume that even small referral rates produce meaningful users. Building them pre-scale usually produces a beautiful program with 40 enrolments and zero economic impact. Wait until you have an audience that can move the needle on word of mouth.
"AI personalisation" without a measurement baseline. The pitch is compelling. The reality is that AI personalisation needs a clean baseline to beat, a long enough test window, and a holdout — all of which require the foundational lifecycle measurement to be in place first. Don't skip the foundation.
The 90-day build queue
A pragmatic quarter-long build queue for a new CRM lead at a typical mid-size ecom or SaaS company:
Weeks 1-2
Audit + baseline
Weeks 3-5
Ship program #1
Weeks 6-8
Measure + iterate
Weeks 9-12
Program #2 + infra
Weeks 1-2: Audit + baseline. Don't ship anything yet. Inventory what's there, what the ESP/data will support, and establish the metrics baseline so future work has something to beat. The lifecycle audit is the full checklist for this step.
Weeks 3-5: Ship program #1. The highest-leverage program for your business type. Ship it with a holdout from day one so you can measure incrementality honestly. Holdout group design →
Weeks 6-8: Measure + iterate. Let the holdout data accumulate. Iterate on creative, segment, and cadence based on what the numbers say. Every program takes 4-6 weeks to produce trustworthy data; shipping the next one before this one has been measured is how lifecycle teams end up with a dozen half-working programs.
Weeks 9-12: Program #2 + infrastructure. Second-highest-leverage program. Alongside it, invest in reusable infrastructure: template library, Liquid snippets, naming conventions, segmentation patterns. The payoff on infra shows up in program #3 and every program after.
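The weeks 6-8 measurement step comes down to one readout: revenue per user in the treated group versus the holdout. A minimal sketch of that arithmetic for a 90/10 split — the counts below are illustrative, not benchmarks:

```python
# Minimal incrementality readout for a 90/10 holdout.
# Numbers are illustrative; plug in your own user and revenue counts.
treated_users, treated_revenue = 45_000, 180_000.0
holdout_users, holdout_revenue = 5_000, 17_500.0

rpu_treated = treated_revenue / treated_users   # revenue per user, treated
rpu_holdout = holdout_revenue / holdout_users   # revenue per user, held out

incremental_rpu = rpu_treated - rpu_holdout
lift = incremental_rpu / rpu_holdout
# Scale per-user incrementality to everyone who actually received the program.
incremental_revenue = incremental_rpu * treated_users

print(f"lift: {lift:.1%}, incremental revenue: {incremental_revenue:,.0f}")
```

This is the number that survives a CFO's "what did this do?" — total revenue attributed by last-click doesn't. Run the comparison over the full 4-6 week window, not week by week, so seasonality hits both groups equally.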
Where to go next
You've got the queue. Now the specific playbooks for each canonical program:
1. Onboarding email flows — how to build the welcome / activation sequence properly.
2. Abandoned cart emails — the cart recovery playbook.
3. Win-back flows — reactivation sequencing.
4. Post-purchase emails — turning one-time into repeat.
5. Holdout group design — how to measure any of this honestly.
6. Personalisation not creepy — when to crank the dial, when to dial back.
7. Lifecycle metrics dashboard — proving impact to the rest of the business.
Frequently asked questions
- My CEO wants a birthday email program. What do I tell them?
- That birthday programs are low-leverage and should be queued after the cart / post-purchase / win-back work that moves measurable revenue. Show the expected incremental revenue from each program type — a cart program on 5% of your annual GMV vs a birthday program on 0.2% — and the trade-off argues itself. If they still insist, build a thin version and prove it underperforms the alternative with real holdout data.
- Should I build the program my competitor has?
- Not because they have it. They might have built it before measuring and be stuck with a program that doesn't work. Build programs based on your revenue-leak shape, not on competitive theatre. The only competitor signal worth trusting is 'they've run this for three years and kept it' — which means it earned its keep.
- What if I build a program and the holdout says it doesn't work?
- Keep it off. Seriously. Killing a program is a rare and valuable action — most teams let ineffective programs limp on because the political cost of turning them off feels higher than the cost of running them. It's not. Every active program is a maintenance tax, a reputation cost, and a distraction from the program that would work. Kill the underperformer, take the lesson, build the next one with sharper hypotheses.
- How do I sequence program infrastructure vs program shipping?
- Ship program #1 with whatever tooling you have. Extract the reusable infrastructure from it as you build program #2. By program #3 you should have a templating pattern, naming convention, and test harness that make program #4 faster to ship than program #1 was. Don't build infrastructure before you've shipped anything — you'll build the wrong infrastructure for the programs you actually needed.
- How do I decide between SMS and push as the second channel?
- Go where the users' attention already is. If your product has an app with regular usage, push is free and relevant; add it before SMS. If your product lives in a browser or the users are email-first, add SMS for the highest-urgency moments (cart, shipping updates, win-back) and keep it narrow — never blast-SMS.
This guide is backed by an Orbit skill
Related guides
Browse all →
What is lifecycle marketing? A field guide for operators starting from zero
If you're new to CRM and lifecycle, the field reads like a pile of acronyms and vendor demos. It's actually one simple idea executed across five canonical programs. Here's the frame that makes the rest of the library make sense.
Lifecycle marketing for flat products
The standard lifecycle playbook assumes weekly engagement and neat stage progression. Most real products aren't shaped like that. This is how to design lifecycle for products used once a year, once a quarter, or whenever the user happens to need you — where the textbook quietly makes things worse.
The cadence question: how often should you email?
Everyone asks how often to email. Almost nobody answers it properly, because it's the wrong question. Cadence isn't a single number — it's a consequence of five other decisions you probably haven't made yet. Here's the version of the debate that resolves.
The lifecycle audit — a 30-point checklist
Lifecycle programs decay silently. A recurring audit is the cheapest discipline that catches drift before it shows up in the revenue deck. Here's the 30-point list, grouped by severity, that takes three hours the first time and ninety minutes thereafter.
Abandoned cart emails: what actually works
Cart abandonment is the easiest program to get wrong because the defaults work well enough to hide the problem. Here's the structure that actually moves incremental revenue — timing, sequencing, and the discount policy most teams have backwards.
Post-purchase emails: what to send after the receipt
Post-purchase is the highest-engagement window in the entire customer relationship and most lifecycle programs spend it sending a receipt, a generic welcome, and then silence. Here's the 30-day sequence that actually earns the second purchase.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 54 lifecycle methodologies, 55 MCP tools, native Braze integration. Pay what it's worth.