Quarterly planning for lifecycle: what actually goes in the plan
The lifecycle team sits down for quarterly planning and produces a Google Doc listing 40 campaigns. It's tidy, it's comprehensive, and it's not a plan — it's a calendar. By week 6 of the quarter, half the items have moved, new urgent ones have landed, and the original document is out of date. A real quarterly plan looks different: fewer items, more specific, tied to metrics, with explicit decisions rather than activities. Here's how to run one.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The difference between a calendar and a plan
A calendar answers "what are we sending when". A plan answers "what are we trying to achieve, what's in the way, and what specifically will we do about it".
The test for a plan: if your priorities change halfway through the quarter, does the plan still serve you? A calendar becomes garbage. A plan becomes the thing you reprioritise within.
Most lifecycle teams drift into calendars because calendars feel productive — you can fill them in, you can show a stakeholder a grid of campaigns. But the calendar doesn't help the team decide which of 20 possible campaigns actually matters. The plan does.
The five-section quarterly plan
Section 1: Where we are. One page of current state. Active audience size, revenue per send, complaint rate, 30-day retention, deliverability health. Compared to last quarter. What moved, what didn't, what's concerning.
Section 2: The three priorities for the quarter. Not ten, not thirty. Three. Each priority is stated as a metric-level goal — not an activity. "Lift trial-to-paid conversion from 12% to 15%" is a priority. "Ship a trial email flow" is an activity.
Section 3: For each priority, the plan. 3–5 specific investments per priority. These are activities: ship the trial flow, run the welcome test, improve the product reminder. Tied to the priority they serve. An activity that doesn't map to a priority shouldn't be in this section — it's a distraction.
Section 4: What we're explicitly not doing. Five to ten things that will come up during the quarter that are NOT on the plan — requests from other teams, campaign ideas that are tempting but below the priority bar, low-value maintenance work. Documenting the no-list prevents re-litigating it every time someone asks.
Section 5: What we'll learn regardless. 2–3 experiments that will produce insights whether they "win" or not. These are separate from the priorities — they're learning investments. E.g., "test whether send-time optimisation produces real lift on our list".
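If it helps to make the structure concrete, here is a minimal sketch of the five sections as plain data, with the two constraints that matter baked in: exactly three priorities, and every activity mapped to a priority. The field names and example checks are illustrative, not a prescribed template.

```python
# A sketch of the five-section plan as data. Names and fields are hypothetical.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Priority:
    metric: str      # metric-level goal, e.g. "trial-to-paid conversion"
    baseline: float
    target: float

@dataclass
class Activity:
    name: str
    serves: str      # must match a Priority.metric, otherwise it's a distraction

@dataclass
class QuarterlyPlan:
    current_state: dict[str, float]   # Section 1: where we are
    priorities: list[Priority]        # Section 2: exactly three metric-level goals
    activities: list[Activity]        # Section 3: 3-5 investments per priority
    not_doing: list[str]              # Section 4: the explicit no-list
    experiments: list[str]            # Section 5: learn-regardless bets

    def issues(self) -> list[str]:
        problems = []
        if len(self.priorities) != 3:
            problems.append(f"{len(self.priorities)} priorities listed; the plan calls for exactly 3")
        goals = {p.metric for p in self.priorities}
        for a in self.activities:
            if a.serves not in goals:
                problems.append(f"'{a.name}' maps to no priority; cut it or add it to not_doing")
        return problems
```

Running a check like issues() over a draft is a cheap way to notice a plan drifting back into a calendar.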
How to pick the three priorities
Most programs have more priority candidates than capacity. The filter for picking three:
1. Evidence of leverage. Prior cohort analysis, benchmark gaps, or experimental data showing this area has room to move. A priority based on "it feels like we should work on this" is weaker than one with specific evidence.
2. Team capacity. Honest estimate of engineering, design, copy, and data-science hours needed. A priority with no capacity to execute is a fantasy.
3. Dependency readiness. External teams or data availability that the priority depends on. A retention priority that needs product telemetry the product team is 3 months away from shipping is blocked; pick a different one.
The intersection of leverage + capacity + dependency readiness usually leaves 3–5 candidates. Pick 3; rank the others as contingent backup.
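To show how the filter plays out, here is a small sketch that drops candidates without evidence or with blocked dependencies, ranks the rest by leverage, and keeps what fits within capacity. The candidates, hours, and leverage scores are invented for illustration; this is not a scoring model to adopt wholesale.

```python
# Hypothetical priority candidates with rough leverage, capacity, and dependency data.
candidates = [
    {"goal": "lift trial-to-paid 12% -> 15%", "evidence": True,  "hours": 180, "deps_ready": True,  "leverage": 0.9},
    {"goal": "reduce complaint rate",         "evidence": True,  "hours": 120, "deps_ready": True,  "leverage": 0.7},
    {"goal": "improve 30-day retention",      "evidence": False, "hours": 150, "deps_ready": False, "leverage": 0.8},
    {"goal": "winback revival",               "evidence": True,  "hours": 90,  "deps_ready": True,  "leverage": 0.5},
]
available_hours = 400  # honest quarterly capacity across eng, design, copy, data

# Drop anything without evidence of leverage or with blocked dependencies,
# then rank by leverage and take what fits in capacity, up to three.
viable = sorted(
    (c for c in candidates if c["evidence"] and c["deps_ready"]),
    key=lambda c: c["leverage"],
    reverse=True,
)
chosen, hours_used = [], 0
for c in viable:
    if len(chosen) < 3 and hours_used + c["hours"] <= available_hours:
        chosen.append(c)
        hours_used += c["hours"]
backups = [c for c in viable if c not in chosen]  # contingent backups
```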
The quarterly review cadence
The plan is only useful if it's revisited. Cadence:
Weekly: short check-in on activities tied to each priority. Are they on track?
Monthly: broader review. Are the priorities still the right priorities? Has something shifted (competitor move, product change, data surprise)?
End of quarter: retrospective. Did we move the metrics? What did we learn? What goes in the next plan?
The common failure modes
Too many priorities. "These are our three priorities" followed by a list of seven. Pick three; everything else is in Section 4 (explicitly not doing).
Priorities as activities, not goals. "Ship the welcome flow" is an activity. "Lift week-1 activation from 30% to 38%" is a goal. Goals survive activity pivots; activities don't.
No capacity math. A priority that requires 400 hours of work in a team with 200 available hours is a plan to fail. Be honest about capacity; cut priorities until the math works.
No trade-offs documented. The plan says yes to three things and doesn't say no to anything. When the plan says yes to X and no to Y, it's a real plan. When it lists yesses, it's a wish list.
Orbit includes quarterly plan production as one of its default outputs. The plan is the forcing function for the trade-offs that determine what actually gets done.
Frequently asked questions
- How long should quarterly planning take?
- 1–2 weeks of concentrated work, not a single meeting. The work includes: cohort analysis to identify leverage, capacity math, stakeholder alignment on priorities. The plan document itself takes a few hours to write; the thinking takes 1–2 weeks. Don't compress this into a single session.
- Should stakeholders outside the lifecycle team see the plan?
- Yes. The plan is the alignment artefact — it tells brand, product, sales, and exec what the lifecycle team is prioritising. Share the plan, review it with stakeholders, get agreement on the priorities and the no-list. This prevents the mid-quarter 'why aren't you doing X' surprises.
- What if a major thing lands mid-quarter that wasn't in the plan?
- Decide explicitly: does this replace one of the three priorities, or does it get added to the 'not doing' list? Don't silently add it and hope the team can handle everything. If it truly replaces a priority, re-plan; if it doesn't, defer it to next quarter.
- How specific should the plan be about activities?
- Specific enough that a reader can tell what's in scope and what isn't. 'Build the welcome flow' is too vague; 'Ship the 5-email welcome sequence with trigger rules, QA'd in Braze, live by end of month 1' is specific. Specificity prevents scope creep later.
- Do I need a plan if the team is small (1–2 people)?
- Yes, and arguably more so. A smaller team has less slack; priority misalignment costs more. A 1-page plan for a 2-person team is usually enough. The format doesn't need to scale up; the discipline of picking priorities and saying no does.
- What's the right balance of new work vs maintenance?
- Roughly 60/40 new/maintenance for most programs. Pure-new quarters get burned by production issues; pure-maintenance quarters don't ship new capability. Maintenance goes in Section 3 alongside new priorities where it ties to a goal (e.g., 'fix the bounce handling' under a 'reduce complaint rate' priority).
Related guides
Building a lifecycle team: the roles, the order, the size
Lifecycle marketing is a craft, an ops function, and a strategic lever all at once — so it's hard to staff. Here's the progression: which role to hire first, when to add the next one, and how to know if you need a CRM manager, a lifecycle strategist, or a marketing ops engineer.
B2B lifecycle marketing: what changes when the buyer isn't the user
B2B lifecycle looks like B2C on the surface — emails, flows, segmentation — but the mechanics underneath are different. Buying committees, account-level intent, sales hand-offs, and product-led overlaps all change the playbook. Here's what's actually different.
The lifecycle metrics dashboard: what to track, what to ignore
Most lifecycle dashboards show 40 metrics and answer no questions. A good one shows 8 and tells you what to do next. Here's the eight-metric dashboard that actually runs a lifecycle program.
Lifecycle for startups: the three flows to build before anything else
Early-stage programs waste months building the wrong lifecycle flows. Here are the three that compound value at every stage — welcome, trial-to-paid (or first-repeat), and winback — and why everything else can wait.
Reporting lifecycle to executives: the monthly update that actually lands
Most lifecycle reporting to execs is a deck of campaign-level charts that nobody remembers a week later. Here's the format that actually lands — three numbers, two decisions, one ask — and produces ongoing investment.
CRM vs CDP: which tool do you actually need?
CRM, CDP, marketing automation, ESP — vendors market all four with overlapping feature lists. Here's what each one actually does, what it's bad at, and how to decide which one your program needs first.