Methodology · Transparent by design
How Orbit is built: AI-first, by design
Orbit is built by one person — Justin Williames — with Claude as the engineering partner, and a deliberate AI-first methodology that compounds across every release. This page explains exactly how, because at a time when AI-generated content quality is the main source of scepticism in the market, transparency matters more than marketing copy.
The core idea
A lifecycle marketer with a decade of operator context, paired with Claude as an engineering and writing partner, can ship a product that would previously have required a team of 8–12. Not by outsourcing judgement to AI — by making judgement the scarce resource and using AI to remove every other bottleneck around it.
What the operator owns, what Claude does
Every Orbit skill, every guide, every architectural decision starts with Justin's operator judgement — the kind of judgement you only get from having run CRM programs at Linktree, Depop, Trainline, and Deliveroo. Knowing which segmentation patterns survive contact with real data, which deliverability rules actually matter, why most A/B tests fool their owners. That framing can't be offloaded — and isn't.
Claude handles the layer below that: translating the framing into production code, writing the TypeScript for new skills, drafting the first version of every long-form guide, generating the MJML for email templates, producing the JSON-LD structured data, scaffolding tests. Justin reviews, rewrites where the voice needs sharpening, ships.
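As an illustration of one of those artefacts, here is a minimal sketch of the JSON-LD structured data a guide page might emit. The field values and the helper name are hypothetical placeholders for this page, not Orbit's actual markup.

```typescript
// Minimal sketch: schema.org Article JSON-LD for a practitioner guide page.
// Values below are illustrative, not taken from Orbit's real pages.
interface GuideJsonLd {
  "@context": "https://schema.org";
  "@type": "Article";
  headline: string;
  author: { "@type": "Person"; name: string };
  datePublished: string; // ISO 8601 date
}

function guideJsonLd(headline: string, author: string, published: string): string {
  const doc: GuideJsonLd = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline,
    author: { "@type": "Person", name: author },
    datePublished: published,
  };
  // At render time this string would land inside a
  // <script type="application/ld+json"> tag in the page head.
  return JSON.stringify(doc, null, 2);
}

console.log(guideJsonLd("Segmentation that survives real data", "Justin Williames", "2025-01-15"));
```

The draft-then-review loop applies here too: Claude generates the structure, the operator checks that the claims inside it are true.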
The practical effect: the same 10 hours of Justin's time that used to produce one well-written guide now produce 10 well-reviewed guides. The 10x isn't magic — it's the removal of the writing/coding friction that surrounded the core judgement work.
The actual stack
Claude (via Claude Code) — primary engineering partner. Every new skill, guide, and feature starts here. Claude does the first draft; Justin reviews against operator reality and edits until the voice and content are right. Sessions are typically 30 minutes for a focused ship, occasionally 2–3 hours for a bigger feature.
The Orbit MCP extension itself — every lifecycle marketing decision Orbit's users make is the same decision Justin faces when building Orbit. Meta, but useful: the extension is dogfooded constantly. If a skill doesn't produce a clean output for Justin, it doesn't ship.
Git, Vercel, Next.js, Postgres — standard modern web stack. Nothing exotic. The goal is to reduce infrastructure fiddling time to near-zero so the operator can spend time on judgement, not on config.
A test suite that gates every ship — 58+ automated tests including MCP contract tests, error-path suites, and a11y / SEO tripwires that catch regressions before they land. Build fails if any test fails; the bar is no exceptions.
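To make "MCP contract tests" concrete, here is a hedged sketch of the kind of check such a gate might run: validating that a tool response matches an expected envelope before anything ships. The response shape and function name are assumptions for illustration, not Orbit's actual contract.

```typescript
// Illustrative contract check: the envelope below is a hypothetical shape,
// not Orbit's real MCP contract.
type ToolResponse = {
  content: Array<{ type: string; text?: string }>;
  isError?: boolean;
};

function passesContract(res: unknown): res is ToolResponse {
  if (typeof res !== "object" || res === null) return false;
  const r = res as Record<string, unknown>;
  if (!Array.isArray(r.content)) return false;
  // Every content item must declare a type; text items must carry a string payload.
  return (r.content as Array<Record<string, unknown>>).every(
    (item) =>
      typeof item.type === "string" &&
      (item.type !== "text" || typeof item.text === "string"),
  );
}

// A build gate would fail the ship if any fixture fails the contract.
const fixtures: unknown[] = [
  { content: [{ type: "text", text: "clean output" }] }, // passes
  { content: [{ type: "text" }] }, // missing payload: fails the gate
];
console.log(fixtures.map(passesContract)); // true, then false
```

The point of a gate like this is that it runs on every build, so a regression in the response shape surfaces before users see it.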
Why this works for lifecycle marketing specifically
Lifecycle marketing content on the open web is mostly vendor-blog sludge — written by content farms, optimised for SEO keywords, divorced from actual operator experience. Operators can spot this within two paragraphs. It's also what most AI-generated marketing content defaults to — because the training data IS mostly vendor sludge.
Pairing Claude with a real operator breaks that default. The operator's judgement keeps the output calibrated to what's actually true in the field. Claude's production capacity scales the output. The result is content with the voice and specificity of authored work — at a volume that authored work alone can't match.
Which is why Orbit can ship 80 long-form practitioner guides, 54 skills, and 55 MCP tools as a one-person project. Not because AI replaced the practitioner. Because AI removed the bottleneck that stopped one practitioner from sharing what they know at scale.
The operating rhythm
New skills, new guides, new web apps ship weekly to monthly. See the What's New page for the timeline — the cadence is real, not a launch calendar.
Every release goes through the same loop: operator identifies a real pattern that deserves a skill / guide / tool. Claude drafts the first version. Justin reviews, rewrites the parts that would embarrass him if a seasoned operator read them, tests end-to-end, ships. The feedback loop from Orbit users feeds directly back into the next round.
The compounding effect of this: the product improves in directions users ask for, at a pace that SaaS competitors with 20-person teams can't match. Because the bottleneck isn't capacity — it's operator judgement — and judgement compounds.
What Orbit will and won't be
Will: keep shipping new skills and guides as practitioner-grade content, at AI-first pace. Keep being transparent about how it's built. Stay free to install, pay-what-it's-worth to support. Stay opinionated — generic content is what Orbit refuses to be.
Won't: become a SaaS with seat-based pricing and a bloated sales motion. Hire a content team to pump out generic SEO articles. Accept a fundraise that demands growth-at-all-costs trade-offs. Apologise for being built with AI — that's the feature, not the bug.
What's next
- Install Orbit → Free forever. One download, Claude Desktop does the rest.
- Read the guides → See the methodology this page describes, applied.
- About Justin → The operator behind the product.
- Follow on LinkedIn → Short-form writing on lifecycle, CRM, and AI-first marketing.