Updated · 7 min read
Review request emails: the timing that actually produces reviews
Every commerce and SaaS program needs reviews, and the review request email is the primary mechanism. Most programs send them as a generic batch 14 days after purchase and get 2% response rates. The better version — timed to actual product-use milestones, written like a human, asking one specific question — produces an order of magnitude more reviews.
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The timing that works
The biggest lever, by a wide margin, is when you ask. Too early and the user hasn't used the product yet. Too late and the moment has passed. Both produce the same 2% response rate.
Ecommerce physical product: 7–14 days after delivery, not after order. Shipping time means the user may still be waiting when you send at 14 days from order. Trigger on delivery confirmation plus N days, where N depends on product use cycle — apparel: 5 days, electronics: 10 days, furniture: 21 days.
SaaS product: after a meaningful usage milestone — 30 days of active use, or completion of a primary workflow. Time-based alone is weak. Usage-based is strong.
Content or course: after completion or a meaningful percentage consumed. "Finished the course? Tell us how it went" converts better than "2 weeks since you signed up, review?".
Right time to ask is when the user has formed an actual opinion. Before that, they have nothing to say. After that, the moment has passed.
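The delivery-triggered timing above can be sketched as a small scheduling function. The category names and the 7-day default are illustrative assumptions; the per-category delays match the apparel/electronics/furniture figures given earlier.

```python
from datetime import date, timedelta

# Days to wait after DELIVERY confirmation, not after order.
# Values follow the use-cycle figures above; extend per catalogue.
REVIEW_DELAY_DAYS = {
    "apparel": 5,
    "electronics": 10,
    "furniture": 21,
}

def review_request_date(delivered_on: date, category: str) -> date:
    """Schedule the review ask N days after delivery confirmation."""
    delay = REVIEW_DELAY_DAYS.get(category, 7)  # assumed default: low end
    return delivered_on + timedelta(days=delay)

# Electronics delivered March 1 -> ask on March 11.
print(review_request_date(date(2025, 3, 1), "electronics"))
```

The key design point is that the function takes a delivery date, never an order date, so shipping variance can't push the ask ahead of first use.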
The message pattern
Subject: "How's the [product] working out?" or "Quick question about your [product]". Question-framed subjects convert higher than command-framed ("Please leave a review").
Opening: reference the specific product and delivery or use context. "Your [product name] arrived on [date]. By now you've probably had a chance to use it." Specific beats generic.
The ask: one question, one CTA. "Would you share a quick review? It helps other customers decide." Not three CTAs. Not a survey. One review link.
Make it easy: link direct to the review form with the product pre-filled. Every extra click costs response rate. Where possible, embed a 1–5 star rating in the email itself; tapping any rating deep-links to the review form with that rating pre-filled.
Social proof the ask: "Join [X] customers who've shared their experience with [product]." Reinforces that reviewing is a normal thing to do, not a chore.
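The embedded star rating works by giving each star its own deep-link. A minimal sketch, assuming a hypothetical review-form URL and query parameters (`product`, `rating`); real parameter names depend on your review platform:

```python
from urllib.parse import urlencode

def star_rating_links(base_url: str, product_id: str) -> dict[int, str]:
    """One deep-link per star. Tapping a star in the email lands the user
    on the review form with product and rating already pre-filled."""
    return {
        stars: f"{base_url}?{urlencode({'product': product_id, 'rating': stars})}"
        for stars in range(1, 6)
    }

links = star_rating_links("https://example.com/review", "SKU-123")
print(links[5])  # https://example.com/review?product=SKU-123&rating=5
```

Each star in the email template then becomes an `<a href>` pointing at the matching link, so the first tap already captures the rating.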
Why incentives usually backfire
Discount-for-review seems like an obvious lever, but most major review platforms prohibit incentivised reviews outright, and the incentive skews who responds: you hear from people chasing the reward, not people with a formed opinion. The result is a review base that reads as bought, because it is.
Ethical alternatives: loyalty-program points (not directly redeemable for discount), or entry into a monthly draw for reviewers. Both are weaker incentives than direct discount but don't trip platform rules. Check your specific review platform's policies before building anything.
Stronger path: invest in the asking itself. Better timing plus better messaging plus frictionless flow lifts submission 2–3× without platform risk. Same outcome, no policy landmines.
Handling low ratings
Some programs route low ratings (1–3 stars) to support rather than to public review. There's a legitimate version of this and a dark version.
Legitimate: when a user rates 1 star, trigger a support flow — "We're sorry; can we help?" Offer resolution. After the resolution, they can still leave a public review if they want. This is customer-service first, reviews second.
Dark pattern: intercepting low ratings entirely so they never post publicly, while routing high ratings straight to public platforms. Technically shapes review averages. Widely considered unethical and against most platform terms.
The first pattern is fine and often helpful. The second is risky and platform-violating. Be honest with yourself about which you're doing — the line is the "even if they don't resolve, the user can still post" principle. If your flow quietly prevents publication, you're in the second bucket.
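The legitimate version of the flow can be expressed in a few lines. This is a sketch with hypothetical step names; the structural point is that the public review link is offered unconditionally:

```python
def route_rating(stars: int) -> list[str]:
    """Legitimate low-rating routing: 1-3 stars trigger a support flow
    first, but the public review link is ALWAYS offered afterwards.
    Nothing is quietly suppressed."""
    steps = []
    if stars <= 3:
        steps.append("trigger_support_flow")   # "We're sorry; can we help?"
    steps.append("offer_public_review_link")   # the line that keeps it ethical
    return steps

print(route_rating(2))  # ['trigger_support_flow', 'offer_public_review_link']
print(route_rating(5))  # ['offer_public_review_link']
```

If your implementation has a branch where `offer_public_review_link` never runs for low ratings, you've built the dark pattern.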
Measuring review program health
Submission rate: percent of emailed users who submit a review. 5–15% is healthy with good timing and messaging; 1–3% is typical for generic unoptimised programs. That's a five-times gap.
Average rating: should trend high (4+ on a 5-scale) if the product is genuinely good. If average is low, fix the product before optimising the request mechanism. A better review email for a worse product is a louder complaint.
Review depth: average word count of submitted reviews. Higher depth equals more useful reviews for prospective customers. Open-ended prompts ("What would you tell a friend about this product?") produce better depth than structured forms.
Time-to-review: median days from request to submission. Users who submit within 24 hours are highest-intent. If most reviews come days later, the request is getting shelved and forgotten.
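The four health metrics above reduce to a few lines of arithmetic. A minimal sketch, assuming hypothetical review records with `rating`, `words`, and `days_to_submit` fields (the field names are illustrative):

```python
from statistics import mean, median

def program_health(requests_sent: int, reviews: list[dict]) -> dict:
    """Compute the four review-program health metrics from raw records."""
    return {
        "submission_rate": len(reviews) / requests_sent,
        "avg_rating": mean(r["rating"] for r in reviews),
        "avg_depth_words": mean(r["words"] for r in reviews),
        "median_days_to_review": median(r["days_to_submit"] for r in reviews),
    }

reviews = [
    {"rating": 5, "words": 80,  "days_to_submit": 0.5},
    {"rating": 4, "words": 40,  "days_to_submit": 2.0},
    {"rating": 5, "words": 120, "days_to_submit": 1.0},
]
health = program_health(requests_sent=30, reviews=reviews)
print(health["submission_rate"])  # 0.1 -> within the 5-15% healthy band
```

Median (not mean) time-to-review matters because a long tail of late submissions would drag a mean upward and hide the healthy same-day cohort.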
On follow-ups: one, seven days after the original. Two total messages is the right number. More than that feels pestering. If they haven't reviewed after two asks, they're not going to. Cut the flow.
Which platform? Depends on the business. Commerce: platform-native (Shopify, WooCommerce) plus Google Reviews for SEO. Higher-consideration purchases: Trustpilot or Google Reviews. SaaS: G2, Capterra, TrustRadius depending on category. Multiple platforms are fine; just make sure the request flow is specific to each, not a blended "please leave a review somewhere" ask.
Orbit places review programs as a standing trigger after key product milestones, not a periodic batch. Batch sends produce batch-shaped response rates — which is to say, poor ones.
Related guides
Product launch email sequence: the five emails that actually sell a new product
A product launch with one big announcement email captures a fraction of the addressable audience. A proper five-email sequence catches multiple attention windows, builds anticipation, and converts the users who needed a second or third touch. Here's the structure that reliably outperforms the single-send version.
Browse abandonment: the program that sits between ads and cart
Browse abandonment catches the users who viewed a product and left without adding to cart. Smaller per-user lift than cart abandonment. Ten to twenty times the trigger volume. For most programs it's the biggest revenue lever you haven't shipped yet.
Referral program emails — the three flows that make it work
Referral programs live or die on the lifecycle messaging wrapped around them. Three flows matter: inviter prompt, invitee welcome, reward confirmation. Get the timing and copy right on each and you double conversion without touching the offer.
Trial-to-paid: the seven-email sequence that converts 20%+ of free users
Trial conversion is the most financially leveraged flow in SaaS — every percentage point compounds directly against CAC. Here's the seven-email sequence that reliably moves trial conversion from 5% to 20%+.
Replenishment emails: the lifecycle flow that buys itself
Replenishment emails remind users to re-order a consumable before they run out. Done right, they generate the highest revenue-per-send in any lifecycle program because purchase intent is already established. Here's the timing, data, and copy.
Price increase emails: how to raise prices without a churn spike
A price increase is one of the highest-risk lifecycle moments your program will ever run. Done wrong, it triggers churn, public complaints, and a reputation dent that outlasts the extra revenue. Done right, most users accept the change without friction. Here's the sequence that works.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 54 lifecycle methodologies, 55 MCP tools, native Braze integration. Pay what it's worth.