Updated · 8 min read
Apple Mail Privacy Protection, four years in
Mail Privacy Protection shipped at WWDC 2021. Every deliverability blog declared email dead. It wasn't. What Apple actually killed was the open rate — which turned out to be doing four jobs nobody had bothered to separate, and all four went sideways at different speeds. Here's the honest version of what broke, what to use instead, and why the programs that fixed this in 2022 are quietly running laps around the ones that didn't.
By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
What Apple actually changed
The open rate died the week iOS 15 shipped. Nobody noticed for months because the number went up, and up-and-to-the-right is a very easy number to ignore.
The mechanism is dull and the consequences are not. Mail Privacy Protection, shipped with iOS 15 in September 2021, pre-fetches every remote image in a message the moment it lands in an Apple Mail inbox. The user doesn't have to open the message. The user doesn't have to be near the phone. The tracking pixel fires. The proxy servers launder the IP on the way.

Source: Apple, "Use Mail Privacy Protection on iPhone" (official documentation of MPP behaviour, including image pre-fetching and IP masking): support.apple.com/guide/iphone/use-mail-privacy-protection-iph2a7a6fdac/ios
MPP is on by default for every new Apple Mail account. Which means, in practical terms, that for a huge share of your list the email is "opened" within seconds of arrival regardless of what the human on the other end is doing. Asleep. On a flight. At the gym. Still an open.
Can you tell which specific users have MPP switched on? Not directly — Apple masks that on purpose. You can infer it well enough in aggregate: MPP opens fire within a minute or two of send, route through a tight set of Apple IP ranges, and often share identical user agents. Most mature ESPs flag them as machine-opened or pre-fetched in reporting. Braze, Iterable, and Customer.io all surface this; if yours doesn't, assume every Apple Mail open is suspect until proven otherwise.
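The inference described above can be sketched as a simple classifier. Everything here is an assumption to tune against your own data: the two-minute window, the shared user-agent string, and the proxy-IP flag are illustrative stand-ins, not Apple-published values.

```python
from datetime import datetime, timedelta

# Hypothetical heuristic for flagging likely MPP pre-fetch "opens".
# The window and user-agent value are illustrative assumptions --
# calibrate against what your ESP actually reports.
PREFETCH_WINDOW = timedelta(minutes=2)
APPLE_PROXY_UA = "Mozilla/5.0"  # MPP proxy requests often share one generic UA

def is_likely_prefetch(sent_at: datetime, opened_at: datetime,
                       user_agent: str, from_apple_proxy_ip: bool) -> bool:
    """Flag an open as probably machine-generated rather than human."""
    opened_fast = (opened_at - sent_at) <= PREFETCH_WINDOW
    return from_apple_proxy_ip and opened_fast and user_agent == APPLE_PROXY_UA

sent = datetime(2025, 3, 1, 9, 0, 0)
opened = sent + timedelta(seconds=45)
print(is_likely_prefetch(sent, opened, "Mozilla/5.0", True))  # True
```

If your ESP already labels machine opens, prefer its flag; a heuristic like this is a fallback for raw event exports.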
The knock-on damage arrived quietly. Send-time optimisation. Engagement-based triggers. "They opened it but didn't click" flows. All of them, silently broken. And nobody screamed because the dashboards kept showing green.
The four jobs the open rate was quietly doing
Before 2021 the open rate was holding down four roles at once, like an extremely cheap employee nobody realised was doing the work of four. Call them jobs one through four: A/B test proxy, trigger signal, engagement score for list hygiene, and deliverability diagnostic. Each was doing a different thing. Each broke at a different pace.
A/B tests went first. Apple users are a chunky share of most tested audiences, so subject-line "winners" started being whichever variant happened to skew Apple. Triggers broke second. Re-engagement flows began firing at people who had literally never seen the original email, which is the email version of asking someone why they haven't replied to a letter you never sent. Engagement scoring degraded more slowly, mostly because it was a weak signal doing a heavy job to begin with. Deliverability diagnostics still work, sort of, if you segment Apple out and squint.
So should you still report open rate? Yes, carefully. It's a diagnostic, not a KPI. An aggregate drop in opens — even inflated Apple-heavy ones — is still real signal that something in delivery just broke and you should go look. What it isn't is a metric worth putting in the board deck, the quarterly review, or anywhere a decision-maker might mistake a number that goes up automatically for a number that means something.
The programs that came through MPP fastest all shared one trait: they could answer, for every dashboard and every flow, the question "what is this open rate actually for?" If you couldn't answer that, you couldn't fix it. Most couldn't.
The replacement stack, one job at a time
For A/B testing content: click rate. One metric, end of committee. A click is a real human doing a real thing with real intent, and Apple's pre-fetch doesn't touch it. Yes, click rate is lower volume than open rate — which means tests need more recipients to reach significance. That's fine. Honest noise beats inflated fake signal on any day of the week. The A/B testing guide has the sample-size maths.
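To make the "more recipients" point concrete, here's a back-of-envelope sample-size calculation for a two-proportion test (normal approximation, alpha 0.05 two-sided, 80% power). The baseline rates are illustrative, not benchmarks.

```python
from math import ceil

# Rough per-variant sample size for a two-proportion z-test.
# Z values for alpha=0.05 (two-sided) and 80% power.
Z_A, Z_B = 1.96, 0.84

def n_per_variant(p1: float, rel_lift: float) -> int:
    """Recipients per variant needed to detect a relative lift over baseline p1."""
    p2 = p1 * (1 + rel_lift)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_A + Z_B) ** 2 * var / (p2 - p1) ** 2)

# Detecting a 10% relative lift:
print(n_per_variant(0.40, 0.10))  # open-rate scale: roughly 2,400 per variant
print(n_per_variant(0.03, 0.10))  # click-rate scale: roughly 53,000 per variant
```

Same lift, twenty-odd times the audience. That's the price of the honest metric, and it's worth paying.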
For triggers: rebuild them on clicks or on downstream product events. A click is a person. An open on Apple Mail is, at best, an optimistic hope. If a specific flow genuinely needs a pre-click signal — some onboarding paths really do — fine, trigger on the inflated proxy, but rewrite the downstream copy to assume less intent than the metric suggests. "Thanks for checking out the guide!" lands strangely when the guide arrived at 2 a.m. while the user was asleep.
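A minimal sketch of what the rebuilt trigger logic looks like. The event names and the 30-day window are assumptions; map them onto your ESP's actual event schema.

```python
from datetime import datetime, timedelta

# Signals that only a human can produce. Opens are deliberately absent:
# on Apple Mail an open may be a machine pre-fetch.
HUMAN_SIGNALS = {"email_click", "product_visit", "key_action"}

def should_enter_reengagement(events: list[dict], now: datetime,
                              window_days: int = 30) -> bool:
    """Enter the re-engagement flow only if no human signal in the window."""
    cutoff = now - timedelta(days=window_days)
    return not any(e["type"] in HUMAN_SIGNALS and e["at"] >= cutoff
                   for e in events)

now = datetime(2025, 6, 1)
events = [{"type": "email_open", "at": now - timedelta(days=2)},    # ignored: may be MPP
          {"type": "email_click", "at": now - timedelta(days=90)}]  # outside the window
print(should_enter_reengagement(events, now))  # True: no recent human signal
```

The key design choice is the allowlist: rather than filtering opens out, list the signals you trust in. The next inflated metric then can't sneak back into the trigger.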
For engagement-based hygiene: build a composite score and stop relying on any single signal. The recipe is plain: recent clicks, product visits, key actions completed, successful deliveries — weighted and blended into one number per user. That's it. No single input carries the whole thing, which means the next privacy change (and there will be one; this is the industry we picked) can't flatten the whole signal at once. It's more data engineering up front. That's the feature, not the cost. And it's the difference between a program that survives what Apple does next and a program that's currently one feature release away from a second MPP moment.
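One way the blend can look in practice. The weights, signal names, and 30-day half-life below are all illustrative assumptions, not a standard; the point is the shape — several inputs, recency decay, one number out.

```python
from math import exp, log

# Illustrative weights: key product actions count most, deliveries least.
WEIGHTS = {"click": 5.0, "product_visit": 3.0, "key_action": 8.0, "delivery": 0.5}
HALF_LIFE_DAYS = 30.0  # assumed: an event's contribution halves every 30 days

def engagement_score(events: list[tuple[str, float]]) -> float:
    """events: (signal_name, days_ago) pairs. Older events count for less."""
    decay = lambda days: exp(-days * log(2) / HALF_LIFE_DAYS)
    return sum(WEIGHTS.get(name, 0.0) * decay(days) for name, days in events)

score = engagement_score([("click", 3), ("product_visit", 10),
                          ("delivery", 1), ("key_action", 45)])
print(round(score, 1))  # one blended number per user
```

Because no single input dominates, zeroing out any one signal (say, opens in a future privacy change) dents the score instead of flattening it.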
The Orbit Lifecycle Reporting skill handles the composite engagement score end to end — weights, dashboards, the inevitable "why does this user score 62 and not 71" question from a stakeholder at 4:45 p.m. on a Friday.
What's still actually useful about the open rate
Not nothing. Let's not bin the whole metric with the breathless enthusiasm of someone rebranding a pricing page.
Three things hold. First: aggregate drops are still diagnostic. If opens collapse in a way the Apple-inflation floor can't explain, something in deliverability just broke and you need to go find it before the complaint rate does. Second: non-Apple segments retain real signal, so programs skewing Android or web-based can still read the number meaningfully. Third: a user whose open rate is flat zero across twelve months is disengaged whether Apple inflated the numerator or not. Zero is still zero.
The trap that kept MPP fallout rolling for years is the obvious one: using open rate to compare across Apple and non-Apple users. A subject-line "winner" where variant A happens to have more Apple recipients than variant B isn't a winner. It's a coincidence with a p-value stapled to it. Segment by client every single time, or stop drawing conclusions.
The programs that adapted, and the ones still pretending
Four years on, the programs that weathered MPP share a pattern. They rebuilt their measurement once, early, and all the way — not in patches every quarter like a boat with a hole in it. They moved triggers off opens. They kept open rate on the dashboard as a diagnostic but stopped calling it a KPI. They built the composite engagement score and committed to it even when it made one quarter's numbers look worse before they looked better.
The programs that didn't adapt are honestly fascinating to look at now. Their open rates are up year-over-year, which reads like success if you don't know why. Their click rates are flat or quietly trending down because the underlying audience quality is drifting and nobody caught it. Their re-engagement flows are reactivating people who never saw the original email, producing the lowest-quality reactivated cohorts the program has ever shipped. The dashboard says up and to the right. The reality says the opposite.
Has MPP hurt overall email performance? For programs that adapted: no, not materially. For programs that didn't: yes, quietly and compoundingly. Still-triggering-on-opens means firing to inflated audiences. Still-reactivating-on-opens means lower-quality reactivated users. Still-A/B-testing-on-opens means confident nonsense about Apple-skewed subject lines, week after week. The damage is gradual and usually invisible until someone audits end to end. Most programs haven't.
The honest punchline is that MPP wasn't really the problem. The problem was a decade of measuring a metric nobody had ever interrogated, and then being surprised when it turned out to be structural. Apple just pulled out the load-bearing string and watched what happened.
Frequently asked questions
- What is Apple Mail Privacy Protection (MPP)?
- Apple Mail Privacy Protection, released with iOS 15 in September 2021, pre-fetches tracking pixels on behalf of Mail users who opted into the feature (which is the default choice on first launch). This means the open-rate pixel fires whether the user actually opened the email or not — so open rate from Apple Mail clients is effectively 100% and no longer a reliable engagement signal.
- How much did Apple MPP inflate email open rates?
- Open rates across the industry inflated 15-40 percentage points after MPP rollout, depending on Apple Mail share. A program that had a true 25% open rate in 2020 might see 55% reported in 2022 — not because engagement improved, but because every Apple Mail recipient now registers as an open regardless of behaviour.
- Should I still use open rate as a metric after MPP?
- As a diagnostic signal only, not a KPI. Open rate still tells you something about deliverability (zero opens = probably not inboxing) and directional comparison across non-Apple segments. But as a triggering signal for re-engagement, winback, or suppression — no, it's broken for any segment with significant Apple Mail share. Replace with click-based engagement scoring, reply tracking, and forwarding behaviour.
- What should replace open rate in lifecycle scoring?
- A composite engagement score combining clicks (weighted highest), replies, forwards, out-of-folder moves, and recency. Most modern programs use a 90-day weighted score that decays older activity. Build it once, commit to it for at least two quarters, and stop referencing open rate in triggering logic.
- Is Apple MPP the same as Gmail's privacy protections?
- No. Apple MPP pre-fetches pixels on the user's behalf (inflating opens). Gmail does NOT pre-fetch pixels — it proxies images via Google's servers, which breaks IP-based geo tracking but doesn't inflate open rates. The two protections solve related problems in opposite ways.
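The FAQ's 25%-to-55% example works out arithmetically if you assume MPP fires for roughly all Apple Mail recipients; under that assumption it implies about a 40% Apple Mail share. A sketch of the blend:

```python
# Back-of-envelope for the FAQ's 25% -> 55% example. Assumes MPP registers
# ~100% of Apple Mail recipients as opens; the 40% share is illustrative.
def reported_open_rate(true_rate: float, apple_share: float) -> float:
    # Apple segment: pixel always fires. Non-Apple segment: true human rate.
    return apple_share * 1.0 + (1 - apple_share) * true_rate

print(reported_open_rate(true_rate=0.25, apple_share=0.40))  # 0.55
```

Run the same function over your own Apple Mail share to estimate how much of your reported open rate is floor rather than signal.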
This guide is backed by an Orbit skill
Related guides
List hygiene: the six-rule policy
List hygiene isn't cleanup; it's a continuous policy that runs automatically. Here's the six-rule policy every lifecycle program should have written down, each tied to a specific deliverability outcome.
The deliverability mental model: one picture for authentication, reputation, content, and monitoring
Every deliverability guide covers one piece — SPF, DKIM, DMARC, BIMI, reputation, warmup. What's missing is the systems-level picture that ties them together. This is the one diagram a senior operator needs: how mailbox providers decide whether your email reaches the inbox, and where each piece of the stack plugs in.
Email deliverability — the practitioner's guide
Deliverability isn't a setting. It's the running total of every send decision you've made since you bought the domain. Four pillars hold it up. Break one and the whole program starts leaking.
IP warm-up in Braze — the playbook that actually holds
A fresh dedicated IP has zero reputation on day one. Most warm-up guides fixate on ramp speed and ignore the harder question — which users get the send each day. Here's the schedule, the Random Bucket Number trick, and the day-10 mistake that ruins most of them.
The unsubscribe page is the most important page in your lifecycle program
The page every lifecycle team ignores is the one quietly deciding sender reputation, suppression-list quality, and the fate of next quarter's deliverability. A short defence of why it deserves the ten-minute rebuild.
SPF, DKIM, and DMARC explained for lifecycle marketers
Three DNS records decide whether your marketing email is trusted or binned. Gmail and Yahoo made all three mandatory for bulk senders in 2024, and the grace period is over. This is the practitioner's explainer: what each one does, how they interact, and the setup order that won't block your own mail.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 54 lifecycle methodologies, 55 MCP tools, native Braze integration. Pay what it's worth.