8 min read
Apple Mail Privacy Protection, four years in
When Apple announced Mail Privacy Protection at WWDC 2021, every deliverability blog called it the end of email marketing. It wasn't. But it did break the open rate, which turned out to be more consequential than most programs expected, because the open rate was doing jobs nobody had examined in years. Four years in, here's what MPP actually changed and what practitioners who've adapted are doing instead.
Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
What MPP actually did
Apple Mail Privacy Protection, shipped in iOS 15 in September 2021, pre-fetches all remote images in an email the moment the message arrives in a user's Apple Mail inbox — whether or not the user ever opens it. The pre-fetch happens through proxy servers that strip IP addresses. Tracking pixels embedded in the email fire anyway.
Source: Apple, "Use Mail Privacy Protection on iPhone". Apple's official documentation of Mail Privacy Protection behaviour, including image pre-fetching and IP masking. support.apple.com/guide/iphone/use-mail-privacy-protection-iph2a7a6fdac/ios
The consequence: for any user on Apple Mail with MPP enabled — and that's now the default for all new Apple Mail users — your email appears to have been opened the moment it arrives, regardless of whether the human actually looked at it. The open rate, as a signal of human behaviour, died the week iOS 15 shipped.
The second-order consequence: send-time optimisation, send-based triggers, and re-engagement rules that depended on opens all broke quietly. Most programs didn't notice for months because the numbers went up, not down.
The jobs the open rate was doing
Before 2021, the open rate was doing four distinct jobs at the same time, which is why losing it caused so much breakage:
- Job one: a proxy for content resonance, used to compare subject-line performance in A/B tests.
- Job two: a trigger signal, used to fire follow-up emails ("they opened it but didn't click").
- Job three: a user-engagement signal, used to separate active subscribers from dormant ones for list hygiene.
- Job four: a diagnostic, used to tell whether deliverability was failing.
MPP killed all four at different rates. The A/B testing job died first because Apple users are a meaningful share of tested audiences. The trigger-signal job died next because trigger-fired emails started going to users who had never actually seen the original. The engagement-signal job died more slowly because it was already a weak signal. And the diagnostic job lingered — open rates still mean something in aggregate if you segment out Apple users — but the noise-to-signal ratio collapsed.
The programs that handled MPP best were the ones that knew which of these four jobs they were actually doing in each place. Programs that treated open rate as a single metric got blindsided.
The replacement signal stack
For content resonance, use click rate instead of open rate. Click is still a real human action, and MPP doesn't inflate it. Click rate is lower-volume and therefore statistically noisier per send — but the noise is honest noise, not inflated noise. The A/B testing guide covers how to size tests against the lower-volume click metric.
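To make the volume difference concrete, here's a rough two-proportion power calculation in Python. The function name and defaults are illustrative assumptions, not from any guide; it's a sketch of why a 2% click rate needs far more sends than a 20% open rate to detect the same relative lift.

```python
from math import ceil, sqrt
from statistics import NormalDist

def clicks_needed_per_variant(base_rate, min_lift, alpha=0.05, power=0.8):
    """Rough per-variant sample size for a two-proportion test.

    base_rate: expected rate of the control variant (e.g. 0.02 click rate).
    min_lift:  smallest relative lift worth detecting (e.g. 0.15 = +15%).
    An approximation for sizing intuition, not a full power analysis.
    """
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# The same +15% lift needs an order of magnitude more volume
# at click-rate levels than at open-rate levels.
print(clicks_needed_per_variant(0.02, 0.15))
print(clicks_needed_per_variant(0.20, 0.15))
```

The honest-but-noisier trade-off shows up directly: lower base rates push the required sample size up sharply, which is why click-based tests need longer run times or larger cells.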
For triggers that previously fired on opens, rebuild them on clicks or on downstream actions. A user who clicked a link in the email is genuinely engaged; a user who opened it in Apple Mail might not have seen it at all. If the trigger really needs pre-click engagement — some onboarding flows legitimately do — accept that you're now triggering on an inflated proxy and adjust the downstream messaging to assume less intent.
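A rebuilt trigger along these lines might look like the following sketch. The event schema and field names are hypothetical; the point is that the condition is "clicked but didn't convert" rather than "opened but didn't click".

```python
from datetime import datetime, timedelta, timezone

def should_fire_followup(events, email_id, now, window_hours=48):
    """Decide whether to send the follow-up for one email.

    Rebuilt on clicks: under MPP, "opened but didn't click" can fire
    for users who never saw the message, so the condition becomes
    "clicked but didn't convert". Event field names are hypothetical.
    """
    clicked = any(
        e["type"] == "click"
        and e["email_id"] == email_id
        and now - e["at"] <= timedelta(hours=window_hours)
        for e in events
    )
    converted = any(e["type"] == "conversion" for e in events)
    return clicked and not converted

now = datetime(2025, 1, 10, tzinfo=timezone.utc)
events = [{"type": "click", "email_id": "promo-1", "at": now - timedelta(hours=5)}]
print(should_fire_followup(events, "promo-1", now))  # clicked, no conversion yet
```

The same shape works for product-event triggers: swap the `click` check for whatever downstream action signals genuine engagement in your program.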
For engagement-based list hygiene, expand the engagement definition beyond opens. A user is engaged if they clicked any link in the last N days, visited the product, completed any key action, or received mail that delivered without bouncing. This is a multi-signal engagement score rather than a single metric. It requires more data-engineering work than the old open-based approach, but it survives the next privacy change because it doesn't depend on any single signal.
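A minimal sketch of that multi-signal check, assuming hypothetical field names you'd wire up to your own warehouse:

```python
from datetime import date, timedelta

def is_engaged(user, today, n_days=90):
    """Multi-signal engagement check: any one recent signal counts.

    Because the score is an OR over independent inputs, no single
    broken signal (like MPP-inflated opens) can decide it alone.
    Field names are hypothetical.
    """
    cutoff = today - timedelta(days=n_days)
    signals = (
        user.get("last_click"),           # clicked any link
        user.get("last_site_visit"),      # visited the product
        user.get("last_key_action"),      # completed a key action
        user.get("last_clean_delivery"),  # delivered without bouncing
    )
    return any(d is not None and d >= cutoff for d in signals)
```

Notice that opens don't appear at all: delivery-without-bounce stands in as the weakest floor signal instead.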
The Orbit Lifecycle Reporting skill covers how to build the composite engagement score and surface it in dashboards without turning the whole thing into a data-science project.
What's still useful about the open rate
Not nothing. The open rate still works as a deliverability diagnostic at the aggregate level — a sudden drop in opens (even an inflated opens number) is a real signal that delivery has degraded. It still works for non-Apple Mail segments, so if your audience skews Android or web-based, open rate retains more signal than the averages suggest. And it still works as a rough engagement floor — a user whose open rate is zero for a year is definitely disengaged, regardless of MPP.
The trap is using open rate for anything that compares across Apple and non-Apple users. A subject-line test that looks like a winner because Apple users inflate one variant's open rate is a false positive. Always segment open-rate analysis by client to see the underlying signal.
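Segmenting by client is a one-pass aggregation. A sketch, assuming a hypothetical export schema with `client` and `opened` fields:

```python
def open_rates_by_client(sends):
    """Open rate per mail client, so MPP inflation stays visible.

    sends: iterable of dicts with 'client' and 'opened' keys
    (a hypothetical schema; adapt to your ESP's export format).
    """
    totals = {}
    for s in sends:
        sent, opened = totals.get(s["client"], (0, 0))
        totals[s["client"]] = (sent + 1, opened + int(s["opened"]))
    return {c: opened / sent for c, (sent, opened) in totals.items()}
```

If the Apple Mail segment sits near 100% while other clients sit at normal levels, the blended number is telling you about client mix, not content.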
The programs that actually adapted
Four years on, the programs that weathered MPP best share a pattern. They rebuilt their measurement layer once, early, instead of patching it incrementally. They moved triggers off opens to clicks or product events. They kept open rate in dashboards as a diagnostic but stopped treating it as a KPI. And they invested in the composite engagement score that doesn't depend on any single signal.
Programs that didn't adapt show up as confused data now. Their open rates are up year-over-year because of MPP inflation, which looks like success to anyone not in the weeds. Their click rates are flat or declining because the underlying audience quality is drifting but nobody's caught it. Their re-engagement programs are firing on inflated opens and producing lower-quality reactivated cohorts than they used to. These are the programs still measuring like 2020, even though 2020 ended four years ago.
Frequently asked questions
- What is Apple Mail Privacy Protection?
- A privacy feature in Apple Mail, shipped with iOS 15 in September 2021, that pre-fetches all remote images in an email the moment it arrives — regardless of whether the user opens it. This fires tracking pixels automatically, making open rates unreliable for Apple users. IP addresses are also masked through proxy servers.
- Should I still measure open rate?
- Yes, but as a diagnostic signal rather than a KPI. Aggregate drops in open rate (even inflated) can still reveal delivery problems. But for content resonance, engagement scoring, and A/B test decisions, click rate and product-side actions are more reliable than opens.
- What should replace open-based triggers?
- Click-based triggers and product-event triggers. A user who clicked is genuinely engaged. A user who opened in Apple Mail may not have seen the email at all. For pre-click engagement triggers that still need to fire, accept the inflated proxy and tune the downstream messaging to assume less intent.
- Can I tell which users have MPP enabled?
- Not directly — Apple masks the signal. But you can infer: Apple Mail opens from MPP users happen extremely quickly after send (usually within 1–2 minutes), through specific Apple IP ranges, often with identical user agents. ESPs including Braze flag this as 'machine-opened' in reporting. Don't treat those opens as engagement.
- What's a composite engagement score?
- A single engagement signal computed from multiple inputs: recent email clicks, site visits, product actions, purchases, and non-bouncing sends. It survives any single privacy change because no one input can break the whole signal. Requires upstream data engineering but produces more durable lifecycle segmentation than open-based approaches.
- Has MPP hurt overall email marketing performance?
- For programs that adapted, no. For programs that didn't, yes — quietly. Programs still triggering on opens are firing to inflated audiences, their re-engagement is reactivating lower-quality cohorts, and their A/B tests are producing false positives on Apple-Mail-skewed subject lines. The damage is gradual and usually invisible until you audit end-to-end.
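The machine-open inference described in the FAQ above can be sketched as a simple heuristic. The time threshold is illustrative, and while 17.0.0.0/8 is an Apple-owned range, checking the prefix this way is an assumption; prefer your ESP's machine-open flag where it exists.

```python
from datetime import datetime, timedelta, timezone

def looks_machine_opened(open_event, sent_at, apple_ip_prefixes=("17.",)):
    """Heuristic flag for a likely MPP proxy open.

    Combines the two tells from the FAQ: an open recorded within
    minutes of send, from an Apple-owned IP range. Thresholds and
    the prefix check are illustrative assumptions, not a spec.
    """
    fast = (open_event["at"] - sent_at) <= timedelta(minutes=2)
    apple_ip = open_event["ip"].startswith(apple_ip_prefixes)
    return fast and apple_ip

sent = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
proxy_open = {"at": sent + timedelta(minutes=1), "ip": "17.58.0.1"}
print(looks_machine_opened(proxy_open, sent))  # True
```

Flagged opens should be excluded from engagement scoring and trigger logic, not deleted: in aggregate they still confirm delivery.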
Related guides
Email deliverability — the practitioner's guide
Deliverability is the cumulative result of every send decision over the lifetime of a domain. This guide covers the four pillars — authentication, reputation, engagement, and list hygiene — and how to recover when one breaks.
IP warm-up for Braze — the practitioner's playbook
A dedicated IP has no sending reputation on day one. This guide shows how to ramp to full volume in 14–30 days without triggering spam filters — including the Random Bucket Number methodology most teams miss.
The unsubscribe page is the most important page in your lifecycle program
The page every lifecycle team ignores is the one that quietly decides sender reputation, suppressed-list quality, and the fate of your next quarter's deliverability. A short defence of why it's worth the ten-minute rebuild.