For two decades, the cold email open rate was the SDR's first checkpoint. A 30% open rate meant the subject line was working. A 50% open rate meant the list was hot. A sub-15% open rate meant something was broken: bad timing, bad subject, bad sender reputation.
That model no longer works. Cold email open rates in 2026 measure almost nothing. The metric collapsed not because email died, but because three infrastructure changes made the underlying signal unreadable. And no amount of tooling can put it back together.
Short on time?
Skip to the cumulative math for the worked example, or jump straight to what's still measurable for the replacement signals.
What an "open" actually meant
The open rate has always been a proxy. SMTP doesn't tell senders "this person opened your message." The mail protocol has no read-event of any kind. Instead, every email tracking tool embeds a 1×1 invisible image in the message body, and when the recipient's email client loads the image, the sender's tracking server logs an "open." If the image loads, the email got rendered. By a human, or by a machine.
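To make the mechanism concrete, here is a minimal sketch of the tracking side in Python/Flask. The endpoint path, message-ID parameter, and log format are illustrative assumptions, not any particular vendor's implementation:

```python
import base64
from datetime import datetime, timezone
from io import BytesIO

from flask import Flask, request, send_file

app = Flask(__name__)

# The classic 1x1 transparent GIF used as a tracking pixel.
PIXEL = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

@app.route("/px/<msg_id>.gif")
def track_open(msg_id: str):
    # Anything that fetches this image is logged as an "open": a human's
    # mail client, Apple's MPP relay, a security scanner, or an AI agent
    # summarizing the inbox. The request itself can't say which.
    print(f"{datetime.now(timezone.utc).isoformat()} open msg={msg_id} "
          f"ip={request.remote_addr} ua={request.headers.get('User-Agent')}")
    return send_file(BytesIO(PIXEL), mimetype="image/gif")

# The sender embeds it in the HTML body as an invisible image:
# <img src="https://track.example.com/px/abc123.gif" width="1" height="1">
```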
That proxy worked when most email was loaded by humans on desktop clients with images-on-by-default. It was directionally accurate. A 30% open rate genuinely meant about 30% of recipients had eyeballs on the message.
By 2026, three things have happened in parallel that destroy that mapping. The pixel still fires. The "open" still gets logged. The sender's dashboard still ticks up. But the underlying claim, "a human read this," is no longer true for most pixel fires.
The three things that broke it
1. Apple Mail Privacy Protection (2021–present)
Apple's Mail Privacy Protection (MPP), launched with iOS 15 and macOS Monterey in September 2021, pre-loads tracking pixels for all messages, regardless of whether the recipient ever opens them. Apple Mail handles the loading through two separate relay proxies, so even the IP address senders see is Apple infrastructure rather than the recipient's device. The mechanism is documented in detail in Postmark's writeup of the change.
Two facts about MPP make it the dominant noise source in modern open-rate data:
- It's enabled by default, and adoption is near-universal among Apple Mail users. Most recipients on iOS and macOS never see a setting they need to change. Independent estimates put adoption at roughly 97% of the Apple Mail base.
- Apple Mail is the largest single source of email opens worldwide. Litmus's market-share data puts Apple Mail at roughly 47% of all email opens as of January 2026, based on 1.1 billion tracked opens. The figure has fluctuated between 46% and 67% over the trailing twelve months.
The math compounds. If close to half of all opens come from Apple Mail and almost all of those are MPP-prefetched, then on any list with a mainstream consumer or executive audience, the majority of "opens" reported by a tracking tool fired before the message was visible to a human.
Industry studies have measured the inflation directly. Omeda's analysis of the MPP rollout and follow-up reporting from email vendors put the MPP inflation effect somewhere between 15 and 35 percentage points on senders with significant Apple Mail audiences. A campaign reporting a 50% open rate post-MPP might genuinely have a 25–30% human open rate. The reported number isn't directional anymore. It's polluted at the source.
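The arithmetic behind that compounding is short enough to show. A minimal sketch, with every input taken from the estimates above rather than from any measured campaign:

```python
# Inputs from the figures quoted above (estimates, not measurements).
reported_open_rate = 0.50    # what the tracking dashboard shows
apple_share_of_opens = 0.47  # Litmus: Apple Mail share of tracked opens
mpp_adoption = 0.97          # share of Apple Mail users with MPP enabled

# Opens fired by Apple's prefetch proxies rather than a confirmed human:
mpp_prefetch = reported_open_rate * apple_share_of_opens * mpp_adoption

# Rough human-open floor after removing the MPP-prefetched share.
# (Some MPP recipients also read the message; MPP just hides which ones.)
human_floor = reported_open_rate - mpp_prefetch
print(f"{human_floor:.1%}")  # ~27.2%, inside the 25-30% band cited above
```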
2. Email security scanners
The second wave of noise comes from B2B inbox infrastructure. Microsoft Defender for Office 365 (Safe Links), Proofpoint, Mimecast, Barracuda, and Cisco IronPort all "detonate" links and load images automatically before delivering messages to the recipient. This isn't a side effect of how they work; it's their primary purpose. A scanner has to render the email and follow the links to know whether either contains a malicious payload.
Microsoft's official documentation describes the mechanism for Safe Links: every URL in inbound mail gets rewritten to a Microsoft domain and rescanned at click time. From the tracking-pixel side, the pre-delivery scan loads images and registers an open. From the click-tracking side, the time-of-click scan registers a click. Both events happen before the recipient is involved.
Industry coverage of the resulting noise is now extensive. Mailmodo's writeup of the bot-click problem catalogs the same set of vendors as the dominant source of phantom engagement on B2B campaigns, and notes that the bot signatures are reasonably distinctive: opens within 60 seconds of delivery, sub-second sequential clicks on multiple links, and requests from known security-vendor IP ranges.
The scale on B2B cold outbound is severe. Estimates from document tracking platforms that filter scanner traffic put bot share at roughly 15–40% on typical enterprise campaigns, with higher rates on sends into financial services and healthcare, where mail-flow security is most aggressive. Whatever the exact number for a given campaign, the floor is high enough to dominate the small absolute differences (a 22% vs. a 28% open rate) that sales teams use to A/B-test subject lines.
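For illustration, a filter built on those published signatures might look like the sketch below. The thresholds come from the signature list above; the IP ranges are placeholder documentation blocks (RFC 5737), not a real vendor list. And as the next two sections argue, nothing shaped like this catches MPP or AI-agent traffic:

```python
from datetime import datetime, timedelta
from ipaddress import ip_address, ip_network

# Placeholder ranges only; a real list comes from vendor documentation.
SCANNER_NETWORKS = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def looks_like_scanner(delivered_at: datetime,
                       opened_at: datetime,
                       click_gaps_ms: list[int],
                       source_ip: str) -> bool:
    """Crude heuristic: does this open match known scanner signatures?"""
    if opened_at - delivered_at < timedelta(seconds=60):
        return True   # opened within 60s of delivery (also catches fast humans)
    if any(gap < 1000 for gap in click_gaps_ms):
        return True   # sub-second sequential clicks on multiple links
    if any(ip_address(source_ip) in net for net in SCANNER_NETWORKS):
        return True   # request from a known security-vendor range
    return False
```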
The mechanics are covered in more depth in HummingDeck's earlier piece on why pixel tracking fails as a read signal.
3. AI inbox agents
The newest, fastest-growing source of open-rate inflation: AI assistants embedded directly in major mail clients. Gmail's Gemini, Microsoft 365 Copilot, and a growing list of third-party inbox copilots preview, summarize, and triage inbound email on behalf of the user. Each pre-render loads the tracking pixel. Each summary produces an "open" the sender attributes to the human, even though the human never read the message.
The early measurements are striking. Folderly's analysis of post-Gemini email metrics reports that since Gmail's AI features launched, average open rates climbed to 45.6% while click-through rates fell from 4.35% to 3.93%. The gap is structural: Gemini opens the message to summarize it (open count goes up), and the user is then satisfied with the summary and never clicks through (CTR goes down). The pattern is visible across every campaign that runs through Gmail at scale.
This is the category that breaks every existing filter
Apple MPP and security scanners can be filtered, at least in principle, with IP ranges and user-agent fingerprints. AI agents render in the user's actual session, on the user's actual device, with the user's actual login. There is no plausible filter that distinguishes "Gemini opened your email to summarize it for the user" from "the user opened your email." The pixel fire looks identical because it is identical.
The number of AI agents in the inbox is growing every month. Outlook Copilot is rolling out across enterprise tenants. Apple Intelligence has begun summarizing email previews on iOS. Third-party tools like Superhuman and Hey have added AI features. Within 18 months, AI-pre-loaded opens will likely exceed scanner-pre-loaded opens.
Why filtering can't save the metric
The standard response from the email-tools industry is filtering. Strip out known bot user agents. Exclude suspicious open patterns. Weight opens by IP geolocation. Several email tools now ship "bot-filtered open rates" as a feature.
It doesn't fix the problem. The reasons:
- Apple MPP can't be filtered. Apple deliberately uses real Apple infrastructure to load pixels, with real recipient IPs that match the actual region. The "opens" look identical to genuine human reads because they originate from the same network paths.
- Modern security scanners route through normal cloud IPs. Microsoft Safe Links runs on the same Azure infrastructure that 90%+ of legitimate Microsoft customers use. Filtering by IP block produces false positives that are worse than the noise being filtered.
- AI agents render in the user's actual session. The pixel loads from the recipient's real device fingerprint, on the recipient's real network, with the recipient's real cookies. There is nothing to filter.
Even if these problems were all solved, the open rate would still measure the same thing it has always measured: that someone, or something, rendered the image. It doesn't measure attention. It doesn't measure intent. It doesn't measure interest. It doesn't predict reply, meeting, or deal.
The metric isn't broken because of bots. It's broken because the underlying proxy was always weak. "If a pixel loads, a human read it" was never quite true, and the infrastructure has now made it useless. Filtering treats the symptom; the disease is structural.
For a deeper look at how deliverability mechanics compound this problem at the inbox boundary, see HummingDeck's full writeup on email deliverability in 2026.
The cumulative math
A back-of-envelope illustration of how the three inflation sources stack on a typical B2B cold outbound campaign. Treat the numbers as a worked example, not a measurement.
A list of 1,000 corporate recipients receives the same cold email. The tracking dashboard reports a 38% open rate, or 380 opens. Where do those 380 events come from once you subtract the pre-rendered ones?
- MPP prefetches. On a B2B list, Apple Mail typically lands in the 25–35% range of total opens (lower than the all-category Litmus figure of 47%, because corporate inboxes skew toward Outlook and Workspace Gmail). With near-universal MPP adoption inside that segment, that's roughly 100–135 of the 380 reported opens fired by Apple infrastructure rather than a human.
- Security scanner hits. Industry estimates put scanner-driven opens at 15–40% of total opens on enterprise-heavy lists. That's another 60–150 reported opens that fired during inbound mail-flow scanning. Note that this category overlaps with the MPP segment: an Apple Mail recipient at a corporate domain whose mail also routes through Mimecast or Safe Links gets both inflators firing on the same pixel, so the two counts can't simply be added.
- AI inbox agent prefetches. A newer source, currently small but growing fast. Post-Gemini measurements suggest the AI layer alone has added roughly 5–15 percentage points to reported open rates on Gmail-heavy lists, partly overlapping with both prior categories.
Net it out, accounting for overlap, and the human-open rate on this campaign sits somewhere in the low-to-mid teens, not 38%. The exact split depends on the audience mix, the corporate-domain density, the iOS share, and the share of recipients whose inboxes have an AI agent enabled. The mix changes; the conclusion doesn't: a comfortable majority of the reported opens are pre-rendered events with no human attention attached, and the open-event payload contains nothing that lets the dashboard tell them apart from the real ones.
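Here is that netting-out as explicit arithmetic. Every input is an assumption drawn from the ranges above, and the overlap figure is illustrative:

```python
recipients = 1_000
reported_opens = 380   # the dashboard's 38%

# Point estimates inside the ranges quoted above:
mpp_opens = 120        # of the ~100-135 MPP-prefetched opens
scanner_opens = 110    # of the ~60-150 mail-flow scanning opens
ai_agent_opens = 80    # ~8 points on the Gmail-heavy segment

# The categories overlap (one pixel can be prefetched by MPP *and*
# scanned by Mimecast), so subtract an assumed shared portion rather
# than summing the three outright.
assumed_overlap = 70

machine_opens = mpp_opens + scanner_opens + ai_agent_opens - assumed_overlap
human_opens = reported_opens - machine_opens

print(f"pre-rendered opens: {machine_opens}")              # 240
print(f"human open rate: {human_opens / recipients:.1%}")  # 14.0%
```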
What's still measurable
Three signals have survived the collapse, and the gap between what each one measures and what the open rate measures is now large.
1. Reply rate. Replies require a human composing text. Bots, scanners, and AI agents do not generate cold-email replies in any volume that affects the metric. Average B2B cold email reply rates have declined steadily over the last decade, according to Belkins's 2025 benchmark study and MailForge's parallel analysis: from roughly 8.5% in 2019 to about 7% in 2023, and to 3–5% across 2024–2025. The decline is real (worsening deliverability, AI-generated outreach saturating inboxes, list fatigue), but the metric itself remains trustworthy. A 5% reply rate means a 5% reply rate.
2. Post-click engagement on shared content. When the cold email contains a link to a tracked document, sales room, or other server-rendered content rather than a generic landing page, the receiver behaviors that follow are hard to fake. Time on page. Scroll depth. Return visits weeks later. Multi-IP opens that signal forwarding to a colleague. A bot scanner won't read for four minutes. An AI summarizer won't return three weeks later. These signals come from the content layer, where the engagement actually happens, not the email envelope, where the noise is. (A scoring sketch follows this list.)
3. Deal velocity from engaged accounts. When sales teams correlate downstream deal outcomes with the source of the engagement signal, post-click behaviors predict closed-won far better than email opens. An account that comes back to read the pricing page two weeks after the first send is, empirically, a stronger forecast input than one with a 100% email-open rate but no other signal. Most sales forecasting tools have not caught up to this distinction yet.
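The scoring sketch promised in item 2: a minimal way to separate human reads and high-intent reads from scanner noise on the content layer. The thresholds and field names are illustrative assumptions, not HummingDeck's actual rules:

```python
from dataclasses import dataclass

@dataclass
class ContentVisit:
    seconds_on_page: float
    scroll_depth: float          # 0.0-1.0, furthest point reached
    days_since_first_send: int
    distinct_ips: int            # >1 suggests forwarding to a colleague

def is_human_read(v: ContentVisit) -> bool:
    """Minimum-duration rule: scanner hits are brief, human reads are not."""
    return v.seconds_on_page >= 30 and v.scroll_depth >= 0.25

def is_high_intent(v: ContentVisit) -> bool:
    """Return visits weeks later and multi-viewer reads are the strong signals."""
    return is_human_read(v) and (
        v.days_since_first_send >= 14 or v.distinct_ips > 1
    )
```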
For deeper coverage of the replacement metrics and the mechanics of capturing them, see How to Track Prospect Engagement After a Cold Email.
What each metric actually predicts
A simple comparison helps clarify what survives the 2026 noise floor and what doesn't:
| Metric | Inflated by | Predicts reply | Predicts deal |
|---|---|---|---|
| Email opens (pixel-based) | Apple MPP, security scanners, AI inbox agents | Weakly | No |
| Email clicks (link-based) | Safe Links, Mimecast, link-preview bots | Weakly | No |
| Reply rate | Almost nothing (humans only) | Yes (it is the signal) | Moderately |
| Time on linked content | Brief scanner hits filtered by minimum-duration rules | Moderately | Yes |
| Forwarding / multi-viewer events | Almost nothing (requires a deliberate human act) | Strongly | Strongly |
| Return visits to shared content | Almost nothing | Strongly | Strongly |
The pattern is consistent. Signals that require sustained human attention, deliberate human action, or both, hold up. Signals that fire on automated render or automated click do not.
What to do operationally
Three changes most outbound teams can ship this quarter.
Stop logging opens to CRM. Open events should not write to the deal record. Logging them creates false confidence and trains teams to chase ghosts ("she opened the email five times!" when in fact Gemini summarized it twice and SafeLinks scanned it three times). The events worth recording are: reply, click-through to a tracked content asset, document engagement, and inbound meeting requests.
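As a sketch, the allowlist version of that policy is a few lines. The event names are illustrative; map them onto whatever taxonomy the CRM uses:

```python
# Events that earn a write to the deal record. Opens and raw clicks
# never qualify: both fire on automated renders with no human involved.
CRM_WORTHY = {
    "reply",
    "tracked_link_click",      # click-through to a tracked content asset
    "document_engagement",
    "inbound_meeting_request",
}

def should_log_to_crm(event_type: str) -> bool:
    return event_type in CRM_WORTHY
```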
Replace email attachments with tracked links. Sending a PDF as an attachment is invisible to the sender; the file leaves the outbox and disappears. Sending the same content as a tracked link or branded deal room exposes the actual reading behavior, on the content layer, where the noise sources from the inbox layer don't reach. This is the single biggest change a sales team can make to recover real measurement, and it doesn't require a process overhaul, just a tooling swap.
Stop reporting open rate in pipeline reviews. The open rate is now a vanity number with directional value only at the campaign-aggregate level (e.g., comparing Tuesday-send vs. Thursday-send), and even there the noise floor is high enough to make small differences meaningless. Replace it in reporting with reply rate (real replies) and post-click engagement rate (humans reading shared content). Both numbers will be lower than the open rates they replace. Both will mean something.
This shift is part of a broader move from volume-based outbound to content-led prospecting, where the value of the outreach is in what gets shared rather than how many emails go out. The methodology is covered in detail in Content-Led Prospecting: Why the Smartest Sales Teams Are Leading With Value, Not Volume.
What this changes for sales metrics broadly
The collapse of the open rate isn't an isolated metrics problem. It's part of a broader shift in how sales teams gather buyer intent data.
The third-party intent data category (Bombora, 6sense, ZoomInfo) attempted to solve "opens are weak" by buying signals from outside the funnel: who is researching your category on the open web, who is appearing in topic-clustered intent feeds, who has looked at competitor pages. The category produced revenue, but the signal quality is mixed and the cost per qualified account is high.
The first-party alternative, which is what actually survives the open-rate collapse, is engagement on content the seller produces and shares. The advantage isn't just price. First-party signals are higher fidelity (the seller controls the content, the page, the tracking, the bot filter), arrive in real time, and tie directly to a named recipient or account. Open-rate culture trained sales teams to look for signals where the noise floor is now too high to read; first-party content engagement moves the measurement to a layer where the noise floor is much lower.
For the longer argument on the first-party shift, see First-Party vs Third-Party Intent Data. And for context on how regulatory pressure (the CNIL's 2025 guidance on tracking pixels, the UK ICO's parallel position) is making pixel-based open tracking legally precarious in addition to technically broken, see HummingDeck's writeup on the read-receipt and pixel ecosystem.
Bottom line
The cold email open rate isn't recoverable. Filtering doesn't fix it. Better tools don't fix it either. The proxy itself is structurally broken in 2026: pixel-loaded does not equal human-read, and on most modern lists, the majority of pixel loads happen with no human in the loop at all.
Sales teams that still run their pipeline reviews against open-rate dashboards are running on noise. The replacement isn't a single new number. It's a different model of what counts as a buyer signal: real replies, time on real content, forwarding events, return visits weeks later. The sub-30% open rate that used to mean "the subject line is broken" now means almost nothing. The 4-minute read on the pricing page does. That's the checkpoint worth watching.
For teams ready to stand up the alternative end to end (one tracked link per stakeholder, page-level analytics, bot filtering, accept-or-decline workflows), see HummingDeck for Sales.
Related:
- How to Track Prospect Engagement After a Cold Email: the replacement metrics in depth, and how to capture them.
- Email Deliverability in 2026: why your emails aren't reaching the inbox in the first place.
- Do Email Read Receipts Actually Work?: the broader pixel-and-receipt failure mode this post sits inside.
- Content-Led Prospecting: the methodology shift implied by "stop measuring opens, start measuring engagement."
- First-Party vs Third-Party Intent Data: why content engagement is the durable replacement for buyer signals.
- Why Your Deck Analytics Are Wrong: the same bot-detection thesis applied to deck and document tracking.
- Can You Track If Someone Opened Your Email Attachment?: the practical workaround for the "I attached a PDF and have no idea" case.
