You share a proposal link with a prospect. Two minutes later, your tracking tool says they opened it, scrolled through six pages, and spent 30 seconds on each one.
You fire off a follow-up email. They have no idea what you're talking about — they haven't opened it yet.
What happened: their company's email security scanner opened your link before they did. The "view" was a bot. And unless your analytics platform specifically handles this, you can't tell the difference.
This isn't a rare edge case. It happens on virtually every email sent to enterprise recipients using Microsoft 365 or Google Workspace.
The bots that inflate your analytics
There are three categories of automated traffic that hit shared deck links:
Link preview bots
When you paste a link into Slack, LinkedIn, Twitter, or iMessage, a bot immediately fetches the page to generate a preview card. These requests come from identifiable user agents — Slackbot-LinkExpanding, LinkedInBot, Twitterbot. They're easy to detect and filter. Most tracking tools handle these correctly.
Search engine crawlers
Googlebot, Bingbot, and their variants routinely index public-facing content. These also identify themselves openly and are straightforward to filter. If your shared links are behind authentication or use unique slugs, crawlers rarely find them anyway.
Email security scanners
This is the category that breaks analytics.
Microsoft Defender SafeLinks, Proofpoint URL Defense, Mimecast, and Google Safe Browsing scan every link in incoming emails before the recipient sees them. Their job is to detect phishing pages and malware. They do this by actually opening the link, loading the page, and in many cases navigating through its content.
What makes them nearly impossible to distinguish from real users:
- Spoofed user agents. SafeLinks doesn't announce itself as a bot. It uses real browser user agents — things like "Chrome 120 on Windows 11" or even mobile device strings like "itel A27 Android." Your server sees what looks like a legitimate browser visit.
- Page navigation. SafeLinks doesn't just load the first page. It navigates through approximately 6 pages of your deck, regardless of total length. Proofpoint does similar multi-page scanning.
- Realistic timing. These sessions last 22–48 seconds. They don't blast through instantly like a crude scraper — the timing looks plausible for a quick browse.
- Synthetic DOM events. Some scanners fire JavaScript events — programmatic scrolls, simulated touches, even click events — to trigger any malicious scripts that might be hidden behind user interaction.
The net effect: every time you email a deck link to someone at a company using Microsoft 365 (that's most of your enterprise prospects), you get a fake "view" that looks like a real one. Sometimes two — one from the sender's security stack and one from the recipient's.
Here's how a SafeLinks bot session compares to a real human viewer side by side:

| Signal | SafeLinks scan | Real viewer |
| --- | --- | --- |
| IP address | Azure datacenter range | Office, home, or corporate VPN |
| User agent | Spoofed real-browser string | Genuine browser |
| Pages viewed | ~6, regardless of deck length | Varies with interest |
| Session length | 22–48 seconds | Anything from seconds to minutes |
| Input events | Synthetic, fired within ~500 ms of load | Organic, starting after ~3 seconds |
URL rewriting makes it worse
Some security tools don't just pre-scan links — they rewrite them entirely. SafeLinks replaces every URL in the email body with a redirect through Microsoft's own servers (https://nam02.safelinks.protection.outlook.com/...). Proofpoint does the same with its URL Defense rewrites (https://urldefense.proofpoint.com/v2/...).
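You can see the rewriting directly in the links themselves. As a minimal illustration, SafeLinks carries the original destination, percent-encoded, in the `url` query parameter of the rewritten link (the deck URL below is a made-up example; Proofpoint's v2 rewrites use a custom encoding and would need their own decoder):

```typescript
// Recover the original destination from a SafeLinks-rewritten URL.
// Returns null for links that aren't SafeLinks rewrites.
function unwrapSafeLinks(link: string): string | null {
  const u = new URL(link);
  if (!u.hostname.endsWith(".safelinks.protection.outlook.com")) return null;
  // searchParams.get() percent-decodes the embedded original URL.
  return u.searchParams.get("url");
}
```

This is also why clicking a link in an email from an Outlook user never hits your server "directly" — the request chain always starts at Microsoft's proxy.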
When the recipient finally clicks the link in their email, they're not opening your original URL directly. They're clicking the rewritten SafeLinks URL, which routes through Microsoft's proxy first — and Microsoft's proxy opens your page to re-scan it at that moment. Then it redirects the user's browser to your actual page.
The result: the bot scan and the real visit happen simultaneously. The security scanner opens your deck from a datacenter IP at the exact same second the real user opens it from their office. You get two concurrent sessions — one bot, one human — with nearly identical timestamps. If your analytics can't tell them apart, you either count both (inflated) or risk filtering out the real one (undercounted).
This is the hardest scenario to handle correctly. The pre-delivery scan is easy to identify in isolation — it arrives minutes before anyone clicks. But the click-time re-scan arrives alongside the real user, from a different IP, with a different user agent, creating a separate session that overlaps with the legitimate one.
The follow-up trap
If you're using view notifications to time your follow-ups, bot views are actively harmful. You call a prospect "while they're looking at your deck" and they haven't opened their email yet. You look aggressive instead of attentive. And with URL rewriting, even the timing is deceptive — the bot session starts at the same moment the real one does.
Why standard bot detection doesn't work
The standard approach — checking user agents against a list of known bots — catches link preview bots and crawlers but completely misses email security scanners. SafeLinks deliberately presents a real-looking user agent to avoid being fingerprinted: phishing pages that can identify a scanner will cloak, serving it harmless content while showing the malicious page to real users.
IP-based rate limiting doesn't help either. SafeLinks rotates through a pool of Azure datacenter IPs, so each scan comes from a different address. You can't block by IP without blocking legitimate users on corporate VPNs.
Browser fingerprinting (canvas, WebGL, font enumeration) doesn't work because SafeLinks runs a full Chromium-based browser. Its fingerprint is indistinguishable from a real Chrome installation.
How we solve it: three layers of detection
We built a detection system that catches email security scanners without blocking legitimate users — including those on corporate VPNs and proxies.
Here's how traffic flows through the three layers:
Layer 1: User agent matching
The straightforward part. We check incoming requests against 60+ known bot patterns — link preview bots, search crawlers, headless browsers, HTTP client libraries. If a request identifies itself as Slackbot or python-requests, we reject it immediately. No session is created.
This catches about 40% of bot traffic. The easy 40%.
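In code, Layer 1 is little more than a pattern scan over the `User-Agent` header. A minimal sketch, assuming a pattern list like the one described above (the patterns shown are a small sample, not the full 60+, and the function name is illustrative):

```typescript
const BOT_PATTERNS: RegExp[] = [
  /Slackbot-LinkExpanding/i,                 // link preview bots
  /LinkedInBot|Twitterbot/i,
  /Googlebot|bingbot/i,                      // search engine crawlers
  /HeadlessChrome|PhantomJS/i,               // headless browsers
  /python-requests|curl\/|Go-http-client/i,  // HTTP client libraries
];

// True means reject outright: no session record is ever created.
function rejectByUserAgent(userAgent: string): boolean {
  return BOT_PATTERNS.some((p) => p.test(userAgent));
}
```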
Layer 2: Datacenter IP detection
Email security scanners spoof their user agent but they can't hide where they're running. SafeLinks operates from Azure datacenters. Proofpoint runs on AWS. These are known hosting provider IP ranges.
When a viewer opens a shared link, we check their IP against a geolocation service that reports whether the IP belongs to a hosting provider. If it does, the session is flagged as a bot.
This catches the majority of email security scanner traffic. A viewer connecting from "Quincy, Washington" on a Microsoft Azure IP at 2 AM is almost certainly not your prospect reading your proposal.
There's a subtlety here: we don't block these sessions. We create the view record but flag it as bot, exclude it from analytics, and don't send you a notification. This lets us audit false positives and gives users the option to inspect bot sessions if they want to investigate.
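The flag-don't-block decision can be sketched like this, assuming a lookup service that reports whether an IP belongs to a hosting provider (the `isHosting` field and type shapes here are illustrative; real IP-intelligence APIs differ):

```typescript
interface IpInfo {
  isHosting: boolean; // IP belongs to a hosting provider / datacenter
  city?: string;
}

interface ViewSession {
  id: string;
  isBot: boolean;
  countedInAnalytics: boolean;
}

// Flag, don't block: the session record still exists either way, so
// false positives can be audited and later promoted by the gesture layer.
function applyDatacenterCheck(sessionId: string, ip: IpInfo): ViewSession {
  const bot = ip.isHosting;
  return { id: sessionId, isBot: bot, countedInAnalytics: !bot };
}
```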
What about VPN users?
Corporate VPN users sometimes route through datacenter IPs, which means Layer 2 might flag a real user as a bot. That's why Layer 3 exists — it gives humans a path to "prove" they're real, even if their IP looks like a datacenter.
Layer 3: Gesture-based human confirmation
This is the critical layer. It's based on a simple observation: real humans physically interact with the page. Bots don't — or when they do, their interactions have a distinctive signature.
When a session is initially flagged as a bot (from Layer 2), we don't count it as a real view. Instead, we watch for genuine human input:
What we monitor: Mouse movement, touch events, keyboard input, and scroll activity.
What we ignore: Page navigation, link clicks, and downloads. This is counterintuitive — these seem like strong signals of human behavior. But SafeLinks navigates through 6+ pages of every deck it scans. Headless browsers like Puppeteer can programmatically click buttons and trigger downloads. These actions can be faked trivially. Low-level input device events are much harder to synthesize convincingly.
The 3-second window: We ignore all gestures in the first 3 seconds after page load. SafeLinks and Proofpoint fire their synthetic events almost immediately — within 500 milliseconds of loading the page. Real humans take at least 3 seconds to orient themselves and start interacting. This single timing threshold eliminates the majority of synthetic event noise.
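Putting the monitored-event list and the 3-second window together, the gate looks roughly like this (the class and its wiring are a sketch, not HummingDeck's actual code; in the browser you'd feed it timestamps from `mousemove`, `touchstart`, `keydown`, and `scroll` listeners):

```typescript
// Only low-level input events are monitored. Clicks, page navigation,
// and downloads are deliberately absent: they're trivial to fake.
const MONITORED = new Set(["mousemove", "touchstart", "keydown", "scroll"]);

class GestureGate {
  private loadTimeMs: number;
  private quietMs: number;

  constructor(loadTimeMs: number, quietMs = 3000) {
    this.loadTimeMs = loadTimeMs;
    this.quietMs = quietMs; // synthetic scanner events fire well inside this
  }

  // True when this event should count as human confirmation.
  accept(eventType: string, eventTimeMs: number): boolean {
    if (!MONITORED.has(eventType)) return false;
    return eventTimeMs - this.loadTimeMs >= this.quietMs;
  }
}
```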
Mouse movement analysis: When we do detect a mouse movement after the 3-second window, we log the trajectory — coordinates and timestamps of up to 20 movement events. Bot-generated mouse movements follow geometric paths (perfectly straight lines, exact circles) or fire a single event. Human mouse movements have organic acceleration and deceleration, slight curves, and micro-corrections.
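One simple way to score a trajectory — a sketch, not the exact heuristic we use — is to measure how far the sampled points stray from the straight line between the first and last point. A near-zero maximum deviation (a geometrically perfect line) or a single event is bot-like; the pixel threshold below is a made-up example value:

```typescript
type Point = { x: number; y: number };

function looksBotLike(points: Point[], minDeviationPx = 2): boolean {
  if (points.length < 3) return true; // one or two events: no real trajectory
  const a = points[0];
  const b = points[points.length - 1];
  const len = Math.hypot(b.x - a.x, b.y - a.y);
  if (len === 0) return true; // no net movement at all
  let maxDev = 0;
  for (const p of points) {
    // Perpendicular distance from p to the line through a and b.
    const dev =
      Math.abs((b.x - a.x) * (a.y - p.y) - (a.x - p.x) * (b.y - a.y)) / len;
    maxDev = Math.max(maxDev, dev);
  }
  return maxDev < minDeviationPx; // too straight to be human
}
```

Human micro-corrections push the deviation well above any sensible threshold, while programmatic `moveTo`-style paths sit at essentially zero.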
When a genuine gesture is detected, the session is atomically "promoted" from bot to real. At that point — and only at that point — the view count increments and you receive a notification.
The timing matters
The entire system is designed around a specific race condition: what happens when a real user opens a link, moves their mouse once, and immediately closes the tab?
The confirmation request might not complete before the tab closes. Browsers cancel in-flight fetch() requests on navigation. So we use sendBeacon() — a browser API specifically designed for fire-and-forget requests that survive page unloads — as a backup. The final beacon carries the confirmation data alongside the session's engagement metrics.
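The client-side shape of this, sketched with an illustrative endpoint path and payload (only the payload builder below is real runnable code; the browser wiring is shown in comments because it needs `navigator` and `window`):

```typescript
// Serialize the confirmation; sendBeacon accepts a string body directly.
function buildConfirmPayload(sessionId: string, gesture: string): string {
  return JSON.stringify({ sessionId, gesture, at: Date.now() });
}

// In the page (hypothetical endpoint "/api/sessions/confirm"):
//
//   // Primary path: normal fetch() when the gesture fires.
//   fetch("/api/sessions/confirm", {
//     method: "POST",
//     body: buildConfirmPayload(sessionId, "mousemove"),
//   });
//
//   // Backup path: a beacon on pagehide survives the tab closing,
//   // where an in-flight fetch() would be cancelled by navigation.
//   window.addEventListener("pagehide", () => {
//     navigator.sendBeacon(
//       "/api/sessions/confirm",
//       buildConfirmPayload(sessionId, "final"),
//     );
//   });
```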
On the server side, the promotion is atomic: a single SQL update that only succeeds if the session hasn't already been promoted. This prevents double-counting when both the regular confirmation and the beacon backup arrive.
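An in-memory stand-in for that promotion logic — in SQL terms it's roughly `UPDATE sessions SET is_bot = false WHERE id = $1 AND is_bot = true`, with only the caller whose update affected a row sending the notification (table and column names here are illustrative):

```typescript
interface Session { id: string; isBot: boolean }

// Returns true exactly once per session, no matter how many
// confirmation requests (fetch + beacon backup) arrive.
function promoteOnce(session: Session): boolean {
  if (!session.isBot) return false; // already promoted: no double-count
  session.isBot = false;
  return true; // caller increments the view count and notifies
}
```

In a real database the check-and-set must be a single statement (or a transaction) so two concurrent confirmations can't both observe `is_bot = true`.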
What this means for your analytics
The practical impact is significant. In our testing across enterprise-focused sales teams, 15–40% of apparent deck views were bot sessions — primarily from SafeLinks and Proofpoint. For teams selling to large enterprises with strict email security policies, the number was even higher.
Without filtering:
- View counts are inflated by 15–40%
- "Most engaged" prospects may be the ones with the most aggressive email security
- View timing is unreliable — the "view" your tool logged at 1:58 PM was the security scanner opening the link on delivery, not the prospect who actually read the deck at 2 PM (or never did)
- Follow-up signals are noise
With filtering:
- Every view represents a real person who actually looked at your content
- Engagement metrics (time spent, pages viewed, completion rate) reflect genuine interest
- View notifications trigger when a human is actively engaged, not when a bot pre-scans
Checking for bot sessions in HummingDeck
The Activity feed includes a "Show bot sessions" filter that lets you inspect filtered-out sessions. Use it to verify the system is working correctly for your specific prospects — if you see a legitimate viewer being flagged, the gesture confirmation layer should promote them once they interact with the page. If it doesn't, reach out to support.

The broader lesson
Email security scanning isn't going away — it's getting more thorough. Microsoft and Google are expanding automated link scanning as phishing attacks become more sophisticated. Any analytics platform that relies on page loads as a proxy for human views will become increasingly inaccurate.
The signal you need isn't "was my link opened." It's "did a human engage with my content." That requires detecting the difference — and the difference is in the gestures, not the page load.