Content Engagement Is the New MQL: The 2026 Marketing Ops Playbook

Ilya Spiridonov · 22 min read

Only 3% of your B2B website visitors self-identify through form fills. The other 97% are invisible to your MQL funnel.

The number comes from 6sense Research. Form-fill rates stay near 3.5% across industries, regions, and company sizes. It explains why every RevOps team you talk to is quietly frustrated with their MQL pipeline.

The Marketing Qualified Lead stopped working somewhere between Apple Mail Privacy Protection (2021) and AI-generated content saturation (2024). Lots of people have said that. Fewer have shipped a replacement.

The replacement actually getting traction is Jon Miller's MQX / MEX / Hand-Raisers framework. Miller invented the MQL at Marketo, and he's the clearest voice on what replaces it. The tiers are right. The hole is the measurement layer underneath: what engagement, at what threshold, measured how. That's what this post fills in, plus a section on where this approach breaks. "Engagement is not intent" is a caveat most MQL-is-dead writing skips.


How the MQL broke

The MQL worked when three things were true:

  1. Email opens were reliable. A pixel load meant a human opened the email.
  2. Form fills meant intent. If someone typed their email to download your whitepaper, they wanted the whitepaper.
  3. Downloads meant reading. If they downloaded the whitepaper, they'd read the whitepaper.

All three broke between 2021 and 2024.

Email opens are a lie. Apple Mail Privacy Protection, launched in September 2021, pre-loads images the moment a user opens the Mail app, regardless of whether they open your specific email. Combined with corporate email scanners (Microsoft SafeLinks, Proofpoint, Mimecast) that click every link before delivery, a substantial share of "opens" in your MAP aren't humans. Litmus and other analytics vendors have published data showing open-rate inflation of 30–50% post-MPP.

Form fills flooded with noise. Buyers learned to use throwaway Gmail addresses. Bots fill forms. Procurement people download your gated content and forward it to legal. The form submission you got excited about often isn't from the person making the buying decision. Often not from any person at all.

Downloads don't signal reading. The gap between "downloaded the whitepaper" and "actually consumed the whitepaper" has always existed. In 2026, with every buyer subscribed to more newsletters than they can read and every inbox flooded with AI-assisted content, the gap widened to a chasm.

The industry's own voices have been saying this for years. Matt Heinz puts it best:

"You can't buy a beer with an MQL. You can't actually spend your web traffic. We have to focus on outcomes that make the company money."

Matt Heinz, Heinz Marketing

Chris Walker puts it sharper:

"Vanity Metrics = KPIs that aren't aligned with revenue and sales productivity, but are used to justify effectiveness of marketing programs. Ex. SQLs, MQLs, clicks, 'leads', cost per lead (CPL), cost per acquisition (CPA), website visitors, form conversion rate, etc."

Chris Walker, founder of Refine Labs

Forrester, the institution that codified the MQL model via its SiriusDecisions acquisition, formally repudiated it in 2023. Terry Flaherty, their VP Principal Analyst, explained why individual lead scoring is structurally broken:

"B2B buying decisions, especially when deals are large and complex, are made by buying groups, not an individual person... Scores assigned based on a combination of profile characteristics and engagement for a single individual... [is like] 'Whose Line Is It Anyway?' where everything is made up and the points don't matter."

Kerry Cunningham, formerly at Forrester and now Principal Researcher at 6sense, put a number on the ceiling:

"You can make improvements of 3% to 5% [with MQL models]... in the very worst-run operations, and you can goose a 1% or 1.5% improvement out of the rest, but that's about it."

The stat worth memorizing: roughly 87% of MQLs fail to convert to closed deals, per Apollo's B2B research. Multiple vendor-published datasets also show demand-gen MQLs look cheap at the top of the funnel but cost meaningfully more per SQL downstream. Cheap leads get expensive fast.

Practice hasn't caught up with the rhetoric. Plenty of teams read these critiques, nod, and keep their MQL dashboard on the CRO's weekly review. Kim Peterson at LeanData named why:

"The number one reason organizations aren't moving to buying groups, signals, and a more advanced revenue process is one word: culture. We're addicted to MQLs as the cornerstone of our culture."

The critique is settled. The replacement isn't. The rest of this post is about the replacement.


The responses that didn't stick

Before the replacement, what didn't work.

MAP-based lead scoring. HubSpot, Marketo, Pardot all support behavioral scoring on top of demographic scoring. In principle, this fixes the "downloads don't signal intent" problem. In practice, it replaces downloads with pixel events, which suffer from the same bot inflation, the same multi-stakeholder confusion, and the same "score as vanity metric" dynamic. A prospect with 87 points isn't more real than a prospect with 1 download if both numbers come from the same flawed inputs.

Third-party intent data. Bombora, 6sense, Demandbase, G2 Intent. Category is $1B+ and growing. Real value: third-party intent signals capture buying research happening outside your properties. But the data is lagging (you learn about interest 2–4 weeks after it peaks), topic-based (not product-specific), and account-level (you know Acme is surging but not which Acme employee cares). And the price, typically $30–80K per year for the platforms, makes this impractical for teams under $10M ARR.

ABM platforms. Demandbase, Terminus, RollWorks. Great for enterprise. Typically $50K+ per year plus implementation. For SMB and mid-market teams, the platform overhead exceeds the pipeline lift. Jon Miller, who co-founded Marketo and Engagio, put it this way:

"Traditional demand generation is fishing with a net. You throw your net out, you see what you catch, you don't care which fish you catch, just that you caught enough. Whereas account-based marketing is fishing with a spear where you identify those big fish and go after them... But it doesn't feel very good to get poked by a spear."

Signal-based selling. Common Room, Clay, HG Insights popularized this. The core idea is right: move from lead-scoring to signal-aggregation across funding rounds, hiring posts, tool changes, job changes, and content engagement. But implementation is vague. Most "signal-based" posts define signals without explaining how to operationalize them at scale. Chris Walker said it out loud:

"In any individual company, the definition of a signal should be different. One company might say, 'This ebook download is a signal for us.' Another company might say, 'We never win those. That's not a signal for us.' It should be determined purely based on data."

That's correct, but it means each company has to build their own signal definition from scratch. Which most don't.

Product-led growth. Works for product experiences that can deliver value in a self-service session. Works less well for enterprise B2B where evaluation spans months and a buying committee. PLG is a GTM motion, not a replacement for qualification.

Sangram Vajre, co-founder of Terminus and now CEO of GTM Partners, has been pushing for a deeper rethink:

"Change your metric from 'Leads' to 'Engagement'. A lead is a binary status. Engagement is a spectrum that indicates true buying intent."

He also questions whether the SDR function itself needs a rethink. That's a debate for another post. For now: if not MQLs, what? "Engagement" is right but vague. It needs a framework.


What's actually new: first-party content engagement became measurable

Four things genuinely changed in the last five years. Together they make a real MQL replacement possible.

Per-page document analytics. DocSend pioneered the category when it launched in 2013; Dropbox bought them for $165M in 2021. The idea was narrow: track who opened your fundraising deck and for how long. By 2026 the category expanded. Every document you share (whitepaper, case study, proposal, pitch deck, report) can be instrumented at the page level. You can see not just that Sarah opened the whitepaper, but that she spent 4 minutes on page 3, skipped page 5, and came back to page 7 a week later.

Bot detection became real. Ten years ago, "views" were just HTTP requests. Today, platforms with serious analytics (not MAPs; MAPs still fail at this) layer three signals: user-agent matching against known scanner patterns, datacenter IP identification, and gesture-based human confirmation (mouse, touch, keyboard). You can distinguish a human view from a SafeLinks pre-scan. That alone rehabilitates what "engagement" means.

Forwarding detection. If you send a link to one person and it gets opened by multiple people at the same company, you know the deck is being socialized internally. In B2B enterprise sales, this is the single strongest pre-commit signal. Stronger than any intent data platform can produce, because it's first-party behavior on your own content.

Return visit tracking. A buyer returning to your pricing page two weeks after their initial read is doing something specific: they're revisiting to make or defend a decision. Capturing that moment, not just the initial open, turns engagement into a time-series signal instead of a single event.

Add those together and you have something genuinely new: qualification data that doesn't rely on form fills, doesn't rely on email pixels, doesn't rely on third-party topic inference, and doesn't require a $50K platform. You can run it on the content you're already sending.

That's where the MQL replacement actually lives. Not in better scoring, not in bigger platforms. In measuring what's already happening on your own properties, with fidelity that wasn't technically possible a decade ago.


Operationalizing MEX: the measurement layer

Jon Miller's framework is the conceptually correct replacement for the MQL. His three categories:

  • Hand-Raisers. Explicit requests: demo, pricing, "contact me." The buyer qualified themselves.
  • MQX (Marketing Qualified). Marketing's evidence-based judgment that buying activity may be happening at an ICP account. Warrants proactive, consultative outreach.
  • MEX (Market Engaged). Right people at right accounts engaging with your content, no buying signals yet. Worth talking to because this is where you create demand, not just capture it.

The shape is right. What's missing is the measurement layer underneath.

Miller says "engagement" qualifies someone for MEX. That leaves one question open: what engagement, at what threshold, measured how?

Call those measured thresholds Engaged Qualified Leads (EQLs). Not a competing framework. The instrumentation that lets you decide when someone enters MEX, and when MEX behavior is sharp enough to promote into MQX.

EQL in one sentence

An Engaged Qualified Lead is an individual demonstrating measurable, bot-filtered engagement with specific content on your properties, at or above behavioral thresholds you define.

Four words carry the weight:

  • Measurable. The qualifier is a behavior, not a score.
  • Bot-filtered. Most of the industry's "engagement" counts scanners and preview bots.
  • Specific content. Not "opened an email" but "read the case study" or "returned to pricing."
  • Thresholds you define. Your numbers, from your data, not a template.

Three tiers, mapped onto Miller's vocabulary:

Tier 1: Awareness (pre-MEX)

  • Opened the content, spent under 60 seconds, did not return
  • Signal strength: low
  • Action: nurture, do not pass to sales

Where most current "MQLs" actually sit. Someone downloaded a whitepaper, got the confirmation email, moved on. Valuable for brand, not for pipeline.

Tier 2: Active Evaluation (MEX territory)

  • Read 2+ pages, spent 2+ minutes, OR returned to the same content within 7 days
  • Signal strength: medium
  • Action: this is your MEX population. Keep warming with targeted content.

Where buyers actually sit during real evaluation. Doing the work, not yet ready for a call. The MQL model treats Tier 2 the same as Tier 3. That's the critical mistake that produces the "sales ignores your MQLs" dynamic.

Tier 3: High-Intent Engagement (MEX → MQX transition)

  • Forwarded the content internally (another viewer on the same link, same company domain), OR
  • Returned 2+ times to the same content, OR
  • Engaged with bottom-of-funnel content (pricing, comparison pages, customer references)
  • Signal strength: high
  • Action: SDR or AE outreach within 24 hours

The MEX → MQX transition. A Tier 3 trigger is the behavioral evidence that pushes someone from "engaging with your ideas" (MEX) into "buying activity may be happening at this account" (MQX) in Miller's vocabulary.
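The three tiers reduce to a small classification function. Here's a sketch with this post's illustrative thresholds hard-coded; the `Engagement` fields are assumptions for illustration, and Step 4 of the playbook below is where you replace these numbers with your own.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """One contact's bot-filtered engagement with one content asset."""
    pages_read: int = 0
    seconds_spent: int = 0
    return_visits: int = 0          # returns to the same content within the window
    forwarded: bool = False         # second viewer, same link, same company domain
    hit_bofu_content: bool = False  # pricing, comparison, customer references

def classify_tier(e: Engagement) -> int:
    """Map one contact's engagement onto the three tiers above."""
    # Tier 3: any single high-intent trigger is enough
    if e.forwarded or e.return_visits >= 2 or e.hit_bofu_content:
        return 3
    # Tier 2: read 2+ pages and spent 2+ minutes, OR returned to the content
    if (e.pages_read >= 2 and e.seconds_spent >= 120) or e.return_visits >= 1:
        return 2
    # Tier 1: opened, under a minute, never came back
    return 1
```

The point of making it this mechanical: a tier assignment is auditable. When sales asks why a contact was routed, the answer is a behavior, not a score.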

The account rollup

B2B sales happen through buying groups, not individuals. An account with multiple EQLs is a stronger signal than any single EQL.

An Engagement-Qualified Account (EQA) is an account with 2+ EQLs on the same content or content series within 30 days, plus at least one Tier 3 signal across the group.

This is the unit your SDR team should actually be working. EQLs tell you who; the account rollup tells you when.
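The EQA definition is equally mechanical. A sketch, with the `EQL` record fields assumed for illustration and the 30-day window from the definition above:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class EQL:
    contact: str
    asset: str        # content asset or series identifier
    tier: int         # 1-3 per the three-tier structure in this post
    fired_on: date

def is_eqa(account_eqls: list[EQL], window_days: int = 30) -> bool:
    """EQA test: 2+ EQLs from distinct contacts on the same content asset
    within the window, plus at least one Tier 3 signal across the group."""
    by_asset = defaultdict(list)
    for e in account_eqls:
        by_asset[e.asset].append(e)
    for eqls in by_asset.values():
        eqls.sort(key=lambda e: e.fired_on)
        for first in eqls:
            window_end = first.fired_on + timedelta(days=window_days)
            window = [e for e in eqls if first.fired_on <= e.fired_on <= window_end]
            contacts = {e.contact for e in window}
            if len(contacts) >= 2 and any(e.tier == 3 for e in window):
                return True
    return False
```

Run it per account: one contact, or two contacts with no Tier 3 signal between them, doesn't qualify.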

The logic lines up with 6sense's B2B Buyer Experience Report: 81% of buyers choose a preferred vendor before speaking with sales, and 69% of the purchase process happens before sellers are engaged. If you're waiting for one person to fill out a demo form, most of the decision has already happened. Account-level engagement signals let you see the buying group form on your content before anyone raises their hand.


The 7-step playbook

How to operationalize this without buying a new stack.

Step 1: Audit what you're currently counting as engagement. Open your MAP. Look at your highest-score leads from the last 90 days. How many became pipeline? How many closed? If the conversion rate is below 5%, your current scoring is theater.

Step 2: Instrument your top 5 content assets with per-page tracking. Not all your content. Just your top 5 by volume or strategic importance. Whitepaper, case studies, pitch deck, pricing page, and one comparison page. You need a tool that captures per-page time, return visits, and multi-viewer detection. (HummingDeck does this; so do a handful of other platforms. The principle matters more than the tool.)

Step 3: Turn on bot filtering. If your current analytics don't filter bots, you're measuring noise. At minimum, filter known scanner user-agents and datacenter IPs. Better tools add gesture verification.
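A sketch of the three-layer filter, with made-up scanner substrings and a placeholder datacenter range. Real signature lists are longer and rot quickly, so in production you'd use a maintained dataset rather than hard-coded patterns:

```python
import ipaddress

# Illustrative scanner fingerprints -- real lists are longer and change often.
SCANNER_UA_SUBSTRINGS = ["safelinks", "proofpoint", "mimecast", "bot", "preview"]

# Placeholder datacenter range; in practice, use a maintained IP dataset.
DATACENTER_RANGES = [ipaddress.ip_network("40.94.0.0/16")]

def is_probably_human(user_agent: str, ip: str, had_gesture: bool) -> bool:
    """The three layers from Step 3: user-agent match, datacenter IP,
    gesture confirmation (mouse/touch/keyboard observed client-side)."""
    ua = user_agent.lower()
    if any(s in ua for s in SCANNER_UA_SUBSTRINGS):
        return False  # known scanner pattern
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in DATACENTER_RANGES):
        return False  # request came from a datacenter, not a person's network
    return had_gesture  # no gesture observed = don't count it as a human view
```

Note the default: a view with no gesture signal is excluded, not included. Undercounting humans is recoverable; counting scanners as buyers is how the MQL broke.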

Step 4: Define your EQL thresholds, in writing. Using the three-tier structure above, write down what "Tier 2" and "Tier 3" look like for each asset. Example for a case study:

  • Tier 1: opened, <60s, no return
  • Tier 2: 2+ pages read OR 2+ minutes spent
  • Tier 3: forwarded to another viewer OR returned within 14 days OR hit pricing page after

Adjust by asset. A one-page battle card has different thresholds than a 30-page whitepaper.
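If "in writing" can also mean "in version control," the same thresholds work as a plain config dict. Every number below is a placeholder lifted from the examples above, not a benchmark:

```python
# Per-asset EQL thresholds (Step 4). Illustrative placeholders only --
# replace each number with values from your own pipeline-correlation data.
EQL_THRESHOLDS = {
    "case_study": {
        "tier2": {"min_pages": 2, "min_seconds": 120},
        "tier3": {"forwarded": True, "return_within_days": 14, "pricing_after": True},
    },
    "whitepaper_30pp": {
        "tier2": {"min_pages": 5, "min_seconds": 300},  # longer asset, higher bar
        "tier3": {"forwarded": True, "return_within_days": 14, "pricing_after": True},
    },
    "battle_card_1pp": {
        "tier2": {"min_pages": 1, "min_seconds": 45},   # one page: time is the signal
        "tier3": {"forwarded": True, "return_within_days": 7, "pricing_after": True},
    },
}
```

A file like this also makes Step 7 honest: when you adjust thresholds quarterly, the change is a diff, not a meeting.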

Step 5: Rewrite your MQL/SAL definitions in behavior-language. Don't say "100 points." Say "read the case study twice and visited pricing within 7 days." If sales sees a specific behavioral pattern, they treat it as a signal. If they see a score, they treat it as your marketing team's opinion.

Step 6: Wire EQL and EQA events into CRM. Whatever CRM you're using (HubSpot, Salesforce, Close), create a custom activity type for "Tier 3 Engagement Triggered." When the tracking tool fires, log it against the contact and account. SDR workflow prioritizes accounts with recent Tier 3 triggers, not the highest lead score.

In HubSpot: create a custom event engagement_tier_3 with properties content_asset, trigger_type (forwarded / returned / pricing_visit), and viewers_same_account. Build a workflow that, when the event fires on an ICP-matched contact, creates a task for the owning AE with the content asset name and specific trigger in the task description. The task title should read something like "Tier 3 on Q3 Pricing Deck: forwarded internally (2 viewers)", not "High lead score." In Salesforce: same shape via a custom activity record type plus a flow. In Close: a Smart View filtered on the custom activity. The principle is identical across CRMs. The task description contains the behavior, so the SDR can open with that specific hook instead of a generic "hope you found it useful" email.
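For teams that want to script the HubSpot side, here's a minimal sketch against HubSpot's custom-event send endpoint. The token and event name are placeholders; HubSpot prefixes custom-event internal names with a portal-specific string (shaped like `pe<portalId>_`), so check the exact name in your portal's event settings before wiring this up:

```python
import json
import urllib.request

HUBSPOT_TOKEN = "pat-..."  # private-app token (placeholder)
EVENT_NAME = "pe000000_engagement_tier_3"  # your portal's internal event name

def build_tier3_event(email: str, content_asset: str,
                      trigger_type: str, viewers_same_account: int) -> dict:
    """Build the payload for a Tier 3 engagement event."""
    return {
        "eventName": EVENT_NAME,
        "email": email,
        "properties": {
            "content_asset": content_asset,
            "trigger_type": trigger_type,  # forwarded / returned / pricing_visit
            "viewers_same_account": viewers_same_account,
        },
    }

def send_event(payload: dict) -> None:
    """POST the event to HubSpot's custom-event send endpoint."""
    req = urllib.request.Request(
        "https://api.hubapi.com/events/v3/send",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {HUBSPOT_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

The workflow that creates the AE task then keys off this event, with the trigger details already in the properties.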

Step 7: Measure pipeline correlation, not vanity metrics. After 60 days, pull every deal closed-won and closed-lost. Look at which engagement tiers they hit and when. If Tier 3 behavior doesn't correlate with pipeline velocity, your thresholds are wrong. Adjust them. Iterate quarterly.
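The Step 7 check doesn't need a BI tool to start. A sketch of the simplest version, win rate by the highest tier any contact on the deal hit (the `Deal` record is an assumption for illustration):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Deal:
    account: str
    won: bool
    max_tier_hit: int  # highest engagement tier any contact hit pre-close

def win_rate_by_tier(deals: list[Deal]) -> dict[int, float]:
    """If win rates don't climb with tier, the thresholds are wrong."""
    won = defaultdict(int)
    total = defaultdict(int)
    for d in deals:
        total[d.max_tier_hit] += 1
        won[d.max_tier_hit] += d.won  # bool counts as 0/1
    return {t: won[t] / total[t] for t in sorted(total)}
```

If Tier 3 deals win at roughly the same rate as Tier 1 deals, your Tier 3 triggers aren't measuring what you think they are: loosen or tighten per asset and re-run next quarter.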

This is a 60–90 day implementation, not a multi-quarter consulting engagement. If it's taking longer than that, someone is selling you a platform you don't need.


What a Tier 3 handoff actually looks like

What this looks like in practice.

Before (MQL-era handoff):

"Sarah at Acme has 87 points. She downloaded the buyer's guide last week. Routing to Alex."

Alex gets this. Alex has no context. Alex writes a generic "hope you found the guide useful" email. Sarah archives it. MQL-to-meeting rate stays at 12%.

After (EQL-era handoff):

"Sarah at Acme read the buyer's guide Monday. Tuesday, a second viewer from Acme (same company domain, different city) opened the same link, likely a colleague she forwarded to. Thursday, Sarah visited the pricing page and spent 3 minutes on the Enterprise tier comparison. Account hit EQA on Thursday. Routing to Alex."

Alex gets this. Alex has specific context. Alex writes: "Sarah, saw you and a colleague reviewed our buyer's guide, and you took a look at our Enterprise pricing. Happy to help you think through whether our Enterprise or Pro tier fits your team's workflow. Quick call Tuesday?"

Same prospect, same data sources. Completely different outreach. The handoff carries the behavior, the timing, and a specific anchor for conversation.

Palo Alto Networks' Forrester client story documents a 17% higher closed-won rate after moving from MQL-based routing to buying-group engagement routing, plus a 17× increase in pipeline progression and doubled deal sizes. Jeremy Schwartz (senior manager of global lead management) said opportunities with multiple people attached were eight times more likely to advance than single-contact ones. One case study isn't proof, but the direction is consistent: specific-behavior outreach beats score-based, and buying-group context beats individual scoring.


Common objections

"We need scoring to prioritize at scale." Score behavior, not events. A Tier 3 trigger is effectively a score threshold: binary instead of continuous. Binary thresholds are easier to act on than 0–100 scales.

"Our MAP doesn't support this." Correct. Your MAP won't. You need a tool that captures content-level engagement, not MAP form fills or email opens. You pipe those signals into the MAP or CRM as events. Infrastructure work, not a platform replacement.

"Our sales team ignores marketing-passed leads anyway." That's the MQL-is-broken problem. It won't fix itself. If sales is ignoring EQLs after you rebuild the definition, the issue is trust. They don't believe your signal. Show the pipeline correlation math from Step 7. If the data supports the handoff, sales will engage. If it doesn't, your thresholds are wrong.

"Not every product has 'content' to track." If you're selling B2B, you have content. Your pricing page is content. Your product tour is content. Your customer stories are content. "Content" here means "any digital surface where a prospect spends measurable time." You probably have more trackable surfaces than you think.

"This sounds like product-led growth. We're sales-led." It isn't PLG. PLG is a business model where the product itself converts users. The EQL framework works for any motion, including sales-led enterprise. The common thread is measuring behavior; the GTM motion doesn't change.

"We already use intent data." Good. Third-party intent tells you accounts that are researching. First-party engagement tells you what those accounts are actually doing with the content you sent. They stack. If you're buying 6sense, keep buying 6sense. Layer first-party engagement on top. The combination is more predictive than either alone.


Engagement is not intent

The caveat every MQL-is-dead thinkpiece skips.

Time on page is not buying intent. Return visits are not buying intent. Forwarding is not buying intent.

The filter, not the trigger

Engagement without ICP fit is just research. A Tier 3 EQL signal from someone at a Fortune 100 competitor scoping your product is not a buyer.

A return visit from a journalism student writing a case study isn't a buyer. If your content is good, plenty of people who aren't buyers will engage with it.

That's why the EQL framework is a filter, not a trigger. It narrows your focus from everyone who touched your marketing down to people doing meaningful work with your content. Final qualification still requires human judgment. Does the account match your ICP? Does the role match your buyer persona? Is the timing plausible?

The MQL was over-trusted because it felt like automation could replace judgment. The EQL replaces bad data with good data. It doesn't replace the judgment step. Expect your SDR team to reject 20–30% of your EQLs on ICP grounds, and build that into your metrics. The rejection isn't a failure; it's the system working.


Getting started

You don't need to rebuild your stack. Start here:

  1. Pick one content asset you care about: a whitepaper, a case study, a pitch deck, a comparison page.
  2. Instrument it with per-page engagement tracking and bot filtering.
  3. Define Tier 1/2/3 thresholds for that asset (use the examples above as a starting point).
  4. Track EQLs for 30 days. Don't change anything else yet.
  5. Look at the data. How many EQLs fired? What patterns correlate with pipeline movement?
  6. Then roll out to your top 5 assets. Codify thresholds for each. Wire into CRM.
  7. Measure pipeline correlation at 60 days. Adjust.

A single content asset instrumented properly is more useful than a whole MAP with 87 points of noise. Start small, prove the model, expand.


Where this goes

The CMO reporting shift is already underway. Some are still reporting "we generated 4,200 MQLs this quarter." Others are reporting "we identified 47 actively-engaged target accounts, 12 closed, $2.1M pipeline." The second group is winning. Not because they have better tools. Because their metrics aren't lying to them.

B2B marketing is moving toward quality over volume, signal over score, behavior over points. Downloads don't matter. Attendance doesn't matter. Opens don't matter. What matters is whether the right people, in the right accounts, are doing meaningful work with the content you send them.

Measure that. Pass it to sales. Ignore the rest.

The companion playbook covers the implementation layer: threshold worksheets per asset type, copy-pasteable CRM field setups (HubSpot, Salesforce, Close), example handoff scripts, and a 30/60/90-day rollout plan.


FAQ

Is the MQL really dead? As an output metric for marketing, yes. As a lifecycle milestone it still has a role. But "we generated 4,200 MQLs" is vanity reporting.

What replaces MQL in 2026? Jon Miller's MQX / MEX / Hand-Raisers framework, with measurable engagement thresholds (EQLs) defining when a contact has crossed into MEX and when MEX behavior is sharp enough to promote into MQX.

What's the difference between lead scoring and engagement scoring? Lead scoring outputs a number from demographic fit plus pixel events. Engagement scoring outputs a behavior pattern (time on page, return visits, forwarding) with bot filtering. Something an SDR can open with specifically.

Do I still need a MAP? Yes, for email automation, lifecycle triggering, and contact management. But not as the qualification source of truth.

Is ABM the answer? ABM is useful when you have enterprise-sized accounts. EQL stacks inside an ABM motion: ABM tells you which accounts; EQL/EQA tells you when to engage.

What's "engagement" defined as, concretely? Per-page time spent, return visits within a defined window, forwarding detected (multi-viewer on same link), and engagement with bottom-of-funnel content. All bot-filtered, all thresholded per asset.

How do I measure engagement without buying a new platform? Use a document-tracking or content-engagement tool that captures per-page analytics with bot filtering. The required capability is first-party, individual-level, bot-filtered engagement. Not aggregated site analytics or email opens.

What's a good threshold for SDR outreach? For mid-funnel content: forward detected OR return within 14 days OR pricing visit = Tier 3 = outreach. For direct pricing visits from ICP accounts: hit immediately. Calibrate per asset based on your pipeline correlation data.

How does this work with product-led growth? Same discipline, different surface. PLG signals (free-tier usage, feature engagement) substitute for content engagement; the thresholding logic is identical.

Does this still work for enterprise sales? Better than for SMB, in fact. Enterprise buyers do more research and involve more stakeholders, producing more measurable signals and more account-level rollups. See the Palo Alto Networks case study.

What tools support this? Document tracking platforms (HummingDeck, DocSend, Papermark, others) capture per-page engagement; a subset add bot filtering and forwarding detection. Intent data platforms layer third-party account signals on top. The tool stack matters less than the measurement discipline.

How does engagement-based qualification compare to intent data? Complementary. Third-party intent tells you accounts researching your category externally. First-party engagement tells you what those accounts are doing on your content. The combination is more predictive than either alone.