Open rates are a useful starting point, but they barely scratch the surface of how prospects actually interact with a MARC brochure. When you send something as tactile and memorable as a MARC, the real story lives in what happens after the first open - how long people watch, how often they come back, whether they share it, and which behaviors consistently show up in journeys that end in revenue.
MARC's analytics layer turns those moments into measurable data. This article breaks down twelve engagement metrics that go far beyond "Did they open it?" and explains how each one helps you predict sales outcomes, prioritize outreach, and improve your campaigns over time.
An 80-90% open rate is impressive, and MARC campaigns regularly perform in that range. But an open alone doesn't tell you whether someone watched for five seconds or five minutes, came back the next day, or passed the brochure around a buying committee.
If you stop at open rates, you treat a quick curiosity flip the same as a deeply engaged buying committee. The difference between those two interactions is measured in pipeline, not just percentages. That's where the following twelve metrics come in.
1. Average watch time

What it is: How long, on average, recipients watch the video content in your MARC.
Why it matters: Watch time is one of the clearest indicators of interest and message resonance. If your average sits around a minute or more, you're holding attention long enough to tell a real story. If most viewers drop off at 20 seconds, the issue is likely your opening hook, relevance, or length.
2. Watch time distribution

What it is: Not just the average, but how views are distributed in buckets like 0-15 seconds, 15-60 seconds, 60+ seconds.
Why it matters: Two campaigns can share a similar average but perform very differently. A healthy distribution will show a meaningful percentage of recipients in the 60+ seconds bucket. That group is your high-intent core.
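To make the bucketing concrete, here is a minimal sketch in Python. The bucket boundaries mirror the ranges above; the input format is an assumption, not MARC's actual export API:

```python
from collections import Counter

def bucket_watch_times(watch_seconds):
    """Group per-recipient watch times (in seconds) into the three
    buckets described above: 0-15s, 15-60s, and 60s+."""
    def bucket(s):
        if s < 15:
            return "0-15s"
        if s < 60:
            return "15-60s"
        return "60s+"
    counts = Counter(bucket(s) for s in watch_seconds)
    total = len(watch_seconds)
    # Return the share of recipients in each bucket.
    return {b: counts.get(b, 0) / total for b in ("0-15s", "15-60s", "60s+")}

print(bucket_watch_times([5, 12, 30, 45, 70, 90, 120, 8]))
# → {'0-15s': 0.375, '15-60s': 0.25, '60s+': 0.375}
```

If the 60s+ share is thin while the average looks healthy, a few long sessions are masking broad early drop-off.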
3. Replay rate

What it is: The percentage of recipients who watch the content more than once.
Why it matters: Replays are one of the strongest signals of interest. Prospects replay sections when they're clarifying details, sharing with someone nearby, or deciding if they should take the next step. In MARC data, replay behavior consistently shows up in journeys that lead to demos and proposals.
4. Engagement sessions per MARC

What it is: The average number of distinct engagement sessions per MARC - MARC campaigns routinely see six or more.
Why it matters: Multiple sessions suggest that your content has ongoing relevance. A single open may be curiosity. Six separate engagements over a week look more like evaluation and internal discussion.
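One common way to define "distinct sessions" from raw engagement events is a gap threshold: a quiet period longer than some cutoff starts a new session. A minimal sketch, where the 30-minute cutoff is an assumption rather than MARC's definition:

```python
from datetime import datetime, timedelta

def count_sessions(event_times, gap_minutes=30):
    """Group raw engagement timestamps into distinct sessions.
    A new session starts whenever the gap since the previous
    event exceeds `gap_minutes` (an illustrative threshold)."""
    if not event_times:
        return 0
    times = sorted(event_times)
    gap = timedelta(minutes=gap_minutes)
    sessions = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev > gap:
            sessions += 1
    return sessions
```

With this definition, a recipient who opens the MARC at breakfast and again that evening counts as two sessions, not one long visit.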
5. Multi-day engagement

What it is: Engagement that happens across more than one day.
Why it matters: Multi-day patterns indicate that the brochure isn't just a one-time novelty. It has become a reference point that prospects revisit as they move through their decision process. This is especially important for longer sales cycles and high-value offers.
6. Multi-viewer signals

What it is: Activity that indicates more than one person is interacting with the same MARC - for example, engagements from different locations or distinct active periods that don't look like a single user.
Why it matters: Buying decisions in B2B and considered purchases often involve a committee. When MARC data shows multi-viewer patterns, you're seeing the buying group behavior most channels can't expose. These brochures tend to correlate with higher deal sizes and better win rates.
7. Time to first open

What it is: How long it takes between delivery and the first open.
Why it matters: Fast engagement often indicates that your audience was primed - because of timing, relevance, or prior outreach. Longer delays don't necessarily mean failure, but they do suggest a need to adjust when and how you introduce MARC into your sequences.
8. Session spacing

What it is: The gaps between each engagement session.
Why it matters: Tight clusters of engagement - for example, multiple interactions in a 24-48 hour window - are excellent triggers for sales outreach. Wider spacing might indicate ongoing evaluation or internal discussion. Both patterns deserve different follow-up plays.
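A simple way to operationalize the 24-48 hour trigger is a rolling-window check over session timestamps. This sketch assumes you can export per-recipient session times; the thresholds are illustrative:

```python
from datetime import datetime, timedelta

def is_hot_cluster(session_times, min_sessions=3, window_hours=48):
    """Return True if at least `min_sessions` engagement sessions
    fall inside any rolling window of `window_hours` hours -
    a candidate trigger for immediate sales outreach."""
    times = sorted(session_times)
    window = timedelta(hours=window_hours)
    for i in range(len(times) - min_sessions + 1):
        # Sessions are sorted, so checking the span of each run
        # of `min_sessions` consecutive sessions is sufficient.
        if times[i + min_sessions - 1] - times[i] <= window:
            return True
    return False
```

Accounts that pass this check could be routed to a "reach out now" play, while slower, evenly spaced engagement feeds a nurture sequence instead.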
9. CTA interaction rate

What it is: The percentage of recipients who scan a QR code, visit a landing page, or take another measurable next step from the MARC.
Why it matters: CTA interactions bridge offline and online behavior. They tell you which offers, promises, and formats succeed at moving prospects deeper into your funnel. Over time, you'll see certain CTA patterns consistently outperform others.
10. Landing page behavior

What it is: How visitors behave once they arrive at your digital destination from a MARC (time on page, clicks, scroll depth, conversions).
Why it matters: This is where offline engagement hands off to your digital experience. Strong MARC performance can still underdeliver if the landing experience is weak. Looking at these metrics together helps you optimize the full journey, not just the brochure.
11. Segment-level engagement

What it is: Engagement metrics broken down by industry, persona, region, or deal size.
Why it matters: Not every audience behaves the same. Some industries replay content more often. Some roles lean into ROI segments, while others gravitate toward implementation and proof. Segment-level analysis helps you tailor content and targeting to the people who value it most.
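Segment-level breakdowns are ordinary group-and-aggregate work once engagement data is exported. A minimal sketch, with illustrative field names:

```python
from collections import defaultdict
from statistics import mean

def engagement_by_segment(records, segment_key="industry", metric="watch_seconds"):
    """Average one engagement metric per segment. `records` is a
    list of dicts; the keys here are illustrative, not MARC's schema."""
    groups = defaultdict(list)
    for r in records:
        groups[r[segment_key]].append(r[metric])
    return {seg: mean(vals) for seg, vals in groups.items()}

records = [
    {"industry": "SaaS", "watch_seconds": 75},
    {"industry": "SaaS", "watch_seconds": 45},
    {"industry": "Manufacturing", "watch_seconds": 110},
]
print(engagement_by_segment(records))
# → {'SaaS': 60, 'Manufacturing': 110}
```

Swapping `segment_key` for persona or region, or `metric` for replay counts, gives the other cuts described above.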
12. Engagement score

What it is: A weighted composite score that combines multiple engagement metrics into a single account-level signal.
Why it matters: Sales teams don't have time to interpret raw dashboards. An engagement score condenses watch time, replays, multi-day behavior, and more into a simple number that says, 'This account is heating up.' High scores can trigger outreach, routing, or marketing follow-up.
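A weighted composite can be as simple as a normalized weighted sum. The signal names and weights below are illustrative, not MARC's actual scoring model:

```python
def engagement_score(metrics, weights=None):
    """Combine normalized engagement signals (each scaled 0-1)
    into a single 0-100 score. The default weights are purely
    illustrative assumptions."""
    weights = weights or {
        "watch_time": 0.35,
        "replays": 0.25,
        "multi_day": 0.25,
        "cta_clicks": 0.15,
    }
    score = sum(weights[k] * metrics.get(k, 0.0) for k in weights)
    return round(100 * score)

# An account with strong watch time and multi-day behavior:
print(engagement_score({"watch_time": 0.9, "replays": 0.5,
                        "multi_day": 1.0, "cta_clicks": 0.0}))  # prints 69
```

A threshold on this number (say, 70+) is what lets a CRM rule fire "this account is heating up" without anyone reading a dashboard.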
Metrics are only useful if they change what you do. With MARC, the highest-impact plays draw directly on the metrics above: triggering outreach when an account's engagement score spikes, timing follow-up to tight session clusters, and tailoring content to the segments that engage most.
If you'd like a structured way to score and act on these twelve metrics, MARC offers a ready-made engagement scoring framework and implementation guide.
Download the Engagement Score Framework
To see these metrics in your own context, you can also walk through a live dashboard using real campaign examples.