DMARC & Email Security · March 3, 2026 · 10 min read

DMARC Is at p=reject — So Why Are Spoofed Emails Still Getting Through?

You've done everything right: SPF published, DKIM signing in place, DMARC at p=reject. And then a spoofed email — pretending to be your client's own domain, with SPF fail, DKIM none, DMARC fail — lands in an inbox anyway. This is more common than most DMARC guides admit, and the explanation matters.


DMARC p=reject is a policy request, not an enforcement command

This is the part most documentation glosses over. When you publish p=reject in your DMARC record, you are instructing receiving mail servers to reject messages that fail DMARC authentication. But the operative word is instructing. DMARC is a published policy that receiving servers are asked to honour. They are not technically required to do so, and several of the largest receiving infrastructures in the world will, in certain conditions, override it.

The RFC that defines DMARC (RFC 7489) explicitly acknowledges this. It describes what receivers should do, using language that leaves room for receivers to deviate when their own filtering systems have higher-confidence signals. In practice, that discretion is exercised regularly — particularly by large consumer and enterprise email providers whose filtering stacks layer machine learning, reputation scoring, and historical sender behaviour on top of protocol-based authentication results.

The practical implication:

DMARC enforcement is most reliably honoured for domains with consistent sending histories, high volume, and clear authentication alignment. For more obscure domains — or in edge cases where a receiver's ML model assigns a low spam score to a failing message — the receiver may silently deliver or quarantine rather than reject, regardless of your published policy. That doesn't mean DMARC is broken. It means p=reject reduces the attack surface substantially, but doesn't eliminate it entirely.
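To keep the terminology concrete: the published policy is nothing more than a DNS TXT record holding a list of tag-value pairs. The following minimal Python sketch parses one; the domain and reporting addresses are illustrative placeholders, not a real configuration.

```python
# Minimal sketch: split a DMARC TXT record ("v=DMARC1; p=reject; ...")
# into a dict of tag-value pairs. Example values are placeholders.
def parse_dmarc(record: str) -> dict:
    return dict(
        part.strip().split("=", 1)
        for part in record.strip().rstrip(";").split(";")
    )

record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; ruf=mailto:dmarc@example.com; fo=1"
tags = parse_dmarc(record)
print(tags["p"])  # "reject" — the requested disposition, nothing more
```

The point of the exercise: `p=reject` is just another tag the receiver reads. Nothing in the record, or in DNS, can force the receiver to act on it.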

How receiver-side scoring overrides DMARC decisions

Modern enterprise email filtering — including the filtering built into Microsoft 365 — doesn't make a single binary decision about an inbound message. It runs the message through multiple evaluation layers in sequence, and the outputs of those layers interact in ways that can produce counter-intuitive results.

A simplified version of the decision chain looks like this: the message arrives and is evaluated against authentication records (SPF, DKIM, DMARC). It also receives a spam confidence score based on content analysis, sending IP reputation, domain age, and behavioural signals. And in some configurations, it is evaluated against anti-phishing heuristics and impersonation detection. Each of those outputs carries a weight, and the final disposition — deliver, quarantine, or reject — reflects them in combination.
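The interaction between those layers can be made concrete with a toy model. This is a hypothetical sketch, not any vendor's actual algorithm — the thresholds, the 0.5 cut-off, and the `disposition` function are invented purely to illustrate how a low spam score can outweigh an authentication failure.

```python
# Hypothetical receiver-side decision chain. Thresholds are invented
# for illustration; real filtering stacks use many more signals and
# proprietary, continuously retrained models.
def disposition(dmarc_pass: bool, spam_confidence: float, policy: str = "reject") -> str:
    """Combine the authentication result and a content-based spam
    confidence score (0.0 = clean, 1.0 = certain spam) into a verdict."""
    if dmarc_pass:
        return "quarantine" if spam_confidence >= 0.9 else "deliver"
    # DMARC failed: honour the published policy only when the
    # content/reputation signals agree that the message is risky.
    if spam_confidence >= 0.5:
        return policy
    # Low-risk content outweighs the authentication failure —
    # this is the override path that delivers spoofed mail.
    return "deliver"

print(disposition(dmarc_pass=False, spam_confidence=0.9))   # reject
print(disposition(dmarc_pass=False, spam_confidence=0.05))  # deliver
```

The second call is the problem case this article is about: same failed authentication, different outcome, because the scoring layer vetoed the policy layer.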

What this means for a DMARC fail:

  • A message that fails DMARC and receives a high spam confidence score will almost certainly be rejected or quarantined as intended.
  • A message that fails DMARC but receives a low spam confidence score — because the content appears clean, the sending IP has no known bad reputation, and it looks like a routine transactional message — may be delivered, because the receiver's system weighs the low-risk signals more heavily than the authentication failure.
  • In some filtering configurations, certain message types are explicitly excluded from DMARC policy application — for example, messages that trigger an impersonation detection path rather than the standard authentication path may not follow the usual DMARC disposition logic.

The result is that the same sender, sending the same type of content from the same IP, may be rejected on Monday and delivered on Wednesday, depending on how the receiver's scoring model is weighted at that moment. This inconsistency is a known behaviour — not a bug, but a deliberate trade-off. Receivers accept that they will occasionally deliver messages that should have been rejected in exchange for a much lower false positive rate on legitimate mail.

The specific scenarios where DMARC fails to stop delivery

Several specific scenarios produce DMARC failures that still result in delivery, and it's worth naming them explicitly so that when you see them in aggregate reports, you know what you're looking at.

Low-confidence failures on clean-looking content

Attackers who target specific recipients — rather than blasting phishing at scale — invest in making their messages look legitimate. Clean text, no suspicious links, plausible sender display names, no attachments. Against a well-crafted message from a fresh sending IP with no prior bad reputation, a receiver's ML model may produce a near-zero spam confidence score. Combined with a DMARC failure, the system faces conflicting signals. Some receivers will apply the DMARC policy strictly; others will favour the ML score and deliver.

Hosting provider constraints on IP blocking

When an attacker sends from a cloud VPS IP — a common pattern — blocking that specific IP at the receiving filtering layer is often ineffective. Cloud providers cycle IPs through their pools rapidly. An IP that originated the spoofed emails last week may be assigned to an entirely different tenant this week. Adding it to a block list treats a symptom rather than the cause. This is precisely why authentication-based approaches like DMARC matter — but it's also why gaps in DMARC enforcement that allow these messages through at all are worth tracking carefully.

Tenants that share receiving infrastructure with unusual configurations

Some enterprise email environments are hosted through third-party resellers who configure the underlying platform with non-default settings. In these tenants, the receiving filtering pipeline may behave differently from a directly managed equivalent — including different policy application behaviour for DMARC failures. This is not always visible to the MSP managing email records for the client, because it lives in the receiving infrastructure rather than the sending domain's DNS configuration.

Messages routed through connectors or relay servers

If a message arrives at a receiving mail server via a configured connector, a smart host relay, or a third-party email gateway rather than directly from the sending MTA, the authentication evaluation may happen at the gateway rather than the final recipient's mail server. Depending on how that gateway handles and forwards authentication results, the final server may not see the original DMARC failure at all. This is a configuration issue that can appear in complex multi-tenant Microsoft 365 deployments and is one of the harder scenarios to diagnose without full mail flow visibility.

Why DMARC aggregate reports are the only way to see this clearly

Without DMARC aggregate report monitoring, you will not know this is happening. That is not a hypothetical risk — it is the default state for any domain that has a DMARC record but no reporting infrastructure in place to collect and interpret the data.

Aggregate reports sent to your rua address contain one record per group of messages the reporting receiver processed, each carrying the source IP, the message count, the authentication results (SPF pass/fail, DKIM pass/fail), and the disposition applied — whether the receiver delivered, quarantined, or rejected. That last field is where the gap becomes visible. When you see messages with DMARC fail and a disposition of none or quarantine rather than reject, you're looking directly at receiver override behaviour.

What the disposition field tells you:

  • disposition: reject — The receiver honoured your DMARC policy and blocked the message. This is the expected behaviour at p=reject.
  • disposition: quarantine — The receiver downgraded your reject instruction to quarantine for this message. The message stays out of the inbox, but it was delivered to junk/spam; the authentication failure didn't result in rejection.
  • disposition: none — The receiver delivered the message despite the DMARC failure and your p=reject policy. This is the override scenario described above — and if you're seeing this for your own domain spoofed by an external IP, it needs investigation.
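That disposition gap is directly visible in the raw XML. Here is a minimal sketch using Python's standard library against an abbreviated report fragment modelled on the RFC 7489 aggregate schema — the IPs are documentation addresses and the counts are invented.

```python
# Sketch: scan DMARC aggregate report rows for receiver overrides —
# messages that failed both SPF and DKIM but were not rejected.
import xml.etree.ElementTree as ET

REPORT = """
<feedback>
  <record><row>
    <source_ip>203.0.113.7</source_ip><count>4</count>
    <policy_evaluated>
      <disposition>none</disposition><dkim>fail</dkim><spf>fail</spf>
    </policy_evaluated>
  </row></record>
  <record><row>
    <source_ip>198.51.100.9</source_ip><count>2</count>
    <policy_evaluated>
      <disposition>reject</disposition><dkim>fail</dkim><spf>fail</spf>
    </policy_evaluated>
  </row></record>
</feedback>
"""

def find_overrides(report_xml: str) -> list:
    """Return (source_ip, count, disposition) for DMARC-fail rows
    where the receiver applied something other than reject."""
    overrides = []
    for row in ET.fromstring(report_xml).iter("row"):
        pe = row.find("policy_evaluated")
        failed = pe.findtext("dkim") == "fail" and pe.findtext("spf") == "fail"
        disp = pe.findtext("disposition")
        if failed and disp != "reject":
            overrides.append((row.findtext("source_ip"),
                              int(row.findtext("count")), disp))
    return overrides

print(find_overrides(REPORT))  # [('203.0.113.7', 4, 'none')]
```

The first row is exactly the override case: four DMARC-failing messages from one IP, delivered anyway. The second receiver honoured the policy, so it drops out of the result.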

A domain at p=reject that shows a significant volume of DMARC-fail traffic with none dispositions reported by one or two receiving providers is not evidence that DMARC is broken — it's evidence that those specific receivers are choosing to exercise override behaviour for that domain's traffic. That information is actionable: you can investigate the message content patterns, the sending IP reputation, and whether there's a configuration issue on the receiving tenant that's suppressing DMARC enforcement.

Without reading the aggregate reports, you'd have no visibility into any of this. The domain would look "protected" from the sending side while spoofed messages were being delivered on the other end.

What forensic failure reports add

Aggregate reports show patterns across many messages. Forensic failure reports (configured via the ruf tag in your DMARC record) can provide per-message detail about specific failures — subject line, headers, sending IP, authentication chain — which is useful for diagnosing individual incidents. Not all receiving providers send forensic reports, and availability varies, but where they are sent they provide the message-level evidence that aggregate data can only hint at.

The caveat is that forensic reports may contain sensitive header information from the messages, so they should be directed to a monitored address with appropriate access controls — not a shared inbox or an unmonitored reporting address. For MSPs managing multiple clients, that typically means a reporting address that feeds into the same platform processing aggregate reports, with per-client visibility.

What to actually do when you see DMARC failures delivering

When aggregate report monitoring shows spoofed messages failing DMARC but being delivered with a none or quarantine disposition by a specific receiver, there is a sequence of investigation steps worth working through before concluding the DMARC configuration is the problem.

Investigation sequence for spoofed delivery despite p=reject:

  1. Confirm the DMARC record is being evaluated at all. Check the aggregate report's source provider. If the reporting receiver is a large consumer provider, override behaviour is more likely. If the reporting receiver is a filtering gateway, check whether that gateway is the correct DMARC evaluation point or whether a downstream server is the one making the delivery decision.
  2. Check the sending IP's reputation. If the spoofed messages are coming from a fresh cloud IP with no bad reputation history, receiver ML models will score it lower. This doesn't fix the delivery gap but explains it, and knowing the pattern helps anticipate recurrence.
  3. Review the receiving client's mail flow configuration. If the affected recipient's email is hosted on a platform with non-default settings — third-party reseller hosting, custom connectors, or modified transport rules — check whether those configurations are interfering with DMARC policy application.
  4. Add mail flow rules as a secondary layer. Where the receiving platform supports it, a transport rule that explicitly rejects messages with a DMARC fail result from your client's own domain acts as a backup enforcement layer that doesn't depend on the receiver honouring the DNS policy. This is explicitly available in Microsoft 365 as a mail flow rule condition.
  5. Monitor for persistence. One or two incidents of a spoofed message slipping through may be isolated receiver override events. A sustained pattern from the same source IP suggests an active campaign that warrants escalation — and the aggregate reports are the data source that makes "persistent vs. isolated" distinguishable.
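The last step — persistent versus isolated — comes down to a simple grouping: count the distinct report days on which each source IP produced a delivered-despite-fail record. In this sketch the event list, the IPs, and the three-day threshold are all invented for illustration; in practice the events would be extracted from your parsed aggregate reports.

```python
# Sketch: flag source IPs whose delivered-despite-fail events span
# multiple report days. Events and threshold are illustrative.
from collections import defaultdict

# (report_date, source_ip) pairs for rows where a DMARC-fail message
# was delivered (disposition "none"); IPs are documentation addresses.
events = [
    ("2026-03-01", "203.0.113.7"),
    ("2026-03-02", "203.0.113.7"),
    ("2026-03-03", "203.0.113.7"),
    ("2026-03-02", "198.51.100.9"),
]

days_seen = defaultdict(set)
for day, ip in events:
    days_seen[ip].add(day)

# Three or more distinct days from one IP looks like a campaign,
# not an isolated receiver override.
persistent = [ip for ip, days in days_seen.items() if len(days) >= 3]
print(persistent)  # ['203.0.113.7']
```

One IP overriding once is noise; the same IP overriding across three consecutive reporting days is the escalation signal step five describes.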

The layered security argument for DMARC monitoring

This scenario — p=reject in place, spoofed emails still occasionally delivered — is one of the stronger practical arguments for treating DMARC as an active monitoring function rather than a one-time configuration task. Publishing a DMARC record and walking away means you have no visibility into the cases where it isn't being honoured.

For an MSP managing email authentication across tens or hundreds of client domains, that monitoring overhead at scale is only tractable with tooling designed around the MSP model. The aggregate reports across all those domains need to live in one place. Dispositions need to be summarised so you can see at a glance which clients have active spoofing attempts that are being blocked, which are being blocked inconsistently, and which are getting through. Anomalies need to surface as alerts rather than requiring you to manually review XML data across every client zone.

When something like this happens — a client's domain at p=reject encountering spoofed delivery — the right response is already mapped out: check the aggregate report data, identify the receiver showing override behaviour, investigate the details, and decide on the appropriate secondary control. That response requires having the data. Without active DMARC report monitoring, you find out when a client notices a spoofed email in their inbox — not before.

Related reading

Explore Albaspot features: DMARC & email security, DNS management, monitoring & alerting, client portal.


See what's actually happening to your clients' DMARC-failed mail

Albaspot parses and surfaces DMARC aggregate report data across all your client domains — including disposition breakdown, so you can see exactly when receivers are overriding your enforcement policy. Try it free.

Start free trial