IPv6 Fragmentation "Black Hole" on EE/BT 900Mb FTTP

planetf1b
Established Contributor

I'd love to hear if anyone can validate this -- I've been fighting an iOS power-drain issue, but in fact it's an IPv6 issue which occurs both with the EE Smart Hub AND with an OPNsense router. Basically it seems as if the EE network is breaking Path MTU Discovery, both by dropping fragments and by not returning the correct ICMP responses.

This is quite technical, but it's a fundamental issue (it feels to me as if it's a 'broken network'). Of course I may have made an error:

A more complete AI-summarised version is below.

Observation: The upstream network (EE/Openreach backhaul) appears to silently discard outbound IPv6 packets that require fragmentation. This breaks Path MTU Discovery (PMTUD), causing connectivity stalls and high battery drain on devices attempting to negotiate packet sizes (e.g., Apple devices using iCloud Private Relay).

Context: ISP/Plan: EE Full Fibre 900/110. Hardware: Tested on EE Smart Hub Plus and confirmed consistent on OPNsense (Intel N100). MTU: Link negotiates at 1492 bytes (PPPoE standard).

The Limitation: The network accepts unfragmented packets up to the link limit (1492 bytes) but drops any packet that is fragmented, regardless of total size. Crucially, it fails to send back ICMPv6 Type 2 (Packet Too Big) or Type 3 (Time Exceeded), creating a silent "black hole."
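The 1492-byte boundary follows directly from the fixed header overheads (40-byte IPv6 header per RFC 8200, 8-byte ICMPv6 echo header per RFC 4443). A minimal sketch of the arithmetic behind the test sizes:

```python
# IPv6 + ICMPv6 echo overhead arithmetic behind the ping6 test sizes.
IPV6_HEADER = 40        # fixed IPv6 header (RFC 8200)
ICMPV6_ECHO_HEADER = 8  # ICMPv6 echo request header (RFC 4443)

def total_packet_size(payload: int) -> int:
    """On-the-wire IPv6 packet size for a given ping6 payload (-s value)."""
    return payload + ICMPV6_ECHO_HEADER + IPV6_HEADER

def max_unfragmented_payload(link_mtu: int) -> int:
    """Largest ping6 payload that fits in one packet at this link MTU."""
    return link_mtu - ICMPV6_ECHO_HEADER - IPV6_HEADER

# For the 1492-byte PPPoE link described above: the largest single-packet
# payload is 1444 bytes, and a 1445-byte payload forces fragmentation.
```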


Test Methodology (Reproducible)

You can replicate this on any Linux/macOS device (Pi, Mac, etc.) connected to the router. You need two terminal windows open.

Step 1: The Monitor (Terminal A) Run this to listen for "Packet Too Big" or error messages from the ISP.

sudo tcpdump -n -i any "icmp6 && (icmp6[0] == 1 || icmp6[0] == 2 || icmp6[0] == 3)"

Step 2: The Trigger (Terminal B) Run these pings to test the boundary.

Test A (Standard Packet - 1492 bytes): Payload 1444 + 8-byte ICMPv6 header + 40-byte IPv6 header = 1492 bytes (fits in one frame)

ping6 -c 3 -s 1444 ipv6.google.com

Result: Success.

Test B (Fragmented Packet - 1493 bytes): Payload 1445 + 8-byte ICMPv6 header + 40-byte IPv6 header = 1493 bytes (forces fragmentation)

ping6 -c 3 -s 1445 ipv6.google.com

Result: 100% Packet Loss.

Step 3: Verification. Look at Terminal A. Expected behaviour (RFC-compliant): you should see an ICMPv6 error message. Actual behaviour (black hole): the terminal remains completely blank; the packets are dropped silently.
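Probing sizes one at a time generalises to a binary search for the largest payload that gets through. A sketch of that search; the `probe` callable here is a stand-in (in practice it would wrap `ping6 -c 1 -s <size>` via `subprocess` and return whether a reply arrived):

```python
from typing import Callable

def find_max_payload(probe: Callable[[int], bool],
                     lo: int = 0, hi: int = 1500) -> int:
    """Binary-search the largest payload for which probe(size) succeeds.
    Assumes the path is monotonic: once a size fails, larger sizes fail too."""
    best = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(mid):
            best = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Simulated path: 1492-byte MTU with 48 bytes of IPv6+ICMPv6 overhead,
# standing in for a real ping6 probe.
simulated = lambda payload: payload + 48 <= 1492
```

Running `find_max_payload(simulated)` lands on the same 1444/1445 boundary as Test A/B above.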

20 REPLIES
bobpullen
Star Contributor

@planetf1b wrote:

I'd love to hear if anyone can validate this...


Using your commands, can confirm the same outcome.

planetf1b
Established Contributor

Thanks. Glad to know it's not just me, but I'm surprised and disappointed. Despite CS variability, I really thought the BT/EE core network would be solid.

 

I'm going to try to report it, but from reading around I think this has been an issue for ages.

Definitely now thinking I may move to another network once my contract is up in a few months.

planetf1b
Established Contributor

Here's a clearer write-up after some more experiments tonight. The next challenge is to get a ticket opened.


EE Broadband – IPv6 PMTUD Failure (ICMPv6 Type‑2 Filtering) – Fault Report

Summary: IPv6 Path MTU Discovery (PMTUD) is failing on this EE broadband connection due to ICMPv6 Type‑2 (Packet Too Big) messages being filtered somewhere between EE’s access network and the customer premises. This breaks standards‑compliant IPv6 behaviour and causes real‑world service failures, including iCloud Private Relay.

 

1. Local MTU Verified Correct (1500 bytes)

The customer LAN and CPE are operating with a standard 1500‑byte Ethernet MTU.

Validation:

  • ping6 -s 1452 <host> → succeeds (1452 + 8 + 40 = 1500 bytes total)

  • ping6 -s 1453 <host> → fragments locally (expected)

This confirms the issue is not caused by the local network.

 

2. Public IPv6 Path with PMTU 1280 Identified

Testing against www.reuters.com (CloudFront) reveals a real PMTU of 1280 bytes:

  • ping6 -s 1232 (1280 total) → success

  • ping6 -s 1233 (1281 total) → consistent failure

This is a valid, standards‑compliant test case where routers must generate ICMPv6 Type‑2 messages.

 

3. ICMPv6 Type‑2 Messages Not Received

While sending packets larger than 1280 bytes, a packet capture was run:

sudo tcpdump -i <interface> -n 'icmp6 && ip6[40] == 2'

Result:

  • No ICMPv6 Type‑2 messages observed

  • Pings above 1280 bytes hang with 100% loss

This is the exact condition where RFC 8201 requires routers to send Packet‑Too‑Big messages. Their absence indicates filtering of ICMPv6 Type‑2 on the EE path.
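For reference, the filter `ip6[40] == 2` keys directly on the packet layout: byte 40 is the ICMPv6 type field immediately after the fixed 40-byte IPv6 header. A sketch that builds a minimal Packet Too Big frame per RFC 8200/4443 (checksum zeroed and `::` addresses used as placeholders, for illustration only):

```python
import struct

def build_ptb(mtu: int, invoking: bytes = b"") -> bytes:
    """Minimal IPv6 + ICMPv6 Packet Too Big (Type 2) frame.
    Checksum left zero and src/dst set to :: -- not a sendable packet."""
    # ICMPv6: type (2), code (0), checksum, 4-byte MTU field, invoking packet
    icmp = struct.pack("!BBHI", 2, 0, 0, mtu) + invoking
    # IPv6: version 6, payload length, next header 58 (ICMPv6), hop limit 255
    ip6 = struct.pack("!IHBB", 0x60000000, len(icmp), 58, 255)
    ip6 += bytes(16) + bytes(16)  # src :: and dst :: placeholders
    return ip6 + icmp

pkt = build_ptb(1280)
# Byte 40 is the first byte past the IPv6 header: the ICMPv6 type field,
# exactly what the tcpdump expression 'ip6[40] == 2' inspects.
assert pkt[40] == 2
# The reported MTU sits at bytes 44..47 of the frame.
mtu = struct.unpack("!I", pkt[44:48])[0]
```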

 

4. Other ICMPv6 Types Are Received Normally

Earlier tests produced:

  • ICMPv6 Type 3 Code 1 (fragment reassembly failures) from Akamai

This confirms:

  • ICMPv6 is not globally blocked

  • The filtering is selective, affecting Type‑2 specifically

 

5. Customer‑Visible Impact: iCloud Private Relay Fails

iCloud Private Relay relies on:

  • IPv6

  • QUIC/HTTP‑3

  • Encapsulation (additional headers)

  • PMTUD to discover the true MTU

With a real PMTU of 1280 and no Packet‑Too‑Big messages, Relay traffic cannot adjust packet size and stalls. The same Apple ID works correctly on other networks, confirming the issue is specific to this EE connection.

This is a direct customer‑visible service failure.
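One way to see why a 1280-byte path is so marginal for tunnelled QUIC traffic: QUIC requires paths that can carry at least 1200-byte UDP datagrams (RFC 9000), and encapsulation eats into that budget. A rough sketch; the per-packet relay overhead figure is an assumed value for illustration, not Apple's actual number:

```python
# Rough MTU budget for tunnelled QUIC traffic at a 1280-byte path MTU.
IPV6_HEADER = 40
UDP_HEADER = 8
QUIC_MIN_DATAGRAM = 1200  # minimum datagram size QUIC requires (RFC 9000)

def inner_datagram_budget(path_mtu: int, encap_overhead: int) -> int:
    """UDP datagram space left for inner QUIC traffic after outer headers."""
    return path_mtu - IPV6_HEADER - UDP_HEADER - encap_overhead

# With no tunnel, a 1280-byte path leaves 1232 bytes: above the QUIC minimum.
# Add an ASSUMED ~50 bytes of relay encapsulation and the budget drops to
# 1182, below 1200 -- and without Packet Too Big the sender can't adapt.
```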

 

Conclusion

EE Broadband is filtering ICMPv6 Type‑2 (Packet Too Big) messages on this line or its associated BNG. This breaks IPv6 PMTUD and causes failures for:

  • iCloud Private Relay

  • QUIC/HTTP‑3

  • CDN‑backed services

  • VPNs and tunnels

  • Any IPv6 path with MTU <1500

The issue is fully reproducible using www.reuters.com, which exposes a real PMTU of 1280.

 

Requested Action

Please escalate to EE’s core network engineering team to:

  1. Review ICMPv6 filtering policy on the affected BNG/BRAS.

  2. Ensure ICMPv6 Type‑2 (Packet Too Big) messages are permitted end‑to‑end.

  3. Validate PMTUD behaviour on IPv6 paths with MTU <1500.

 

@planetf1b wrote:

Please escalate to EE’s core network engineering team to:

This user discussion group is not the correct platform for formally escalating issues and requiring their fix in EE systems. You previously requested we validate your findings, which @bobpullen did. That was fine, but this is a step too far. You need to report your issues to CS (and God help you getting them to understand this). Failing any progress that way, you could raise a complaint.

You can make a formal complaint to EE, and if you don't get satisfaction after 8 weeks, or come to a deadlock, you can take it to EE's ADR provider. See the Complaints Code of Practice, and here is the Complaints Form. What the Ombudsman will make of this I dread to think.

If you think I helped please feel free to hit the "Thumbs Up" button below.

To phone EE CS: Dial Freephone +44 800 079 8586 - Option 1 for Home Broadband & Home Phone or Option 2 for Mobile Phone & Mobile Broadband

ISPs: 1999: Freeserve 48K Dial-Up > 2005: Wanadoo 1 Meg BB > 2007: Orange 2 Meg BB > 2008: Orange 8 Meg LLU > 2010: Orange 16 Meg LLU > 2011: Orange 20 Meg WBC > 2014: EE 20 Meg WBC > 2020: EE 40 Meg FTTC > 2022:EE 80 Meg FTTC SoGEA > 2025 EE 150 Meg FTTP

@bobpullen: Would this apply to all EE FTTP?


@XRaySpeX - I strongly suspect the outcome of the OP's tests will be the same on any BT/EE connection. I can confirm the same outcome on an IPv6-enabled Plusnet line I have access to.


@planetf1b wrote:

Customer‑Visible Impact: iCloud Private Relay Fails


How is Private Relay visibly failing? What are the symptoms? If I enable Private Relay on one of my iPads, I don't seem to notice any ill effects? 🤔

planetf1b
Established Contributor

Oh wow, I had hoped it was just a one-off on a badly configured bit of kit. Intentionally blocking Packet Too Big is a MASSIVE issue.

I tried to go through the call centre today, but after 3 hours I gave up. I've instead emailed the CEO office; I just couldn't figure out how else to move forward, as it's a very precise technical issue.


@planetf1b wrote:

Oh wow, I had hoped it was just a one-off on a badly configured bit of kit. To intentionally block packet too big is a MASSIVE issue. 


What massive issues are you experiencing though? Whilst I understand to a degree what your tests are doing, I'm also not confident enough to say that they're a conclusive indicator of ICMP too big responses being dropped out in the network. Have you repeated the tests on a non-BT connection with differing results?

Ewan15
Expert Contributor

I tested this just using ping on a Windows device on PowerShell using

  1. ping -6 2001:4860:4860::8888 -l 1400
  2. ping -6 2001:4860:4860::8888 -l 1440
  3. ping -6 2001:4860:4860::8888 -l 1472
  4. ping -6 www.reuters.com -l 1440
  5. ping -6 www.reuters.com -l 1280

I then asked CoPilot to interpret the results as below

The behaviour you’re seeing is not caused by “fragmentation”
Routers on IPv6 never fragment, so nothing is being “discarded because it requires fragmentation.”
Instead, what’s happening is:
• You send a packet larger than the path MTU
• A router along the path drops it
• That router is supposed to send back ICMPv6 Type‑2 (Packet Too Big)
• If that Type‑2 message never reaches you, PMTUD breaks
So the real question is:
Are Type‑2 messages being lost on the path?
Your tests give a mixed answer.

What your results actually show
To Google IPv6
• 1440 works
• 1452 fails
• Clean, repeatable boundary
This is exactly what working PMTUD looks like.
It means:
• EE/Openreach is not filtering Type‑2 on that path
• Your host is learning the MTU correctly
• The path MTU is ~1492 (classic PPPoE)
To Reuters IPv6
• 1400 fails
• 1280 works
• No clean boundary above 1280
• Behaviour is consistent with a remote MTU bottleneck, not EE
This is what happens when the far end (or its CDN) sits behind tunnels with a very small MTU.
Akamai, Cloudflare, Fastly, and similar CDNs often have edges behind:
• GRE
• IPsec
• DDoS scrubbers
• Multi‑layer encapsulation
Those can reduce MTU to 1280–1400.
Your results match that perfectly.

So is EE/Openreach “silently discarding outbound IPv6 packets that require fragmentation”?
No — that’s not what your data shows.
If EE were dropping Type‑2 messages, you would see:
• Google IPv6 failing at all large sizes
• No clean cutoff
• Random timeouts
• Broken PMTUD across all destinations
• TCP stalls everywhere
• iCloud Private Relay failing consistently
But instead:
• Google path shows perfect PMTUD
• Reuters path shows remote MTU constraints, not EE filtering
• Your EE line behaves exactly like a normal PPPoE IPv6 connection
So the upstream is not “silently discarding packets that require fragmentation.”
It’s simply enforcing its MTU, and PMTUD is working.

Why Apple devices sometimes struggle
This part of your observation is real, but the cause is different.
Apple’s iCloud Private Relay is extremely sensitive to:
• PMTUD failures
• Small MTUs
• CDNs with 1280‑byte edges
• Networks that block any ICMPv6 (not just Type‑2)
So if a remote service (like Reuters’ CDN) has a tiny MTU, Apple devices may:
• retry aggressively
• drain battery
• stall connections
• switch between IPv4/IPv6 repeatedly
This is a known behaviour of Private Relay, but it doesn’t mean EE is filtering Type‑2.

The accurate interpretation
A more precise version of your observation would be:
“Some IPv6 destinations (e.g., Reuters via Akamai) appear to sit behind very small MTUs. PMTUD works correctly on EE/Openreach, but the remote path MTU is low enough that certain services—especially Apple Private Relay—may experience stalls or retries.”
This matches your measurements exactly.
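The host-side PMTUD behaviour this interpretation relies on (lower the per-destination estimate when a Type-2 arrives, never below the IPv6 minimum of 1280, and ignore bogus increases) can be sketched as a toy state machine, loosely following RFC 8201:

```python
IPV6_MIN_MTU = 1280  # RFC 8200: every IPv6 link must carry 1280-byte packets

class PathMtuCache:
    """Toy per-destination PMTUD state, sketching RFC 8201 host behaviour."""
    def __init__(self, link_mtu: int = 1492):
        self.link_mtu = link_mtu       # starting estimate: the local link MTU
        self.cache: dict[str, int] = {}

    def path_mtu(self, dst: str) -> int:
        return self.cache.get(dst, self.link_mtu)

    def on_packet_too_big(self, dst: str, reported_mtu: int) -> None:
        # Clamp to the IPv6 minimum, and never raise the estimate here.
        new = max(reported_mtu, IPV6_MIN_MTU)
        if new < self.path_mtu(dst):
            self.cache[dst] = new

c = PathMtuCache()
c.on_packet_too_big("reuters", 1280)  # healthy path: estimate drops to 1280
c.on_packet_too_big("google", 9000)   # bogus larger value: ignored
```

On a line that filters Type-2, `on_packet_too_big` is simply never called, so the estimate stays at 1492 and oversized packets keep vanishing.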
