[Bloat] Router congestion, slow ping/ack times with kernel 5.4.60

Jesper Dangaard Brouer brouer at redhat.com
Fri Nov 6 06:18:40 EST 2020


On Fri, 06 Nov 2020 10:18:10 +0100
"Thomas Rosenstein" <thomas.rosenstein at creamfinance.com> wrote:

> >> I just tested 5.9.4; it seems to also fix it partly. I have long
> >> stretches where it looks good, and then some increases again.
> >> (Stock 3.10 has them too, but not as high, rather 1-3 ms.)
> >>

That you have long stretches where latency looks good is interesting
information.  My theory is that your system has a periodic userspace
process doing a kernel syscall that takes too long, blocking the
network card from processing packets.  (Note it can also be a kernel
thread.)
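
One way to test this theory (just a sketch, assuming bpftrace is
available on the box; the 10 ms threshold is an arbitrary choice) is
to log syscalls that take a long time to return:

  bpftrace -e '
    tracepoint:raw_syscalls:sys_enter { @start[tid] = nsecs; }
    tracepoint:raw_syscalls:sys_exit /@start[tid]/ {
      $ms = (nsecs - @start[tid]) / 1000000;
      if ($ms >= 10) { printf("%d ms syscall by %s (pid %d)\n", $ms, comm, pid); }
      delete(@start[tid]);
    }'

Note that syscalls which legitimately sleep (e.g. epoll_wait) will
also show up, so correlate the timestamps with your ping spikes.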

Another theory is that the NIC HW does strange things, but it is not
very likely.  E.g. delaying the packets before generating the IRQ,
which would hide the delay from my IRQ-to-softirq latency tool.
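
For reference, the idea behind such a measurement is to time the gap
from a NET_RX softirq being raised until it is serviced.  A minimal
sketch of the same idea in bpftrace (not the actual tool; vec 3 is
NET_RX):

  bpftrace -e '
    tracepoint:irq:softirq_raise /args->vec == 3/ { @raise[cpu] = nsecs; }
    tracepoint:irq:softirq_entry /args->vec == 3 && @raise[cpu]/ {
      @usecs = hist((nsecs - @raise[cpu]) / 1000);
      delete(@raise[cpu]);
    }'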

A question: What traffic control qdisc are you using on your system?

Have you looked at the obvious case: do any of your qdiscs report a
large backlog during the incidents?
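
For reference, the backlog counter is visible in the qdisc stats,
e.g. run this during an incident (the interface name is an example):

  # non-zero "backlog" means packets are queued in the qdisc
  tc -s qdisc show dev eth0

A one-shot reading can easily miss a transient spike, so wrapping it
in watch -n 0.1 (or a simple loop) during the bad periods helps.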


> >> for example:
> >>
> >> 64 bytes from x.x.x.x: icmp_seq=10 ttl=64 time=0.169 ms
> >> 64 bytes from x.x.x.x: icmp_seq=11 ttl=64 time=5.53 ms
> >> 64 bytes from x.x.x.x: icmp_seq=12 ttl=64 time=9.44 ms
> >> 64 bytes from x.x.x.x: icmp_seq=13 ttl=64 time=0.167 ms
> >> 64 bytes from x.x.x.x: icmp_seq=14 ttl=64 time=3.88 ms
> >>
> >> and then again:
> >>
> >> 64 bytes from x.x.x.x: icmp_seq=15 ttl=64 time=0.569 ms
> >> 64 bytes from x.x.x.x: icmp_seq=16 ttl=64 time=0.148 ms
> >> 64 bytes from x.x.x.x: icmp_seq=17 ttl=64 time=0.286 ms
> >> 64 bytes from x.x.x.x: icmp_seq=18 ttl=64 time=0.257 ms
> >> 64 bytes from x.x.x.x: icmp_seq=19 ttl=64 time=0.220 ms

These very low ping times tell me that you are measuring very close to
the target machine, which is good.  Here on the bufferbloat list, we
are always suspicious of the network equipment used in these kinds of
setups, as experience tells us that it can be the cause of bufferbloat
latency.

You mention some fs.com switches (in your description below the
signature), can you tell us more?


[...]
> I have a feeling that maybe not all config options were correctly moved 
> to the newer kernel.
>
> Or there's a big bug somewhere ... (though it would seem rather weird
> for me to be the first one to discover it)

I really appreciate that you report this.  This is an intermittent
issue, which often results in people not reporting it.

Even if we find this to be caused by some process running on your
system, or by a bad config, it is really important that we find the
root cause.
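
If you want to compare the two kernel configs, the kernel source tree
ships a helper for exactly this (the config paths below are just
examples):

  # run from the top of a kernel source tree
  scripts/diffconfig /path/to/config-3.10 /path/to/config-5.9 | less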

> I'll rebuild the 5.9 kernel on one of the 3.10 kernel machines and
> see if it makes a difference ...

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

On Wed, 04 Nov 2020 16:23:12 +0100
Thomas Rosenstein via Bloat <bloat at lists.bufferbloat.net> wrote:

> General Info:
> 
> Routers are connected to each other with 10G Mellanox ConnectX cards
> via 10G SFP+ DAC cables, through a 10G switch from fs.com.
> Latency is generally around 0.18 ms between all four routers.
> Throughput is 9.4 Gbit/s with 0 retransmissions when tested with iperf3.
> 2 of the 4 routers are connected upstream with a 1G connection (separate 
> port, same network card)
> All routers have the full internet routing tables, i.e. 80k entries for 
> IPv6 and 830k entries for IPv4
> Conntrack is disabled (-j NOTRACK)
> Kernel 5.4.60 (custom)
> 2x Xeon X5670 @ 2.93 GHz
> 96 GB RAM



