From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Router congestion, slow ping/ack times with kernel 5.4.60
Date: Thu, 5 Nov 2020 14:33:17 +0100 [thread overview]
Message-ID: <20201105143317.78276bbc@carbon> (raw)
In-Reply-To: <D00929D6-E0BF-4C69-AD71-4986D3FB7857@creamfinance.com>
On Thu, 05 Nov 2020 13:22:10 +0100
Thomas Rosenstein via Bloat <bloat@lists.bufferbloat.net> wrote:
> On 5 Nov 2020, at 12:21, Toke Høiland-Jørgensen wrote:
>
> > "Thomas Rosenstein" <thomas.rosenstein@creamfinance.com> writes:
> >
> >>> If so, this sounds more like a driver issue, or maybe something to
> >>> do with scheduling. Does it only happen with ICMP? You could try this
> >>> tool for a userspace UDP measurement:
> >>
> >> It happens with all packets, so the transfer to Backblaze with
> >> 40 threads goes down to ~8 MB/s instead of >60 MB/s
> >
> > Huh, right, definitely sounds like a kernel bug; or maybe the new
> > kernel is getting the hardware into a state where it bugs out when
> > there are lots of flows or something.
> >
> > You could try looking at the ethtool stats (ethtool -S) while
> > running the test and see if any error counters go up. Here's a
> > handy script to monitor changes in the counters:
> >
> > https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
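> >
> > (If you just want to eyeball it without the script, something
> > like this should work; eth0 here stands in for your actual
> > interface:
> >
> >   watch -d -n 1 'ethtool -S eth0 | grep -iE "err|drop|miss|fifo"'
> >
> > watch -d highlights any counter that changed since the last
> > sample.)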
> >
> >> I'll try what that reports!
> >>
> >>> Also, what happens if you ping a host on the internet (*through*
> >>> the router instead of *to* it)?
> >>
> >> Same issue, but twice as pronounced, as it seems all interfaces
> >> are affected.
> >> So, ping on one interface and the second has the issue.
> >> Also, all traffic across the host has the issue, on both sides,
> >> so ping times to the internet increase by 2x.
> >
> > Right, so even an unloaded interface suffers? But this is the same
> > NIC, right? So it could still be a hardware issue...
> >
> >> Yep, the default that CentOS ships. I just tested 4.12.5 and the
> >> issue does not happen there either. So I guess I can bisect it
> >> then... (really don't want to 😃)
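> >>
> >> (A sketch of how that bisect could look, assuming mainline tags
> >> near the tested versions:
> >>
> >>   git bisect start v5.4 v4.12   # <bad> <good>
> >>
> >> then build, boot and ping-test each candidate kernel, marking it
> >> with "git bisect good" or "git bisect bad".)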
> >
> > Well that at least narrows it down :)
>
> I just tested 5.9.4; it seems to also fix it partly. I have long
> stretches where it looks good, and then some increases again.
> (Stock 3.10 has them too, but not as high, rather 1-3 ms.)
>
> for example:
>
> 64 bytes from x.x.x.x: icmp_seq=10 ttl=64 time=0.169 ms
> 64 bytes from x.x.x.x: icmp_seq=11 ttl=64 time=5.53 ms
> 64 bytes from x.x.x.x: icmp_seq=12 ttl=64 time=9.44 ms
> 64 bytes from x.x.x.x: icmp_seq=13 ttl=64 time=0.167 ms
> 64 bytes from x.x.x.x: icmp_seq=14 ttl=64 time=3.88 ms
>
> and then again:
>
> 64 bytes from x.x.x.x: icmp_seq=15 ttl=64 time=0.569 ms
> 64 bytes from x.x.x.x: icmp_seq=16 ttl=64 time=0.148 ms
> 64 bytes from x.x.x.x: icmp_seq=17 ttl=64 time=0.286 ms
> 64 bytes from x.x.x.x: icmp_seq=18 ttl=64 time=0.257 ms
> 64 bytes from x.x.x.x: icmp_seq=19 ttl=64 time=0.220 ms
> 64 bytes from x.x.x.x: icmp_seq=20 ttl=64 time=0.125 ms
> 64 bytes from x.x.x.x: icmp_seq=21 ttl=64 time=0.188 ms
> 64 bytes from x.x.x.x: icmp_seq=22 ttl=64 time=0.202 ms
> 64 bytes from x.x.x.x: icmp_seq=23 ttl=64 time=0.195 ms
> 64 bytes from x.x.x.x: icmp_seq=24 ttl=64 time=0.177 ms
> 64 bytes from x.x.x.x: icmp_seq=25 ttl=64 time=0.242 ms
> 64 bytes from x.x.x.x: icmp_seq=26 ttl=64 time=0.339 ms
> 64 bytes from x.x.x.x: icmp_seq=27 ttl=64 time=0.183 ms
> 64 bytes from x.x.x.x: icmp_seq=28 ttl=64 time=0.221 ms
> 64 bytes from x.x.x.x: icmp_seq=29 ttl=64 time=0.317 ms
> 64 bytes from x.x.x.x: icmp_seq=30 ttl=64 time=0.210 ms
> 64 bytes from x.x.x.x: icmp_seq=31 ttl=64 time=0.242 ms
> 64 bytes from x.x.x.x: icmp_seq=32 ttl=64 time=0.127 ms
> 64 bytes from x.x.x.x: icmp_seq=33 ttl=64 time=0.217 ms
> 64 bytes from x.x.x.x: icmp_seq=34 ttl=64 time=0.184 ms
>
>
> For me it now looks like there was some fix between 5.4.60 and
> 5.9.4 ... can anyone pinpoint it?
I have some bpftrace tools to measure these kinds of latency spikes here:
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/latency/
The tool you want is: softirq_net_latency.bt
[2] https://github.com/xdp-project/xdp-project/blob/master/areas/latency/softirq_net_latency.bt
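
The core idea of the tool (a rough sketch here, not the script
verbatim) is to measure the delay between a NET_RX softirq being
raised and it actually starting to execute; vector 3 is NET_RX:

  sudo bpftrace -e '
    tracepoint:irq:softirq_raise /args->vec == 3/ { @ts[cpu] = nsecs; }
    tracepoint:irq:softirq_entry /args->vec == 3 && @ts[cpu]/ {
      @usecs = hist((nsecs - @ts[cpu]) / 1000);
      delete(@ts[cpu]); }'

A long tail in that histogram means something kept the softirq from
running.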
For example output, see [3]:
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1795049#c8
Based on the kernel versions, I don't expect this to be the same
latency issue as described in the bugzilla case [3] (IIRC that one
was fixed in 4.19). It can still be a similar issue, where some
userspace process reads information from the kernel
(/sys/fs/cgroup/memory/memory.stat in the BZ case) in a way that
blocks softirq from running, and results in these latency spikes.
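If you want to check whether something similar is happening on your
boxes, one way (a sketch; the memory.stat path follows the cgroup v1
layout from the BZ, adjust it to yours) is to watch who opens the
file while the spikes occur:

  sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat
    /str(args->filename) == "/sys/fs/cgroup/memory/memory.stat"/
    { printf("%s (pid %d)\n", comm, pid); }'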
Install guide for bpftrace [4]:
[4] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/bpftrace/INSTALL.org
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer