From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: Aaron Wood <woody77@gmail.com>
Cc: Dave Taht <dave.taht@gmail.com>, Rpm <rpm@lists.bufferbloat.net>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Rpm] receive window bug fix
Date: Sat, 03 Jun 2023 12:15:31 -0700
Message-ID: <6968461fa2076430aa8d379709488c5a@rjmcmahon.com>
In-Reply-To: <CALQXh-Ork51MPvbxJ53+RPUNWMV0_x-y2K7tCXgbXJ1Z6UYi3w@mail.gmail.com>
I think better tooling can help, and I am always interested in
suggestions on what to add to iperf 2 for better coverage.
I've thought it would be good for iperf 2 to support some sort of
graph that drives socket read/write/delay patterns rather than a
simplistic as-fast-as-possible (AFAP) pattern. That for sure stresses
things differently, even in drivers. I've seen huge delays in some 10G
drivers where UDP packets seem to get stuck in queues and the e2e
latency is driven by the socket write rates rather than by the network
delays. This is most obvious using burst patterns, where the latency
of the last packet of a burst is coupled to the first packet of the
subsequent burst. The coupling between the syscalls and network
performance is non-obvious and sometimes hard to believe.
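To make that burst coupling concrete, here is a minimal standalone
sketch (not iperf 2 code) of the kind of sender pattern I mean:
fixed-size UDP bursts on a period, each packet carrying a send-side
timestamp. The address, port, and burst parameters are placeholders,
and meaningful one-way numbers require the receiver's clock to be
synchronized with the sender's.

/*
 * Minimal sketch, not iperf 2 code: send fixed-size UDP bursts on a
 * period, embedding a send-side timestamp in each packet.  When a
 * driver queues packets, the receiver can see the last packet of one
 * burst arrive with a delay set by the write pacing rather than by
 * the network.  Address, port and burst sizes are placeholders.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port = htons(5001) };  /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);        /* placeholder host */

    char payload[1470];
    memset(payload, 0, sizeof(payload));
    const int pkts_per_burst = 10;
    const struct timespec gap = { 0, 10 * 1000 * 1000 };   /* 10 ms between bursts */

    for (int burst = 0; burst < 100; burst++) {
        for (int i = 0; i < pkts_per_burst; i++) {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            memcpy(payload, &ts, sizeof(ts));              /* send-side timestamp */
            sendto(fd, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
        nanosleep(&gap, NULL);   /* idle; receiver compares arrival time vs timestamp */
    }
    close(fd);
    return 0;
}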
We've been adding more "traffic profile" knobs for socket testing and
have incorporated many of the latency metrics. Most people don't use
these; they seem to be hard to generalize. Cloudflare seems to have
crafted specific tests after first working out the causality.
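For reference, recent iperf 2 exposes some of these knobs directly;
something along these lines (exact flag spellings vary by version, the
address is a placeholder, and --trip-times requires the client and
server clocks to be synchronized):

    iperf -c 192.0.2.1 -u -e -i 1 --isochronous=60:100m,0 --trip-times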
Bob
PS. As a side note, I'm now being asked how to generate "AI loads"
into switch fabrics, though there it probably won't be based on
per-socket syscalls but maybe on io_uring - not sure.
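Speculatively, a batched load generator might look roughly like the
liburing sketch below: queue a batch of sends and submit them with a
single syscall instead of one sendto() per packet. This is only an
assumption about the approach; the socket setup is omitted, and 'fd'
and 'buf' are placeholders for a connected UDP socket and a prepared
payload.

#include <liburing.h>
#include <stddef.h>

#define BATCH 32

int send_batch(int fd, const char *buf, size_t len)
{
    struct io_uring ring;
    if (io_uring_queue_init(BATCH, &ring, 0) < 0)
        return -1;

    for (int i = 0; i < BATCH; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_send(sqe, fd, buf, len, 0);
    }
    io_uring_submit(&ring);              /* one syscall for the whole batch */

    for (int i = 0; i < BATCH; i++) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);  /* reap each completion */
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
    return 0;
}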
> This is good work! I love reading their posts on scale like this.
>
> It’s wild to me that the Linux kernel has (apparently) never
> implemented shrinking the receive window, or handling the case of
> userspace starting a large transfer and then just not ever reading
> it… the latter is less surprising, I guess, because that’s an
> application bug that you probably would catch separately, and would be
> focused on fixing in the application layer…
>
> -Aaron
>
> On Sat, Jun 3, 2023 at 1:04 AM Dave Taht via Rpm
> <rpm@lists.bufferbloat.net> wrote:
>
>> these folk do good work, and I loved the graphs
>>
>> https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/
>>
>> --
>> Podcast:
>> https://www.linkedin.com/feed/update/urn:li:activity:7058793910227111937/
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> --
> - Sent from my iPhone.
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm