From: Sebastian Moeller <moeller0@gmx.de>
To: "Livingood, Jason" <jason_livingood@comcast.com>
Cc: Jonathan Morton <chromatix99@gmail.com>,
Frantisek Borsik <frantisek.borsik@gmail.com>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] "Very interesting L4S presentation from Nokia Bell Labs on tap for RIPE 88 in Krakow this week! "
Date: Tue, 21 May 2024 19:32:47 +0200 [thread overview]
Message-ID: <C7D64E36-1A7D-449D-9708-D8B4E8CF3B39@gmx.de> (raw)
In-Reply-To: <02D6BB37-B944-478D-947F-E08EF77A7764@comcast.com>
Hi Jason,
> On 21. May 2024, at 19:13, Livingood, Jason via Bloat <bloat@lists.bufferbloat.net> wrote:
>
> On 5/21/24, 12:19, "Bloat on behalf of Jonathan Morton via Bloat" wrote:
>
>> Notice in particular that the only *performance* comparisons they make are between L4S and no AQM at all, not between L4S and conventional AQM - even though they now mention that the latter *exists*.
>
> I cannot speak to the Nokia deck. But in our field trials we have certainly compared single queue AQM to L4S, and L4S flows perform better.
>
>> There's also no mention whatsoever of what happens when L4S traffic meets a conventional AQM.
>
> We also tested this and all is well; the performance of classic queue with AQM is fine.
[SM] I think you are thinking of a different case than Jonathan: not classic traffic in the C-queue, but L4S traffic (ECT(1)) that by chance is not hitting a bottleneck employing DualQ but a traditional FIFO...
This is the case where at least TCP Prague just folds, gives up and goes home...
Here is Pete's data showing that; the middle two bars show what happens when the bottleneck does not treat TCP Prague to the expected signalling...
[Attachment #2: chart rttfair_cc_qdisc_80ms_80ms.svg from http://sce.dnsmgr.net/results/l4s-2020-11-11T120000-final/s1-charts/ (image/svg+xml)]
That is not really fit for use over the open internet...
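
For the related case Jonathan raised (L4S traffic meeting a conventional RFC 3168 AQM rather than a plain FIFO), the mismatch is easy to see in a toy model. The Python sketch below is only an illustration, not TCP Prague's actual code: it uses the DCTCP-style update (an EWMA of the CE-marking fraction, with a per-RTT reduction of cwnd*alpha/2) that the scalable L4S response is based on, and it ignores Prague's heuristics for detecting classic bottlenecks and falling back.

# Toy per-RTT congestion-window response to CE marks (illustration only).
# A DCTCP/Prague-style sender scales its reduction by the fraction of
# CE-marked packets; an RFC 3168 sender treats a mark like a loss.

def classic_response(cwnd: float) -> float:
    """RFC 3168 sender: CE mark(s) in a round trip => halve cwnd."""
    return cwnd / 2.0

def scalable_response(cwnd: float, marked: int, acked: int,
                      alpha: float, g: float = 1.0 / 16.0):
    """DCTCP-style sender: cwnd *= (1 - alpha/2), where alpha is an EWMA
    of the per-RTT CE-marking fraction (gain g as in the DCTCP paper)."""
    frac = marked / max(acked, 1)
    alpha = (1.0 - g) * alpha + g * frac
    return cwnd * (1.0 - alpha / 2.0), alpha

# A single-queue AQM such as CoDel marks sparsely, expecting each mark
# to cost the sender half its window. One mark in 100 acked packets:
cwnd = 100.0
print("classic sender:", classic_response(cwnd))               # -> 50.0
cwnd_l4s, alpha = scalable_response(cwnd, marked=1, acked=100, alpha=0.0)
print("scalable sender: %.2f (alpha=%.4f)" % (cwnd_l4s, alpha))  # -> ~99.97

Per mark the scalable sender barely backs off, so a classic single-queue AQM marking at loss-equivalent rates cannot control it; the L4S specs therefore call for the sender to detect such bottlenecks and revert to a classic response, and how well that works in practice is the open question in this thread.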
Regards
Sebastian
>
> Jason