From: Sebastian Moeller <moeller0@gmx.de>
To: Bob McMahon <bob.mcmahon@broadcom.com>
Cc: David Lang <david@lang.hm>, Rpm <rpm@lists.bufferbloat.net>,
Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
Cake List <cake@lists.bufferbloat.net>,
Taraldsen Erik <erik.taraldsen@telenor.no>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Make-wifi-fast] [Cake] The most wonderful video ever about bufferbloat
Date: Tue, 11 Oct 2022 19:26:28 +0200 [thread overview]
Message-ID: <C9F3E5F0-490A-4E7A-AE43-4529DD18C590@gmx.de> (raw)
In-Reply-To: <CAHb6Lvpw9SV-Ybqcj+2ucQOiLhC8oR=fqU91MsuPhiwk8XCUpA@mail.gmail.com>
Hi Bob,
Sweet, thanks! I will go and set this up in my home network, but that will take a while. Also, any proposal on how to convert the output into some graphs, by any chance?
Regards
Sebastian
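(One possible answer to the graphing question above, as a hedged sketch: assuming the bounceback RTT samples have already been extracted into a CSV of `time,rtt_ms` pairs — iperf 2's exact output format is not reproduced here, so the parsing step is an assumption — a few lines of Python can summarize and optionally plot them:)

```python
import csv, io, statistics

def summarize_rtts(csv_text):
    """Parse (time, rtt_ms) rows and return latency summary statistics."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rtts = sorted(float(r[1]) for r in rows)
    n = len(rtts)
    return {
        "min": rtts[0],
        "median": statistics.median(rtts),
        "p99": rtts[min(n - 1, int(0.99 * n))],
        "max": rtts[-1],
    }

# Illustrative sample data, not real measurements.
sample = "0.0,2.1\n1.0,2.3\n2.0,35.0\n3.0,2.2\n"
stats = summarize_rtts(sample)
print(stats)

# Plotting is optional; matplotlib is only used if installed.
try:
    import matplotlib.pyplot as plt
    times = [float(r[0]) for r in csv.reader(io.StringIO(sample))]
    rtts = [float(r[1]) for r in csv.reader(io.StringIO(sample))]
    plt.plot(times, rtts)
    plt.xlabel("time (s)")
    plt.ylabel("RTT (ms)")
    plt.savefig("bounceback_rtt.png")
except ImportError:
    pass
```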
On 11 October 2022 18:58:05 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>> Saturate a link in both directions simultaneously with multiple greedy
>flows while measuring load-dependent latency changes for small isochronous
>probe flows.
>
>This functionality is released in iperf 2.1.8 via the bounceback feature,
>but unfortunately OpenWRT no longer maintains iperf 2 as a package and
>ships 2.0.13.
>CLIENT SPECIFIC OPTIONS
>
>--bounceback[=n]
>        run a TCP bounceback or rps test with an optional number of writes
>        in a burst per value of n. The default is ten writes every period,
>        and the default period is one second (Note: set the size with -l
>        or --len, which defaults to 100 bytes)
>--bounceback-congest[=up|down|bidir][,n]
>        request a concurrent working load or TCP stream(s); defaults to
>        full duplex (or bidir) unless the up or down option is provided.
>        The number of TCP streams defaults to 1 and can be changed via the
>        n value, e.g. --bounceback-congest=down,4 will use four TCP
>        streams from the server to the client as the working load. The IP
>        ToS will be BE (0x0) for working load traffic.
>--bounceback-hold n
>        request the server to insert a delay of n milliseconds between its
>        read and write (default is no delay)
>--bounceback-period[=n]
>        request the client schedule its send(s) every n seconds (default
>        is one second; use a zero value for immediate or continuous
>        back-to-back)
>--bounceback-no-quickack
>        request the server not set the TCP_QUICKACK socket option
>        (disabling TCP ACK delays) during a bounceback test (see NOTES)
>--bounceback-txdelay n
>        request the client to delay n seconds between the start of the
>        working load and the bounceback traffic (default is no delay)
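For reference, a minimal invocation of the flags listed above might look like this (host name is a placeholder; assumes iperf 2.1.8+ on both ends):

```shell
# On the server (placeholder host):
iperf -s

# On the client: bounceback probes with a 4-stream download working
# load, per the --bounceback-congest description above.
iperf -c server.example.com --bounceback --bounceback-congest=down,4
```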
>
>On Tue, Oct 11, 2022 at 12:15 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Hi Bob,
>>
>> On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com>
>> wrote:
>> >It's too big because it's oversized so it's in the size domain. It's
>> >basically Little's law's value for the number of items in a queue.
>> >
>> >*Number of items in the system = (the rate items enter and leave the
>> >system) x (the average amount of time items spend in the system)*
>> >
>> >
>Which gets driven to the standing queue size when the arrival rate
>exceeds the service rate - so the driving factor isn't the service and
>arrival rates, but *the queue size* when *any service rate is less than
>an arrival rate*.
>>
>> [SM] You could also argue it is the ratio of arrival to service rates,
>> with the queue size being a measure correlating with how long the system
>> will tolerate ratios larger than one...
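The two views above (Little's law occupancy versus the fill time implied by a given queue size and rate ratio) can be sketched numerically; the numbers below are illustrative, not from any measurement:

```python
def littles_law_occupancy(arrival_rate_pps, avg_delay_s):
    """L = lambda * W: average number of packets resident in the system."""
    return arrival_rate_pps * avg_delay_s

def time_to_fill(queue_pkts, arrival_rate_pps, service_rate_pps):
    """How long a queue of the given size tolerates arrival > service."""
    excess = arrival_rate_pps - service_rate_pps
    if excess <= 0:
        return float("inf")  # no standing backlog ever builds
    return queue_pkts / excess

# A bloated queue: 1000 packets standing at a 100 pkt/s service rate
# means every packet sees ~10 s of queueing delay.
assert littles_law_occupancy(100, 10.0) == 1000

# That same 1000-packet queue fills in 10 s when arrivals exceed
# service by 100 pkt/s -- the ratio view of the paragraph above.
print(time_to_fill(1000, 200, 100))
```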
>>
>>
>> >
>> >In other words, one can find and measure bloat regardless of the
>> >enter/leave rates (as long as the leave rate is too slow) and the value of
>> >memory units found will always be the same.
>> >
>> >Things like prioritizations to jump the line are somewhat of hacks at
>> >reducing the service time for a specialized class of packets but nobody
>> >really knows which packets should jump.
>>
>> [SM] Au contraire most everybody 'knows' it is their packets that should
>> jump ahead of the rest ;) For intermediate hop queues however that endpoint
>> perception is not really actionable due to lack of robust and reliable
>> importance identifiers on packets. Inside a 'domain' DSCPs might work if
>> treated to strict admission control, but that typically will not help
>> end2end traffic over the internet. This is BTW why I think FQ is a great
>> concept, as it mostly results in the desirable outcome of not picking
>> winners and losers (like arbitrarily starving a flow), but I digress.
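The "not picking winners and losers" property of FQ can be illustrated with a toy deficit-round-robin sketch (a simplification with made-up packet sizes, not any real qdisc): a greedy flow cannot starve a sparse flow, because every queue gets served each round.

```python
from collections import deque

def drr(flows, quantum, rounds):
    """Toy deficit round-robin: serve each flow up to its deficit per round."""
    queues = [deque(f) for f in flows]
    deficits = [0] * len(flows)
    order = []  # (flow index, packet size) in service order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                order.append((i, pkt))
    return order

# A greedy flow (back-to-back 1500-byte packets) shares the link with a
# sparse flow of small packets; both get service every round.
greedy = [1500] * 4
sparse = [100, 100]
served = drr([greedy, sparse], quantum=1500, rounds=3)
flows_served = {i for i, _ in served}
assert flows_served == {0, 1}
```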
>>
>> >Also, nobody can define what
>> >working conditions are so that's another problem with this class of tests.
>>
>> [SM] While real working conditions will be different for each link and
>> probably vary over time, it seems achievable to come up with a set of
>> pessimistic assumptions for how to model a challenging working condition against
>> which to test potential remedies, assuming that such remedies will also
>> work well under less challenging conditions, no?
>>
>>
>> >
>> >Better maybe just to shrink the queue and eliminate all unneeded queueing
>> >delays.
>>
>> [SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's (?)
>> description of queues as shock absorbers, so queue size will have a lower
>> acceptable limit assuming users want to achieve 'acceptable' throughput
>> even with existing bursty senders. (Not all applications are suited for
>> pacing so some level of burstiness seems unavoidable).
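The shock-absorber point can be made concrete with a toy discrete-time simulation (illustrative numbers, not a model of any real link): a queue must be at least as large as the burst excess it has to absorb, or it drops packets even though the average load is sustainable.

```python
def simulate(arrivals, service_per_tick, queue_limit):
    """Drain a FIFO each tick; count drops when a burst overflows it."""
    backlog, drops = 0, 0
    for a in arrivals:
        backlog += a
        if backlog > queue_limit:
            drops += backlog - queue_limit
            backlog = queue_limit
        backlog = max(0, backlog - service_per_tick)
    return drops

# A 30-packet burst followed by silence; the average rate (30 packets
# over 6 ticks) equals the service rate (5 pkts/tick), so the load is
# sustainable -- but only if the queue can absorb the burst.
burst = [30, 0, 0, 0, 0, 0]

assert simulate(burst, service_per_tick=5, queue_limit=30) == 0   # absorbed
assert simulate(burst, service_per_tick=5, queue_limit=10) == 20  # drops
```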
>>
>>
>> > Also, measure the performance per "user conditions", which is going
>> >to be different for almost every environment (and is correlated to time
>> >and space). So any engineering solution is fundamentally suboptimal.
>>
>> [SM] A matter of definition, if the requirement is to cover many user
>> conditions the optimality measure simply needs to be changed from per
>> individual condition to over many/all conditions, no?
>>
>> >Even
>> >pacing the source doesn't necessarily do the right thing because that's
>> >like waiting in the waitlist while at home vs the restaurant lobby.
>>
>> [SM] +1.
>>
>> > Few
>> >care about where messages wait (unless the pitch is AQM is the only
>> >solution that drives to a self-fulfilling prophecy - that's why the tests
>> >have to come up with artificial conditions that can't be simply defined.)
>>
>> Hrm, so the RRUL test, while not the end-all of bufferbloat/working
>> conditions tests, is not that complicated:
>> Saturate a link in both directions simultaneously with multiple greedy
>> flows while measuring load-dependent latency changes for small isochronous
>> probe flows.
>>
>> Yes, it would be nice to have additional higher-rate probe flows, also
>> bursty ones to emulate on-line games, and 'pumped' greedy flows to emulate
>> DASH 'streaming', and a horde of small greedy flows that mostly end inside
>> the initial window and slow start. But at its core existing RRUL already
>> gives a useful estimate of how a link behaves under saturating loads, all
>> the while being relatively simple.
>> The responsiveness under working conditions test seems similar in that it
>> tries to saturate a link with an increasing number of greedy flows, in a
>> sense to create a reasonably bad case that ideally rarely happens.
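The core of such tests can be caricatured in a few lines (a toy single-bottleneck model with made-up numbers, not RRUL itself): greedy flows hold a standing backlog at the bottleneck, and the probe flow's latency grows by backlog divided by link rate.

```python
def probe_latency_ms(base_rtt_ms, backlog_bytes, link_rate_bps):
    """Idle RTT plus the queueing delay a standing backlog adds."""
    queueing_delay_s = backlog_bytes * 8 / link_rate_bps
    return base_rtt_ms + queueing_delay_s * 1000

# Unloaded link: the probe sees only the 10 ms base RTT.
assert probe_latency_ms(10, 0, 50_000_000) == 10

# Greedy flows holding ~1 MB of standing queue at 50 Mbit/s add
# 1e6 * 8 / 5e7 = 160 ms -- the load-dependent latency change that
# saturating tests expose.
assert probe_latency_ms(10, 1_000_000, 50_000_000) == 170.0
```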
>>
>> Regards
>> Sebastian
>>
>>
>> >
>> >Bob
>> >
>> >On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>> >
>> >> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>> >>
>> >> > I think conflating bufferbloat with latency misses the subtle point
>> >> > in that bufferbloat is a measurement in memory units more than a
>> >> > measurement in time units. The first design flaw is a queue that is
>> >> > too big. This youtube video analogy doesn't help one understand this
>> >> > important point.
>> >>
>> >> but the queue is only too big because of the time it takes to empty
>> >> the queue, which puts us back into the time domain.
>> >>
>> >> David Lang
>> >>
>> >> > Another subtle point is that the video assumes AQM as the only
>> >> > solution and ignores others, i.e. pacing at the source(s) and/or
>> >> > faster service rates. A restaurant that lets one call ahead to put
>> >> > their name on the waitlist doesn't change the wait time. Just because
>> >> > a transport layer slowed down and hasn't congested a downstream queue
>> >> > doesn't mean the e2e latency performance will meet the gaming needs,
>> >> > as an example. The delay is still there, it's just not manifesting
>> >> > itself in a shared queue that may or may not negatively impact others
>> >> > using that shared queue.
>> >> >
>> >> > Bob
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> >> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >> >
>> >> >> Hi Erik,
>> >> >>
>> >> >>
>> >> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik
>> >> >>> <erik.taraldsen@telenor.no> wrote:
>> >> >>>
>> >> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >> >>>
>> >> >>> Nice!
>> >> >>>
>> >> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake
>> >> >>>> <cake@lists.bufferbloat.net> wrote:
>> >> >>>>
>> >> >>>> It took about 3 hours after the video was released before we got
>> >> >>>> the first request to have SQM on the CPEs we manage as an ISP.
>> >> >>>> Finally getting some customer response on the issue.
>> >> >>>
>> >> >>>    [SM] Will you be able to bump these requests to higher-ups
>> >> >>> and at least change some perception of customer demand for tighter
>> >> >>> latency performance?
>> >> >>>
>> >> >>> That would be the hope.
>> >> >>
>> >> >> [SM] Excellent, hope this plays out as we wish for.
>> >> >>
>> >> >>
>> >> >>> We actually have fq_codel implemented on the two latest
>> >> >>> generations of DSL routers. We use the sync rate as input to set
>> >> >>> the shaper rate. Works quite well.
>> >> >>
>> >> >>    [SM] Cool, if I might ask: what fraction of the sync rate are
>> >> >> you setting the traffic shaper to, and are you doing fine-grained
>> >> >> overhead accounting (or simply folding that into a grand
>> >> >> "de-rating" factor)?
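As an illustration of the two approaches mentioned in the question above (hypothetical numbers; real deployments and overhead figures differ): a single de-rating factor on the sync rate versus explicit per-packet overhead accounting.

```python
def shaper_rate_derated(sync_rate_bps, derating=0.95):
    """Coarse approach: one grand de-rating factor on the sync rate."""
    return sync_rate_bps * derating

def effective_goodput(shaper_rate_bps, payload_bytes, overhead_bytes):
    """Fine-grained approach: account per-packet overhead explicitly.

    The 30-byte figure used below is only a placeholder; the correct
    per-packet overhead depends on the actual link layer (ATM, PTM,
    PPPoE, VLAN tags, etc.)."""
    wire_bytes = payload_bytes + overhead_bytes
    return shaper_rate_bps * payload_bytes / wire_bytes

rate = shaper_rate_derated(100_000_000)  # 95 Mbit/s shaper from 100 Mbit/s sync
assert rate == 95_000_000.0

# Per-packet overhead hurts small packets far more than MTU-sized ones,
# which is why a single de-rating factor cannot be right for all traffic.
small = effective_goodput(rate, 100, 30)
large = effective_goodput(rate, 1500, 30)
assert small < large
```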
>> >> >>
>> >> >>
>> >> >>> There is also a bit of traction around speedtest.net's inclusion
>> >> >>> of latency under load internally.
>> >> >>
>> >> >>    [SM] Yes, although IIUC they are reporting the interquartile
>> >> >> mean for the two loaded latency estimates, which is pretty
>> >> >> conservative and only really "triggers" for massive, consistently
>> >> >> elevated latency; so I expect this to be great for detecting really
>> >> >> bad cases, but I fear it is too conservative and will make a number
>> >> >> of problematic links look OK. But hey, even that is leaps and bounds
>> >> >> better than the old idle-only latency report.
>> >> >>
>> >> >>
>> >> >>> My hope is that some publication in Norway will pick up on that
>> >> >>> score and do a test and get some mainstream publicity with the
>> >> >>> results.
>> >> >>
>> >> >>    [SM] Inside the EU the challenge is to get national regulators
>> >> >> and the BEREC to start bothering about latency-under-load at all;
>> >> >> "some mainstream publicity" would probably help here as well.
>> >> >>
>> >> >> Regards
>> >> >> Sebastian
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> -Erik
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>
>> >> >> _______________________________________________
>> >> >> Make-wifi-fast mailing list
>> >> >> Make-wifi-fast@lists.bufferbloat.net
>> >> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >> >
>> >> >_______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.