Let's make wifi fast again!
From: Sebastian Moeller <moeller0@gmx.de>
To: Bob McMahon <bob.mcmahon@broadcom.com>, David Lang <david@lang.hm>
Cc: Rpm <rpm@lists.bufferbloat.net>,
	Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
	Cake List <cake@lists.bufferbloat.net>,
	Taraldsen Erik <erik.taraldsen@telenor.no>,
	bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
Date: Tue, 11 Oct 2022 09:15:20 +0200	[thread overview]
Message-ID: <ACE51BEA-99AF-4CC9-B2A7-695C8F8B5946@gmx.de> (raw)
In-Reply-To: <CAHb6Lvqbj0MDhWvLaEk8Hbr_vZwDs1NdCj1X9Xvxp+x+Mbs0Vw@mail.gmail.com>

Hi Bob,

On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>It's too big because it's oversized so it's in the size domain. It's
>basically Little's law's value for the number of items in a queue.
>
>*Number of items in the system = (the rate items enter and leave the
>system) x (the average amount of time items spend in the system)*
>
>
>Which gets driven to the standing queue size when the arrival rate
>exceeds the service rate - so the driving factor isn't the service and
>arrival rates, but *the queue size *when *any service rate is less than an
>arrival rate.*

[SM] You could also argue it is the ratio of arrival to service rates, with the queue size then being a measure of how long the system will tolerate a ratio larger than one...
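[SM] To make the Little's law point above concrete, here is a toy Python sketch; the service rate and buffer size are made-up illustrative numbers, not measurements:

```python
# Little's law: L = lambda x W (items in system = rate x sojourn time).
# Once arrivals exceed the service rate, the queue fills to its size
# limit, and the standing delay is then set by the buffer size, not by
# the arrival/service rates themselves.
service_rate = 1000.0    # packets/s the bottleneck drains (illustrative)
buffer_packets = 500     # an oversized FIFO buffer (illustrative)

# With a standing queue, departures happen at the service rate, so
# rearranging Little's law gives the queueing delay: W = L / lambda.
standing_delay = buffer_packets / service_rate
print(standing_delay)    # 0.5 seconds of queueing delay once the buffer is full
```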


>
>In other words, one can find and measure bloat regardless of the
>enter/leave rates (as long as the leave rate is too slow) and the value of
>memory units found will always be the same.
>
>Things like prioritizations to jump the line are somewhat of hacks at
>reducing the service time for a specialized class of packets but nobody
>really knows which packets should jump. 

[SM] Au contraire, most everybody 'knows' it is their packets that should jump ahead of the rest ;) For intermediate hop queues, however, that endpoint perception is not really actionable due to the lack of robust and reliable importance identifiers on packets. Inside a 'domain' DSCPs might work if subjected to strict admission control, but that typically will not help end-to-end traffic over the internet. This is BTW why I think FQ is a great concept, as it mostly results in the desirable outcome of not picking winners and losers (like arbitrarily starving a flow), but I digress.
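[SM] The "not picking winners and losers" property of FQ can be illustrated with a toy deficit-round-robin scheduler in Python (a simplification of what e.g. fq_codel does; flow names and packet sizes are illustrative):

```python
from collections import deque

# A toy deficit-round-robin (DRR) scheduler: each flow earns a quantum of
# bytes per round, so a heavy flow cannot starve a light one. This is the
# core FQ idea; real implementations (e.g. fq_codel) add hashing and AQM.
def drr_schedule(flows, quantum=1500, rounds=10):
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    deficits = {name: 0 for name in flows}
    order = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficits[name] = 0        # empty flows do not bank credit
                continue
            deficits[name] += quantum
            while q and q[0] <= deficits[name]:
                deficits[name] -= q[0]    # spend credit on the head packet
                order.append((name, q.popleft()))
    return order

# A greedy flow with many full-size packets vs. a sparse flow of small probes:
sent = drr_schedule({"greedy": [1500] * 20, "probe": [100] * 3})
# The probe packets are interleaved in the first round instead of waiting
# behind the greedy flow's entire backlog.
print([name for name, _ in sent[:6]])
```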

>Also, nobody can define what
>working conditions are so that's another problem with this class of tests.

[SM] While real working conditions will differ for each link and probably vary over time, it seems achievable to come up with a set of pessimistic assumptions for modeling a challenging working condition against which to test potential remedies, on the assumption that such remedies will also work well under less challenging conditions, no?


>
>Better maybe just to shrink the queue and eliminate all unneeded queueing
>delays. 

[SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's(?) description of queues as shock absorbers: queue size will have a lower acceptable limit, assuming users want to achieve 'acceptable' throughput even with existing bursty senders. (Not all applications are suited for pacing, so some level of burstiness seems unavoidable.)
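[SM] A toy Python simulation of the shock-absorber point, with made-up numbers: a bursty sender needs *some* queue or it loses throughput to drops; the buffer just should not be oversized.

```python
# A burst of packets hits a link draining 1 packet per tick. With no real
# buffer most of the burst is dropped; a modest buffer absorbs the whole
# burst at the cost of a bounded wait. All numbers are illustrative.
def run(burst, buffer_size, ticks=20):
    queue, delivered, dropped = 0, 0, 0
    for t in range(ticks):
        arriving = burst if t == 0 else 0     # one burst arrives at t=0
        for _ in range(arriving):
            if queue < buffer_size:
                queue += 1                    # absorbed by the buffer
            else:
                dropped += 1                  # buffer full: packet lost
        if queue:                             # drain 1 packet per tick
            queue -= 1
            delivered += 1
    return delivered, dropped

print(run(burst=10, buffer_size=1))   # tiny buffer: most of the burst lost
print(run(burst=10, buffer_size=10))  # burst-sized buffer: all delivered
```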


> Also, measure the performance per "user conditions" which is going
>to be different for almost every environment (and is correlated to time and
>space.) So any engineering solution is fundamentally suboptimal. 

[SM] A matter of definition: if the requirement is to cover many user conditions, the optimality measure simply needs to be changed from per-individual-condition to over many/all conditions, no?

>Even
>pacing the source doesn't necessarily do the right thing because that's
>like waiting in the waitlist while at home vs the restaurant lobby. 

[SM] +1.

> Few
>care about where messages wait (unless the pitch is AQM is the only
>solution that drives to a self-fulfilling prophecy - that's why the tests
>have to come up with artificial conditions that can't be simply defined.)

Hrm, so the RRUL test, while not the end-all of bufferbloat/working-conditions tests, is not that complicated:
Saturate a link in both directions simultaneously with multiple greedy flows while measuring load-dependent latency changes for small isochronous probe flows.

Yes, it would be nice to have additional higher-rate probe flows, including bursty ones to emulate online games, 'pumped' greedy flows to emulate DASH 'streaming', and a horde of small greedy flows that mostly end inside the initial window and slow start. But at its core the existing RRUL already gives a useful estimate of how a link behaves under saturating loads, all the while being relatively simple.
The responsiveness-under-working-conditions test seems similar in that it tries to saturate a link with an increasing number of greedy flows, in a sense creating a reasonably bad case that ideally rarely happens.
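[SM] The core of such a measurement can be sketched as a toy Python model: small isochronous probes share a FIFO bottleneck with greedy flows, and we compare probe sojourn times idle vs. saturated (one packet serviced per tick; backlog sizes are illustrative, not measurements):

```python
# Toy model of an RRUL-style latency-under-load measurement: greedy flows
# keep a standing backlog in a FIFO bottleneck, while an isochronous probe
# flow measures how long its packets wait. One packet is serviced per tick.
def probe_delay(greedy_backlog, ticks=200, probe_interval=10):
    queue, delays = [], []
    for t in range(ticks):
        while len(queue) < greedy_backlog:      # greedy flows refill the queue
            queue.append(("greedy", t))
        if t % probe_interval == 0:             # isochronous latency probe
            queue.append(("probe", t))
        if queue:                               # service one packet per tick
            kind, enqueued_at = queue.pop(0)
            if kind == "probe":
                delays.append(t - enqueued_at)  # probe sojourn time
    return sum(delays) / len(delays)

print(probe_delay(greedy_backlog=0))    # idle link: probes barely wait
print(probe_delay(greedy_backlog=50))   # saturated link: ~50-tick standing delay
```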

Regards
      Sebastian


>
>Bob
>
>On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>
>> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>>
>> > I think conflating bufferbloat with latency misses the subtle point in
>> that
>> > bufferbloat is a measurement in memory units more than a measurement in
>> > time units. The first design flaw is a queue that is too big. This
>> youtube
>> > video analogy doesn't help one understand this important point.
>>
>> but the queue is only too big because of the time it takes to empty the
>> queue,
>> which puts us back into the time domain.
>>
>> David Lang
>>
>> > Another subtle point is that the video assumes AQM as the only solution
>> and
>> > ignores others, i.e. pacing at the source(s) and/or faster service
>> rates. A
>> > restaurant that let's one call ahead to put their name on the waitlist
>> > doesn't change the wait time. Just because a transport layer slowed down
>> > and hasn't congested a downstream queue doesn't mean the e2e latency
>> > performance will meet the gaming needs as an example. The delay is still
>> > there it's just not manifesting itself in a shared queue that may or may
>> > not negatively impact others using that shared queue.
>> >
>> > Bob
>> >
>> >
>> >
>> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >
>> >> Hi Erik,
>> >>
>> >>
>> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
>> >> wrote:
>> >>>
>> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >>>
>> >>>    Nice!
>> >>>
>> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> >> cake@lists.bufferbloat.net> wrote:
>> >>>>
>> >>>> It took about 3 hours from the video was release before we got the
>> >> first request to have SQM on the CPE's  we manage as a ISP.  Finally
>> >> getting some customer response on the issue.
>> >>>
>> >>>       [SM] Will you be able to bump these requests to higher-ups and at
>> >> least change some perception of customer demand for tighter latency
>> >> performance?
>> >>>
>> >>> That would be the hope.
>> >>
>> >>         [SM} Excellent, hope this plays out as we wish for.
>> >>
>> >>
>> >>>  We actually have fq_codel implemented on the two latest generations of
>> >> DSL routers.  Use sync rate as input to set the rate.  Works quite well.
>> >>
>> >>         [SM] Cool, if I might ask what fraction of the sync are you
>> >> setting the traffic shaper for and are you doing fine grained overhead
>> >> accounting (or simply fold that into a grand "de-rating"-factor)?
>> >>
>> >>
>> >>> There is also a bit of traction around speedtest.net's inclusion of
>> >> latency under load internally.
>> >>
>> >>         [SM] Yes, although IIUC they are reporting the interquartile
>> mean
>> >> for the two loaded latency estimates, which is pretty conservative and
>> only
>> >> really "triggers" for massive consistently elevated latency; so I expect
>> >> this to be great for detecting really bad cases, but I fear it is too
>> >> conservative and will make a number of problematic links look OK. But
>> hey,
>> >> even that is leaps and bounds better than the old only idle latency
>> report.
>> >>
>> >>
>> >>> My hope is that some publication in Norway will pick up on that score
>> >> and do a test and get some mainstream publicity with the results.
>> >>
>> >>         [SM] Inside the EU the challenge is to get national regulators
>> and
>> >> the BEREC to start bothering about latency-under-load at all, "some
>> >> mainstream publicity" would probably help here as well.
>> >>
>> >> Regards
>> >>         Sebastian
>> >>
>> >>
>> >>>
>> >>> -Erik
>> >>>
>> >>>
>> >>>
>> >>
>> >> _______________________________________________
>> >> Make-wifi-fast mailing list
>> >> Make-wifi-fast@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >
>> >_______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
