* [Make-wifi-fast] The most wonderful video ever about bufferbloat
@ 2022-10-09 13:14 Dave Taht
2022-10-09 13:23 ` [Make-wifi-fast] [Bloat] " Nathan Owens
` (2 more replies)
0 siblings, 3 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-09 13:14 UTC (permalink / raw)
To: Rpm, bloat, Make-Wifi-fast, Cake List
This was so massively well done, I cried. Does anyone know how to get
in touch with the iFixit folk?
https://www.youtube.com/watch?v=UICh3ScfNWI
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-09 13:14 [Make-wifi-fast] The most wonderful video ever about bufferbloat Dave Taht
@ 2022-10-09 13:23 ` Nathan Owens
2022-10-10 5:52 ` Taraldsen Erik
2022-10-18 0:02 ` [Make-wifi-fast] " Stuart Cheshire
2 siblings, 0 replies; 70+ messages in thread
From: Nathan Owens @ 2022-10-09 13:23 UTC (permalink / raw)
To: Dave Taht; +Cc: Rpm, bloat, Make-Wifi-fast, Cake List
I think Tech Quickie is part of Linus Tech Tips (Linus Media Group), not
iFixit, FWIW.
On Sun, Oct 9, 2022 at 6:15 AM Dave Taht via Bloat <
bloat@lists.bufferbloat.net> wrote:
> This was so massively well done, I cried. Does anyone know how to get
> in touch with the ifxit folk?
>
> https://www.youtube.com/watch?v=UICh3ScfNWI
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-09 13:14 [Make-wifi-fast] The most wonderful video ever about bufferbloat Dave Taht
2022-10-09 13:23 ` [Make-wifi-fast] [Bloat] " Nathan Owens
@ 2022-10-10 5:52 ` Taraldsen Erik
2022-10-10 9:09 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
2022-10-18 0:02 ` [Make-wifi-fast] " Stuart Cheshire
2 siblings, 1 reply; 70+ messages in thread
From: Taraldsen Erik @ 2022-10-10 5:52 UTC (permalink / raw)
To: Dave Taht, Rpm, bloat, Make-Wifi-fast, Cake List
It took about 3 hours after the video was released before we got the first request to have SQM on the CPEs we manage as an ISP. We are finally getting some customer response on the issue.
On 09/10/2022, 15:15, "Dave Taht via Bloat" <bloat@lists.bufferbloat.net> wrote:
This was so massively well done, I cried. Does anyone know how to get
in touch with the ifxit folk?
https://www.youtube.com/watch?v=UICh3ScfNWI
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 5:52 ` Taraldsen Erik
@ 2022-10-10 9:09 ` Sebastian Moeller
2022-10-10 9:33 ` Taraldsen Erik
0 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-10 9:09 UTC (permalink / raw)
To: Taraldsen Erik; +Cc: Dave Täht, Rpm, bloat, Make-Wifi-fast, Cake List
Nice!
> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <cake@lists.bufferbloat.net> wrote:
>
> It took about 3 hours from the video was release before we got the first request to have SQM on the CPE's we manage as a ISP. Finally getting some customer response on the issue.
[SM] Will you be able to bump these requests to higher-ups and at least change some perception of customer demand for tighter latency performance?
Regards
Sebastian
>
>
>
> On 09/10/2022, 15:15, "Dave Taht via Bloat" <bloat@lists.bufferbloat.net> wrote:
>
> This was so massively well done, I cried. Does anyone know how to get
> in touch with the ifxit folk?
>
> https://www.youtube.com/watch?v=UICh3ScfNWI
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 9:09 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
@ 2022-10-10 9:33 ` Taraldsen Erik
2022-10-10 9:40 ` Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Taraldsen Erik @ 2022-10-10 9:33 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Dave Täht, Rpm, bloat, Make-Wifi-fast, Cake List
On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
Nice!
> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <cake@lists.bufferbloat.net> wrote:
>
> It took about 3 hours from the video was release before we got the first request to have SQM on the CPE's we manage as a ISP. Finally getting some customer response on the issue.
[SM] Will you be able to bump these requests to higher-ups and at least change some perception of customer demand for tighter latency performance?
That would be the hope. We actually have fq_codel implemented on the two latest generations of DSL routers, using the sync rate as input to set the shaper rate. It works quite well.
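For illustration, a minimal Python sketch of that idea (the 0.95 de-rating factor and the example sync rate are assumptions, not the values actually deployed):

    # Sketch: derive an egress shaper rate from the reported DSL sync rate.
    # The 0.95 factor and the 36 Mbit/s sync rate are illustrative assumptions.

    def shaper_rate_kbps(sync_rate_kbps: int, derate: float = 0.95) -> int:
        """Rate to program into the fq_codel/shaper stage on the CPE."""
        return int(sync_rate_kbps * derate)

    if __name__ == "__main__":
        sync_up_kbps = 36_000  # e.g. a VDSL2 line that synced at 36 Mbit/s upstream
        print("program upstream shaper to", shaper_rate_kbps(sync_up_kbps), "kbit/s")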
There is also a bit of internal traction around speedtest.net's inclusion of latency under load. My hope is that some publication in Norway will pick up on that score, run a test, and get some mainstream publicity with the results.
-Erik
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 9:33 ` Taraldsen Erik
@ 2022-10-10 9:40 ` Sebastian Moeller
2022-10-10 11:46 ` [Make-wifi-fast] [Bloat] [Cake] " Taraldsen Erik
2022-10-10 16:45 ` [Make-wifi-fast] [Cake] [Bloat] " Bob McMahon
0 siblings, 2 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-10 9:40 UTC (permalink / raw)
To: Taraldsen Erik; +Cc: Dave Täht, Rpm, bloat, Make-Wifi-fast, Cake List
Hi Erik,
> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no> wrote:
>
> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>
> Nice!
>
>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <cake@lists.bufferbloat.net> wrote:
>>
>> It took about 3 hours from the video was release before we got the first request to have SQM on the CPE's we manage as a ISP. Finally getting some customer response on the issue.
>
> [SM] Will you be able to bump these requests to higher-ups and at least change some perception of customer demand for tighter latency performance?
>
> That would be the hope.
[SM] Excellent, hope this plays out as we wish for.
> We actually have fq_codel implemented on the two latest generations of DSL routers. Use sync rate as input to set the rate. Works quite well.
[SM] Cool, if I might ask: what fraction of the sync rate are you setting the traffic shaper to, and are you doing fine-grained overhead accounting (or simply folding that into one grand "de-rating" factor)?
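To make the distinction concrete, a small Python sketch (the 34 bytes/packet overhead, the PTM 64/65 factor and the 10% de-rate are example assumptions, not recommendations): fine-grained accounting makes the achievable IP rate depend on packet size, while a single de-rating factor does not.

    # Sketch: per-packet overhead accounting vs. one flat "de-rating" factor.
    # All numbers below are examples only.

    PTM_FACTOR = 65 / 64          # VDSL2/PTM 64/65 encapsulation expansion
    PER_PACKET_OVERHEAD = 34      # assumed link-layer overhead in bytes per packet

    def wire_bits(ip_packet_bytes: int) -> float:
        """Bits consumed on the DSL link by one IP packet."""
        return (ip_packet_bytes + PER_PACKET_OVERHEAD) * 8 * PTM_FACTOR

    def goodput_fine_grained(sync_kbps: float, ip_packet_bytes: int) -> float:
        """Achievable IP-level rate if the shaper accounts per packet."""
        return sync_kbps * (ip_packet_bytes * 8) / wire_bits(ip_packet_bytes)

    def goodput_derated(sync_kbps: float, derate: float = 0.90) -> float:
        """Achievable rate if everything is folded into one de-rating factor."""
        return sync_kbps * derate

    if __name__ == "__main__":
        sync = 36_000.0  # kbit/s
        for size in (1500, 150):  # large vs. small packets
            print(size, "B packets:", round(goodput_fine_grained(sync, size)), "kbit/s")
        print("flat 10% de-rate:", round(goodput_derated(sync)), "kbit/s")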
> There is also a bit of traction around speedtest.net's inclusion of latency under load internally.
[SM] Yes, although IIUC they are reporting the interquartile mean for the two loaded-latency estimates, which is pretty conservative and only really "triggers" for massive, consistently elevated latency; so I expect this to be great for detecting really bad cases, but I fear it is too conservative and will make a number of problematic links look OK. But hey, even that is leaps and bounds better than the old idle-latency-only reports.
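For concreteness, a small Python sketch of an interquartile mean over loaded-latency samples (assuming the metric is simply the mean of the middle 50% of RTT samples; whether speedtest.net computes it exactly this way is an assumption here). Note how a handful of large spikes barely move it:

    # Sketch: interquartile mean (mean of the middle ~50%) of latency samples.

    def interquartile_mean(samples_ms):
        s = sorted(samples_ms)
        cut = len(s) // 4                 # drop the lowest and highest quartile
        middle = s[cut:len(s) - cut]
        return sum(middle) / len(middle)

    if __name__ == "__main__":
        # ~20 ms baseline with a few large latency spikes under load:
        samples = [20, 21, 22, 20, 23, 25, 24, 22, 300, 450, 21, 20]
        print("IQM:", round(interquartile_mean(samples), 1), "ms")
        print("max:", max(samples), "ms")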
> My hope is that some publication in Norway will pick up on that score and do a test and get some mainstream publicity with the results.
[SM] Inside the EU the challenge is to get national regulators and BEREC to start bothering about latency under load at all; "some mainstream publicity" would probably help here as well.
Regards
Sebastian
>
> -Erik
>
>
>
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-10 9:40 ` Sebastian Moeller
@ 2022-10-10 11:46 ` Taraldsen Erik
2022-10-10 20:23 ` Sebastian Moeller
2022-10-10 16:45 ` [Make-wifi-fast] [Cake] [Bloat] " Bob McMahon
1 sibling, 1 reply; 70+ messages in thread
From: Taraldsen Erik @ 2022-10-10 11:46 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Rpm, Make-Wifi-fast, Cake List, bloat
On 10/10/2022, 11:41, "Bloat on behalf of Sebastian Moeller via Bloat" <bloat-bounces@lists.bufferbloat.net on behalf of bloat@lists.bufferbloat.net> wrote:
[SM] Cool, if I might ask what fraction of the sync are you setting the traffic shaper for and are you doing fine grained overhead accounting (or simply fold that into a grand "de-rating"-factor)?
We ended up just using a fraction. I can't remember the exact fraction, but we were not conservative. It was hard to push through this change, so leaving any bandwidth on the table was sacrilegious to a lot of people.
-Erik
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 9:40 ` Sebastian Moeller
2022-10-10 11:46 ` [Make-wifi-fast] [Bloat] [Cake] " Taraldsen Erik
@ 2022-10-10 16:45 ` Bob McMahon
2022-10-10 22:57 ` [Make-wifi-fast] [Bloat] [Cake] " David Lang
2022-10-11 6:28 ` [Make-wifi-fast] [Cake] [Bloat] " Sebastian Moeller
1 sibling, 2 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-10 16:45 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Taraldsen Erik, Rpm, Make-Wifi-fast, Cake List, bloat
I think conflating bufferbloat with latency misses the subtle point that
bufferbloat is a measurement in memory units more than a measurement in
time units. The first design flaw is a queue that is too big. This YouTube
video's analogy doesn't help one understand this important point.
Another subtle point is that the video assumes AQM is the only solution and
ignores others, i.e. pacing at the source(s) and/or faster service rates. A
restaurant that lets one call ahead to put their name on the waitlist
doesn't change the wait time. Just because a transport layer slowed down
and hasn't congested a downstream queue doesn't mean the e2e latency
performance will meet, for example, gaming needs. The delay is still
there; it's just not manifesting itself in a shared queue that may or may
not negatively impact others using that shared queue.
Bob
On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
> Hi Erik,
>
>
> > On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
> wrote:
> >
> > On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
> >
> > Nice!
> >
> >> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
> cake@lists.bufferbloat.net> wrote:
> >>
> >> It took about 3 hours from the video was release before we got the
> first request to have SQM on the CPE's we manage as a ISP. Finally
> getting some customer response on the issue.
> >
> > [SM] Will you be able to bump these requests to higher-ups and at
> least change some perception of customer demand for tighter latency
> performance?
> >
> > That would be the hope.
>
> [SM} Excellent, hope this plays out as we wish for.
>
>
> > We actually have fq_codel implemented on the two latest generations of
> DSL routers. Use sync rate as input to set the rate. Works quite well.
>
> [SM] Cool, if I might ask what fraction of the sync are you
> setting the traffic shaper for and are you doing fine grained overhead
> accounting (or simply fold that into a grand "de-rating"-factor)?
>
>
> > There is also a bit of traction around speedtest.net's inclusion of
> latency under load internally.
>
> [SM] Yes, although IIUC they are reporting the interquartile mean
> for the two loaded latency estimates, which is pretty conservative and only
> really "triggers" for massive consistently elevated latency; so I expect
> this to be great for detecting really bad cases, but I fear it is too
> conservative and will make a number of problematic links look OK. But hey,
> even that is leaps and bounds better than the old only idle latency report.
>
>
> > My hope is that some publication in Norway will pick up on that score
> and do a test and get some mainstream publicity with the results.
>
> [SM] Inside the EU the challenge is to get national regulators and
> the BEREC to start bothering about latency-under-load at all, "some
> mainstream publicity" would probably help here as well.
>
> Regards
> Sebastian
>
>
> >
> > -Erik
> >
> >
> >
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-10 11:46 ` [Make-wifi-fast] [Bloat] [Cake] " Taraldsen Erik
@ 2022-10-10 20:23 ` Sebastian Moeller
2022-10-11 6:08 ` [Make-wifi-fast] [Cake] [Bloat] " Taraldsen Erik
0 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-10 20:23 UTC (permalink / raw)
To: Taraldsen Erik; +Cc: Rpm, Make-Wifi-fast, Cake List, bloat
Hi Erik,
> On Oct 10, 2022, at 13:46, Taraldsen Erik <erik.taraldsen@telenor.no> wrote:
>
>
>
> On 10/10/2022, 11:41, "Bloat on behalf of Sebastian Moeller via Bloat" <bloat-bounces@lists.bufferbloat.net on behalf of bloat@lists.bufferbloat.net> wrote:
>
> [SM] Cool, if I might ask what fraction of the sync are you setting the traffic shaper for and are you doing fine grained overhead accounting (or simply fold that into a grand "de-rating"-factor)?
>
> We ended up just using a fraction.
[SM] Fair enough, for ATM/AAL5 that is challenging but for VDSL2/PTM that seems workable...
> Can't remember the exact fraction, but we were not conservative. It was hard to push through this change so leaving any bw on the table was sacrilegious to a lot of people.
[SM] Tricky... e.g. vectoring-enabled CPE can be instructed by the DSLAM to send error samples in-band with the data, but that traffic is never seen by our shapers, so to account for that we need to set a fraction that allows for that (more or less) periodic traffic. I guess one can reach a point of "good enough" even when ignoring such eventualities, especially if having to convince throughput hot-rodders. Always interesting to hear experience from the real world, thanks!
Regards
Sebastian
>
>
>
> -Erik
>
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-10 16:45 ` [Make-wifi-fast] [Cake] [Bloat] " Bob McMahon
@ 2022-10-10 22:57 ` David Lang
2022-10-11 0:05 ` Bob McMahon
2022-10-11 6:28 ` [Make-wifi-fast] [Cake] [Bloat] " Sebastian Moeller
1 sibling, 1 reply; 70+ messages in thread
From: David Lang @ 2022-10-10 22:57 UTC (permalink / raw)
To: Bob McMahon
Cc: Sebastian Moeller, Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
> I think conflating bufferbloat with latency misses the subtle point in that
> bufferbloat is a measurement in memory units more than a measurement in
> time units. The first design flaw is a queue that is too big. This youtube
> video analogy doesn't help one understand this important point.
but the queue is only too big because of the time it takes to empty the queue,
which puts us back into the time domain.
David Lang
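A quick Python sketch of that connection (buffer size and link rates are made-up numbers): the same backlog in bytes produces very different queueing delays depending on the drain rate.

    # Sketch: identical buffer occupancy in bytes, very different delay in time.

    def queue_delay_ms(backlog_bytes: int, link_rate_bps: float) -> float:
        return backlog_bytes * 8 / link_rate_bps * 1000

    if __name__ == "__main__":
        backlog = 1_000_000  # 1 MB of standing queue
        for rate in (10e6, 100e6, 1e9):  # 10 Mbit/s, 100 Mbit/s, 1 Gbit/s
            print(f"{rate/1e6:6.0f} Mbit/s -> {queue_delay_ms(backlog, rate):7.1f} ms")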
> Another subtle point is that the video assumes AQM as the only solution and
> ignores others, i.e. pacing at the source(s) and/or faster service rates. A
> restaurant that let's one call ahead to put their name on the waitlist
> doesn't change the wait time. Just because a transport layer slowed down
> and hasn't congested a downstream queue doesn't mean the e2e latency
> performance will meet the gaming needs as an example. The delay is still
> there it's just not manifesting itself in a shared queue that may or may
> not negatively impact others using that shared queue.
>
> Bob
>
>
>
> On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
>
>> Hi Erik,
>>
>>
>>> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
>> wrote:
>>>
>>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>>>
>>> Nice!
>>>
>>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> cake@lists.bufferbloat.net> wrote:
>>>>
>>>> It took about 3 hours from the video was release before we got the
>> first request to have SQM on the CPE's we manage as a ISP. Finally
>> getting some customer response on the issue.
>>>
>>> [SM] Will you be able to bump these requests to higher-ups and at
>> least change some perception of customer demand for tighter latency
>> performance?
>>>
>>> That would be the hope.
>>
>> [SM} Excellent, hope this plays out as we wish for.
>>
>>
>>> We actually have fq_codel implemented on the two latest generations of
>> DSL routers. Use sync rate as input to set the rate. Works quite well.
>>
>> [SM] Cool, if I might ask what fraction of the sync are you
>> setting the traffic shaper for and are you doing fine grained overhead
>> accounting (or simply fold that into a grand "de-rating"-factor)?
>>
>>
>>> There is also a bit of traction around speedtest.net's inclusion of
>> latency under load internally.
>>
>> [SM] Yes, although IIUC they are reporting the interquartile mean
>> for the two loaded latency estimates, which is pretty conservative and only
>> really "triggers" for massive consistently elevated latency; so I expect
>> this to be great for detecting really bad cases, but I fear it is too
>> conservative and will make a number of problematic links look OK. But hey,
>> even that is leaps and bounds better than the old only idle latency report.
>>
>>
>>> My hope is that some publication in Norway will pick up on that score
>> and do a test and get some mainstream publicity with the results.
>>
>> [SM] Inside the EU the challenge is to get national regulators and
>> the BEREC to start bothering about latency-under-load at all, "some
>> mainstream publicity" would probably help here as well.
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>> -Erik
>>>
>>>
>>>
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-10 22:57 ` [Make-wifi-fast] [Bloat] [Cake] " David Lang
@ 2022-10-11 0:05 ` Bob McMahon
2022-10-11 7:15 ` Sebastian Moeller
2022-10-11 13:57 ` [Make-wifi-fast] [Rpm] " Rich Brown
0 siblings, 2 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-11 0:05 UTC (permalink / raw)
To: David Lang
Cc: Sebastian Moeller, Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
It's too big because it's oversized, so it's in the size domain. It's
basically Little's law's value for the number of items in a queue:

Number of items in the system = (the rate items enter and leave the
system) x (the average amount of time items spend in the system)

This gets driven to the standing queue size when the arrival rate
exceeds the service rate - so the driving factor isn't the service and
arrival rates, but the queue size, when any service rate is less than an
arrival rate.
In other words, one can find and measure bloat regardless of the
enter/leave rates (as long as the leave rate is too slow) and the value of
memory units found will always be the same.
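A worked example of that relation (Little's law), as a Python sketch with illustrative numbers:

    # Sketch of Little's law: items in system = arrival rate x time in system.

    def items_in_system(rate_pps: float, avg_delay_s: float) -> float:
        return rate_pps * avg_delay_s

    def avg_delay_s(items: float, rate_pps: float) -> float:
        return items / rate_pps

    if __name__ == "__main__":
        # A bottleneck moving ~8,333 packets/s (100 Mbit/s of 1500-byte packets)
        # with 200 ms of standing delay holds ~1,667 packets (~2.5 MB) of bloat.
        rate = 100e6 / (1500 * 8)
        print("standing queue:", round(items_in_system(rate, 0.2)), "packets")
        # Conversely, a 1,000-packet standing queue at that rate adds:
        print("added delay:", round(avg_delay_s(1000, rate) * 1000), "ms")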
Things like prioritizations to jump the line are somewhat of a hack at
reducing the service time for a specialized class of packets, but nobody
really knows which packets should jump. Also, nobody can define what
working conditions are, so that's another problem with this class of tests.
Better maybe just to shrink the queue and eliminate all unneeded queueing
delays. Also, measure the performance per "user conditions", which are going
to be different for almost every environment (and are correlated to time and
space). So any engineering solution is fundamentally suboptimal. Even
pacing the source doesn't necessarily do the right thing, because that's
like waiting on the waitlist at home vs. in the restaurant lobby. Few
care about where messages wait (unless the pitch is that AQM is the only
solution, which drives to a self-fulfilling prophecy - that's why the tests
have to come up with artificial conditions that can't be simply defined.)
Bob
On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>
> > I think conflating bufferbloat with latency misses the subtle point in
> that
> > bufferbloat is a measurement in memory units more than a measurement in
> > time units. The first design flaw is a queue that is too big. This
> youtube
> > video analogy doesn't help one understand this important point.
>
> but the queue is only too big because of the time it takes to empty the
> queue,
> which puts us back into the time domain.
>
> David Lang
>
> > Another subtle point is that the video assumes AQM as the only solution
> and
> > ignores others, i.e. pacing at the source(s) and/or faster service
> rates. A
> > restaurant that let's one call ahead to put their name on the waitlist
> > doesn't change the wait time. Just because a transport layer slowed down
> > and hasn't congested a downstream queue doesn't mean the e2e latency
> > performance will meet the gaming needs as an example. The delay is still
> > there it's just not manifesting itself in a shared queue that may or may
> > not negatively impact others using that shared queue.
> >
> > Bob
> >
> >
> >
> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
> > make-wifi-fast@lists.bufferbloat.net> wrote:
> >
> >> Hi Erik,
> >>
> >>
> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
> >> wrote:
> >>>
> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
> >>>
> >>> Nice!
> >>>
> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
> >> cake@lists.bufferbloat.net> wrote:
> >>>>
> >>>> It took about 3 hours from the video was release before we got the
> >> first request to have SQM on the CPE's we manage as a ISP. Finally
> >> getting some customer response on the issue.
> >>>
> >>> [SM] Will you be able to bump these requests to higher-ups and at
> >> least change some perception of customer demand for tighter latency
> >> performance?
> >>>
> >>> That would be the hope.
> >>
> >> [SM} Excellent, hope this plays out as we wish for.
> >>
> >>
> >>> We actually have fq_codel implemented on the two latest generations of
> >> DSL routers. Use sync rate as input to set the rate. Works quite well.
> >>
> >> [SM] Cool, if I might ask what fraction of the sync are you
> >> setting the traffic shaper for and are you doing fine grained overhead
> >> accounting (or simply fold that into a grand "de-rating"-factor)?
> >>
> >>
> >>> There is also a bit of traction around speedtest.net's inclusion of
> >> latency under load internally.
> >>
> >> [SM] Yes, although IIUC they are reporting the interquartile
> mean
> >> for the two loaded latency estimates, which is pretty conservative and
> only
> >> really "triggers" for massive consistently elevated latency; so I expect
> >> this to be great for detecting really bad cases, but I fear it is too
> >> conservative and will make a number of problematic links look OK. But
> hey,
> >> even that is leaps and bounds better than the old only idle latency
> report.
> >>
> >>
> >>> My hope is that some publication in Norway will pick up on that score
> >> and do a test and get some mainstream publicity with the results.
> >>
> >> [SM] Inside the EU the challenge is to get national regulators
> and
> >> the BEREC to start bothering about latency-under-load at all, "some
> >> mainstream publicity" would probably help here as well.
> >>
> >> Regards
> >> Sebastian
> >>
> >>
> >>>
> >>> -Erik
> >>>
> >>>
> >>>
> >>
> >> _______________________________________________
> >> Make-wifi-fast mailing list
> >> Make-wifi-fast@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
> >_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 20:23 ` Sebastian Moeller
@ 2022-10-11 6:08 ` Taraldsen Erik
2022-10-11 6:35 ` Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Taraldsen Erik @ 2022-10-11 6:08 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Rpm, Make-Wifi-fast, Cake List, bloat
On 10/10/2022, 22:23, "Cake on behalf of Sebastian Moeller via Cake" <cake-bounces@lists.bufferbloat.net on behalf of cake@lists.bufferbloat.net> wrote:
[SM] Tricky... e.g. vectoring enabled CPE can be instructed by the DSLAM to send error samples in-band with the data, but that traffic is never seen by our shapers, so to account for that we need to set a fraction that allows for that (more or less) periodic traffic. I guess one can reach a point of "goog enough" even when ignoring such eventualities, especially if having to convince through-put hot-rodders. Always interesting to hear experience from the real world, thanks!
In my business we can't let perfect be the enemy of good. If we were to wait for the perfect firmware, nobody would have internet access at all. Our team motto is "suck less", meaning we know there are issues with all products we take to market, and to get to market at all we unfortunately need to accept some suckiness in one domain or another. So when we follow up with the vendors, each new firmware has to suck less.
-Erik
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-10 16:45 ` [Make-wifi-fast] [Cake] [Bloat] " Bob McMahon
2022-10-10 22:57 ` [Make-wifi-fast] [Bloat] [Cake] " David Lang
@ 2022-10-11 6:28 ` Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-11 6:28 UTC (permalink / raw)
To: Bob McMahon; +Cc: Taraldsen Erik, Rpm, Make-Wifi-fast, Cake List, bloat
Hi Bob,
On 10 October 2022 18:45:31 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>I think conflating bufferbloat with latency misses the subtle point in that
>bufferbloat is a measurement in memory units more than a measurement in
>time units. The first design flaw is a queue that is too big.
[SM] I tend to describe the problem as on-path queues being "over-sized and under-managed". IMHO this makes it easier to see both potential approaches to address the consequences:
a) the back-bone solution of working hard to never/rarely actually fill the buffer noticeably.
b) the better-manage-(developing)-queues solution.
While a crude systematization, this covers all approaches to tackle the problem that I am aware of.
>This youtube
>video analogy doesn't help one understand this important point.
>
>Another subtle point is that the video assumes AQM as the only solution and
>ignores others, i.e. pacing at the source(s) and/or faster service rates.
[SM] Not all applications/use-cases interested in low latency are fully compatible with pacing. E.g. reaction-time-gated multiplayer on-line games tend to send and receive on a 'tick', which looks like natural pacing, except that, especially on receive, world-state updates can (depending on circumstances) consist of quite a number of packets which the client needs ASAP, so these bursts should not be paced out over a tick.
Faster service rates are a solution that IMHO mostly moves the location of the problematic queue around; in fact the opposite, lowering the service rate, is how SQM achieves its goal of getting the problematic queue under its control. Also, on variable-rate links a faster service rate is a tricky proposition (as you well know).
>A restaurant that let's one call ahead to put their name on the waitlist
>doesn't change the wait time. Just because a transport layer slowed down
>and hasn't congested a downstream queue doesn't mean the e2e latency
>performance will meet the gaming needs as an example. The delay is still
>there it's just not manifesting itself in a shared queue that may or may
>not negatively impact others using that shared queue.
[SM] +1, this is part of my criticism of how the L4S proponents tout their 1ms queueing delay goal, ignoring that the receiver only cares about total delay and not so much where exactly the delay was 'collected'.
However, experience with SQM makes me believe that trying to avoid naively shared queues can help a lot.
Trying, as L4S does, to change all senders to play nicer with shared queues is simply less robust and reliable than actually enforcing the desired behavior at the bottleneck queue in a fine-grained and targeted fashion.
>
>Bob
>
>
>
>On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>make-wifi-fast@lists.bufferbloat.net> wrote:
>
>> Hi Erik,
>>
>>
>> > On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
>> wrote:
>> >
>> > On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >
>> > Nice!
>> >
>> >> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> cake@lists.bufferbloat.net> wrote:
>> >>
>> >> It took about 3 hours from the video was release before we got the
>> first request to have SQM on the CPE's we manage as a ISP. Finally
>> getting some customer response on the issue.
>> >
>> > [SM] Will you be able to bump these requests to higher-ups and at
>> least change some perception of customer demand for tighter latency
>> performance?
>> >
>> > That would be the hope.
>>
>> [SM} Excellent, hope this plays out as we wish for.
>>
>>
>> > We actually have fq_codel implemented on the two latest generations of
>> DSL routers. Use sync rate as input to set the rate. Works quite well.
>>
>> [SM] Cool, if I might ask what fraction of the sync are you
>> setting the traffic shaper for and are you doing fine grained overhead
>> accounting (or simply fold that into a grand "de-rating"-factor)?
>>
>>
>> > There is also a bit of traction around speedtest.net's inclusion of
>> latency under load internally.
>>
>> [SM] Yes, although IIUC they are reporting the interquartile mean
>> for the two loaded latency estimates, which is pretty conservative and only
>> really "triggers" for massive consistently elevated latency; so I expect
>> this to be great for detecting really bad cases, but I fear it is too
>> conservative and will make a number of problematic links look OK. But hey,
>> even that is leaps and bounds better than the old only idle latency report.
>>
>>
>> > My hope is that some publication in Norway will pick up on that score
>> and do a test and get some mainstream publicity with the results.
>>
>> [SM] Inside the EU the challenge is to get national regulators and
>> the BEREC to start bothering about latency-under-load at all, "some
>> mainstream publicity" would probably help here as well.
>>
>> Regards
>> Sebastian
>>
>>
>> >
>> > -Erik
>> >
>> >
>> >
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* Re: [Make-wifi-fast] [Cake] [Bloat] The most wonderful video ever about bufferbloat
2022-10-11 6:08 ` [Make-wifi-fast] [Cake] [Bloat] " Taraldsen Erik
@ 2022-10-11 6:35 ` Sebastian Moeller
2022-10-11 6:38 ` [Make-wifi-fast] [Bloat] [Cake] " Dave Taht
0 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-11 6:35 UTC (permalink / raw)
To: Taraldsen Erik; +Cc: Rpm, Make-Wifi-fast, Cake List, bloat
Hi Erik,
On 11 October 2022 08:08:14 CEST, Taraldsen Erik <erik.taraldsen@telenor.no> wrote:
>
>
>On 10/10/2022, 22:23, "Cake on behalf of Sebastian Moeller via Cake" <cake-bounces@lists.bufferbloat.net on behalf of cake@lists.bufferbloat.net> wrote:
>
>
> [SM] Tricky... e.g. vectoring enabled CPE can be instructed by the DSLAM to send error samples in-band with the data, but that traffic is never seen by our shapers, so to account for that we need to set a fraction that allows for that (more or less) periodic traffic. I guess one can reach a point of "goog enough" even when ignoring such eventualities, especially if having to convince through-put hot-rodders. Always interesting to hear experience from the real world, thanks!
>
>In my bussiness we can't let perfect be the enemy of good. If we were to wait for the perfect firmware, nobody would have internet access at all. Our team moto is "suck less". Meaning we know there are issues with all products we take to market. And to get to market at all we unfortunately need to accept some suckiness in one domain or another. So when we follow up the vendors each new firmware has to suck less.
[SM] Great, I like the focus on improvement! I wish my ISP were similarly enlightened.
Regards
Sebastian
>
>-Erik
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 6:35 ` Sebastian Moeller
@ 2022-10-11 6:38 ` Dave Taht
2022-10-11 11:34 ` Taraldsen Erik
0 siblings, 1 reply; 70+ messages in thread
From: Dave Taht @ 2022-10-11 6:38 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Taraldsen Erik, Rpm, Make-Wifi-fast, Cake List, bloat
I guess my question is, Erik, do you sell these routers commercially?
There is a huge latent market in the US that could use upgrades...
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 0:05 ` Bob McMahon
@ 2022-10-11 7:15 ` Sebastian Moeller
2022-10-11 16:58 ` Bob McMahon
2022-10-11 13:57 ` [Make-wifi-fast] [Rpm] " Rich Brown
1 sibling, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-11 7:15 UTC (permalink / raw)
To: Bob McMahon, David Lang
Cc: Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
Hi Bob,
On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>It's too big because it's oversized so it's in the size domain. It's
>basically Little's law's value for the number of items in a queue.
>
>*Number of items in the system = (the rate items enter and leave the
>system) x (the average amount of time items spend in the system)*
>
>
>Which gets driven to the standing queue size when the arrival rate
>exceeds the service rate - so the driving factor isn't the service and
>arrival rates, but *the queue size *when *any service rate is less than an
>arrival rate.*
[SM] You could also argue it is the ratio of arrival to service rates, with the queue size being a measure correlating with how long the system will tolerate ratios larger than one...
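A small Python sketch of that framing (buffer size and rates are example numbers): with an arrival/service ratio above one, the buffer size mainly determines how long the overload can persist before the buffer is full, and how much standing delay it adds once it is.

    # Sketch: time to fill a buffer at a given arrival/service ratio, plus the
    # standing delay once it is full. Example numbers only.

    def time_to_fill_s(buffer_bytes, arrival_bps, service_bps):
        excess = arrival_bps - service_bps
        return float("inf") if excess <= 0 else buffer_bytes * 8 / excess

    def standing_delay_ms(buffer_bytes, service_bps):
        return buffer_bytes * 8 / service_bps * 1000

    if __name__ == "__main__":
        buf = 256_000          # 256 kB of buffer
        service = 20e6         # 20 Mbit/s bottleneck
        for ratio in (1.1, 2.0, 5.0):
            arrival = service * ratio
            print(f"ratio {ratio}: fills in {time_to_fill_s(buf, arrival, service):.2f} s,"
                  f" then adds {standing_delay_ms(buf, service):.1f} ms")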
>
>In other words, one can find and measure bloat regardless of the
>enter/leave rates (as long as the leave rate is too slow) and the value of
>memory units found will always be the same.
>
>Things like prioritizations to jump the line are somewhat of hacks at
>reducing the service time for a specialized class of packets but nobody
>really knows which packets should jump.
[SM] Au contraire, most everybody 'knows' it is their packets that should jump ahead of the rest ;) For intermediate-hop queues, however, that endpoint perception is not really actionable due to the lack of robust and reliable importance identifiers on packets. Inside a 'domain' DSCPs might work if subjected to strict admission control, but that typically will not help end-to-end traffic over the internet. This is BTW why I think FQ is a great concept, as it mostly results in the desirable outcome of not picking winners and losers (like arbitrarily starving a flow), but I digress.
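A toy Python sketch of the FQ idea, for illustration only (real implementations such as fq_codel add DRR byte quantums, sparse-flow boosting and per-queue CoDel on top of this): per-flow queues served round-robin mean a greedy flow cannot starve a sparse one.

    # Toy flow queueing: per-flow queues served round-robin.

    from collections import defaultdict, deque

    class ToyFQ:
        def __init__(self):
            self.queues = defaultdict(deque)   # flow id -> queued packets
            self.active = deque()              # round-robin order of active flows

        def enqueue(self, flow_id, packet):
            if not self.queues[flow_id]:
                self.active.append(flow_id)
            self.queues[flow_id].append(packet)

        def dequeue(self):
            if not self.active:
                return None
            flow = self.active.popleft()
            packet = self.queues[flow].popleft()
            if self.queues[flow]:              # still backlogged: back of the line
                self.active.append(flow)
            return flow, packet

    if __name__ == "__main__":
        fq = ToyFQ()
        for i in range(6):
            fq.enqueue("greedy", f"bulk-{i}")  # a backlogged bulk flow
        fq.enqueue("game", "tick-update")      # a sparse, latency-sensitive flow
        for _ in range(3):
            print(fq.dequeue())                # the sparse flow is served early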
>Also, nobody can define what
>working conditions are so that's another problem with this class of tests.
[SM] While real working conditions will be different for each link and probably vary over time, it seems achievable to come up with a set of pessimistic assumptions about how to model a challenging working condition against which to test potential remedies, assuming that such remedies will also work well under less challenging conditions, no?
>
>Better maybe just to shrink the queue and eliminate all unneeded queueing
>delays.
[SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's(?) description of queues as shock absorbers, so queue size will have a lower acceptable limit, assuming users want to achieve 'acceptable' throughput even with existing bursty senders. (Not all applications are suited for pacing, so some level of burstiness seems unavoidable.)
> Also, measure the performance per "user conditions" which is going
>to be different for almost every environment (and is correlated to time and
>space.) So any engineering solution is fundamentally suboptimal.
[SM] A matter of definition, if the requirement is to cover many user conditions the optimality measure simply needs to be changed from per individual condition to over many/all conditions, no?
>Even
>pacing the source doesn't necessarily do the right thing because that's
>like waiting in the waitlist while at home vs the restaurant lobby.
[SM] +1.
> Few
>care about where messages wait (unless the pitch is AQM is the only
>solution that drives to a self-fulfilling prophecy - that's why the tests
>have to come up with artificial conditions that can't be simply defined.)
Hrm, so the RRUL test, while not the be-all and end-all of bufferbloat/working-conditions tests, is not that complicated:
Saturate a link in both directions simultaneously with multiple greedy flows while measuring load-dependent latency changes for small isochronous probe flows.
Yes, it would be nice to have additional higher-rate probe flows, also bursty ones to emulate on-line games, 'pumped' greedy flows to emulate DASH 'streaming', and a horde of small greedy flows that mostly end inside the initial window and slow start. But at its core the existing RRUL already gives a useful estimate of how a link behaves under saturating loads, all the while being relatively simple.
The responsiveness-under-working-conditions test seems similar in that it tries to saturate a link with an increasing number of greedy flows, in a sense to create a reasonably bad case that ideally rarely happens.
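A stripped-down sketch of such an isochronous probe flow in Python; it assumes a UDP echo responder at HOST:PORT, which is an assumption for illustration and not part of RRUL or flent itself:

    # Sketch: a small isochronous UDP probe recording RTT every 20 ms while
    # other traffic loads the link. Assumes an echo responder at HOST:PORT.

    import socket, struct, time

    HOST, PORT = "192.0.2.1", 7     # placeholder responder (documentation address)
    INTERVAL, COUNT = 0.020, 250    # one probe every 20 ms for ~5 seconds

    def probe():
        rtts = []
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(1.0)
            for seq in range(COUNT):
                t0 = time.monotonic()
                s.sendto(struct.pack("!Id", seq, t0), (HOST, PORT))
                try:
                    s.recvfrom(64)
                    rtts.append((time.monotonic() - t0) * 1000.0)
                except socket.timeout:
                    pass                      # lost probe; ignored in this sketch
                time.sleep(max(0.0, INTERVAL - (time.monotonic() - t0)))
        if rtts:
            rtts.sort()
            print(f"min {rtts[0]:.1f} ms, median {rtts[len(rtts)//2]:.1f} ms,"
                  f" max {rtts[-1]:.1f} ms over {len(rtts)} probes")

    if __name__ == "__main__":
        probe()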
Regards
Sebastian
>
>Bob
>
>On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>
>> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>>
>> > I think conflating bufferbloat with latency misses the subtle point in
>> that
>> > bufferbloat is a measurement in memory units more than a measurement in
>> > time units. The first design flaw is a queue that is too big. This
>> youtube
>> > video analogy doesn't help one understand this important point.
>>
>> but the queue is only too big because of the time it takes to empty the
>> queue,
>> which puts us back into the time domain.
>>
>> David Lang
>>
>> > Another subtle point is that the video assumes AQM as the only solution
>> and
>> > ignores others, i.e. pacing at the source(s) and/or faster service
>> rates. A
>> > restaurant that let's one call ahead to put their name on the waitlist
>> > doesn't change the wait time. Just because a transport layer slowed down
>> > and hasn't congested a downstream queue doesn't mean the e2e latency
>> > performance will meet the gaming needs as an example. The delay is still
>> > there it's just not manifesting itself in a shared queue that may or may
>> > not negatively impact others using that shared queue.
>> >
>> > Bob
>> >
>> >
>> >
>> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >
>> >> Hi Erik,
>> >>
>> >>
>> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
>> >> wrote:
>> >>>
>> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >>>
>> >>> Nice!
>> >>>
>> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> >> cake@lists.bufferbloat.net> wrote:
>> >>>>
>> >>>> It took about 3 hours from the video was release before we got the
>> >> first request to have SQM on the CPE's we manage as a ISP. Finally
>> >> getting some customer response on the issue.
>> >>>
>> >>> [SM] Will you be able to bump these requests to higher-ups and at
>> >> least change some perception of customer demand for tighter latency
>> >> performance?
>> >>>
>> >>> That would be the hope.
>> >>
>> >> [SM} Excellent, hope this plays out as we wish for.
>> >>
>> >>
>> >>> We actually have fq_codel implemented on the two latest generations of
>> >> DSL routers. Use sync rate as input to set the rate. Works quite well.
>> >>
>> >> [SM] Cool, if I might ask what fraction of the sync are you
>> >> setting the traffic shaper for and are you doing fine grained overhead
>> >> accounting (or simply fold that into a grand "de-rating"-factor)?
>> >>
>> >>
>> >>> There is also a bit of traction around speedtest.net's inclusion of
>> >> latency under load internally.
>> >>
>> >> [SM] Yes, although IIUC they are reporting the interquartile
>> mean
>> >> for the two loaded latency estimates, which is pretty conservative and
>> only
>> >> really "triggers" for massive consistently elevated latency; so I expect
>> >> this to be great for detecting really bad cases, but I fear it is too
>> >> conservative and will make a number of problematic links look OK. But
>> hey,
>> >> even that is leaps and bounds better than the old only idle latency
>> report.
>> >>
>> >>
>> >>> My hope is that some publication in Norway will pick up on that score
>> >> and do a test and get some mainstream publicity with the results.
>> >>
>> >> [SM] Inside the EU the challenge is to get national regulators
>> and
>> >> the BEREC to start bothering about latency-under-load at all, "some
>> >> mainstream publicity" would probably help here as well.
>> >>
>> >> Regards
>> >> Sebastian
>> >>
>> >>
>> >>>
>> >>> -Erik
>> >>>
>> >>>
>> >>>
>> >>
>> >> _______________________________________________
>> >> Make-wifi-fast mailing list
>> >> Make-wifi-fast@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >
>> >_______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 6:38 ` [Make-wifi-fast] [Bloat] [Cake] " Dave Taht
@ 2022-10-11 11:34 ` Taraldsen Erik
0 siblings, 0 replies; 70+ messages in thread
From: Taraldsen Erik @ 2022-10-11 11:34 UTC (permalink / raw)
To: Dave Taht, Sebastian Moeller; +Cc: Rpm, Make-Wifi-fast, Cake List, bloat
No, we don't. I think someone without all the legacy imposed on an old ISP like Telenor needs to go for that market.
And when I see how difficult it is to get customers to swap devices even when we give them away for free, I don't see a viable business in selling CPEs when not backed by the subsidy of the access product.
-Erik
On 11/10/2022, 08:38, "Dave Taht" <dave.taht@gmail.com> wrote:
I guess my question is, erik, do you sell these routers commercially?
There is a huge latent market in the US that could use upgrades.....
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 0:05 ` Bob McMahon
2022-10-11 7:15 ` Sebastian Moeller
@ 2022-10-11 13:57 ` Rich Brown
2022-10-11 14:43 ` Dave Taht
2022-10-11 17:05 ` Bob McMahon
1 sibling, 2 replies; 70+ messages in thread
From: Rich Brown @ 2022-10-11 13:57 UTC (permalink / raw)
To: Bob McMahon; +Cc: David Lang, Cake List, Make-Wifi-fast, Rpm, bloat
> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> > I think conflating bufferbloat with latency misses the subtle point in that
> > bufferbloat is a measurement in memory units more than a measurement in
> > time units.
Yes, but... I am going to praise this video, even as I encourage all the techies to be sure that they have the units correct.
I've been yammering about the evils of latency/excess queueing for 10 years on my blog, in forums, etc. I have not achieved anywhere near the notoriety of this video (almost a third of a million views).
I am delighted that there's an engaging, mass-market YouTube video that makes the case that bufferbloat even exists.
Rich
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 13:57 ` [Make-wifi-fast] [Rpm] " Rich Brown
@ 2022-10-11 14:43 ` Dave Taht
2022-10-11 17:05 ` Bob McMahon
1 sibling, 0 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-11 14:43 UTC (permalink / raw)
To: Rich Brown; +Cc: Bob McMahon, Cake List, bloat, Rpm, Make-Wifi-fast
On Tue, Oct 11, 2022 at 6:57 AM Rich Brown via Make-wifi-fast
<make-wifi-fast@lists.bufferbloat.net> wrote:
>
>
>
> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> > I think conflating bufferbloat with latency misses the subtle point in that
> > bufferbloat is a measurement in memory units more than a measurement in
> > time units.
>
>
> Yes, but... I am going to praise this video, even as I encourage all the techies to be sure that they have the units correct.
>
> I've been yammering about the evils of latency/excess queueing for 10 years on my blog, in forums, etc. I have not achieved anywhere near the notoriety of this video (almost a third of a million views).
>
> I am delighted that there's an engaging, mass-market Youtube video that makes the case that bufferbloat even exists.
>
> Rich
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
I have to admit my ideal presenter is more of a Neil deGrasse Tyson:
https://twitter.com/neiltyson/status/1579165291434897409
but ya know, I ended up thinking about doing a funny script along the
lines of what's wrong with wifi...
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 7:15 ` Sebastian Moeller
@ 2022-10-11 16:58 ` Bob McMahon
2022-10-11 17:00 ` [Make-wifi-fast] [Rpm] " Dave Taht
2022-10-11 17:26 ` [Make-wifi-fast] " Sebastian Moeller
0 siblings, 2 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-11 16:58 UTC (permalink / raw)
To: Sebastian Moeller
Cc: David Lang, Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
> Saturate a link in both directions simultaneously with multiple greedy
flows while measuring load-dependent latency changes for small isochronous
probe flows.
This functionality is released in iperf 2.1.8 as the bounceback feature,
but, unfortunately, OpenWrt doesn't maintain iperf 2 as a package anymore
and uses 2.0.13.
CLIENT SPECIFIC OPTIONS

--bounceback[=n]
    run a TCP bounceback or rps test with an optional number of writes in a
    burst per value of n. The default is ten writes every period and the
    default period is one second (Note: set size with -l or --len which
    defaults to 100 bytes)

--bounceback-congest[=up|down|bidir][,n]
    request a concurrent working load or TCP stream(s), defaults to full
    duplex (or bidir) unless the up or down option is provided. The number
    of TCP streams defaults to 1 and can be changed via the n value, e.g.
    --bounceback-congest=down,4 will use four TCP streams from server to
    the client as the working load. The IP ToS will be BE (0x0) for working
    load traffic.

--bounceback-hold n
    request the server to insert a delay of n milliseconds between its read
    and write (default is no delay)

--bounceback-period[=n]
    request the client schedule its send(s) every n seconds (default is one
    second, use zero value for immediate or continuous back to back)

--bounceback-no-quickack
    request the server not set the TCP_QUICKACK socket option (disabling TCP
    ACK delays) during a bounceback test (see NOTES)

--bounceback-txdelay n
    request the client to delay n seconds between the start of the working
    load and the bounceback traffic (default is no delay)
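So, using only the options documented above plus the standard -s/-c server and client flags, something like the following should run a bounceback test against a concurrent downstream working load of four TCP streams (server address is a placeholder):

    on the server:  iperf -s
    on the client:  iperf -c <server> --bounceback --bounceback-congest=down,4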
On Tue, Oct 11, 2022 at 12:15 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Bob,
>
> On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com>
> wrote:
> >It's too big because it's oversized so it's in the size domain. It's
> >basically Little's law's value for the number of items in a queue.
> >
> >*Number of items in the system = (the rate items enter and leave the
> >system) x (the average amount of time items spend in the system)*
> >
> >
> >Which gets driven to the standing queue size when the arrival rate
> >exceeds the service rate - so the driving factor isn't the service and
> >arrival rates, but *the queue size *when *any service rate is less than an
> >arrival rate.*
>
> [SM] You could also argue it is the ratio of arrival to service rates,
> with the queue size being a measure correlating with how long the system
> will tolerate ratios larger than one...
>
>
> >
> >In other words, one can find and measure bloat regardless of the
> >enter/leave rates (as long as the leave rate is too slow) and the value of
> >memory units found will always be the same.
> >
> >Things like prioritizations to jump the line are somewhat of hacks at
> >reducing the service time for a specialized class of packets but nobody
> >really knows which packets should jump.
>
> [SM] Au contraire most everybody 'knows' it is their packets that should
> jump ahead of the rest ;) For intermediate hop queues however that endpoint
> perception is not really actionable due to lack of robust and reliable
> importance identifiers on packets. In side a 'domain' dscps might work if
> treated to strict admission control, but that typically will not help
> end2end traffic over the internet. This is BTW why I think FQ is a great
> concept, as it mostly results in the desirable outcome of not picking
> winners and losers (like arbitrarily starving a flow), but I digress.
>
> >Also, nobody can define what
> >working conditions are so that's another problem with this class of tests.
>
> [SM] While real working conditions will be different for each link and
> probably vary over time, it seems achievable to come up with a set of
> pessimistic assumptions how to model a challenging work condition against
> which to test potential remedies, assuming that such remedies will also
> work well under less challenging conditions, no?
>
>
> >
> >Better maybe just to shrink the queue and eliminate all unneeded queueing
> >delays.
>
> [SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's?
> Description of queues as shock absorbers so queue size will have a lower
> acceptable limit assuming users want to achieve 'acceptable' throughput
> even with existing bursty senders. (Not all applications are suited for
> pacing so some level of burstiness seems unavoidable).
>
>
> > Also, measure the performance per "user conditions" which is going
> >to be different for almost every environment (and is correlated to time
> and
> >space.) So any engineering solution is fundamentally suboptimal.
>
> [SM] A matter of definition, if the requirement is to cover many user
> conditions the optimality measure simply needs to be changed from per
> individual condition to over many/all conditions, no?
>
> >Even
> >pacing the source doesn't necessarily do the right thing because that's
> >like waiting in the waitlist while at home vs the restaurant lobby.
>
> [SM] +1.
>
> > Few
> >care about where messages wait (unless the pitch is AQM is the only
> >solution that drives to a self-fulfilling prophecy - that's why the tests
> >have to come up with artificial conditions that can't be simply defined.)
>
> Hrm, so the RRUL test, while not the end all of bufferbloat/working
> conditions tests, is not that complicated:
> Saturate a link in both directions simultaneously with multiple greedy
> flows while measuring load-dependent latency changes for small isochronous
> probe flows.
>
> Yes, the it would be nice to have additional higher rate probe flows also
> bursty ones to emulate on-linev games, and 'pumped' greedy flows to emulate
> DASH 'streaming', and a horde of small greedy flows that mostly end inside
> the initial window and slow start. But at its core existing RRUL already
> gives a useful estimate on how a link behaves under saturating loads all
> the while being relatively simple.
> The responsiveness under working condition seems similar in that it tries
> to saturate a link with an increasing number of greedy flows, in a sense to
> create a reasonable bad case that ideally rarely happens.
>
> Regards
> Sebastian
>
>
> >
> >Bob
> >
> >On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
> >
> >> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
> >>
> >> > I think conflating bufferbloat with latency misses the subtle point in
> >> that
> >> > bufferbloat is a measurement in memory units more than a measurement
> in
> >> > time units. The first design flaw is a queue that is too big. This
> >> youtube
> >> > video analogy doesn't help one understand this important point.
> >>
> >> but the queue is only too big because of the time it takes to empty the
> >> queue,
> >> which puts us back into the time domain.
> >>
> >> David Lang
> >>
> >> > Another subtle point is that the video assumes AQM as the only
> solution
> >> and
> >> > ignores others, i.e. pacing at the source(s) and/or faster service
> >> rates. A
> >> > restaurant that lets one call ahead to put their name on the waitlist
> >> > doesn't change the wait time. Just because a transport layer slowed
> down
> >> > and hasn't congested a downstream queue doesn't mean the e2e latency
> >> > performance will meet the gaming needs as an example. The delay is
> still
> >> > there it's just not manifesting itself in a shared queue that may or
> may
> >> > not negatively impact others using that shared queue.
> >> >
> >> > Bob
> >> >
> >> >
> >> >
> >> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
> >> > make-wifi-fast@lists.bufferbloat.net> wrote:
> >> >
> >> >> Hi Erik,
> >> >>
> >> >>
> >> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <
> erik.taraldsen@telenor.no>
> >> >> wrote:
> >> >>>
> >> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
> >> >>>
> >> >>> Nice!
> >> >>>
> >> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
> >> >> cake@lists.bufferbloat.net> wrote:
> >> >>>>
> >> >>>> It took about 3 hours from the video was release before we got the
> >> >> first request to have SQM on the CPE's we manage as a ISP. Finally
> >> >> getting some customer response on the issue.
> >> >>>
> >> >>> [SM] Will you be able to bump these requests to higher-ups
> and at
> >> >> least change some perception of customer demand for tighter latency
> >> >> performance?
> >> >>>
> >> >>> That would be the hope.
> >> >>
> >> >> [SM} Excellent, hope this plays out as we wish for.
> >> >>
> >> >>
> >> >>> We actually have fq_codel implemented on the two latest
> generations of
> >> >> DSL routers. Use sync rate as input to set the rate. Works quite
> well.
> >> >>
> >> >> [SM] Cool, if I might ask what fraction of the sync are you
> >> >> setting the traffic shaper for and are you doing fine grained
> overhead
> >> >> accounting (or simply fold that into a grand "de-rating"-factor)?
> >> >>
> >> >>
> >> >>> There is also a bit of traction around speedtest.net's inclusion of
> >> >> latency under load internally.
> >> >>
> >> >> [SM] Yes, although IIUC they are reporting the interquartile
> >> mean
> >> >> for the two loaded latency estimates, which is pretty conservative
> and
> >> only
> >> >> really "triggers" for massive consistently elevated latency; so I
> expect
> >> >> this to be great for detecting really bad cases, but I fear it is too
> >> >> conservative and will make a number of problematic links look OK. But
> >> hey,
> >> >> even that is leaps and bounds better than the old only idle latency
> >> report.
> >> >>
> >> >>
> >> >>> My hope is that some publication in Norway will pick up on that
> score
> >> >> and do a test and get some mainstream publicity with the results.
> >> >>
> >> >> [SM] Inside the EU the challenge is to get national
> regulators
> >> and
> >> >> the BEREC to start bothering about latency-under-load at all, "some
> >> >> mainstream publicity" would probably help here as well.
> >> >>
> >> >> Regards
> >> >> Sebastian
> >> >>
> >> >>
> >> >>>
> >> >>> -Erik
> >> >>>
> >> >>>
> >> >>>
> >> >>
> >> >> _______________________________________________
> >> >> Make-wifi-fast mailing list
> >> >> Make-wifi-fast@lists.bufferbloat.net
> >> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >> >
> >> >_______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
[-- Attachment #1.2: Type: text/html, Size: 14970 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 16:58 ` Bob McMahon
@ 2022-10-11 17:00 ` Dave Taht
2022-10-11 17:26 ` [Make-wifi-fast] " Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-11 17:00 UTC (permalink / raw)
To: Bob McMahon
Cc: Sebastian Moeller, David Lang, Cake List, Make-Wifi-fast, Rpm, bloat
On Tue, Oct 11, 2022 at 9:58 AM Bob McMahon via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
> > Saturate a link in both directions simultaneously with multiple greedy flows while measuring load-dependent latency changes for small isochronous probe flows.
>
> This functionality is released in iperf 2.1.8 per the bounceback feature but, unfortunately, OpenWRT doesn't maintain iperf 2 as a package anymore and uses 2.0.13
iperf 2.1.8 was pushed into the openwrt mainline and may appear as of
22.03.1. I'll check.
>
> CLIENT SPECIFIC OPTIONS
>
> --bounceback[=n]
>     run a TCP bounceback or rps test with optional number writes in a burst
>     per value of n. The default is ten writes every period and the default
>     period is one second (Note: set size with -l or --len which defaults to
>     100 bytes)
> --bounceback-congest[=up|down|bidir][,n]
>     request a concurrent working load or TCP stream(s), defaults to full
>     duplex (or bidir) unless the up or down option is provided. The number
>     of TCP streams defaults to 1 and can be changed via the n value, e.g.
>     --bounceback-congest=down,4 will use four TCP streams from server to
>     the client as the working load. The IP ToS will be BE (0x0) for working
>     load traffic.
> --bounceback-hold n
>     request the server to insert a delay of n milliseconds between its read
>     and write (default is no delay)
> --bounceback-period[=n]
>     request the client schedule its send(s) every n seconds (default is one
>     second, use zero value for immediate or continuous back to back)
> --bounceback-no-quickack
>     request the server not set the TCP_QUICKACK socket option (disabling
>     TCP ACK delays) during a bounceback test (see NOTES)
> --bounceback-txdelay n
>     request the client to delay n seconds between the start of the working
>     load and the bounceback traffic (default is no delay)
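
For illustration, a minimal sketch (not from the iperf 2 documentation) of
driving a bounceback run from Python using only the flags quoted above; it
assumes iperf 2.1.8 on both ends, a server already started with "iperf -s",
and a placeholder server address:

    # Sketch: launch a 30 second bounceback test with a 4-stream download
    # working load and print iperf's own report. The flags are the documented
    # --bounceback options quoted above; the address is a placeholder.
    import subprocess

    SERVER = "192.168.1.1"  # hypothetical iperf 2 server address

    cmd = [
        "iperf", "-c", SERVER,
        "--bounceback",                  # defaults: ten 100-byte writes per one-second period
        "--bounceback-congest=down,4",   # four TCP streams server-to-client as working load
        "-t", "30",                      # total test duration in seconds
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
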
>
> On Tue, Oct 11, 2022 at 12:15 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Bob,
>>
>> On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>> >It's too big because it's oversized so it's in the size domain. It's
>> >basically Little's law's value for the number of items in a queue.
>> >
>> >*Number of items in the system = (the rate items enter and leave the
>> >system) x (the average amount of time items spend in the system)*
>> >
>> >
>> >Which gets driven to the standing queue size when the arrival rate
>> >exceeds the service rate - so the driving factor isn't the service and
>> >arrival rates, but *the queue size *when *any service rate is less than an
>> >arrival rate.*
>>
>> [SM] You could also argue it is the ratio of arrival to service rates, with the queue size being a measure correlating with how long the system will tolerate ratios larger than one...
>>
>>
>> >
>> >In other words, one can find and measure bloat regardless of the
>> >enter/leave rates (as long as the leave rate is too slow) and the value of
>> >memory units found will always be the same.
>> >
>> >Things like prioritizations to jump the line are somewhat of hacks at
>> >reducing the service time for a specialized class of packets but nobody
>> >really knows which packets should jump.
>>
>> [SM] Au contraire most everybody 'knows' it is their packets that should jump ahead of the rest ;) For intermediate hop queues however that endpoint perception is not really actionable due to lack of robust and reliable importance identifiers on packets. In side a 'domain' dscps might work if treated to strict admission control, but that typically will not help end2end traffic over the internet. This is BTW why I think FQ is a great concept, as it mostly results in the desirable outcome of not picking winners and losers (like arbitrarily starving a flow), but I digress.
>>
>> >Also, nobody can define what
>> >working conditions are so that's another problem with this class of tests.
>>
>> [SM] While real working conditions will be different for each link and probably vary over time, it seems achievable to come up with a set of pessimistic assumptions how to model a challenging work condition against which to test potential remedies, assuming that such remedies will also work well under less challenging conditions, no?
>>
>>
>> >
>> >Better maybe just to shrink the queue and eliminate all unneeded queueing
>> >delays.
>>
>> [SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's? Description of queues as shock absorbers so queue size will have a lower acceptable limit assuming users want to achieve 'acceptable' throughput even with existing bursty senders. (Not all applications are suited for pacing so some level of burstiness seems unavoidable).
>>
>>
>> > Also, measure the performance per "user conditions" which is going
>> >to be different for almost every environment (and is correlated to time and
>> >space.) So any engineering solution is fundamentally suboptimal.
>>
>> [SM] A matter of definition, if the requirement is to cover many user conditions the optimality measure simply needs to be changed from per individual condition to over many/all conditions, no?
>>
>> >Even
>> >pacing the source doesn't necessarily do the right thing because that's
>> >like waiting in the waitlist while at home vs the restaurant lobby.
>>
>> [SM] +1.
>>
>> > Few
>> >care about where messages wait (unless the pitch is AQM is the only
>> >solution that drives to a self-fulfilling prophecy - that's why the tests
>> >have to come up with artificial conditions that can't be simply defined.)
>>
>> Hrm, so the RRUL test, while not the end all of bufferbloat/working conditions tests, is not that complicated:
>> Saturate a link in both directions simultaneously with multiple greedy flows while measuring load-dependent latency changes for small isochronous probe flows.
>>
>> Yes, the it would be nice to have additional higher rate probe flows also bursty ones to emulate on-linev games, and 'pumped' greedy flows to emulate DASH 'streaming', and a horde of small greedy flows that mostly end inside the initial window and slow start. But at its core existing RRUL already gives a useful estimate on how a link behaves under saturating loads all the while being relatively simple.
>> The responsiveness under working condition seems similar in that it tries to saturate a link with an increasing number of greedy flows, in a sense to create a reasonable bad case that ideally rarely happens.
>>
>> Regards
>> Sebastian
>>
>>
>> >
>> >Bob
>> >
>> >On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>> >
>> >> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>> >>
>> >> > I think conflating bufferbloat with latency misses the subtle point in
>> >> that
>> >> > bufferbloat is a measurement in memory units more than a measurement in
>> >> > time units. The first design flaw is a queue that is too big. This
>> >> youtube
>> >> > video analogy doesn't help one understand this important point.
>> >>
>> >> but the queue is only too big because of the time it takes to empty the
>> >> queue,
>> >> which puts us back into the time domain.
>> >>
>> >> David Lang
>> >>
>> >> > Another subtle point is that the video assumes AQM as the only solution
>> >> and
>> >> > ignores others, i.e. pacing at the source(s) and/or faster service
>> >> rates. A
>> >> > restaurant that let's one call ahead to put their name on the waitlist
>> >> > doesn't change the wait time. Just because a transport layer slowed down
>> >> > and hasn't congested a downstream queue doesn't mean the e2e latency
>> >> > performance will meet the gaming needs as an example. The delay is still
>> >> > there it's just not manifesting itself in a shared queue that may or may
>> >> > not negatively impact others using that shared queue.
>> >> >
>> >> > Bob
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> >> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >> >
>> >> >> Hi Erik,
>> >> >>
>> >> >>
>> >> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen@telenor.no>
>> >> >> wrote:
>> >> >>>
>> >> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >> >>>
>> >> >>> Nice!
>> >> >>>
>> >> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> >> >> cake@lists.bufferbloat.net> wrote:
>> >> >>>>
>> >> >>>> It took about 3 hours from the video was release before we got the
>> >> >> first request to have SQM on the CPE's we manage as a ISP. Finally
>> >> >> getting some customer response on the issue.
>> >> >>>
>> >> >>> [SM] Will you be able to bump these requests to higher-ups and at
>> >> >> least change some perception of customer demand for tighter latency
>> >> >> performance?
>> >> >>>
>> >> >>> That would be the hope.
>> >> >>
>> >> >> [SM} Excellent, hope this plays out as we wish for.
>> >> >>
>> >> >>
>> >> >>> We actually have fq_codel implemented on the two latest generations of
>> >> >> DSL routers. Use sync rate as input to set the rate. Works quite well.
>> >> >>
>> >> >> [SM] Cool, if I might ask what fraction of the sync are you
>> >> >> setting the traffic shaper for and are you doing fine grained overhead
>> >> >> accounting (or simply fold that into a grand "de-rating"-factor)?
>> >> >>
>> >> >>
>> >> >>> There is also a bit of traction around speedtest.net's inclusion of
>> >> >> latency under load internally.
>> >> >>
>> >> >> [SM] Yes, although IIUC they are reporting the interquartile
>> >> mean
>> >> >> for the two loaded latency estimates, which is pretty conservative and
>> >> only
>> >> >> really "triggers" for massive consistently elevated latency; so I expect
>> >> >> this to be great for detecting really bad cases, but I fear it is too
>> >> >> conservative and will make a number of problematic links look OK. But
>> >> hey,
>> >> >> even that is leaps and bounds better than the old only idle latency
>> >> report.
>> >> >>
>> >> >>
>> >> >>> My hope is that some publication in Norway will pick up on that score
>> >> >> and do a test and get some mainstream publicity with the results.
>> >> >>
>> >> >> [SM] Inside the EU the challenge is to get national regulators
>> >> and
>> >> >> the BEREC to start bothering about latency-under-load at all, "some
>> >> >> mainstream publicity" would probably help here as well.
>> >> >>
>> >> >> Regards
>> >> >> Sebastian
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> -Erik
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>
>> >> >> _______________________________________________
>> >> >> Make-wifi-fast mailing list
>> >> >> Make-wifi-fast@lists.bufferbloat.net
>> >> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >> >
>> >> >_______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 13:57 ` [Make-wifi-fast] [Rpm] " Rich Brown
2022-10-11 14:43 ` Dave Taht
@ 2022-10-11 17:05 ` Bob McMahon
2022-10-11 18:44 ` Rich Brown
2022-10-13 17:45 ` [Make-wifi-fast] [Bloat] [Rpm] [Cake] " Livingood, Jason
1 sibling, 2 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-11 17:05 UTC (permalink / raw)
To: Rich Brown; +Cc: David Lang, Cake List, Make-Wifi-fast, Rpm, bloat
[-- Attachment #1.1: Type: text/plain, Size: 2034 bytes --]
I agree that bufferbloat awareness is a good thing. The issue I have is the
approach - asking consumers to "detect it" and replace a device with a new
one that may or may not meet all the needs of the users.
Better is for network engineers to "design bloat out" from the beginning,
starting by properly sizing queues to service jitter and, for WiFi, by also
enabling aggregation techniques that minimize TXOP consumption.
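
One way to read "properly sizing queues to service jitter" as a
back-of-the-envelope calculation, with made-up example numbers rather than a
recommendation:

    # Sketch: a queue only needs to hold what the egress can transmit during
    # the worst expected service-time jitter, not megabytes of default buffer.
    def queue_limit_bytes(egress_bps, jitter_budget_s):
        # bytes the egress can drain during the jitter budget
        return egress_bps / 8 * jitter_budget_s

    # e.g. a 600 Mbit/s WiFi link whose service time can stall for ~4 ms
    print(queue_limit_bytes(600e6, 0.004))   # 300000.0 bytes, roughly 300 KB

Shrinking the limit much below the burst sizes actually seen on the link
trades throughput for latency, which is the shock-absorber point made
earlier in the thread.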
Bob
On Tue, Oct 11, 2022 at 6:57 AM Rich Brown <richb.hanover@gmail.com> wrote:
>
>
> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <
> rpm@lists.bufferbloat.net> wrote:
>
> > I think conflating bufferbloat with latency misses the subtle point in
> that
> > bufferbloat is a measurement in memory units more than a measurement in
> > time units.
>
>
> Yes, but... I am going to praise this video, even as I encourage all the
> techies to be sure that they have the units correct.
>
> I've been yammering about the evils of latency/excess queueing for 10
> years on my blog, in forums, etc. I have not achieved anywhere near the
> notoriety of this video (almost a third of a million views).
>
> I am delighted that there's an engaging, mass-market Youtube video that
> makes the case that bufferbloat even exists.
>
> Rich
>
[-- Attachment #1.2: Type: text/html, Size: 4001 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 16:58 ` Bob McMahon
2022-10-11 17:00 ` [Make-wifi-fast] [Rpm] " Dave Taht
@ 2022-10-11 17:26 ` Sebastian Moeller
2022-10-11 17:47 ` Bob McMahon
1 sibling, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-11 17:26 UTC (permalink / raw)
To: Bob McMahon
Cc: David Lang, Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
[-- Attachment #1: Type: text/plain, Size: 12291 bytes --]
Hi Bob,
Sweet, thanks! I will go and set this up in my home network, but that will take a while. Also, any proposal on how to convert the output into some graphs, by any chance?
Regards
Sebastian
On 11 October 2022 18:58:05 CEST, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>> Saturate a link in both directions simultaneously with multiple greedy
>flows while measuring load-dependent latency changes for small isochronous
>probe flows.
>
>This functionality is released in iperf 2.1.8 per the bounceback feature
>but, unfortunately, OpenWRT doesn't maintain iperf 2 as a package anymore
>and uses 2.0.13
>CLIENT SPECIFIC OPTIONS
>
>  --bounceback[=n]
>      run a TCP bounceback or rps test with optional number writes in a burst
>      per value of n. The default is ten writes every period and the default
>      period is one second (Note: set size with -l or --len which defaults to
>      100 bytes)
>  --bounceback-congest[=up|down|bidir][,n]
>      request a concurrent working load or TCP stream(s), defaults to full
>      duplex (or bidir) unless the up or down option is provided. The number
>      of TCP streams defaults to 1 and can be changed via the n value, e.g.
>      --bounceback-congest=down,4 will use four TCP streams from server to
>      the client as the working load. The IP ToS will be BE (0x0) for working
>      load traffic.
>  --bounceback-hold n
>      request the server to insert a delay of n milliseconds between its read
>      and write (default is no delay)
>  --bounceback-period[=n]
>      request the client schedule its send(s) every n seconds (default is one
>      second, use zero value for immediate or continuous back to back)
>  --bounceback-no-quickack
>      request the server not set the TCP_QUICKACK socket option (disabling
>      TCP ACK delays) during a bounceback test (see NOTES)
>  --bounceback-txdelay n
>      request the client to delay n seconds between the start of the working
>      load and the bounceback traffic (default is no delay)
>
>On Tue, Oct 11, 2022 at 12:15 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Hi Bob,
>>
>> On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com>
>> wrote:
>> >It's too big because it's oversized so it's in the size domain. It's
>> >basically Little's law's value for the number of items in a queue.
>> >
>> >*Number of items in the system = (the rate items enter and leave the
>> >system) x (the average amount of time items spend in the system)*
>> >
>> >
>> >Which gets driven to the standing queue size when the arrival rate
>> >exceeds the service rate - so the driving factor isn't the service and
>> >arrival rates, but *the queue size *when *any service rate is less than an
>> >arrival rate.*
>>
>> [SM] You could also argue it is the ratio of arrival to service rates,
>> with the queue size being a measure correlating with how long the system
>> will tolerate ratios larger than one...
>>
>>
>> >
>> >In other words, one can find and measure bloat regardless of the
>> >enter/leave rates (as long as the leave rate is too slow) and the value of
>> >memory units found will always be the same.
>> >
>> >Things like prioritizations to jump the line are somewhat of hacks at
>> >reducing the service time for a specialized class of packets but nobody
>> >really knows which packets should jump.
>>
>> [SM] Au contraire most everybody 'knows' it is their packets that should
>> jump ahead of the rest ;) For intermediate hop queues however that endpoint
>> perception is not really actionable due to lack of robust and reliable
>> importance identifiers on packets. In side a 'domain' dscps might work if
>> treated to strict admission control, but that typically will not help
>> end2end traffic over the internet. This is BTW why I think FQ is a great
>> concept, as it mostly results in the desirable outcome of not picking
>> winners and losers (like arbitrarily starving a flow), but I digress.
>>
>> >Also, nobody can define what
>> >working conditions are so that's another problem with this class of tests.
>>
>> [SM] While real working conditions will be different for each link and
>> probably vary over time, it seems achievable to come up with a set of
>> pessimistic assumptions how to model a challenging work condition against
>> which to test potential remedies, assuming that such remedies will also
>> work well under less challenging conditions, no?
>>
>>
>> >
>> >Better maybe just to shrink the queue and eliminate all unneeded queueing
>> >delays.
>>
>> [SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's?
>> Description of queues as shock absorbers so queue size will have a lower
>> acceptable limit assuming users want to achieve 'acceptable' throughput
>> even with existing bursty senders. (Not all applications are suited for
>> pacing so some level of burstiness seems unavoidable).
>>
>>
>> > Also, measure the performance per "user conditions" which is going
>> >to be different for almost every environment (and is correlated to time
>> and
>> >space.) So any engineering solution is fundamentally suboptimal.
>>
>> [SM] A matter of definition, if the requirement is to cover many user
>> conditions the optimality measure simply needs to be changed from per
>> individual condition to over many/all conditions, no?
>>
>> >Even
>> >pacing the source doesn't necessarily do the right thing because that's
>> >like waiting in the waitlist while at home vs the restaurant lobby.
>>
>> [SM] +1.
>>
>> > Few
>> >care about where messages wait (unless the pitch is AQM is the only
>> >solution that drives to a self-fulfilling prophecy - that's why the tests
>> >have to come up with artificial conditions that can't be simply defined.)
>>
>> Hrm, so the RRUL test, while not the end all of bufferbloat/working
>> conditions tests, is not that complicated:
>> Saturate a link in both directions simultaneously with multiple greedy
>> flows while measuring load-dependent latency changes for small isochronous
>> probe flows.
>>
>> Yes, the it would be nice to have additional higher rate probe flows also
>> bursty ones to emulate on-linev games, and 'pumped' greedy flows to emulate
>> DASH 'streaming', and a horde of small greedy flows that mostly end inside
>> the initial window and slow start. But at its core existing RRUL already
>> gives a useful estimate on how a link behaves under saturating loads all
>> the while being relatively simple.
>> The responsiveness under working condition seems similar in that it tries
>> to saturate a link with an increasing number of greedy flows, in a sense to
>> create a reasonable bad case that ideally rarely happens.
>>
>> Regards
>> Sebastian
>>
>>
>> >
>> >Bob
>> >
>> >On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>> >
>> >> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>> >>
>> >> > I think conflating bufferbloat with latency misses the subtle point in
>> >> that
>> >> > bufferbloat is a measurement in memory units more than a measurement
>> in
>> >> > time units. The first design flaw is a queue that is too big. This
>> >> youtube
>> >> > video analogy doesn't help one understand this important point.
>> >>
>> >> but the queue is only too big because of the time it takes to empty the
>> >> queue,
>> >> which puts us back into the time domain.
>> >>
>> >> David Lang
>> >>
>> >> > Another subtle point is that the video assumes AQM as the only
>> solution
>> >> and
>> >> > ignores others, i.e. pacing at the source(s) and/or faster service
>> >> rates. A
>> >> > restaurant that let's one call ahead to put their name on the waitlist
>> >> > doesn't change the wait time. Just because a transport layer slowed
>> down
>> >> > and hasn't congested a downstream queue doesn't mean the e2e latency
>> >> > performance will meet the gaming needs as an example. The delay is
>> still
>> >> > there it's just not manifesting itself in a shared queue that may or
>> may
>> >> > not negatively impact others using that shared queue.
>> >> >
>> >> > Bob
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> >> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >> >
>> >> >> Hi Erik,
>> >> >>
>> >> >>
>> >> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <
>> erik.taraldsen@telenor.no>
>> >> >> wrote:
>> >> >>>
>> >> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de> wrote:
>> >> >>>
>> >> >>> Nice!
>> >> >>>
>> >> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> >> >> cake@lists.bufferbloat.net> wrote:
>> >> >>>>
>> >> >>>> It took about 3 hours from the video was release before we got the
>> >> >> first request to have SQM on the CPE's we manage as a ISP. Finally
>> >> >> getting some customer response on the issue.
>> >> >>>
>> >> >>> [SM] Will you be able to bump these requests to higher-ups
>> and at
>> >> >> least change some perception of customer demand for tighter latency
>> >> >> performance?
>> >> >>>
>> >> >>> That would be the hope.
>> >> >>
>> >> >> [SM} Excellent, hope this plays out as we wish for.
>> >> >>
>> >> >>
>> >> >>> We actually have fq_codel implemented on the two latest
>> generations of
>> >> >> DSL routers. Use sync rate as input to set the rate. Works quite
>> well.
>> >> >>
>> >> >> [SM] Cool, if I might ask what fraction of the sync are you
>> >> >> setting the traffic shaper for and are you doing fine grained
>> overhead
>> >> >> accounting (or simply fold that into a grand "de-rating"-factor)?
>> >> >>
>> >> >>
>> >> >>> There is also a bit of traction around speedtest.net's inclusion of
>> >> >> latency under load internally.
>> >> >>
>> >> >> [SM] Yes, although IIUC they are reporting the interquartile
>> >> mean
>> >> >> for the two loaded latency estimates, which is pretty conservative
>> and
>> >> only
>> >> >> really "triggers" for massive consistently elevated latency; so I
>> expect
>> >> >> this to be great for detecting really bad cases, but I fear it is too
>> >> >> conservative and will make a number of problematic links look OK. But
>> >> hey,
>> >> >> even that is leaps and bounds better than the old only idle latency
>> >> report.
>> >> >>
>> >> >>
>> >> >>> My hope is that some publication in Norway will pick up on that
>> score
>> >> >> and do a test and get some mainstream publicity with the results.
>> >> >>
>> >> >> [SM] Inside the EU the challenge is to get national
>> regulators
>> >> and
>> >> >> the BEREC to start bothering about latency-under-load at all, "some
>> >> >> mainstream publicity" would probably help here as well.
>> >> >>
>> >> >> Regards
>> >> >> Sebastian
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> -Erik
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>
>> >> >> _______________________________________________
>> >> >> Make-wifi-fast mailing list
>> >> >> Make-wifi-fast@lists.bufferbloat.net
>> >> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >> >
>> >> >_______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
[-- Attachment #2: Type: text/html, Size: 15648 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 17:26 ` [Make-wifi-fast] " Sebastian Moeller
@ 2022-10-11 17:47 ` Bob McMahon
0 siblings, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-11 17:47 UTC (permalink / raw)
To: Sebastian Moeller
Cc: David Lang, Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
[-- Attachment #1.1: Type: text/plain, Size: 14603 bytes --]
Graphs are on the todo list but not a high priority. There are too many
different ways to graph, e.g. gnuplot, matplotlib. Also, if one wants full
features including one-way delays (OWD), GPS and pulse-per-second timing are
useful. A GPS hat on a raspberry pi
<https://www.satsignal.eu/ntp/Raspberry-Pi-quickstart.html> works for me.
Then one can use Little's law to get the memory size driving bloat.
There is some graph support in the flows python code
<https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/> but not for
bounceback (yet). It's the e2e latency OWD histogram plots
<https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py> that
most network engineers are using to date. There has been no real ask for BB
plots, as once the clocks are sync'd, engineers tend to gravitate to latency
analysis based upon OWD. What may also be of interest is some form of web
interface into probe devices that could support asking basic questions
like: when did the network underperform, and was WiFi to blame? What was
going on with other RF carriers at the time? One could also use
statistical process controls (SPC) to monitor the non-parametric
distributions to help answer questions about intermittent phenomena. As an
aside, I do notice on some devices that bloat is intermittent, so one-shot
tests can miss it without continuous monitoring.
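
As a rough sketch of that Little's law step (the rate and delay numbers below
are hypothetical examples, not measurements):

    # Sketch: standing queue ("bloat") in bytes via Little's law, L = lambda * W,
    # using the bottleneck rate and the latency added under load.
    def bloat_bytes(bottleneck_bps, owd_loaded_s, owd_idle_s):
        added_delay = owd_loaded_s - owd_idle_s     # queueing delay added under load
        return bottleneck_bps / 8 * added_delay     # bytes resident in the queue

    # e.g. a 50 Mbit/s link whose OWD grows from 10 ms idle to 250 ms under load
    print(bloat_bytes(50e6, 0.250, 0.010))          # 1500000.0 bytes, ~1.5 MB standing queue

This is the same L = (rate) x (time in system) relation quoted earlier in the
thread, just rearranged to recover the memory units from measured delay.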
Bob
On Tue, Oct 11, 2022 at 10:26 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Bob,
>
> Sweet, thanks! Will go and set this up in my home network, but that will
> take a while. Also any proposal how to convert the output into some graphs
> by any chance?
>
> Regards
> Sebastian
>
> On 11 October 2022 18:58:05 CEST, Bob McMahon <bob.mcmahon@broadcom.com>
> wrote:
>>
>> > Saturate a link in both directions simultaneously with multiple greedy
>> flows while measuring load-dependent latency changes for small isochronous
>> probe flows.
>>
>> This functionality is released in iperf 2.1.8 per the bounceback feature
>> but, unfortunately, OpenWRT doesn't maintain iperf 2 as a package anymore
>> and uses 2.0.13
>> CLIENT SPECIFIC OPTIONS
>>
>>   --bounceback[=n]
>>       run a TCP bounceback or rps test with optional number writes in a
>>       burst per value of n. The default is ten writes every period and the
>>       default period is one second (Note: set size with -l or --len which
>>       defaults to 100 bytes)
>>   --bounceback-congest[=up|down|bidir][,n]
>>       request a concurrent working load or TCP stream(s), defaults to full
>>       duplex (or bidir) unless the up or down option is provided. The
>>       number of TCP streams defaults to 1 and can be changed via the n
>>       value, e.g. --bounceback-congest=down,4 will use four TCP streams
>>       from server to the client as the working load. The IP ToS will be
>>       BE (0x0) for working load traffic.
>>   --bounceback-hold n
>>       request the server to insert a delay of n milliseconds between its
>>       read and write (default is no delay)
>>   --bounceback-period[=n]
>>       request the client schedule its send(s) every n seconds (default is
>>       one second, use zero value for immediate or continuous back to back)
>>   --bounceback-no-quickack
>>       request the server not set the TCP_QUICKACK socket option (disabling
>>       TCP ACK delays) during a bounceback test (see NOTES)
>>   --bounceback-txdelay n
>>       request the client to delay n seconds between the start of the
>>       working load and the bounceback traffic (default is no delay)
>>
>> On Tue, Oct 11, 2022 at 12:15 AM Sebastian Moeller <moeller0@gmx.de>
>> wrote:
>>
>>> Hi Bob,
>>>
>>> On 11 October 2022 02:05:40 CEST, Bob McMahon <bob.mcmahon@broadcom.com>
>>> wrote:
>>> >It's too big because it's oversized so it's in the size domain. It's
>>> >basically Little's law's value for the number of items in a queue.
>>> >
>>> >*Number of items in the system = (the rate items enter and leave the
>>> >system) x (the average amount of time items spend in the system)*
>>> >
>>> >
>>> >Which gets driven to the standing queue size when the arrival rate
>>> >exceeds the service rate - so the driving factor isn't the service and
>>> >arrival rates, but *the queue size *when *any service rate is less than
>>> an
>>> >arrival rate.*
>>>
>>> [SM] You could also argue it is the ratio of arrival to service rates,
>>> with the queue size being a measure correlating with how long the system
>>> will tolerate ratios larger than one...
>>>
>>>
>>> >
>>> >In other words, one can find and measure bloat regardless of the
>>> >enter/leave rates (as long as the leave rate is too slow) and the value
>>> of
>>> >memory units found will always be the same.
>>> >
>>> >Things like prioritizations to jump the line are somewhat of hacks at
>>> >reducing the service time for a specialized class of packets but nobody
>>> >really knows which packets should jump.
>>>
>>> [SM] Au contraire most everybody 'knows' it is their packets that should
>>> jump ahead of the rest ;) For intermediate hop queues however that endpoint
>>> perception is not really actionable due to lack of robust and reliable
>>> importance identifiers on packets. In side a 'domain' dscps might work if
>>> treated to strict admission control, but that typically will not help
>>> end2end traffic over the internet. This is BTW why I think FQ is a great
>>> concept, as it mostly results in the desirable outcome of not picking
>>> winners and losers (like arbitrarily starving a flow), but I digress.
>>>
>>> >Also, nobody can define what
>>> >working conditions are so that's another problem with this class of
>>> tests.
>>>
>>> [SM] While real working conditions will be different for each link and
>>> probably vary over time, it seems achievable to come up with a set of
>>> pessimistic assumptions how to model a challenging work condition against
>>> which to test potential remedies, assuming that such remedies will also
>>> work well under less challenging conditions, no?
>>>
>>>
>>> >
>>> >Better maybe just to shrink the queue and eliminate all unneeded
>>> queueing
>>> >delays.
>>>
>>> [SM] The 'unneeded' does a lot of work in that sentence ;). I like
>>> Van's? Description of queues as shock absorbers so queue size will have a
>>> lower acceptable limit assuming users want to achieve 'acceptable'
>>> throughput even with existing bursty senders. (Not all applications are
>>> suited for pacing so some level of burstiness seems unavoidable).
>>>
>>>
>>> > Also, measure the performance per "user conditions" which is going
>>> >to be different for almost every environment (and is correlated to time
>>> and
>>> >space.) So any engineering solution is fundamentally suboptimal.
>>>
>>> [SM] A matter of definition, if the requirement is to cover many user
>>> conditions the optimality measure simply needs to be changed from per
>>> individual condition to over many/all conditions, no?
>>>
>>> >Even
>>> >pacing the source doesn't necessarily do the right thing because that's
>>> >like waiting in the waitlist while at home vs the restaurant lobby.
>>>
>>> [SM] +1.
>>>
>>> > Few
>>> >care about where messages wait (unless the pitch is AQM is the only
>>> >solution that drives to a self-fulfilling prophecy - that's why the
>>> tests
>>> >have to come up with artificial conditions that can't be simply
>>> defined.)
>>>
>>> Hrm, so the RRUL test, while not the end all of bufferbloat/working
>>> conditions tests, is not that complicated:
>>> Saturate a link in both directions simultaneously with multiple greedy
>>> flows while measuring load-dependent latency changes for small isochronous
>>> probe flows.
>>>
>>> Yes, the it would be nice to have additional higher rate probe flows
>>> also bursty ones to emulate on-linev games, and 'pumped' greedy flows to
>>> emulate DASH 'streaming', and a horde of small greedy flows that mostly end
>>> inside the initial window and slow start. But at its core existing RRUL
>>> already gives a useful estimate on how a link behaves under saturating
>>> loads all the while being relatively simple.
>>> The responsiveness under working condition seems similar in that it
>>> tries to saturate a link with an increasing number of greedy flows, in a
>>> sense to create a reasonable bad case that ideally rarely happens.
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>> >
>>> >Bob
>>> >
>>> >On Mon, Oct 10, 2022 at 3:57 PM David Lang <david@lang.hm> wrote:
>>> >
>>> >> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>>> >>
>>> >> > I think conflating bufferbloat with latency misses the subtle point
>>> in
>>> >> that
>>> >> > bufferbloat is a measurement in memory units more than a
>>> measurement in
>>> >> > time units. The first design flaw is a queue that is too big. This
>>> >> youtube
>>> >> > video analogy doesn't help one understand this important point.
>>> >>
>>> >> but the queue is only too big because of the time it takes to empty
>>> the
>>> >> queue,
>>> >> which puts us back into the time domain.
>>> >>
>>> >> David Lang
>>> >>
>>> >> > Another subtle point is that the video assumes AQM as the only
>>> solution
>>> >> and
>>> >> > ignores others, i.e. pacing at the source(s) and/or faster service
>>> >> rates. A
>>> >> > restaurant that let's one call ahead to put their name on the
>>> waitlist
>>> >> > doesn't change the wait time. Just because a transport layer slowed
>>> down
>>> >> > and hasn't congested a downstream queue doesn't mean the e2e latency
>>> >> > performance will meet the gaming needs as an example. The delay is
>>> still
>>> >> > there it's just not manifesting itself in a shared queue that may
>>> or may
>>> >> > not negatively impact others using that shared queue.
>>> >> >
>>> >> > Bob
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via
>>> Make-wifi-fast <
>>> >> > make-wifi-fast@lists.bufferbloat.net> wrote:
>>> >> >
>>> >> >> Hi Erik,
>>> >> >>
>>> >> >>
>>> >> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik <
>>> erik.taraldsen@telenor.no>
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0@gmx.de>
>>> wrote:
>>> >> >>>
>>> >> >>> Nice!
>>> >> >>>
>>> >> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>>> >> >> cake@lists.bufferbloat.net> wrote:
>>> >> >>>>
>>> >> >>>> It took about 3 hours from the video was release before we got
>>> the
>>> >> >> first request to have SQM on the CPE's we manage as a ISP.
>>> Finally
>>> >> >> getting some customer response on the issue.
>>> >> >>>
>>> >> >>> [SM] Will you be able to bump these requests to higher-ups
>>> and at
>>> >> >> least change some perception of customer demand for tighter latency
>>> >> >> performance?
>>> >> >>>
>>> >> >>> That would be the hope.
>>> >> >>
>>> >> >> [SM} Excellent, hope this plays out as we wish for.
>>> >> >>
>>> >> >>
>>> >> >>> We actually have fq_codel implemented on the two latest
>>> generations of
>>> >> >> DSL routers. Use sync rate as input to set the rate. Works quite
>>> well.
>>> >> >>
>>> >> >> [SM] Cool, if I might ask what fraction of the sync are you
>>> >> >> setting the traffic shaper for and are you doing fine grained
>>> overhead
>>> >> >> accounting (or simply fold that into a grand "de-rating"-factor)?
>>> >> >>
>>> >> >>
>>> >> >>> There is also a bit of traction around speedtest.net's inclusion
>>> of
>>> >> >> latency under load internally.
>>> >> >>
>>> >> >> [SM] Yes, although IIUC they are reporting the
>>> interquartile
>>> >> mean
>>> >> >> for the two loaded latency estimates, which is pretty conservative
>>> and
>>> >> only
>>> >> >> really "triggers" for massive consistently elevated latency; so I
>>> expect
>>> >> >> this to be great for detecting really bad cases, but I fear it is
>>> too
>>> >> >> conservative and will make a number of problematic links look OK.
>>> But
>>> >> hey,
>>> >> >> even that is leaps and bounds better than the old only idle latency
>>> >> report.
>>> >> >>
>>> >> >>
>>> >> >>> My hope is that some publication in Norway will pick up on that
>>> score
>>> >> >> and do a test and get some mainstream publicity with the results.
>>> >> >>
>>> >> >> [SM] Inside the EU the challenge is to get national
>>> regulators
>>> >> and
>>> >> >> the BEREC to start bothering about latency-under-load at all, "some
>>> >> >> mainstream publicity" would probably help here as well.
>>> >> >>
>>> >> >> Regards
>>> >> >> Sebastian
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>> -Erik
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>
>>> >> >> _______________________________________________
>>> >> >> Make-wifi-fast mailing list
>>> >> >> Make-wifi-fast@lists.bufferbloat.net
>>> >> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>>> >> >
>>> >> >_______________________________________________
>>> >> Bloat mailing list
>>> >> Bloat@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/bloat
>>> >>
>>> >
>>>
>>> --
>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
[-- Attachment #1.2: Type: text/html, Size: 18752 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 17:05 ` Bob McMahon
@ 2022-10-11 18:44 ` Rich Brown
2022-10-11 22:24 ` Dave Taht
2022-10-13 17:45 ` [Make-wifi-fast] [Bloat] [Rpm] [Cake] " Livingood, Jason
1 sibling, 1 reply; 70+ messages in thread
From: Rich Brown @ 2022-10-11 18:44 UTC (permalink / raw)
To: Bob McMahon; +Cc: David Lang, Cake List, Make-Wifi-fast, Rpm, bloat
[-- Attachment #1: Type: text/plain, Size: 2889 bytes --]
> On Oct 11, 2022, at 1:05 PM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>
> I agree that bufferbloat awareness is a good thing. The issue I have is the approach - ask consumers to "detect it" and replace a device with a new one, that may or may not, meet all the needs of the users.
>
> Better is that network engineers "design bloat out" from the beginning starting by properly sizing queues to service jitter, and for WiFi, to also enable aggregation techniques that minimize TXOP consumption.
The Yes, but... part of my answer emphasizes awareness. How are the network engineers going to know it's worth the (minor) effort of creating properly-sized queues?
There are two fronts to attack:
- Manufacturers - This video is a start on getting their customers to use these responsiveness test tools and call the support lines.
- Hardware (especially router) reviewers - It kills me that there is radio silence whenever I ask a reviewer if they have ever measured latency/responsiveness. (BTW: Has anyone heard from Ben Moskowitz from Consumer Reports? We had a very encouraging phone call about a year ago, and they were going to get back to us...)
Rich
> Bob
>
> On Tue, Oct 11, 2022 at 6:57 AM Rich Brown <richb.hanover@gmail.com <mailto:richb.hanover@gmail.com>> wrote:
>
>
>> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <rpm@lists.bufferbloat.net <mailto:rpm@lists.bufferbloat.net>> wrote:
>>
>> > I think conflating bufferbloat with latency misses the subtle point in that
>> > bufferbloat is a measurement in memory units more than a measurement in
>> > time units.
>
> Yes, but... I am going to praise this video, even as I encourage all the techies to be sure that they have the units correct.
>
> I've been yammering about the evils of latency/excess queueing for 10 years on my blog, in forums, etc. I have not achieved anywhere near the notoriety of this video (almost a third of a million views).
>
> I am delighted that there's an engaging, mass-market Youtube video that makes the case that bufferbloat even exists.
>
> Rich
>
[-- Attachment #2: Type: text/html, Size: 6005 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 18:44 ` Rich Brown
@ 2022-10-11 22:24 ` Dave Taht
2022-10-12 17:39 ` Bob McMahon
0 siblings, 1 reply; 70+ messages in thread
From: Dave Taht @ 2022-10-11 22:24 UTC (permalink / raw)
To: Rich Brown; +Cc: Bob McMahon, David Lang, Cake List, bloat, Rpm, Make-Wifi-fast
Well, we've all been yammering for many years, and the message is
getting through. Yes, at this point, changing the message to be more
directed at engineers than users would help, and to this day, I don't
know how to get to anyone in the
C suite, except through the complaints of their kids. Jim got on this
problem because of his kids. The guy that did dslreports, also. "my"
kids are
At the risk of burying the lede, our very own dave reed just did a
podcast on different stuff:
https://twit.tv/shows/floss-weekly/episodes/701?autostart=false
Sometimes my own (shared with most of you) motivations tend to leak
through. I really encourage the independent growth of user created and
owned software, running on their own routers, and I'm very pleased to
see the level of activity on the openwrt forums showing how healthy
that part of our culture is. It would be a very different world if
we'd decided to settle for whatever an ISP was willing to give us, and
for things as they were, and I'm probably difficult to employ because
of my
fervent beliefs in anti-patenting, free and open source, and the right
to repair...
... but I wouldn't have my world any other way. I might die broke, but
I'll die free.
On Tue, Oct 11, 2022 at 11:44 AM Rich Brown via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
>
>
>
> On Oct 11, 2022, at 1:05 PM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>
> I agree that bufferbloat awareness is a good thing. The issue I have is the approach - ask consumers to "detect it" and replace a device with a new one, that may or may not, meet all the needs of the users.
>
>
> Better is that network engineers "design bloat out" from the beginning starting by properly sizing queues to service jitter, and for WiFi, to also enable aggregation techniques that minimize TXOP consumption.
>
>
> The Yes, but... part of my answer emphasizes awareness. How are the network engineers going to know it's worth the (minor) effort of creating properly-sized queues?
>
> There are two fronts to attack:
>
> - Manufacturers - This video is a start on getting their customers to use these responsiveness test tools and call the support lines.
>
> - Hardware (especially router) reviewers - It kills me that there is radio silence whenever I ask a reviewer if they have ever measured latency/responsiveness. (BTW: Has anyone heard from Ben Moskowitz from Consumer Reports? We had a very encouraging phone call about a year ago, and they were going to get back to us...)
>
> Rich
>
>
> Bob
>
> On Tue, Oct 11, 2022 at 6:57 AM Rich Brown <richb.hanover@gmail.com> wrote:
>>
>>
>>
>> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <rpm@lists.bufferbloat.net> wrote:
>>
>> > I think conflating bufferbloat with latency misses the subtle point in that
>> > bufferbloat is a measurement in memory units more than a measurement in
>> > time units.
>>
>>
>> Yes, but... I am going to praise this video, even as I encourage all the techies to be sure that they have the units correct.
>>
>> I've been yammering about the evils of latency/excess queueing for 10 years on my blog, in forums, etc. I have not achieved anywhere near the notoriety of this video (almost a third of a million views).
>>
>> I am delighted that there's an engaging, mass-market Youtube video that makes the case that bufferbloat even exists.
>>
>> Rich
>
>
>
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 22:24 ` Dave Taht
@ 2022-10-12 17:39 ` Bob McMahon
2022-10-12 21:44 ` [Make-wifi-fast] [Cake] [Rpm] [Bloat] " David P. Reed
0 siblings, 1 reply; 70+ messages in thread
From: Bob McMahon @ 2022-10-12 17:39 UTC (permalink / raw)
To: Dave Taht; +Cc: Rich Brown, David Lang, Cake List, bloat, Rpm, Make-Wifi-fast
[-- Attachment #1.1: Type: text/plain, Size: 7902 bytes --]
With full respect to open source projects like OpenWRT, I think that from an
energy, performance and going-forward perspective the AP forwarding plane
will be realized by "transistor engineers." This makes awareness of
bloat among network engineers needed even more, because those design cycles
take a while. A tape out <https://anysilicon.com/tapeout/> is very different
from a sw compile. The driving force for ASIC & CMOS radio features
will typically come from IAPs or enterprise customers, mostly driven by revenue
adds to their businesses. Customer complaints come years down the road from
such design decisions, so bloat mitigation or elimination needs to be
designed in from the get-go.
Bob
PS. As a side note, data center switch architecture addressed latency &
bloat with things like AFD & DPP
<https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-738488.html>
as described for a Cisco Nexus 9000. Notice that their formula for queue size
is a simple math calculation. A challenge with WiFi is that the phy
rates are dynamic and span a large range, so such tables aren't so
straightforward and C cannot be so simply defined. In many ways the data
center architects had an easier problem than we do in the shared-RF,
battery-powered, no-waveguides, etc. world.
The needed buffer size is the bandwidth delay product value divided by the
square root of the number of flows:
    Buffer = (C × RTT) / √N
Here, C is the link bandwidth, RTT is round-trip time, and N is the number
of long-lived flows (see reference 6 at the end of this document).
Using an average RTT of 100 microseconds in a data center network, Figure
11 shows the buffer size for different link speeds and various numbers of
flows. Note that the buffer size decreases rapidly as the number of flows
increases. For instance, on a 100-Gbps link with 2500 flows, only a 25-KB
buffer is needed.
Figure 11. Buffer Sizing for Different Link Speeds and Numbers of Flows
[image: image.png]
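To make the arithmetic concrete, here is a minimal Python sketch of that rule
(buffer = C × RTT / √N). The function name and the particular link speeds and
flow counts are just illustrative; the 100 Gbps / 2500 flows / 100 µs case
reproduces the 25 KB figure quoted above.

import math

def buffer_bytes(link_bps, rtt_s, n_flows):
    # Stanford-style rule of thumb: BDP divided by sqrt(number of long-lived flows)
    bdp_bits = link_bps * rtt_s
    return bdp_bits / math.sqrt(n_flows) / 8  # bits -> bytes

rtt = 100e-6  # 100 microseconds, as in the data-center example above
for gbps in (10, 40, 100):
    for flows in (100, 1000, 2500):
        kb = buffer_bytes(gbps * 1e9, rtt, flows) / 1e3
        print(f"{gbps:>3} Gbps, {flows:>4} flows -> {kb:7.1f} KB")
# The 100 Gbps / 2500 flows row prints 25.0 KB, matching the figure above.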
On Tue, Oct 11, 2022 at 3:24 PM Dave Taht <dave.taht@gmail.com> wrote:
> Well, we've all been yammering for many years, and the message is
> getting through. Yes, at this point, changing the message to be more
> directed at engineers than users would help, and to this day, I don't
> know how to get to anyone in the
> C suite, except through the complaints of their kids. Jim got on this
> problem because of his kids. The guy that did dslreports, also. "my"
> kids are
>
> At the risk of burying the lede, our very own dave reed just did a
> podcast on different stuff:
> https://twit.tv/shows/floss-weekly/episodes/701?autostart=false
>
> Sometimes my own (shared with most of you) motivations tend to leak
> through. I really encourage the independent growth of user created and
> owned software, running on their own routers, and I'm very pleased to
> see the level of activity on the openwrt forums showing how healthy
> that part of our culture is. It would be a very different world if
> we'd decided to settle for whatever an ISP was willing to give us, and
> for things as they were, and I'm probably difficult to employ because
> of my
> fervent beliefs in anti-patenting, free and open source, and the right
> to repair...
>
> ... but I wouldn't have my world any other way. I might die broke, but
> I'll die free.
>
> On Tue, Oct 11, 2022 at 11:44 AM Rich Brown via Rpm
> <rpm@lists.bufferbloat.net> wrote:
> >
> >
> >
> >
> > On Oct 11, 2022, at 1:05 PM, Bob McMahon <bob.mcmahon@broadcom.com>
> wrote:
> >
> > I agree that bufferbloat awareness is a good thing. The issue I have is
> the approach - ask consumers to "detect it" and replace a device with a new
> one, that may or may not, meet all the needs of the users.
> >
> >
> > Better is that network engineers "design bloat out" from the beginning
> starting by properly sizing queues to service jitter, and for WiFi, to also
> enable aggregation techniques that minimize TXOP consumption.
> >
> >
> > The Yes, but... part of my answer emphasizes awareness. How are the
> network engineers going to know it's worth the (minor) effort of creating
> properly-sized queues?
> >
> > There are two fronts to attack:
> >
> > - Manufacturers - This video is a start on getting their customers to
> use these responsiveness test tools and call the support lines.
> >
> > - Hardware (especially router) reviewers - It kills me that there is
> radio silence whenever I ask a reviewer if they have ever measured
> latency/responsiveness. (BTW: Has anyone heard from Ben Moskowitz from
> Consumer Reports? We had a very encouraging phone call about a year ago,
> and they were going to get back to us...)
> >
> > Rich
> >
> >
> > Bob
> >
> > On Tue, Oct 11, 2022 at 6:57 AM Rich Brown <richb.hanover@gmail.com>
> wrote:
> >>
> >>
> >>
> >> On Oct 10, 2022, at 8:05 PM, Bob McMahon via Rpm <
> rpm@lists.bufferbloat.net> wrote:
> >>
> >> > I think conflating bufferbloat with latency misses the subtle point
> in that
> >> > bufferbloat is a measurement in memory units more than a measurement
> in
> >> > time units.
> >>
> >>
> >> Yes, but... I am going to praise this video, even as I encourage all
> the techies to be sure that they have the units correct.
> >>
> >> I've been yammering about the evils of latency/excess queueing for 10
> years on my blog, in forums, etc. I have not achieved anywhere near the
> notoriety of this video (almost a third of a million views).
> >>
> >> I am delighted that there's an engaging, mass-market Youtube video that
> makes the case that bufferbloat even exists.
> >>
> >> Rich
> >
> >
> >
> >
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
>
[-- Attachment #1.2: Type: text/html, Size: 12747 bytes --]
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 68900 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-12 17:39 ` Bob McMahon
@ 2022-10-12 21:44 ` David P. Reed
0 siblings, 0 replies; 70+ messages in thread
From: David P. Reed @ 2022-10-12 21:44 UTC (permalink / raw)
To: Bob McMahon; +Cc: Dave Taht, Rich Brown, Make-Wifi-fast, Cake List, Rpm, bloat
[-- Attachment #1: Type: text/html, Size: 18887 bytes --]
[-- Attachment #2: img-1.png --]
[-- Type: image/png, Size: 68900 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Rpm] [Cake] The most wonderful video ever about bufferbloat
2022-10-11 17:05 ` Bob McMahon
2022-10-11 18:44 ` Rich Brown
@ 2022-10-13 17:45 ` Livingood, Jason
2022-10-13 17:49 ` [Make-wifi-fast] [Rpm] [Bloat] " Dave Taht
1 sibling, 1 reply; 70+ messages in thread
From: Livingood, Jason @ 2022-10-13 17:45 UTC (permalink / raw)
To: Bob McMahon, Rich Brown; +Cc: Cake List, bloat, Rpm, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 358 bytes --]
> Better is that network engineers "design bloat out" from the beginning starting by properly sizing queues to service jitter, and for WiFi, to also enable aggregation techniques that minimize TXOP consumption.
Maybe – like ‘security by design’ and ‘privacy by design’ – we need ‘low latency by design’ for network engineers! ;-)
JL
[-- Attachment #2: Type: text/html, Size: 2060 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] [Cake] The most wonderful video ever about bufferbloat
2022-10-13 17:45 ` [Make-wifi-fast] [Bloat] [Rpm] [Cake] " Livingood, Jason
@ 2022-10-13 17:49 ` Dave Taht
0 siblings, 0 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-13 17:49 UTC (permalink / raw)
To: Livingood, Jason
Cc: Bob McMahon, Rich Brown, Cake List, Rpm, Make-Wifi-fast, bloat
There is a really good-looking conference on Oct 19th, for people that
care about p99 stuff. It's free:
https://www.p99conf.io/
It looks like great fun, so I'm attending.
... while I figure I will have FOUND MY PEOPLE there, getting more
people interested in stuff along the edge
of reliability is always on my mind, so feel free to give all
your employees and managers and people in the C-suite the day off so
they can hear about the benefits of OCD for all.
On Thu, Oct 13, 2022 at 10:45 AM Livingood, Jason via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
> > Better is that network engineers "design bloat out" from the beginning starting by properly sizing queues to service jitter, and for WiFi, to also enable aggregation techniques that minimize TXOP consumption.
>
>
>
> Maybe – like ‘security by design’ and ‘privacy by design’ – we need ‘low latency by design’ for network engineers! ;-)
>
>
>
> JL
>
>
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] The most wonderful video ever about bufferbloat
2022-10-09 13:14 [Make-wifi-fast] The most wonderful video ever about bufferbloat Dave Taht
2022-10-09 13:23 ` [Make-wifi-fast] [Bloat] " Nathan Owens
2022-10-10 5:52 ` Taraldsen Erik
@ 2022-10-18 0:02 ` Stuart Cheshire
2022-10-18 2:44 ` Dave Taht
2022-10-18 18:07 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
2 siblings, 2 replies; 70+ messages in thread
From: Stuart Cheshire @ 2022-10-18 0:02 UTC (permalink / raw)
To: Dave Täht; +Cc: Rpm, bloat, Make-Wifi-fast, Cake List
On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> This was so massively well done, I cried. Does anyone know how to get in touch with the ifxit folk?
>
> https://www.youtube.com/watch?v=UICh3ScfNWI
I’m surprised that you liked this video. It seems to me that it repeats all the standard misinformation. The analogy they use is the standard terrible example of waiting in a long line at a grocery store, and the “solution” is letting certain traffic “jump the line, angering everyone behind them”.
Some quotes from the video:
> it would be so much more efficient for them to let you skip the line and just check out, especially since you’re in a hurry, but they’re rudely refusing
> to go back to our grocery store analogy this would be like if a worker saw you standing at the back ... and either let you skip to the front of the line or opens up an express lane just for you
The video describes the problem of bufferbloat, and then describes the same failed solution that hasn’t worked for the last three decades. Describing the obvious simple-minded (wrong) solution that any normal person would think of based on their personal human experience waiting in grocery stores and airports, is not describing the solution to bufferbloat. The solution to bufferbloat is not that if you are privileged then you get to “skip to the front of the line”. The solution to bufferbloat is that there is no line!
With grocery stores and airports people’s arrivals are independent and not controlled. There is no way for a grocery store or airport to generate backpressure to tell people to wait at home when a queue begins to form. The key to solving bufferbloat is generating timely backpressure to prevent the queue forming in the first place, not accepting a huge queue and then deciding who deserves special treatment to get better service than all the other peons who still have to wait in a long queue, just like before.
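To make that "timely backpressure" idea concrete, here is a toy, heavily
simplified sketch in the spirit of CoDel (it is not the real algorithm; the
TinyAQM class is invented, and the 5 ms / 100 ms constants are only
illustrative). It signals the sender, by marking or dropping, as soon as
packets have been sitting in the queue too long for too long, instead of
letting a line build and then reordering it.

import time
from collections import deque

TARGET_S = 0.005    # tolerable standing delay (CoDel uses 5 ms by default)
INTERVAL_S = 0.100  # how long the delay must persist before we signal

class TinyAQM:
    """Toy sojourn-time AQM: tell senders to slow down early, instead of
    letting a long line build and then reordering it."""
    def __init__(self):
        self.q = deque()
        self.above_since = None  # when the sojourn time first exceeded the target

    def enqueue(self, pkt):
        self.q.append((time.monotonic(), pkt))

    def dequeue(self):
        if not self.q:
            self.above_since = None
            return None
        ts, pkt = self.q.popleft()
        sojourn = time.monotonic() - ts
        if sojourn < TARGET_S:
            self.above_since = None
            return pkt, False               # deliver as-is, no congestion signal
        if self.above_since is None:
            self.above_since = time.monotonic()
        signal = time.monotonic() - self.above_since > INTERVAL_S
        return pkt, signal                  # True => ECN-mark or drop: "slow down"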
Stuart Cheshire
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] The most wonderful video ever about bufferbloat
2022-10-18 0:02 ` [Make-wifi-fast] " Stuart Cheshire
@ 2022-10-18 2:44 ` Dave Taht
2022-10-18 2:51 ` [Make-wifi-fast] [Bloat] " Sina Khanifar
` (2 more replies)
2022-10-18 18:07 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
1 sibling, 3 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-18 2:44 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Rpm, bloat, Make-Wifi-fast, Cake List
On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
>
> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>
> > This was so massively well done, I cried. Does anyone know how to get in touch with the ifxit folk?
> >
> > https://www.youtube.com/watch?v=UICh3ScfNWI
>
> I’m surprised that you liked this video. It seems to me that it repeats all the standard misinformation. The analogy they use is the standard terrible example of waiting in a long line at a grocery store, and the “solution” is letting certain traffic “jump the line, angering everyone behind them”.
Accuracy be damned. The analogy to common experience resonates more.
>
> Some quotes from the video:
>
> > it would be so much more efficient for them to let you skip the line and just check out, especially since you’re in a hurry, but they’re rudely refusing
I think the person with the cheetos pulling out a gun and shooting
everyone in front of him (AQM) would not go down well.
> > to go back to our grocery store analogy this would be like if a worker saw you standing at the back ... and either let you skip to the front of the line or opens up an express lane just for you
Actually that analogy is fairly close to fair queuing. The multiple
checker analogy is one of the most common analogies in queueing theory
itself.
>
> The video describes the problem of bufferbloat, and then describes the same failed solution that hasn’t worked for the last three decades.
Hmm? It establishes the scenario, explains the problem *quickly*,
disses gamer routers for not getting it right, *points to an
accurate test*, and then to the ideas and products that *actually
work* with "smart queueing", with a screenshot of the most common
(eero's optimize for gaming and videoconferencing), and fq_codel and
cake *by name*, and points folk at the best known solution available,
openwrt.
Bing, baddabang, boom. Also the comments were revealing. A goodly
percentage already knew the problem, more than a few were inspired to
take the test,
and there was a whole bunch of "Aha!" success stories and 360k views,
which is more people than we've ever been able to reach at, for
example, a nanog conference.
I loved that folk taking the test actually had quite a few A results,
without having had to do anything. At least some ISPs are getting it
more right now!
At this point I think gamers in particular know what "brands" we've
tried to establish - "Smart queues", "SQM", "OpenWrt", fq_codel and
now "cake" are "good" things to have, and are stimulating demand by
asking for them. It's certainly working out better and better for
evenroute, firewalla, ubnt and others, and I saw an uptick in
questions about this on various user forums.
I even like that there's a backlash now of people saying "fixing
bufferbloat doesn't solve everything" -
> Describing the obvious simple-minded (wrong) solution that any normal person would think of based on their personal human experience waiting in grocery stores and airports, is not describing the solution to bufferbloat. The solution to bufferbloat is not that if you are privileged then you get to “skip to the front of the line”. The solution to bufferbloat is that there is no line!
I like the idea of a guru floating above a grocery cart with a better
string of explanations, explaining
- "no, grasshopper, the solution to bufferbloat is no line... at all".
>
> With grocery stores and airports people’s arrivals are independent and not controlled. There is no way for a grocery store or airport to generate backpressure to tell people to wait at home when a queue begins to form. The key to solving bufferbloat is generating timely backpressure to prevent the queue forming in the first place, not accepting a huge queue and then deciding who deserves special treatment to get better service than all the other peons who still have to wait in a long queue, just like before.
I am not huge on the word "backpressure" here. "Needs to signal the
other side to slow down" is more accurate. So maybe say timely
signalling rather than timely backpressure?
Other feedback I got was that the video was too smarmy (I agree), and that
different audiences than gamers need different forms of outreach...
but to me, winning the gamers has always been one of the most
important things, as they make a lot of buying decisions, and they
benefit the most from
fq and packet prioritization as we do today in gamer routers and in
cake + qosify.
maybe that gets in the way of more serious markets. Certainly I would
like another video explaining what goes wrong with videoconferencing.
>
> Stuart Cheshire
>
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 2:44 ` Dave Taht
@ 2022-10-18 2:51 ` Sina Khanifar
2022-10-18 3:15 ` [Make-wifi-fast] A quick report from the WISPA conference Dave Taht
2022-10-18 2:58 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat David Lang
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
2 siblings, 1 reply; 70+ messages in thread
From: Sina Khanifar @ 2022-10-18 2:51 UTC (permalink / raw)
To: Dave Taht; +Cc: Cake List, Make-Wifi-fast, Rpm, Stuart Cheshire, bloat
[-- Attachment #1: Type: text/plain, Size: 5744 bytes --]
Positive or negative, I can claim a bit of credit for this video :). We've
been working with LTT on a few projects and we pitched them on doing
something around bufferbloat. We've seen more traffic to our Waveform test
than ever before, which has been fun!
On Mon, Oct 17, 2022 at 7:45 PM Dave Taht via Bloat <
bloat@lists.bufferbloat.net> wrote:
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com>
> wrote:
> >
> > On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >
> > > This was so massively well done, I cried. Does anyone know how to get
> in touch with the ifxit folk?
> > >
> > > https://www.youtube.com/watch?v=UICh3ScfNWI
> >
> > I’m surprised that you liked this video. It seems to me that it repeats
> all the standard misinformation. The analogy they use is the standard
> terrible example of waiting in a long line at a grocery store, and the
> “solution” is letting certain traffic “jump the line, angering everyone
> behind them”.
>
> Accuracy be damned. The analogy to common experience resonates more.
>
> >
> > Some quotes from the video:
> >
> > > it would be so much more efficient for them to let you skip the line
> and just check out, especially since you’re in a hurry, but they’re rudely
> refusing
>
> I think the person with the cheetos pulling out a gun and shooting
> everyone in front of him (AQM) would not go down well.
>
> > > to go back to our grocery store analogy this would be like if a worker
> saw you standing at the back ... and either let you skip to the front of
> the line or opens up an express lane just for you
>
> Actually that analogy is fairly close to fair queuing. The multiple
> checker analogy is one of the most common analogies in queue theory
> itself.
>
> >
> > The video describes the problem of bufferbloat, and then describes the
> same failed solution that hasn’t worked for the last three decades.
>
> Hmm? It establishes the scenario, explains the problem *quickly*,
> disses gamer routers for not getting it right.. *points to an
> accurate test*, and then to the ideas and products that *actually
> work* with "smart queueing", with a screenshot of the most common
> (eero's optimize for gaming and videoconferencing), and fq_codel and
> cake *by name*, and points folk at the best known solution available,
> openwrt.
>
> Bing, baddabang, boom. Also the comments were revealing. A goodly
> percentage already knew the problem, more than a few were inspired to
> take the test,
> there was a whole bunch of "Aha!" success stories and 360k views,
> which is more people than we've ever been able to reach in for
> example, a nanog conference.
>
> I loved that folk taking the test actually had quite a few A results,
> without having had to do anything. At least some ISPs are getting it
> more right now!
>
> At this point I think gamers in particular know what "brands" we've
> tried to establish - "Smart queues", "SQM", "OpenWrt", fq_codel and
> now "cake" are "good" things to have, and are stimulating demand by
> asking for them, It's certainly working out better and better for
> evenroute, firewalla, ubnt and others, and I saw an uptick in
> questions about this on various user forums.
>
> I even like that there's a backlash now of people saying "fixing
> bufferbloat doesn't solve everything" -
>
> > Describing the obvious simple-minded (wrong) solution that any normal
> person would think of based on their personal human experience waiting in
> grocery stores and airports, is not describing the solution to bufferbloat.
> The solution to bufferbloat is not that if you are privileged then you get
> to “skip to the front of the line”. The solution to bufferbloat is that
> there is no line!
>
> I like the idea of a guru floating above a grocery cart with a better
> string of explanations, explaining
>
> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>
> >
> > With grocery stores and airports people’s arrivals are independent and
> not controlled. There is no way for a grocery store or airport to generate
> backpressure to tell people to wait at home when a queue begins to form.
> The key to solving bufferbloat is generating timely backpressure to prevent
> the queue forming in the first place, not accepting a huge queue and then
> deciding who deserves special treatment to get better service than all the
> other peons who still have to wait in a long queue, just like before.
>
> I am not huge on the word "backpressure" here. Needs to signal the
> other side to slow down, is more accurate. So might say timely
> signalling rather than timely backpressure?
>
> Other feedback I got was that the video was too smarmy (I agree),
> different audiences than gamers need different forms of outreach...
>
> but to me, winning the gamers has always been one of the most
> important things, as they make a lot of buying decisions, and they
> benefit the most for
> fq and packet prioritization as we do today in gamer routers and in
> cake + qosify.
>
> maybe that gets in the way of more serious markets. Certainly I would
> like another video explaining what goes wrong with videoconferencing.
>
>
>
>
>
>
> >
> > Stuart Cheshire
> >
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 6926 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 2:44 ` Dave Taht
2022-10-18 2:51 ` [Make-wifi-fast] [Bloat] " Sina Khanifar
@ 2022-10-18 2:58 ` David Lang
2022-10-18 17:03 ` Bob McMahon
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
2 siblings, 1 reply; 70+ messages in thread
From: David Lang @ 2022-10-18 2:58 UTC (permalink / raw)
To: Dave Taht; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1: Type: text/plain, Size: 993 bytes --]
On Mon, 17 Oct 2022, Dave Taht via Bloat wrote:
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
>>
>> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>>
>> > This was so massively well done, I cried. Does anyone know how to get in touch with the ifxit folk?
>> >
>> > https://www.youtube.com/watch?v=UICh3ScfNWI
>>
>> I’m surprised that you liked this video. It seems to me that it repeats all the standard misinformation. The analogy they use is the standard terrible example of waiting in a long line at a grocery store, and the “solution” is letting certain traffic “jump the line, angering everyone behind them”.
>
> Accuracy be damned. The analogy to common experience resonates more.
actually, fair queueing is more like the '15 items or less' lanes, speeding
the people doing simple things through rather than having them wait behind the
mother of 7 doing her monthly shopping.
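To make the express-lane analogy concrete, here is a toy fair-queueing sketch:
round robin over per-flow queues. ToyFQ and the flow names are invented for
illustration; real fq_codel adds hashing, byte quantums and CoDel on top.

from collections import deque, defaultdict

class ToyFQ:
    """Toy fair queueing: one packet per backlogged flow per round, so a
    sparse flow (the '15 items or less' shopper) never waits behind a
    bulk transfer's whole cart."""
    def __init__(self):
        self.queues = defaultdict(deque)
        self.rr = deque()                 # round-robin list of backlogged flows

    def enqueue(self, flow_id, pkt):
        if not self.queues[flow_id]:
            self.rr.append(flow_id)
        self.queues[flow_id].append(pkt)

    def dequeue(self):
        if not self.rr:
            return None
        flow = self.rr.popleft()
        pkt = self.queues[flow].popleft()
        if self.queues[flow]:             # still backlogged: back of the rotation
            self.rr.append(flow)
        return flow, pkt

fq = ToyFQ()
for i in range(5):
    fq.enqueue("bulk", f"bulk-{i}")       # the mother-of-7's full cart
fq.enqueue("game", "ping")                # the single-item shopper
print([fq.dequeue() for _ in range(3)])   # "ping" goes out in the 2nd slot, not the 6th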
David Lang
^ permalink raw reply [flat|nested] 70+ messages in thread
* [Make-wifi-fast] A quick report from the WISPA conference
2022-10-18 2:51 ` [Make-wifi-fast] [Bloat] " Sina Khanifar
@ 2022-10-18 3:15 ` Dave Taht
2022-10-18 17:17 ` Sina Khanifar
0 siblings, 1 reply; 70+ messages in thread
From: Dave Taht @ 2022-10-18 3:15 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Cake List, Make-Wifi-fast, Rpm, Stuart Cheshire, bloat
On Mon, Oct 17, 2022 at 7:51 PM Sina Khanifar <sina@waveform.com> wrote:
>
> Positive or negative, I can claim a bit of credit for this video :). We've been working with LTT on a few projects and we pitched them on doing something around bufferbloat. We've seen more traffic to our Waveforn test than ever before, which has been fun!
Thank you. Great job with that video! And waveform has become the go-to
site for many now.
I can't help but wonder tho... are you collecting any statistics, over
time, as to how much better the problem is getting?
And any chance they could do something similar explaining wifi?
...
I was just at the WISPA conference the week before last. Preseem's booth
(fq_codel) was always packed. Vilo living had put cake in their wifi 6
product. A
keynote speaker had deployed it and talked about it with waveform
results on the big screen (2k people there). A large wireless vendor
demo'd privately to me their flent results before/after cake on their
next-gen radios... and people dissed tarana without me prompting for
their bad bufferbloat... and the best thing of all that happened to me
was... besides getting a hug from a young lady (megan) who'd salvaged
her schooling in alaska using sqm - I walked up to the paraqum booth
(another large QoE middlebox maker centered more in india) and asked.
"So... do y'all have fq_codel yet?"
And they smiled and said: "No, we have something better... we've got cake."
"Cake? What's that?" - I said, innocently.
They then stepped me through their 200Gbps (!!) product, which uses a
bunch of offloads, and can track rtt down to a ms with the intel
ethernet card they were using. They'd modified cake to provide 16 (?)
levels of service, and were running under dpdk (I am not sure if cake
was). It was a great, convincing pitch...
... then I told 'em who I was. There's a video of the in-booth concert after.
...
The downside to me (and the subject of my talk) was that for nearly
every person I talked to, fq_codel was viewed as a means to better
subscriber bandwidth plan enforcement (which is admittedly the market
that preseem pioneered) and it was not understood that I'd got
involved in this whole thing because I'd wanted an algorithm to deal
with "rain fade", running directly on the radios. People wanted to use
the statistics on the radios to drive the plan enforcement better
(which is an ok approach, I guess), and for 10+ years I'd been whinging
about the... physics.
So I ranted about rfc7567 a lot and begged people now putting routerOS
7.2 and later out there (mikrotik is huge in this market), to kill
their fifos and sfqs at the native rates of the interfaces... and
watch their network improve that way also.
I think one more wispa conference will make it a clean sweep: everyone in
the fixed wireless market not only adopting these algorithms for plan
enforcement, but also deploying them directly on the radios and more CPE.
I also picked up enough consulting business to keep me busy the rest
of this year, and possibly more than I can handle (anybody looking?)
I wonder what will happen at a fiber conference?
> On Mon, Oct 17, 2022 at 7:45 PM Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote:
>>
>> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
>> >
>> > On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>> >
>> > > This was so massively well done, I cried. Does anyone know how to get in touch with the ifxit folk?
>> > >
>> > > https://www.youtube.com/watch?v=UICh3ScfNWI
>> >
>> > I’m surprised that you liked this video. It seems to me that it repeats all the standard misinformation. The analogy they use is the standard terrible example of waiting in a long line at a grocery store, and the “solution” is letting certain traffic “jump the line, angering everyone behind them”.
>>
>> Accuracy be damned. The analogy to common experience resonates more.
>>
>> >
>> > Some quotes from the video:
>> >
>> > > it would be so much more efficient for them to let you skip the line and just check out, especially since you’re in a hurry, but they’re rudely refusing
>>
>> I think the person with the cheetos pulling out a gun and shooting
>> everyone in front of him (AQM) would not go down well.
>>
>> > > to go back to our grocery store analogy this would be like if a worker saw you standing at the back ... and either let you skip to the front of the line or opens up an express lane just for you
>>
>> Actually that analogy is fairly close to fair queuing. The multiple
>> checker analogy is one of the most common analogies in queue theory
>> itself.
>>
>> >
>> > The video describes the problem of bufferbloat, and then describes the same failed solution that hasn’t worked for the last three decades.
>>
>> Hmm? It establishes the scenario, explains the problem *quickly*,
>> disses gamer routers for not getting it right.. *points to an
>> accurate test*, and then to the ideas and products that *actually
>> work* with "smart queueing", with a screenshot of the most common
>> (eero's optimize for gaming and videoconferencing), and fq_codel and
>> cake *by name*, and points folk at the best known solution available,
>> openwrt.
>>
>> Bing, baddabang, boom. Also the comments were revealing. A goodly
>> percentage already knew the problem, more than a few were inspired to
>> take the test,
>> there was a whole bunch of "Aha!" success stories and 360k views,
>> which is more people than we've ever been able to reach in for
>> example, a nanog conference.
>>
>> I loved that folk taking the test actually had quite a few A results,
>> without having had to do anything. At least some ISPs are getting it
>> more right now!
>>
>> At this point I think gamers in particular know what "brands" we've
>> tried to establish - "Smart queues", "SQM", "OpenWrt", fq_codel and
>> now "cake" are "good" things to have, and are stimulating demand by
>> asking for them, It's certainly working out better and better for
>> evenroute, firewalla, ubnt and others, and I saw an uptick in
>> questions about this on various user forums.
>>
>> I even like that there's a backlash now of people saying "fixing
>> bufferbloat doesn't solve everything" -
>>
>> > Describing the obvious simple-minded (wrong) solution that any normal person would think of based on their personal human experience waiting in grocery stores and airports, is not describing the solution to bufferbloat. The solution to bufferbloat is not that if you are privileged then you get to “skip to the front of the line”. The solution to bufferbloat is that there is no line!
>>
>> I like the idea of a guru floating above a grocery cart with a better
>> string of explanations, explaining
>>
>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>>
>> >
>> > With grocery stores and airports people’s arrivals are independent and not controlled. There is no way for a grocery store or airport to generate backpressure to tell people to wait at home when a queue begins to form. The key to solving bufferbloat is generating timely backpressure to prevent the queue forming in the first place, not accepting a huge queue and then deciding who deserves special treatment to get better service than all the other peons who still have to wait in a long queue, just like before.
>>
>> I am not huge on the word "backpressure" here. Needs to signal the
>> other side to slow down, is more accurate. So might say timely
>> signalling rather than timely backpressure?
>>
>> Other feedback I got was that the video was too smarmy (I agree),
>> different audiences than gamers need different forms of outreach...
>>
>> but to me, winning the gamers has always been one of the most
>> important things, as they make a lot of buying decisions, and they
>> benefit the most for
>> fq and packet prioritization as we do today in gamer routers and in
>> cake + qosify.
>>
>> maybe that gets in the way of more serious markets. Certainly I would
>> like another video explaining what goes wrong with videoconferencing.
>>
>>
>>
>>
>>
>>
>> >
>> > Stuart Cheshire
>> >
>>
>>
>> --
>> This song goes out to all the folk that thought Stadia would work:
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 2:58 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat David Lang
@ 2022-10-18 17:03 ` Bob McMahon
2022-10-18 18:19 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Bob McMahon @ 2022-10-18 17:03 UTC (permalink / raw)
To: David Lang; +Cc: Dave Taht, Rpm, Cake List, bloat, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 3736 bytes --]
I agree with Stuart that there is no reason for shared lines in the first
place. It seems like a design flaw to have a common queue that congests in
a way that impacts the single transmit unit, i.e. the atomic forwarding-plane
unit. The goal of virtual output queueing
<https://en.wikipedia.org/wiki/Virtual_output_queueing> is to eliminate
head of line blocking: every egress transmit unit gets its own cashier with
no competition. The VOQ queue depths should support one transmit unit plus
any jitter through the switching subsystem - jitter for the non-bloat case,
where a faster VOQ service rate can drain the VOQ. If the
VOQ can't be drained at a faster service rate, then anything beyond one
transmit unit is just a standing queue with delay and no
benefit.
Many network engineers typically, though incorrectly, perceive a transmit
unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
transmission, with aggregation(s), which is a lot more than one ethernet
packet but it depends on things like MCS, spatial stream powers, Mu peers,
etc. and is variable. Some data center designs have optimized the
forwarding plane for flow completion times so their equivalent transmit
unit is a mouse flow.
I perceive applying AQM to shared-queue congestion as a mitigation
technique for a poorly designed forwarding plane. The hope is that
transistor engineers avoid this and "design out the lines" from the
beginning: better switching engineering rather than queue management applied
afterwards as a mitigation technique.
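A toy sketch of the virtual output queueing idea described above, with invented
class and parameter names: each ingress keeps a tiny per-egress queue (roughly
one transmit unit plus jitter), so a stalled egress only blocks its own VOQ,
and excess arrivals generate backpressure toward the source rather than a
standing shared queue.

from collections import deque

class VOQSwitchPort:
    """Toy virtual output queueing at one ingress port: a separate small queue
    per egress, so a backed-up egress can't head-of-line-block traffic headed
    to other egresses. Names and depths are illustrative only."""
    def __init__(self, num_egress_ports, depth_pkts=2):
        self.depth = depth_pkts                      # ~one transmit unit + jitter
        self.voq = [deque() for _ in range(num_egress_ports)]

    def enqueue(self, egress, pkt):
        if len(self.voq[egress]) >= self.depth:
            return False                             # backpressure toward the source
        self.voq[egress].append(pkt)
        return True

    def schedule(self, egress_ready):
        """egress_ready: list of booleans, True if that egress can accept a unit."""
        out = []
        for egress, ready in enumerate(egress_ready):
            if ready and self.voq[egress]:
                out.append((egress, self.voq[egress].popleft()))
        return out

port = VOQSwitchPort(num_egress_ports=3)
port.enqueue(0, "to-congested-port")
port.enqueue(1, "to-idle-port")
# Egress 0 is stalled, egresses 1 and 2 are ready: traffic to port 1 still flows.
print(port.schedule([False, True, True]))   # -> [(1, 'to-idle-port')]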
Bob
On Mon, Oct 17, 2022 at 7:58 PM David Lang via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
> On Mon, 17 Oct 2022, Dave Taht via Bloat wrote:
>
> > On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com>
> wrote:
> >>
> >> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >>
> >> > This was so massively well done, I cried. Does anyone know how to get
> in touch with the ifxit folk?
> >> >
> >> > https://www.youtube.com/watch?v=UICh3ScfNWI
> >>
> >> I’m surprised that you liked this video. It seems to me that it repeats
> all the standard misinformation. The analogy they use is the standard
> terrible example of waiting in a long line at a grocery store, and the
> “solution” is letting certain traffic “jump the line, angering everyone
> behind them”.
> >
> > Accuracy be damned. The analogy to common experience resonates more.
>
> actually, fair queueing is more like the '15 items or less' lanes to speed
> through the people doing simple things rather than having them wait behind
> the
> mother of 7 doing their monthly shopping.
>
> David Lang_______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
[-- Attachment #2: Type: text/html, Size: 4620 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] A quick report from the WISPA conference
2022-10-18 3:15 ` [Make-wifi-fast] A quick report from the WISPA conference Dave Taht
@ 2022-10-18 17:17 ` Sina Khanifar
2022-10-18 19:04 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
2022-10-18 19:17 ` Sebastian Moeller
0 siblings, 2 replies; 70+ messages in thread
From: Sina Khanifar @ 2022-10-18 17:17 UTC (permalink / raw)
To: Dave Taht; +Cc: Cake List, Make-Wifi-fast, Rpm, Stuart Cheshire, bloat
[-- Attachment #1: Type: text/plain, Size: 12262 bytes --]
> I can't help but wonder tho... are you collecting any statistics, over
> time, as to how much better the problem is getting?
We are collecting anonymized data, but we haven't analyzed it yet. If we get a bit of time we'll look at that hopefully.
> And any chance they could do something similar explaining wifi?
I'm actually not exactly sure what mitigations exist for WiFi at the moment - is there something I can read?
On this note: when we were building our test, one of the things we really wished existed was a standardized way to test latency and throughput to routers. It would be super helpful if there were a standard in consumer routers that allowed users to both ping and fetch 0kB files from their routers, and also run download/upload tests.
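No such standard exists today, which is exactly the gap being pointed out.
Purely as a sketch of what a test against such an interface could look like,
assuming a router exposed a 0 kB object and a bulk object at these made-up
URLs (they are not a real API):

import statistics
import threading
import time
import urllib.request

# Hypothetical endpoints -- invented for illustration only.
TINY_URL = "http://192.168.1.1/test/0kb"        # imagined 0 kB object on the router
BULK_URL = "http://192.168.1.1/test/100mb"      # imagined bulk object to load the link

def fetch_ms(url):
    t0 = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as r:
        r.read()
    return (time.monotonic() - t0) * 1000

def sample(n=20):
    # median latency of repeated tiny fetches
    return statistics.median(fetch_ms(TINY_URL) for _ in range(n))

idle = sample()
loader = threading.Thread(target=lambda: urllib.request.urlopen(BULK_URL).read(), daemon=True)
loader.start()
time.sleep(1)                       # let the bulk transfer fill any queue
loaded = sample()
print(f"idle {idle:.1f} ms, under load {loaded:.1f} ms, bloat {loaded - idle:.1f} ms")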
> I think one more wispa conference will be a clean sweep of everyone in the
> fixed wireless market to not only adopt these algorithms for plan
> enforcement, but even more directly on the radios and more CPE.
T-Mobile has signed up 1m+ people to their new Home Internet over 5G, and all of them have really meaningful bufferbloat issues. I've been pointing folks who reach out to this thread ( https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848 ) about cake-autorate and sqm-autorate, but ideally it would be fixed at a network level, just not sure how to apply pressure (I'm in contact with the T-Mobile Home Internet team, but I think this is above their heads).
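For readers wondering what those autorate scripts do conceptually, here is a
toy sketch of the control loop: shrink the shaper rate when latency under load
rises, grow it gently when latency stays low. The thresholds, step sizes, and
names are invented for illustration; the real cake-autorate is considerably
more careful.

# Toy flavor of the cake-autorate idea, for a variable-rate (e.g. 5G) link.
MIN_RATE, MAX_RATE = 5e6, 100e6          # bps bounds, illustrative only
LAT_OK_MS, LAT_BAD_MS = 30, 60

def next_rate(current_rate, load_latency_ms):
    if load_latency_ms > LAT_BAD_MS:
        return max(MIN_RATE, current_rate * 0.8)   # back off hard: queue is building
    if load_latency_ms < LAT_OK_MS:
        return min(MAX_RATE, current_rate * 1.05)  # probe upward slowly
    return current_rate                            # in the comfort zone: hold

rate = 40e6
for lat in (25, 28, 70, 90, 40, 22):               # made-up latency samples
    rate = next_rate(rate, lat)
    print(f"latency {lat:3d} ms -> shaper {rate/1e6:6.2f} Mbps")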
[-- Attachment #2: Type: text/html, Size: 17856 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 0:02 ` [Make-wifi-fast] " Stuart Cheshire
2022-10-18 2:44 ` Dave Taht
@ 2022-10-18 18:07 ` Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-18 18:07 UTC (permalink / raw)
To: Stuart Cheshire, Stuart Cheshire via Bloat, Dave Täht
Cc: Rpm, Make-Wifi-fast, Cake List, bloat
Hi Stuart,
On 18 October 2022 02:02:01 CEST, Stuart Cheshire via Bloat <bloat@lists.bufferbloat.net> wrote:
>On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>
>> This was so massively well done, I cried. Does anyone know how to get in touch with the ifxit folk?
>>
>> https://www.youtube.com/watch?v=UICh3ScfNWI
>
>I’m surprised that you liked this video. It seems to me that it repeats all the standard misinformation. The analogy they use is the standard terrible example of waiting in a long line at a grocery store, and the “solution” is letting certain traffic “jump the line, angering everyone behind them”.
>
>Some quotes from the video:
>
>> it would be so much more efficient for them to let you skip the line and just check out, especially since you’re in a hurry, but they’re rudely refusing
>
>> to go back to our grocery store analogy this would be like if a worker saw you standing at the back ... and either let you skip to the front of the line or opens up an express lane just for you
>
>The video describes the problem of bufferbloat, and then describes the same failed solution that hasn’t worked for the last three decades. Describing the obvious simple-minded (wrong) solution that any normal person would think of based on their personal human experience waiting in grocery stores and airports, is not describing the solution to bufferbloat. The solution to bufferbloat is not that if you are privileged then you get to “skip to the front of the line”. The solution to bufferbloat is that there is no line!
[SM] Short of an oracle at all endpoints, that seems as worthy a goal as it is impossible to achieve. IMHO the engineering should focus more on the range from 'fastest possible without any congestion' down to acceptable performance (throughput and latency) in full and near-saturation conditions. That is, assume that in spite of best efforts to avoid a line building, you need robust and reliable means to deal with lines that will sooner or later appear.
>
>With grocery stores and airports people’s arrivals are independent and not controlled. There is no way for a grocery store or airport to generate backpressure to tell people to wait at home when a queue begins to form. The key to solving bufferbloat is generating timely backpressure to prevent the queue forming in the first place,
[SM] It seems somewhat hard for my router at the bottleneck to transmit backpressure to the sending applications in less than 1/2 RTT at best; during that time the sending rate and the acceptable capacity share will not be matched.... L4S-type signalling will only really help if the bottleneck's rate fluctuation is on a slower timeframe than the signaling delay. In short, aiming for no/low queue is fine, but better carry a big stick as well for when the queue builds up.
>not accepting a huge queue and then deciding who deserves special treatment to get better service than all the other peons who still have to wait in a long queue, just like before.
[SM] This is where a flow scheduler in practice helps a ton... as it
a) avoids starving individual flows as well as possible with minimal information, and
b) tends to restrict the fall-out from under-responsive flows to those flows themselves (or to their hash bins; see the tiny sketch below).
In a sense this is the opposite of special treatment, as all flows are treated with the same goal in mind....
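A tiny sketch of point b): hashing flows into bins so an under-responsive flow
only shares fate with whatever happens to collide into its own bin. The bin
count, the function names, and the example 5-tuples are all illustrative.

import hashlib
from collections import deque

NUM_BINS = 1024   # fq_codel-like number of hash bins; the value is illustrative

def flow_bin(src, dst, sport, dport, proto="tcp"):
    """Hash the 5-tuple into a bin; flows only share fate on bin collisions."""
    key = f"{src}:{sport}-{dst}:{dport}/{proto}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_BINS

bins = [deque() for _ in range(NUM_BINS)]

def enqueue(pkt, five_tuple):
    bins[flow_bin(*five_tuple)].append(pkt)

# An unresponsive flow can only fill its own bin; a scheduler still serves
# the other bins each round, so well-behaved flows keep their share.
enqueue("torrent-pkt", ("10.0.0.2", "93.184.216.34", 51000, 6881))
enqueue("voip-pkt",    ("10.0.0.3", "203.0.113.7",   40000, 3478))
print(sorted(i for i, b in enumerate(bins) if b))   # almost surely two distinct busy bins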
I am not really jousting for the video here, but I want to highlight that any short summary of a complex problem will have to gloss over some complexity (I expect you fully understood the points I make above, but omitted discussing them for brevity).
Regards
Sebastian
>
>Stuart Cheshire
>
>_______________________________________________
>Bloat mailing list
>Bloat@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/bloat
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 17:03 ` Bob McMahon
@ 2022-10-18 18:19 ` Sebastian Moeller
2022-10-18 19:30 ` Bob McMahon
2022-10-19 7:09 ` David Lang
0 siblings, 2 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-18 18:19 UTC (permalink / raw)
To: Bob McMahon, Bob McMahon via Rpm, David Lang
Cc: Rpm, Cake List, Make-Wifi-fast, bloat
Hi Bob,
On 18 October 2022 19:03:21 CEST, Bob McMahon via Rpm <rpm@lists.bufferbloat.net> wrote:
>I agree with Stuart that there is no reason for shared lines in the first
>place. It seems like a design flaw to have a common queue that congests in
>a way that impacts the one transmit unit as the atomic forwarding plane
>unit.
[SM] How does that generalize to internet access links? My gut feeling is that an FQ scheduler comes close.
The goal of virtual output queueing
><https://en.wikipedia.org/wiki/Virtual_output_queueing> is to eliminate
>head of line blocking, every egress transmit unit gets its own cashier with
>no competition. The VOQ queue depths should support one transmit unit and
>any jitter through the switching subsystem - jitter for the case of
>non-bloat and where a faster VOQ service rate can drain the VOQ. If the
>VOQ can't be drained per a faster service rate, then it's just one
>transmit unit as the queue is now just a standing queue w/delay and no
>benefit.
[SM] I guess often things are obvious only retrospectively, but how could one design a switch differently?
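For readers who have not met the term, a toy model of virtual output queueing might look like the sketch below (illustrative only; a real fabric uses a proper arbiter such as iSLIP rather than the naive scan shown here):
from collections import deque
class VOQSwitch:
    # one queue per (ingress, egress) pair: a congested egress only backs up
    # the queues aimed at it, not traffic from the same ingress to other egresses
    def __init__(self, num_ports):
        self.n = num_ports
        self.voq = [[deque() for _ in range(num_ports)] for _ in range(num_ports)]
    def ingress(self, in_port, out_port, pkt):
        self.voq[in_port][out_port].append(pkt)
    def serve_egress(self, out_port):
        # naive arbitration: first ingress with traffic for this egress wins;
        # real fabrics arbitrate fairly across ingresses
        for in_port in range(self.n):
            if self.voq[in_port][out_port]:
                return self.voq[in_port][out_port].popleft()
        return None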
>
>Many network engineers typically, though incorrectly, perceive a transmit
>unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
>transmission, with aggregation(s), which is a lot more than one ethernet
>packet but it depends on things like MCS, spatial stream powers, Mu peers,
>etc. and is variable. Some data center designs have optimized the
>forwarding plane for flow completion times so their equivalent transmit
>unit is a mouse flow.
[SM] Is this driven more by the need to aggregate packets to amortize some cost over a larger payload or to reduce the scheduling overhead or to regularize things (as in fixed size DTUs used in DSL with G.INP retransmissions)?
>
>I perceive applying AQM to shared queue congestion as a mitigation
>technique to a poorly designed forwarding plane. The hope is that
>transistor engineers don't do this and "design out the lines" from the
>beginning. Better switching engineering vs queue management applied
>afterwards as a mitigation technique.
[SM] I am all for better hardware, but will this ever allow us to regress back to dumb upper layers? I have some doubts, but hey, I would not be unhappy if my AQM stayed idle most of the time because the lower layers avoid triggering it.
>
>Bob
>
>On Mon, Oct 17, 2022 at 7:58 PM David Lang via Make-wifi-fast <
>make-wifi-fast@lists.bufferbloat.net> wrote:
>
>> On Mon, 17 Oct 2022, Dave Taht via Bloat wrote:
>>
>> > On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com>
>> wrote:
>> >>
>> >> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <
>> make-wifi-fast@lists.bufferbloat.net> wrote:
>> >>
>> >> > This was so massively well done, I cried. Does anyone know how to get
>> in touch with the ifxit folk?
>> >> >
>> >> > https://www.youtube.com/watch?v=UICh3ScfNWI
>> >>
>> >> I’m surprised that you liked this video. It seems to me that it repeats
>> all the standard misinformation. The analogy they use is the standard
>> terrible example of waiting in a long line at a grocery store, and the
>> “solution” is letting certain traffic “jump the line, angering everyone
>> behind them”.
>> >
>> > Accuracy be damned. The analogy to common experience resonates more.
>>
>> actually, fair queueing is more like the '15 items or less' lanes to speed
>> through the people doing simple things rather than having them wait behind
>> the
>> mother of 7 doing their monthly shopping.
>>
>> David Lang_______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-18 17:17 ` Sina Khanifar
@ 2022-10-18 19:04 ` Sebastian Moeller
2022-10-20 5:15 ` Sina Khanifar
2022-10-18 19:17 ` Sebastian Moeller
1 sibling, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-18 19:04 UTC (permalink / raw)
To: Sina Khanifar, Sina Khanifar via Bloat, Dave Taht
Cc: Cake List, Make-Wifi-fast, Rpm, bloat
Hi Sina,
On 18 October 2022 19:17:16 CEST, Sina Khanifar via Bloat <bloat@lists.bufferbloat.net> wrote:
>>
>>
>>
>> I can't help but wonder tho... are you collecting any statistics, over
>> time, as to how much better the problem is getting?
>>
>>
>>
>
>We are collecting anonymized data, but we haven't analyzed it yet. If we get a bit of time we'll look at that hopefully.
[SM] Just an observation, using Safari I see large maximal delays (like a small group of samples far out to the right of the bulk) for both down- and upload that essentially disappear when I switch to firefox.
Now I tend to have a ton of tabs open in Safari, while I only open firefox for dedicated use-cases with a few tabs at most, so I do not intend to throw shade on Safari here; my point is more that browsers can and do affect the reported latency numbers. If you want to be able to test this, maybe ask users to use the OS browser (safari, edge, konqueror ;) ) as well as firefox and chrome, so you can directly compare across browsers?
>
>>
>>
>>
>> And any chance they could do something similar explaining wifi?
>>
>>
>>
>
>I'm actually not exactly sure what mitigations exist for WiFi at the moment - is there something I can read?
>
>On this note: when we were building our test one of the things we really wished existed was a standardized way to test latency and throughput to routers.
[SM] traceroute/mtr albeit not sure how well this approach works from inside the browser, can you e.g. control TTL and do you receive error messages via ICMP?
It would be super helpful if there was a standard in consumer routers that allowed users to both ping and fetch 0kB files from their routers, and also run download/upload tests.
[SM] I think I see where you are coming from here. Over in the OpenWrt forum we often see that server performance with iperf2/3 or netperf on a router is not all that representative of its routing performance.
What do you expect to deduce from upload/download to the router? (I might misunderstand your point by a mile, if so please elaborate)
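As a rough illustration of the kind of check being asked for, something like the sketch below would do for a one-off measurement; the gateway address is an assumption and the test URL is purely hypothetical, since (as noted above) no such standardized endpoint exists on consumer routers today:
import subprocess, time, urllib.request
GATEWAY = "192.168.1.1"                   # assumption: a typical home gateway address
TEST_URL = f"http://{GATEWAY}/empty.bin"  # hypothetical 0 kB test object
def icmp_rtts_ms(host, count=5):
    # crude parse of the "time=1.23 ms" fields from Linux/macOS ping output
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(tok.split("=")[1]) for tok in out.split() if tok.startswith("time=")]
def http_fetch_ms(url):
    t0 = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=2).read()
    except OSError:
        return None  # the endpoint simply does not exist on most routers
    return (time.monotonic() - t0) * 1000.0
print("ICMP RTTs to gateway (ms):", icmp_rtts_ms(GATEWAY))
print("HTTP fetch from gateway (ms):", http_fetch_ms(TEST_URL))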
Regards
Sebastian
>
>>
>>
>>
>> I think one more wispa conference will be a clean sweep of everyone in the
>> fixed wireless market to not only adopt these algorithms for plan
>> enforcement, but even more directly on the radios and more CPE.
>>
>>
>>
>
>T-Mobile has signed up 1m+ people to their new Home Internet over 5G, and all of them have really meaningful bufferbloat issues. I've been pointing folks who reach out to this thread ( https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848 ) about cake-autorate and sqm-autorate, but ideally it would be fixed at a network level, just not sure how to apply pressure (I'm in contact with the T-Mobile Home Internet team, but I think this is above their heads).
>
>On Mon, Oct 17, 2022 at 8:15 PM, Dave Taht < dave.taht@gmail.com > wrote:
>
>>
>>
>>
>> On Mon, Oct 17, 2022 at 7:51 PM Sina Khanifar < sina@ waveform. com (
>> sina@waveform.com ) > wrote:
>>
>>
>>
>>>
>>>
>>> Positive or negative, I can claim a bit of credit for this video :). We've
>>> been working with LTT on a few projects and we pitched them on doing
>>> something around bufferbloat. We've seen more traffic to our Waveforn test
>>> than ever before, which has been fun!
>>>
>>>
>>>
>>
>>
>>
>> Thank you. Great job with that video! And waveform has become the goto
>> site for many now.
>>
>>
>>
>>
>> I can't help but wonder tho... are you collecting any statistics, over
>> time, as to how much better the problem is getting?
>>
>>
>>
>>
>> And any chance they could do something similar explaining wifi?
>>
>>
>>
>>
>> ...
>>
>>
>>
>>
>> I was just at WISPA conference week before last. Preseem's booth
>> (fq_codel) was always packed. Vilo living had put cake in their wifi 6
>> product. A
>> keynote speaker had deployed it and talked about it with waveform results
>> on the big screen (2k people there). A large wireless vendor demo'd
>> privately to me their flent results before/after cake on their next-gen
>> radios... and people dissed tarana without me prompting for their bad
>> bufferbloat... and the best thing of all that happened to me was...
>> besides getting a hug from a young lady (megan) who'd salvaged her
>> schooling in alaska using sqm - I walked up to the paraqum booth
>> (another large QoE middlebox maker centered more in india) and asked.
>>
>>
>>
>> "So... do y'all have fq_codel yet?"
>>
>>
>>
>>
>> And they smiled and said: "No, we have something better... we've got
>> cake."
>>
>>
>>
>>
>> "Cake? What's that?" - I said, innocently.
>>
>>
>>
>>
>> They then stepped me through their 200Gbps (!!) product, which uses a
>> bunch of offloads, and can track rtt down to a ms with the intel ethernet
>> card they were using. They'd modifed cake to provide 16 (?) levels of
>> service, and were running under dpdk (I am not sure if cake was). It was a
>> great, convincing pitch...
>>
>>
>>
>>
>> ... then I told 'em who I was. There's a video of the in-booth concert
>> after.
>>
>>
>>
>>
>> ...
>>
>>
>>
>>
>> The downside to me (and the subject of my talk) was that in nearly every
>> person I talked to, fq_codel was viewed as a means to better subscriber
>> bandwidth plan enforcement (which is admittedly the market that preseem
>> pioneered) and it was not understood that I'd got involved in this whole
>> thing because I'd wanted an algorithm to deal with "rain fade", running
>> directly on the radios. People wanted to use the statistics on the radios
>> to drive the plan enforcement better
>> (which is an ok approach, I guess), and for 10+ years I'd been whinging about
>> the... physics.
>>
>>
>>
>> So I ranted about rfc7567 a lot and begged people now putting routerOS
>> 7.2 and later out there (mikrotik is huge in this market), to kill their
>> fifos and sfqs at the native rates of the interfaces... and watch their
>> network improve that way also.
>>
>>
>>
>> I think one more wispa conference will be a clean sweep of everyone in the
>> fixed wireless market to not only adopt these algorithms for plan
>> enforcement, but even more directly on the radios and more CPE.
>>
>>
>>
>>
>> I also picked up enough consulting business to keep me busy the rest of
>> this year, and possibly more than I can handle (anybody looking?)
>>
>>
>>
>>
>> I wonder what will happen at a fiber conference?
>>
>>
>>
>>>
>>>
>>> On Mon, Oct 17, 2022 at 7:45 PM Dave Taht via Bloat < bloat@ lists. bufferbloat.
>>> net ( bloat@lists.bufferbloat.net ) > wrote:
>>>
>>>
>>>
>>>>
>>>>
>>>> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire < cheshire@ apple. com (
>>>> cheshire@apple.com ) > wrote:
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast < make-wifi-fast@ lists.
>>>>> bufferbloat. net ( make-wifi-fast@lists.bufferbloat.net ) > wrote:
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> This was so massively well done, I cried. Does anyone know how to get in
>>>>>> touch with the ifxit folk?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> https:/ / www. youtube. com/ watch?v=UICh3ScfNWI (
>>>>>> https://www.youtube.com/watch?v=UICh3ScfNWI )
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I’m surprised that you liked this video. It seems to me that it repeats
>>>>> all the standard misinformation. The analogy they use is the standard
>>>>> terrible example of waiting in a long line at a grocery store, and the
>>>>> “solution” is letting certain traffic “jump the line, angering everyone
>>>>> behind them”.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> Accuracy be damned. The analogy to common experience resonates more.
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> Some quotes from the video:
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> it would be so much more efficient for them to let you skip the line and
>>>>>> just check out, especially since you’re in a hurry, but they’re rudely
>>>>>> refusing
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> I think the person with the cheetos pulling out a gun and shooting
>>>> everyone in front of him (AQM) would not go down well.
>>>>
>>>>
>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> to go back to our grocery store analogy this would be like if a worker saw
>>>>>> you standing at the back ... and either let you skip to the front of the
>>>>>> line or opens up an express lane just for you
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> Actually that analogy is fairly close to fair queuing. The multiple
>>>> checker analogy is one of the most common analogies in queue theory
>>>> itself.
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> The video describes the problem of bufferbloat, and then describes the
>>>>> same failed solution that hasn’t worked for the last three decades.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> Hmm? It establishes the scenario, explains the problem *quickly*, disses
>>>> gamer routers for not getting it right.. *points to an accurate test*, and
>>>> then to the ideas and products that *actually work* with "smart queueing",
>>>> with a screenshot of the most common
>>>> (eero's optimize for gaming and videoconferencing), and fq_codel and cake
>>>> *by name*, and points folk at the best known solution available, openwrt.
>>>>
>>>>
>>>>
>>>> Bing, baddabang, boom. Also the comments were revealing. A goodly
>>>> percentage already knew the problem, more than a few were inspired to take
>>>> the test,
>>>> there was a whole bunch of "Aha!" success stories and 360k views, which is
>>>> more people than we've ever been able to reach in for example, a nanog
>>>> conference.
>>>>
>>>>
>>>>
>>>> I loved that folk taking the test actually had quite a few A results,
>>>> without having had to do anything. At least some ISPs are getting it more
>>>> right now!
>>>>
>>>>
>>>>
>>>>
>>>> At this point I think gamers in particular know what "brands" we've tried
>>>> to establish - "Smart queues", "SQM", "OpenWrt", fq_codel and now "cake"
>>>> are "good" things to have, and are stimulating demand by asking for them,
>>>> It's certainly working out better and better for evenroute, firewalla,
>>>> ubnt and others, and I saw an uptick in questions about this on various
>>>> user forums.
>>>>
>>>>
>>>>
>>>>
>>>> I even like that there's a backlash now of people saying "fixing
>>>> bufferbloat doesn't solve everything" -
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> Describing the obvious simple-minded (wrong) solution that any normal
>>>>> person would think of based on their personal human experience waiting in
>>>>> grocery stores and airports, is not describing the solution to
>>>>> bufferbloat. The solution to bufferbloat is not that if you are privileged
>>>>> then you get to “skip to the front of the line”. The solution to
>>>>> bufferbloat is that there is no line!
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> I like the idea of a guru floating above a grocery cart with a better
>>>> string of explanations, explaining
>>>>
>>>>
>>>>
>>>>
>>>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> With grocery stores and airports people’s arrivals are independent and not
>>>>> controlled. There is no way for a grocery store or airport to generate
>>>>> backpressure to tell people to wait at home when a queue begins to form.
>>>>> The key to solving bufferbloat is generating timely backpressure to
>>>>> prevent the queue forming in the first place, not accepting a huge queue
>>>>> and then deciding who deserves special treatment to get better service
>>>>> than all the other peons who still have to wait in a long queue, just like
>>>>> before.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> I am not huge on the word "backpressure" here. Needs to signal the other
>>>> side to slow down, is more accurate. So might say timely signalling rather
>>>> than timely backpressure?
>>>>
>>>>
>>>>
>>>>
>>>> Other feedback I got was that the video was too smarmy (I agree),
>>>> different audiences than gamers need different forms of outreach...
>>>>
>>>>
>>>>
>>>>
>>>> but to me, winning the gamers has always been one of the most important
>>>> things, as they make a lot of buying decisions, and they benefit the most
>>>> for
>>>> fq and packet prioritization as we do today in gamer routers and in cake +
>>>> qosify.
>>>>
>>>>
>>>>
>>>> maybe that gets in the way of more serious markets. Certainly I would like
>>>> another video explaining what goes wrong with videoconferencing.
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> Stuart Cheshire
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> This song goes out to all the folk that thought Stadia would work: https:/
>>>> / www. linkedin. com/ posts/ dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> (
>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> ) Dave Täht CEO, TekLibre, LLC
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@ lists. bufferbloat. net ( Bloat@lists.bufferbloat.net )
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>>>
>>
>>
>>
>> --
>> This song goes out to all the folk that thought Stadia would work: https:/
>> / www. linkedin. com/ posts/ dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> (
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> ) Dave Täht CEO, TekLibre, LLC
>>
>>
>>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-18 17:17 ` Sina Khanifar
2022-10-18 19:04 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
@ 2022-10-18 19:17 ` Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-18 19:17 UTC (permalink / raw)
To: Sina Khanifar, Sina Khanifar via Bloat, Dave Taht
Cc: Cake List, Make-Wifi-fast, Rpm, bloat
Hi Sina,
On 18 October 2022 19:17:16 CEST, Sina Khanifar via Bloat <bloat@lists.bufferbloat.net> wrote:
>>
>>
>>
>> I can't help but wonder tho... are you collecting any statistics, over
>> time, as to how much better the problem is getting?
>>
>>
>>
>
>We are collecting anonymized data, but we haven't analyzed it yet. If we get a bit of time we'll look at that hopefully.
>
>>
>>
>>
>> And any chance they could do something similar explaining wifi?
>>
>>
>>
>
>I'm actually not exactly sure what mitigations exist for WiFi at the moment - is there something I can read?
>
>On this note: when we were building our test one of the things we really wished existed was a standardized way to test latency and throughput to routers. It would be super helpful if there was a standard in consumer routers that allowed users to both ping and fetch 0kB files from their routers, and also run download/upload tests.
>
>>
>>
>>
>> I think one more wispa conference will be a clean sweep of everyone in the
>> fixed wireless market to not only adopt these algorithms for plan
>> enforcement, but even more directly on the radios and more CPE.
>>
>>
>>
>
>T-Mobile has signed up 1m+ people to their new Home Internet over 5G, and all of them have really meaningful bufferbloat issues. I've been pointing folks who reach out to this thread ( https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848 )
[SM] Thanks a lot! Most recent discussion moved over to https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379
> about cake-autorate and sqm-autorate, but ideally it would be fixed at a network level,
[SM] +1; autorate is a hack that should not be necessary (not trying to diminish Andrew's excellent work, but we should not have to do this).
> just not sure how to apply pressure (I'm in contact with the T-Mobile Home Internet team, but I think this is above their heads).
[SM] I think this ideally would be solved at the 3GPP level, as ideally it would be baked into the low-level protocols... and make information available to higher protocols; e.g. for the uplink, getting the most recent RTTs or, better, OWDs to the base station would make our lives simpler.
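For context, the core feedback loop behind cake-autorate/sqm-autorate, stripped to its bones, is roughly the sketch below; this is not the actual script, and every threshold and step size here is a made-up illustrative value:
def next_shaper_rate(rate_kbit, delay_ms, baseline_ms, load_fraction,
                     min_rate=5_000, max_rate=50_000,
                     bloat_thresh_ms=15.0, step_up=1.05, step_down=0.85):
    # grow the shaper rate while latency stays near baseline under load,
    # back off quickly as soon as load-induced delay (bufferbloat) shows up
    bloat = delay_ms - baseline_ms
    if bloat > bloat_thresh_ms:
        rate_kbit *= step_down
    elif load_fraction > 0.8:
        rate_kbit *= step_up
    return max(min_rate, min(max_rate, rate_kbit))
Which is exactly why it is a hack: the script has to infer, from delay measurements after the fact, capacity information the lower layers already have.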
Regards
Sebastian
>
>[...]
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 18:19 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
@ 2022-10-18 19:30 ` Bob McMahon
2022-10-19 7:09 ` David Lang
1 sibling, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-18 19:30 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Bob McMahon via Rpm, David Lang, Cake List, Make-Wifi-fast, bloat
[-- Attachment #1.1: Type: text/plain, Size: 8237 bytes --]
*[SM] How does that generalize to internet access links? My gut feeling is
that an FQ scheduler comes close.*
Probably not possible. Current 100G fiber SFP optics are the most
economical per SERDES/laser interface; any other SFP is suboptimal,
probably for the next decade. Then there are still DSL internet lines,
satellite links, etc. And then there is WiFi, which isn't really internet
access but the first/last hop to the IAP's last-mile link. It seems way
too complicated to generalize a single solution. It's also a bit of an
engineering race and a deployment race, so the targets, driven mostly by
market conditions and engineering project priorities, are not fixed.
I do think we can define generalized tests - though that's a digression
(and I'm biased too being a test & measurement engineer.)
*[SM] I guess often things are obvious only retrospectively, but how could
one design a switch differently?*
A suggestion is to look at merchant silicon used by the major integrators
that sell into data centers. But keep in mind the IAP forwarding plane is a
moving target so having some form of hardware programmability in field is
probably needed too. The COGs and volumes are very different too. I think
the market and time will provide the final answer (if there is one) and
then it will change again ;)
*[SM] Is this driven more by the need to aggregate packets to amortize some
cost over a larger payload or to reduce the scheduling overhead or to
regularize things (as in fixed size DTUs used in DSL with G.INP
retransmissions)?*
TXOP scarcity is driven by listen before talk (LBT). This is needed for
collision avoidance. Unfortunately, WiFi networks w/o waveguides that share
the same carrier have to be separated in time in a distributed manner to
optimize the overall system. (Adding a scheduling carrier, as mobile
networks do, doesn't work well with small WiFi cells - though 802.11ax
provides a similar scheduling support mechanism.)
*[SM] I am all for better hardware, but will this ever allow us the regress
back to dumb upper layers? I have some doubts, but hey I would not be
unhappy if my AQM would stay idle most of the time, because lower layers
avoid triggering it.*
Doubtful, to me, that the ideal can be achieved. Transport enhancements like
BBRv2 seem worthwhile. And, yes, the "AQM hammer" to mitigate standing-queue
bloat is likely going to be needed, since real engineering can't typically
achieve the ideal when some resources are shared and finite, as you stated
elsewhere.
Many of the new responsiveness tests under loads are being designed to
create this potentially "artificial" condition, though many times it's real
too, so these tests are a good thing for awareness for sure. What these
tests don't do is monitor actual traffic conditions over time and space to
see how many times AQM had to be activated as well as measure how well
disaggregating the congested shared queue is working.
My opinion is that devices that support OpenWRT could be instrumented to
support network telemetry to provide actuals, at least for the WiFi hop.
There are multiple ways to do this. Some require new engineering efforts.
Others require distributed clock sync so tend to be in test labs only.
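One low-effort way to get such actuals off a Linux/OpenWrt device is to scrape the qdisc counters; a rough sketch follows (the 'tc -s qdisc' command is standard iproute2, but field names and layout vary across kernel/iproute2 versions, so treat the parsing as illustrative):
import re, subprocess
def qdisc_counters(iface="eth0"):
    # counters like "dropped", "ecn_mark" and "new_flow_count" show how often
    # the FQ/AQM machinery actually had to act on this interface
    out = subprocess.run(["tc", "-s", "qdisc", "show", "dev", iface],
                         capture_output=True, text=True).stdout
    counters = {}
    for field in ("dropped", "ecn_mark", "new_flow_count"):
        m = re.search(field + r"[ :]+(\d+)", out)
        counters[field] = int(m.group(1)) if m else None
    return counters
print(qdisc_counters("eth0"))
Sampled periodically, deltas of those counters give at least a coarse view of how often the AQM had to step in.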
Bob
On Tue, Oct 18, 2022 at 11:20 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Bob,
>
> On 18 October 2022 19:03:21 CEST, Bob McMahon via Rpm <
> rpm@lists.bufferbloat.net> wrote:
> >I agree with Stuart that there is no reason for shared lines in the first
> >place. It seems like a design flaw to have a common queue that congests in
> >a way that impacts the one transmit unit as the atomic forwarding plane
> >unit.
>
> [SM] How does that generalize to internet access links? My gut feeling is
> that an FQ scheduler comes close.
>
>
> The goal of virtual output queueing
> ><https://en.wikipedia.org/wiki/Virtual_output_queueing> is to eliminate
> >head of line blocking, every egress transmit unit gets its own cashier
> with
> >no competition. The VOQ queue depths should support one transmit unit and
> >any jitter through the switching subsystem - jitter for the case of
> >non-bloat and where a faster VOQ service rate can drain the VOQ. If the
> >VOQ can't be drained per a faster service rate, then it's just one
> >transmit unit as the queue is now just a standing queue w/delay and no
> >benefit.
>
> [SM] I guess often things are obvious only retrospectively, but how could
> one design a switch differently?
>
>
> >
> >Many network engineers typically, though incorrectly, perceive a transmit
> >unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
> >transmission, with aggregation(s), which is a lot more than one ethernet
> >packet but it depends on things like MCS, spatial stream powers, Mu peers,
> >etc. and is variable. Some data center designs have optimized the
> >forwarding plane for flow completion times so their equivalent transmit
> >unit is a mouse flow.
>
> [SM] Is this driven more by the need to aggregate packets to amortize some
> cost over a larger payload or to reduce the scheduling overhead or to
> regularize things (as in fixed size DTUs used in DSL with G.INP
> retransmissions)?
>
> >
> >I perceive applying AQM to shared queue congestion as a mitigation
> >technique to a poorly designed forwarding plane. The hope is that
> >transistor engineers don't do this and "design out the lines" from the
> >beginning. Better switching engineering vs queue management applied
> >afterwards as a mitigation technique.
>
> [SM] I am all for better hardware, but will this ever allow us the regress
> back to dumb upper layers? I have some doubts, but hey I would not be
> unhappy if my AQM would stay idle most of the time, because lower layers
> avoid triggering it.
>
>
> >
> >Bob
> >
> >On Mon, Oct 17, 2022 at 7:58 PM David Lang via Make-wifi-fast <
> >make-wifi-fast@lists.bufferbloat.net> wrote:
> >
> >> On Mon, 17 Oct 2022, Dave Taht via Bloat wrote:
> >>
> >> > On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com>
> >> wrote:
> >> >>
> >> >> On 9 Oct 2022, at 06:14, Dave Taht via Make-wifi-fast <
> >> make-wifi-fast@lists.bufferbloat.net> wrote:
> >> >>
> >> >> > This was so massively well done, I cried. Does anyone know how to
> get
> >> in touch with the ifxit folk?
> >> >> >
> >> >> > https://www.youtube.com/watch?v=UICh3ScfNWI
> >> >>
> >> >> I’m surprised that you liked this video. It seems to me that it
> repeats
> >> all the standard misinformation. The analogy they use is the standard
> >> terrible example of waiting in a long line at a grocery store, and the
> >> “solution” is letting certain traffic “jump the line, angering everyone
> >> behind them”.
> >> >
> >> > Accuracy be damned. The analogy to common experience resonates more.
> >>
> >> actually, fair queueing is more like the '15 items or less' lanes to
> speed
> >> through the people doing simple things rather than having them wait
> behind
> >> the
> >> mother of 7 doing their monthly shopping.
> >>
> >> David Lang_______________________________________________
> >> Make-wifi-fast mailing list
> >> Make-wifi-fast@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
[-- Attachment #1.2: Type: text/html, Size: 9872 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-18 18:19 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
2022-10-18 19:30 ` Bob McMahon
@ 2022-10-19 7:09 ` David Lang
2022-10-19 19:18 ` Bob McMahon
1 sibling, 1 reply; 70+ messages in thread
From: David Lang @ 2022-10-19 7:09 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Bob McMahon, Bob McMahon via Rpm, David Lang, Cake List,
Make-Wifi-fast, bloat
On Tue, 18 Oct 2022, Sebastian Moeller wrote:
> Hi Bob,
>
>> Many network engineers typically, though incorrectly, perceive a transmit
>> unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
>> transmission, with aggregation(s), which is a lot more than one ethernet
>> packet but it depends on things like MCS, spatial stream powers, Mu peers,
>> etc. and is variable. Some data center designs have optimized the
>> forwarding plane for flow completion times so their equivalent transmit
>> unit is a mouse flow.
>
> [SM] Is this driven more by the need to aggregate packets to amortize some cost over a larger payload or to reduce the scheduling overhead or to regularize things (as in fixed size DTUs used in DSL with G.INP retransmissions)?
it's to amortize costs over a larger payload.
the gap between transmissions is in ms, and the transmission header is
transmitted at a slow data rate (for backwards compatibility with older
equipment that doesn't know about the higher data rate modulations)
For a long time, the transmission header was transmitted at 1Mb/s (which is still
the default in most equipment), but there is now an option to no longer support
802.11b equipment, which raises the header transmission rate to 11Mb/s.
These factors are so imbalanced compared to the top data rates available that
you need to transmit several MB of data to have actual data use 50% of the
airtime.
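Back-of-the-envelope, with assumed numbers (a fixed couple hundred microseconds of preamble/IFS/ack overhead per transmission against a high-MCS payload rate), the imbalance looks like this:
FIXED_OVERHEAD_US = 200.0  # assumed: preamble + IFS + ack, per transmission
PHY_RATE_MBPS = 1200.0     # assumed high-MCS payload rate
def airtime_efficiency(payload_bytes):
    payload_us = payload_bytes * 8 / PHY_RATE_MBPS  # Mb/s == bits per microsecond
    return payload_us / (payload_us + FIXED_OVERHEAD_US)
for size in (1_500, 64_000, 1_000_000):
    print(f"{size:>9} B aggregate -> {airtime_efficiency(size):.0%} of airtime is payload")
A single 1500-byte frame barely uses the channel; only large aggregates push the payload share past half the airtime, which is the point above.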
David Lang
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 7:09 ` David Lang
@ 2022-10-19 19:18 ` Bob McMahon
2022-10-19 19:23 ` David Lang
0 siblings, 1 reply; 70+ messages in thread
From: Bob McMahon @ 2022-10-19 19:18 UTC (permalink / raw)
To: David Lang
Cc: Sebastian Moeller, Bob McMahon via Rpm, Cake List, Make-Wifi-fast, bloat
[-- Attachment #1.1.1: Type: text/plain, Size: 3204 bytes --]
I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
mostly driven by probabilities
<https://link.springer.com/article/10.1007/s10270-020-00817-2>. If
energy detect (ED) indicates the medium is available then the gap prior to
transmit, assuming no others competing & winning at that moment in time, is
driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
distributions. Things change a bit with 802.11ax and trigger frames but the
gap is still determined by the backoff and should be less than milliseconds
per that. Things like NAVs will impact the gap too but that happens when
another is transmitting.
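A toy Monte-Carlo of that contention gap (AIFS plus a uniformly drawn backoff, with the window doubling toward CWmax on retries) backs up the sub-millisecond point; the slot/SIFS values below are the usual 5 GHz OFDM numbers, and the model ignores NAV, trigger frames and everything else:
import random
SLOT_US, SIFS_US = 9, 16  # 5 GHz OFDM slot and SIFS
def contention_gap_us(aifsn=3, cw_min=15, cw_max=1023, retries=0):
    # AIFS = SIFS + AIFSN * slot; backoff is uniform over the current window,
    # which (roughly) doubles per retry up to CWmax
    cw = min(cw_max, (cw_min + 1) * (2 ** retries) - 1)
    return SIFS_US + aifsn * SLOT_US + random.randint(0, cw) * SLOT_US
samples = [contention_gap_us() for _ in range(100_000)]
print("mean gap ~%.0f us, worst ~%.0f us" % (sum(samples) / len(samples), max(samples)))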
[image: image.png]
Agreed that the PLCP preamble is at low MCS and the payload can be orders
of magnitude greater (per different QAM encodings and other signal
processing techniques.)
Bob
On Wed, Oct 19, 2022 at 12:09 AM David Lang <david@lang.hm> wrote:
> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
> > Hi Bob,
> >
> >> Many network engineers typically, though incorrectly, perceive a
> transmit
> >> unit as one ethernet packet. With WiFi it's one Mu transmission or one
> Su
> >> transmission, with aggregation(s), which is a lot more than one ethernet
> >> packet but it depends on things like MCS, spatial stream powers, Mu
> peers,
> >> etc. and is variable. Some data center designs have optimized the
> >> forwarding plane for flow completion times so their equivalent transmit
> >> unit is a mouse flow.
> >
> > [SM] Is this driven more by the need to aggregate packets to amortize
> some cost over a larger payload or to reduce the scheduling overhead or to
> regularize things (as in fixed size DTUs used in DSL with G.INP
> retransmissions)?
>
> it's to amortize costs over a larger payload.
>
> the gap between transmissions is in ms, and the transmission header is
> transmitted at a slow data rate (both for backwards compatibility with
> older
> equipment that doesn't know about the higher data rate modulations)
>
> For a long time, the transmission header was transmitted at 1Mb (which is
> still
> the default in most equipment), but there is now an option to no longer
> support
> 802.11b equipment, which raises the header transmission time to 11Mb.
>
> These factors are so imbalanced compared to the top data rates available
> that
> you need to transmit several MB of data to have actual data use 50% of the
> airtime.
>
> David Lang
>
[-- Attachment #1.1.2: Type: text/html, Size: 3803 bytes --]
[-- Attachment #1.2: image.png --]
[-- Type: image/png, Size: 169722 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 19:18 ` Bob McMahon
@ 2022-10-19 19:23 ` David Lang
2022-10-19 21:26 ` [Make-wifi-fast] [Cake] " David P. Reed
0 siblings, 1 reply; 70+ messages in thread
From: David Lang @ 2022-10-19 19:23 UTC (permalink / raw)
To: Bob McMahon
Cc: David Lang, Sebastian Moeller, Bob McMahon via Rpm, Cake List,
Make-Wifi-fast, bloat
you have to listen and hear nothing for some timeframe before you transmit; that
listening time is defined in the standard. (isn't it??)
David Lang
On Wed, 19 Oct 2022, Bob McMahon wrote:
> I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
> mostly driven by probabilities
> <https://link.springer.com/article/10.1007/s10270-020-00817-2>. If
> energy detect (ED) indicates the medium is available then the gap prior to
> transmit, assuming no others competing & winning at that moment in time, is
> driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
> distributions. Things change a bit with 802.11ax and trigger frames but the
> gap is still determined by the backoff and should be less than milliseconds
> per that. Things like NAVs will impact the gap too but that happens when
> another is transmitting.
>
>
> [image: image.png]
>
> Agreed that the PLCP preamble is at low MCS and the payload can be orders
> of magnitude greater (per different QAM encodings and other signal
> processing techniques.)
>
> Bob
>
> On Wed, Oct 19, 2022 at 12:09 AM David Lang <david@lang.hm> wrote:
>
>> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
>>> Hi Bob,
>>>
>>>> Many network engineers typically, though incorrectly, perceive a
>> transmit
>>>> unit as one ethernet packet. With WiFi it's one Mu transmission or one
>> Su
>>>> transmission, with aggregation(s), which is a lot more than one ethernet
>>>> packet but it depends on things like MCS, spatial stream powers, Mu
>> peers,
>>>> etc. and is variable. Some data center designs have optimized the
>>>> forwarding plane for flow completion times so their equivalent transmit
>>>> unit is a mouse flow.
>>>
>>> [SM] Is this driven more by the need to aggregate packets to amortize
>> some cost over a larger payload or to reduce the scheduling overhead or to
>> regularize things (as in fixed size DTUs used in DSL with G.INP
>> retransmissions)?
>>
>> it's to amortize costs over a larger payload.
>>
>> the gap between transmissions is in ms, and the transmission header is
>> transmitted at a slow data rate (both for backwards compatibility with
>> older
>> equipment that doesn't know about the higher data rate modulations)
>>
>> For a long time, the transmission header was transmitted at 1Mb (which is
>> still
>> the default in most equipment), but there is now an option to no longer
>> support
>> 802.11b equipment, which raises the header transmission time to 11Mb.
>>
>> These factors are so imbalanced compared to the top data rates available
>> that
>> you need to transmit several MB of data to have actual data use 50% of the
>> airtime.
>>
>> David Lang
>>
>
>
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] The most wonderful video ever about bufferbloat
2022-10-18 2:44 ` Dave Taht
2022-10-18 2:51 ` [Make-wifi-fast] [Bloat] " Sina Khanifar
2022-10-18 2:58 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat David Lang
@ 2022-10-19 20:44 ` Stuart Cheshire
2022-10-19 21:33 ` [Make-wifi-fast] [Bloat] " David Lang
` (2 more replies)
2 siblings, 3 replies; 70+ messages in thread
From: Stuart Cheshire @ 2022-10-19 20:44 UTC (permalink / raw)
To: Dave Täht; +Cc: Rpm, bloat, Make-Wifi-fast, Cake List
On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
> Accuracy be damned. The analogy to common experience resonates more.
I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
If we want to struggle to make the grocery store analogy work, perhaps we show people checking some grocery store app on their smartphone before they leave home, and if they see that a long line is beginning to form they wait until later, when the line is shorter. The challenge is not how to deal with a long queue when it’s there, it is how to avoid a long queue in the first place.
> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, and at best it partially masked the effects of bufferbloat. Having more queues does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
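To spell the distinction out, the CoDel part is roughly the control law sketched below (heavily simplified from the published description, not the reference implementation; re-entering the dropping state in particular is abbreviated):
from math import sqrt
TARGET = 0.005    # 5 ms: acceptable standing (sojourn) delay
INTERVAL = 0.100  # 100 ms: roughly a worst-case RTT
class CoDelSketch:
    def __init__(self):
        self.first_above = None  # when sojourn time first stayed above TARGET
        self.dropping = False
        self.count = 0
        self.drop_next = 0.0
    def should_drop(self, sojourn_s, now_s):
        # judge the queue by how long packets sat in it, not by how full it is
        if sojourn_s < TARGET:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now_s + INTERVAL
            return False
        if not self.dropping and now_s >= self.first_above:
            # above target for a whole interval: start dropping (or ECN-marking)
            self.dropping = True
            self.count = 1
            self.drop_next = now_s + INTERVAL / sqrt(self.count)
            return True
        if self.dropping and now_s >= self.drop_next:
            # drop faster and faster (interval/sqrt(count)) until delay falls
            self.count += 1
            self.drop_next = now_s + INTERVAL / sqrt(self.count)
            return True
        return False
Nothing in there privileges one packet over another; the queue as a whole is simply kept short.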
> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
>
> - "no, grasshopper, the solution to bufferbloat is no line... at all".
That is the kind of thing I had in mind. Or a similar quote from The Matrix. While everyone is debating ways to live with long queues, the guru asks, “What if there were no queues?” That is the “mind blown” realization.
Stuart Cheshire
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 19:23 ` David Lang
@ 2022-10-19 21:26 ` David P. Reed
2022-10-19 21:37 ` David Lang
0 siblings, 1 reply; 70+ messages in thread
From: David P. Reed @ 2022-10-19 21:26 UTC (permalink / raw)
To: David Lang
Cc: Bob McMahon, Cake List, Make-Wifi-fast, Bob McMahon via Rpm, bloat
[-- Attachment #1: Type: text/plain, Size: 3182 bytes --]
4 microseconds!
On Wednesday, October 19, 2022 3:23pm, "David Lang via Cake" <cake@lists.bufferbloat.net> said:
> you have to listen and hear nothing for some timeframe before you transmit, that
> listening time is define in the standard. (isn't it??)
>
> David Lang
>
> On Wed, 19 Oct 2022, Bob McMahon wrote:
>
> > I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
> > mostly driven by probabilities
> > <https://link.springer.com/article/10.1007/s10270-020-00817-2>. If
> > energy detect (ED) indicates the medium is available then the gap prior to
> > transmit, assuming no others competing & winning at that moment in time, is
> > driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
> > distributions. Things change a bit with 802.11ax and trigger frames but the
> > gap is still determined by the backoff and should be less than milliseconds
> > per that. Things like NAVs will impact the gap too but that happens when
> > another is transmitting.
> >
> >
> > [image: image.png]
> >
> > Agreed that the PLCP preamble is at low MCS and the payload can be orders
> > of magnitude greater (per different QAM encodings and other signal
> > processing techniques.)
> >
> > Bob
> >
> > On Wed, Oct 19, 2022 at 12:09 AM David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
> >>> Hi Bob,
> >>>
> >>>> Many network engineers typically, though incorrectly, perceive a
> >> transmit
> >>>> unit as one ethernet packet. With WiFi it's one Mu transmission
> or one
> >> Su
> >>>> transmission, with aggregation(s), which is a lot more than one
> ethernet
> >>>> packet but it depends on things like MCS, spatial stream powers,
> Mu
> >> peers,
> >>>> etc. and is variable. Some data center designs have optimized the
> >>>> forwarding plane for flow completion times so their equivalent
> transmit
> >>>> unit is a mouse flow.
> >>>
> >>> [SM] Is this driven more by the need to aggregate packets to amortize
> >> some cost over a larger payload or to reduce the scheduling overhead or
> to
> >> regularize things (as in fixed size DTUs used in DSL with G.INP
> >> retransmissions)?
> >>
> >> it's to amortize costs over a larger payload.
> >>
> >> the gap between transmissions is in ms, and the transmission header is
> >> transmitted at a slow data rate (both for backwards compatibility with
> >> older
> >> equipment that doesn't know about the higher data rate modulations)
> >>
> >> For a long time, the transmission header was transmitted at 1Mb (which is
> >> still
> >> the default in most equipment), but there is now an option to no longer
> >> support
> >> 802.11b equipment, which raises the header transmission time to 11Mb.
> >>
> >> These factors are so imbalanced compared to the top data rates available
> >> that
> >> you need to transmit several MB of data to have actual data use 50% of
> the
> >> airtime.
> >>
> >> David Lang
> >>
> >
> >
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
[-- Attachment #2: Type: text/html, Size: 4604 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
@ 2022-10-19 21:33 ` David Lang
2022-10-19 23:36 ` Stephen Hemminger
2022-10-19 21:46 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat Michael Richardson
2022-10-20 9:36 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
2 siblings, 1 reply; 70+ messages in thread
From: David Lang @ 2022-10-19 21:33 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1: Type: text/plain, Size: 3936 bytes --]
On Wed, 19 Oct 2022, Stuart Cheshire via Bloat wrote:
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
>
>> Accuracy be damned. The analogy to common experience resonates more.
>
> I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
the 'privileged go first' is traditional QoS, and it can work to some extent,
but is a nightmare to maintain and gets the wrong result most of the time.
AQM (fq_codel and cake) are more like the 'cash only' and '15 items or less'
lines: they speed up the things that can be fast a LOT, while not significantly
slowing down the people with full baskets (and in the process, they shorten the
lines for those people with full baskets)
>> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
>
> Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
>
> If we want to struggle to make the grocery store analogy work, perhaps we show
> people checking some grocery store app on their smartphone before they leave
> home, and if they see that a long line is beginning to form they wait until
> later, when the line is shorter. The challenge is not how to deal with a long
> queue when it’s there, it is how to avoid a long queue in the first place.
Only somewhat: you aren't going to have people deciding not to click on a link
because the network is busy, and if you did try to go that direction, I would
fight you. The prioritization is happening at a much lower level, which is hard
to put into an analogy.
Even with the 'slowing' of bulk traffic, no traffic is prevented; it's just that
the bulk flows aren't allowed to monopolize the links.
This is where the grocery store analogy is weak: the reality would be more like
'the cashier will only process 30 items before you have to step aside and let
someone else in', but since no store operates that way, it would be a bad
analogy.
>> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
>
> I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel”
> part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time,
> and at best it partially masked the effects of bufferbloat. Having more queues
> does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
>
>> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
>>
>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>
> That is the kind of thing I had in mind. Or a similar quote from The Matrix.
> While everyone is debating ways to live with long queues, the guru asks, “What
> if there were no queues?” That is the “mind blown” realization.
In a world where there is no universal scheduler (and no universal knowledge to
base any scheduling decisions on), and where you are going to have malicious
actors trying to get more than their fair share, you can't rely on voluntary
actions to eliminate the lines.
There are data transportation apps that work by starting up a large number of
connections in parallel for the highest transfer speeds (shortening slow start,
reducing the impact of lost packets as they only affect one connection, etc).
This isn't even malicious actors, but places like Hollywood studios sending
the raw movie footage around over dedicated leased lines and wanting to get
every bps of bandwidth that they are paying for used.
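For a rough sense of scale, some illustrative arithmetic (the link speed and flow counts are made up; only the ratio matters): under plain per-flow sharing, a host that opens many parallel connections takes most of the link, which is the behaviour that per-host isolation modes such as cake's dual-src/dst-host options are meant to neutralize.

LINK_MBPS   = 100   # made-up bottleneck capacity
BULK_FLOWS  = 16    # e.g. a transfer tool opening 16 parallel TCP streams
OTHER_FLOWS = 1     # a neighbouring host's single flow

per_flow = LINK_MBPS / (BULK_FLOWS + OTHER_FLOWS)
print(f"per-flow fairness: bulk host {BULK_FLOWS * per_flow:.0f} Mbit/s, "
      f"single-flow host {OTHER_FLOWS * per_flow:.0f} Mbit/s")

per_host = LINK_MBPS / 2   # split by host first, then by flow within each host
print(f"per-host fairness: bulk host {per_host:.0f} Mbit/s, "
      f"single-flow host {per_host:.0f} Mbit/s")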
David Lang
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Rpm] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 21:26 ` [Make-wifi-fast] [Cake] " David P. Reed
@ 2022-10-19 21:37 ` David Lang
0 siblings, 0 replies; 70+ messages in thread
From: David Lang @ 2022-10-19 21:37 UTC (permalink / raw)
To: David P. Reed
Cc: David Lang, Bob McMahon, Cake List, Make-Wifi-fast,
Bob McMahon via Rpm, bloat
Thanks, and how long does it take to transmit the wifi header (at 1Mb/s and at
11Mb/s)? That's also airtime that's not available to transmit user data.
And then compare that to the time it takes to transmit a 1500 byte ethernet
packet worth of data over a 160MHz wide channel.
Going back to SM's question, there is per-transmission overhead that you want to
amortize across multiple ethernet packets, not pay for each packet.
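To put rough numbers on that amortization argument, a back-of-the-envelope sketch. All the constants are illustrative assumptions, and the crossover point moves a lot depending on how long the contention gap and low-rate header really are, which is exactly what the rest of this thread debates.

PREAMBLE_US   = 20        # assumed PHY preamble/header time, in microseconds
CONTENTION_US = 110       # assumed average inter-frame gap plus backoff, microseconds
DATA_RATE_BPS = 1.0e9     # assumed ~1 Gbit/s MCS on a wide channel

def payload_share(payload_bytes):
    """Fraction of airtime carrying actual payload for one (aggregated) transmission."""
    payload_us = payload_bytes * 8 / DATA_RATE_BPS * 1e6
    return payload_us / (payload_us + PREAMBLE_US + CONTENTION_US)

for size in (1500, 16_000, 64_000, 256_000, 1_000_000):
    print(f"{size:>9} bytes per transmission: {payload_share(size):6.1%} of airtime is payload")
# A lone 1500-byte packet spends most of its airtime on fixed overhead; only a
# large aggregate pushes the payload share up toward the figures David mentions,
# and the crossover moves with the real header and gap durations.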
David Lang
On Wed, 19 Oct 2022, David P. Reed wrote:
> 4 microseconds!
>
> On Wednesday, October 19, 2022 3:23pm, "David Lang via Cake" <cake@lists.bufferbloat.net> said:
>
>
>
>> you have to listen and hear nothing for some timeframe before you transmit, that
>> listening time is defined in the standard. (isn't it??)
>>
>> David Lang
>>
>> On Wed, 19 Oct 2022, Bob McMahon wrote:
>>
>> > I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
>> > mostly driven by probabilities
>> > <https://link.springer.com/article/10.1007/s10270-020-00817-2>. If
>> > energy detect (ED) indicates the medium is available then the gap prior to
>> > transmit, assuming no others competing & winning at that moment in time, is
>> > driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
>> > distributions. Things change a bit with 802.11ax and trigger frames but the
>> > gap is still determined by the backoff and should be less than milliseconds
>> > per that. Things like NAVs will impact the gap too but that happens when
>> > another is transmitting.
>> >
>> >
>> > [image: image.png]
>> >
>> > Agreed that the PLCP preamble is at low MCS and the payload can be orders
>> > of magnitude greater (per different QAM encodings and other signal
>> > processing techniques.)
>> >
>> > Bob
>> >
>> > On Wed, Oct 19, 2022 at 12:09 AM David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
>> >>> Hi Bob,
>> >>>
>> >>>> Many network engineers typically, though incorrectly, perceive a
>> >>>> transmit unit as one ethernet packet. With WiFi it's one Mu
>> >>>> transmission or one Su transmission, with aggregation(s), which is a
>> >>>> lot more than one ethernet packet but it depends on things like MCS,
>> >>>> spatial stream powers, Mu peers, etc. and is variable. Some data
>> >>>> center designs have optimized the forwarding plane for flow
>> >>>> completion times so their equivalent transmit unit is a mouse flow.
>> >>>
>> >>> [SM] Is this driven more by the need to aggregate packets to amortize
>> >>> some cost over a larger payload or to reduce the scheduling overhead
>> >>> or to regularize things (as in fixed size DTUs used in DSL with G.INP
>> >>> retransmissions)?
>> >>
>> >> it's to amortize costs over a larger payload.
>> >>
>> >> the gap between transmissions is in ms, and the transmission header is
>> >> transmitted at a slow data rate (both for backwards compatibility with
>> >> older equipment that doesn't know about the higher data rate
>> >> modulations)
>> >>
>> >> For a long time, the transmission header was transmitted at 1Mb (which
>> >> is still the default in most equipment), but there is now an option to
>> >> no longer support 802.11b equipment, which raises the header
>> >> transmission rate to 11Mb/s.
>> >>
>> >> These factors are so imbalanced compared to the top data rates
>> >> available that you need to transmit several MB of data to have actual
>> >> data use 50% of the airtime.
>> >>
>> >> David Lang
>> >>
>> >
>> >
>> _______________________________________________
>> Cake mailing list
>> Cake@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cake
>>
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
2022-10-19 21:33 ` [Make-wifi-fast] [Bloat] " David Lang
@ 2022-10-19 21:46 ` Michael Richardson
2022-12-06 19:17 ` Bob McMahon
2022-10-20 9:36 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
2 siblings, 1 reply; 70+ messages in thread
From: Michael Richardson @ 2022-10-19 21:46 UTC (permalink / raw)
To: Stuart Cheshire, Dave Täht,
Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1: Type: text/plain, Size: 1515 bytes --]
Stuart Cheshire via Bloat <bloat@lists.bufferbloat.net> wrote:
>> I think the person with the cheetos pulling out a gun and shooting
>> everyone in front of him (AQM) would not go down well.
> Which is why starting with a bad analogy (people waiting in a grocery
> store) inevitably leads to bad conclusions.
> If we want to struggle to make the grocery store analogy work, perhaps
> we show people checking some grocery store app on their smartphone
> before they leave home, and if they see that a long line is beginning
> to form they wait until later, when the line is shorter. The challenge
> is not how to deal with a long queue when it’s there, it is how to
> avoid a long queue in the first place.
Maybe if we regard the entire grocery store as the "pipe", then we would
realize that the trick to reducing checkout lines is to move the constraint
from exiting, to entering the store :-)
Then the time you spend in the store varies because you have different
amounts of shopping to do, etc., and you get text messages from your spouse to
remember to pick up X, and that somehow is an analogy to the various
"PowerBoost" cable and LTE/5G systems that provide inconsistent
bandwidth.
(There are various pushes to actually do this, as the experience from COVID
was that having fewer people in the store pleased many people.)
--
Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works
-= IPv6 IoT consulting =-
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 21:33 ` [Make-wifi-fast] [Bloat] " David Lang
@ 2022-10-19 23:36 ` Stephen Hemminger
2022-10-20 14:26 ` [Make-wifi-fast] [Rpm] [Bloat] Traffic analogies (was: Wonderful video) Rich Brown
0 siblings, 1 reply; 70+ messages in thread
From: Stephen Hemminger @ 2022-10-19 23:36 UTC (permalink / raw)
To: David Lang via Bloat
Cc: David Lang, Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List
On Wed, 19 Oct 2022 14:33:28 -0700 (PDT)
David Lang via Bloat <bloat@lists.bufferbloat.net> wrote:
> On Wed, 19 Oct 2022, Stuart Cheshire via Bloat wrote:
>
> > On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
> >
> >> Accuracy be damned. The analogy to common experience resonates more.
> >
> > I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
>
> the 'privileged go first' is traditional QoS, and it can work to some extent,
> but is a nightmare to maintain and gets the wrong result most of the time.
A lot of times when this is proposed it has some business/political motivation.
It is like "priority boarding" for Global Services customers.
Not solving a latency problem, instead making stakeholders happy.
> AQM (fw_codel and cake) are more the 'cash only line' and '15 items or less'
> line, they speed up the things that can be fast a LOT, while not significantly
> slowing down the people with a full baskets (but in the process, it shortens the
> lines for those people with full baskets)
>
> >> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
> >
> > Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
> >
> > If we want to struggle to make the grocery store analogy work, perhaps we show
> > people checking some grocery store app on their smartphone before they leave
> > home, and if they see that a long line is beginning to form they wait until
> > later, when the line is shorter. The challenge is not how to deal with a long
> > queue when it’s there, it is how to avoid a long queue in the first place.
>
> only somewhat, you aren't going to have people deciding not to click on a link
> because the network is busy, and if you did try to go that direction, I would
> fight you. the prioritization is happening at a much lower level, which is hard
> to put into an analogy
>
> even with the 'slowing' of bulk traffic, no traffic is prevented, it's just that
> they aren't allowed to monopolize the links.
>
> This is where the grocery store analogy is weak, the reality would be more like
> 'the cashier will only process 30 items before you have to step aside and let
> someone else in', but since no store operates that way, it would be a bad
> analogy.
Grocery store analogies also break down because packets are not "precious";
it is okay to drop packets. A lot of AQM works by doing "drop early and often"
instead of "drop late and collapse".
>
> >> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
> >
> > I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel”
> > part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time,
> > and at best it partially masked the effects of bufferbloat. Having more queues
> > does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
> >
> >> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
> >>
> >> - "no, grasshopper, the solution to bufferbloat is no line... at all".
> >
> > That is the kind of thing I had in mind. Or a similar quote from The Matrix.
> > While everyone is debating ways to live with long queues, the guru asks, “What
> > if there were no queues?” That is the “mind blown” realization.
>
> In a world where there is no universal scheduler (and no universal knowlege to
> base any scheduling decisions on), and where you are going to have malicious
> actors trying to get more than their fair share, you can't rely on voluntary
> actions to eliminate the lines.
>
> There are data transportation apps that work by starting up a large number of
> connections in parallel for the highest transfer speeds (shortening slow start,
> reducing the impact of lost packets as they only affect one connection, etc).
> This isn't even malicious actors, but places like Hollywood studios sending
> the raw movie footage around over dedicated leased lines and wanting to get
> every bps of bandwidth that they are paying for used.
>
> David Lang
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-18 19:04 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
@ 2022-10-20 5:15 ` Sina Khanifar
2022-10-20 9:01 ` Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Sina Khanifar @ 2022-10-20 5:15 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Sina Khanifar, Dave Taht, Cake List, Make-Wifi-fast, Rpm
[-- Attachment #1: Type: text/plain, Size: 18490 bytes --]
Hi Sebastian,
>
> [SM] Just an observation, using Safari I see large maximal delays (like a
> small group of samples far out to the right of the bulk) for both down-
> and upload that essentially disappear when I switch to firefox. Now I tend
> to have a ton of tabs open in Safari while I only open firefox for
> dedicated use-cases with a few tabs at most, so I do not intend to throw
> shade on Safari here; my point is more browsers can and do affect the
> reported latency numbers, of you want to be able to test this, maybe ask
> users to use the OS browser (safari, edge, konqueror ;) ) as well as
> firefox and chrome so you can directly compare across browsers?
>
I believe this is because we use the WebTiming APIs to get more accurate latency numbers, but the API isn't fully supported on Safari. As such, latency measurements in Safari are much less accurate than in Firefox and Chrome.
>
> traceroute/mtr albeit not sure how well this approach works from inside
> the browser, can you e.g. control TTL and do you receive error messages
> via ICMP?
>
Unfortunately traceroutes via the browser don't really work :(. And I don't believe we can control TTL or see ICMP error messages either, though I haven't dug into this very deeply.
>
>
>
> Over in the OpenWrt forum we often see that server performance with
> iperf2/3 or netperf on a router is not all that representative for its
> routing performance. What do you expect to deduce from upload/download to
> the router? (I might misunderstand your point by a mile, if so please
> elaborate)
>
>
>
The goal would be to test the "local" latency, throughput, and bufferbloat between the user's device and the router, and then compare this with the latency, throughput, and bufferbloat when DL/ULing to a remote server.
This would reveal whether the dominant source of increase in latency under load is at the router's WAN interface or somewhere between the router and the user (e.g. WiFi, ethernet, powerline, Moca devices, PtP connections, etc).
Being able to test the user-to-router leg of the connection would be helpful more broadly beyond just bufferbloat. I often want to diagnose whether my connection issues or speed drops are happening due to an issue with my modem (and more generally the WAN connection) or if it's an issue with my wifi connection.
I guess I don't quite understand this part though: "iperf2/3 or netperf on a router is not all that representative for its routing performance." What exactly do you mean here?
>
> Most recent discussion moved over to https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379
>
>
>
Thanks! I have a lot of catching up to do on that thread, and some of it is definitely above my pay grade :).
>
> I think this ideally would be solved at the 3GPPP level
>
>
Agreed. I wonder if there's anything we can do to encourage them to pay attention to this.
Best regards,
Sina.
On Tue, Oct 18, 2022 at 12:04 PM, Sebastian Moeller < moeller0@gmx.de > wrote:
>
>
>
> Hi Sina,
>
>
>
>
> On 18 October 2022 19:17:16 CEST, Sina Khanifar via Bloat < bloat@ lists. bufferbloat.
> net ( bloat@lists.bufferbloat.net ) > wrote:
>
>
>
>>
>>>
>>>
>>> I can't help but wonder tho... are you collecting any statistics, over
>>> time, as to how much better the problem is getting?
>>>
>>>
>>>
>>
>>
>>
>> We are collecting anonymized data, but we haven't analyzed it yet. If we
>> get a bit of time we'll look at that hopefully.
>>
>>
>>
>
>
>
> [SM] Just an observation, using Safari I see large maximal delays (like a
> small group of samples far out to the right of the bulk) for both down-
> and upload that essentially disappear when I switch to firefox. Now I tend
> to have a ton of tabs open in Safari while I only open firefox for
> dedicated use-cases with a few tabs at most, so I do not intend to throw
> shade on Safari here; my point is more browsers can and do affect the
> reported latency numbers, of you want to be able to test this, maybe ask
> users to use the OS browser (safari, edge, konqueror ;) ) as well as
> firefox and chrome so you can directly compare across browsers?
>
>
>
>>
>>>
>>>
>>> And any chance they could do something similar explaining wifi?
>>>
>>>
>>>
>>
>>
>>
>> I'm actually not exactly sure what mitigations exist for WiFi at the
>> moment - is there something I can read?
>>
>>
>>
>>
>> On this note: when we were building our test one of the things we really
>> wished existed was a standardized way to test latency and throughput to
>> routers.
>>
>>
>>
>
>
>
> [SM] traceroute/mtr albeit not sure how well this approach works from
> inside the browser, can you e.g. control TTL and do you receive error
> messages via ICMP?
>
>
>
>
> It would be super helpful if there was a standard in consumer routers that
> allowed users to both ping and fetch 0kB fils from their routers, and also
> run download/upload tests.
>
>
>
>
> [SM] I think I see where you are coming from here. Over in the OpenWrt
> forum we often see that server performance with iperf2/3 or netperf on a
> router is not all that representative for its routing performance. What do
> you expect to deduce from upload/download to the router? (I might
> misunderstand your point by a mile, if so please elaborate)
>
>
>
>
> Regards
> Sebastian
>
>
>>
>>>
>>>
>>> I think one more wispa conference will be a clean sweep of everyone in the
>>> fixed wireless market to not only adopt these algorithms for plan
>>> enforcement, but even more directly on the radios and more CPE.
>>>
>>>
>>>
>>
>>
>>
>> T-Mobile has signed up 1m+ people to their new Home Internet over 5G, and
>> all of them have really meaningful bufferbloat issues. I've been pointing
>> folks who reach out to this thread ( https:/ / forum. openwrt. org/ t/ cake-w-adaptive-bandwidth-historic/
>> 108848 (
>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848 ) )
>> about cake-autorate and sqm-autorate, but ideally it would be fixed at a
>> network level, just not sure how to apply pressure (I'm in contact with
>> the T-Mobile Home Internet team, but I think this is above their heads).
>>
>>
>>
>>
>> On Mon, Oct 17, 2022 at 8:15 PM, Dave Taht < dave. taht@ gmail. com (
>> dave.taht@gmail.com ) > wrote:
>>
>>
>>
>>>
>>>
>>> On Mon, Oct 17 , 2022 at 7:51 PM Sina Khanifar < sina@ waveform. com ( sina@
>>> waveform. com ( sina@waveform.com ) ) > wrote:
>>>
>>>
>>>
>>>>
>>>>
>>>> Positive or negative, I can claim a bit of credit for this video :). We've
>>>> been working with LTT on a few projects and we pitched them on doing
>>>> something around bufferbloat. We've seen more traffic to our Waveforn test
>>>> than ever before, which has been fun!
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> Thank you. Great job with that video! And waveform has become the goto
>>> site for many now.
>>>
>>>
>>>
>>>
>>> I can't help but wonder tho... are you collecting any statistics, over
>>> time, as to how much better the problem is getting?
>>>
>>>
>>>
>>>
>>> And any chance they could do something similar explaining wifi?
>>>
>>>
>>>
>>>
>>> ...
>>>
>>>
>>>
>>>
>>> I was just at WISPA conference week before last. Preseem's booth
>>> (fq_codel) was always packed. Vilo living had put cake in their wifi 6
>>> product. A
>>> keynote speaker had deployed it and talked about it with waveform results
>>> on the big screen (2k people there). A large wireless vendor demo'd
>>> privately to me their flent results before/after cake on their next-gen
>>> radios... and people dissed tarana without me prompting for their bad
>>> bufferbloat... and the best thing of all that happened to me was...
>>> besides getting a hug from a young lady (megan) who'd salvaged her
>>> schooling in alaska using sqm - I walked up to the paraqum booth
>>> (another large QoE middlebox maker centered more in india) and asked.
>>>
>>>
>>>
>>> "So... do y'all have fq_codel yet?"
>>>
>>>
>>>
>>>
>>> And they smiled and said: "No, we have something better... we've got
>>> cake."
>>>
>>>
>>>
>>>
>>> "Cake? What's that?" - I said, innocently.
>>>
>>>
>>>
>>>
>>> They then stepped me through their 200Gbps (!!) product, which uses a
>>> bunch of offloads, and can track rtt down to a ms with the intel ethernet
>>> card they were using. They'd modifed cake to provide 16 (?) levels of
>>> service, and were running under dpdk (I am not sure if cake was). It was a
>>> great, convincing pitch...
>>>
>>>
>>>
>>>
>>> ... then I told 'em who I was. There's a video of the in-both concert
>>> after.
>>>
>>>
>>>
>>>
>>> ...
>>>
>>>
>>>
>>>
>>> The downside to me (and the subject of my talk) was that in nearly every
>>> person I talked to, fq_codel was viewed as a means to better subscriber
>>> bandwidth plan enforcement (which is admittedly the market that preseem
>>> pioneered) and it was not understood that I'd got involved in this whole
>>> thing because I'd wanted an algorithm to deal with "rain fade", running
>>> directly on the radios. People wanted to use the statistics on the radios
>>> to drive the plan enforcement better
>>> (which is an ok approach, I guess), and for 10+ I'd been whinging about
>>> the... physics.
>>>
>>>
>>>
>>> So I ranted about rfc7567 a lot and begged people now putting routerOS
>>> 7.2 and later out there (mikrotik is huge in this market), to kill their
>>> fifos and sfqs at the native rates of the interfaces... and watch their
>>> network improve that way also.
>>>
>>>
>>>
>>> I think one more wispa conference will be a clean sweep of everyone in the
>>> fixed wireless market to not only adopt these algorithms for plan
>>> enforcement, but even more directly on the radios and more CPE.
>>>
>>>
>>>
>>>
>>> I also picked up enough consulting business to keep me busy the rest of
>>> this year, and possibly more than I can handle (anybody looking?)
>>>
>>>
>>>
>>>
>>> I wonder what will happen at a fiber conference?
>>>
>>>
>>>
>>>>
>>>>
>>>> On Mon, Oct 17 , 2022 at 7:45 PM Dave Taht via Bloat < bloat@ lists.
>>>> bufferbloat. net ( bloat@ lists. bufferbloat. net (
>>>> bloat@lists.bufferbloat.net ) ) > wrote:
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Mon, Oct 17 , 2022 at 5:02 PM Stuart Cheshire < cheshire@ apple. com ( cheshire@
>>>>> apple. com ( cheshire@apple.com ) ) > wrote:
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> On 9 Oct 2022 , at 06:14, Dave Taht via Make-wifi-fast < make-wifi-fast@
>>>>>> lists. bufferbloat. net ( make-wifi-fast@ lists. bufferbloat. net (
>>>>>> make-wifi-fast@lists.bufferbloat.net ) ) > wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> This was so massively well done, I cried. Does anyone know how to get in
>>>>>>> touch with the ifxit folk?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> https:/ / www. youtube. com/ watch?v=UICh3ScfNWI ( https:/ / www. youtube.
>>>>>>> com/ watch?v=UICh3ScfNWI ( https://www.youtube.com/watch?v=UICh3ScfNWI ) )
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> I’m surprised that you liked this video. It seems to me that it repeats
>>>>>> all the standard misinformation. The analogy they use is the standard
>>>>>> terrible example of waiting in a long line at a grocery store, and the
>>>>>> “solution” is letting certain traffic “jump the line, angering everyone
>>>>>> behind them”.
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Accuracy be damned. The analogy to common experience resonates more.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> Some quotes from the video:
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> it would be so much more efficient for them to let you skip the line and
>>>>>>> just check out, especially since you’re in a hurry, but they’re rudely
>>>>>>> refusing
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I think the person with the cheetos pulling out a gun and shooting
>>>>> everyone in front of him (AQM) would not go down well.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> to go back to our grocery store analogy this would be like if a worker saw
>>>>>>> you standing at the back ... and either let you skip to the front of the
>>>>>>> line or opens up an express lane just for you
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Actually that analogy is fairly close to fair queuing. The multiple
>>>>> checker analogy is one of the most common analogies in queue theory
>>>>> itself.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> The video describes the problem of bufferbloat, and then describes the
>>>>>> same failed solution that hasn’t worked for the last three decades.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Hmm? It establishes the scenario, explains the problem *quickly*, disses
>>>>> gamer routers for not getting it right.. *points to an accurate test*, and
>>>>> then to the ideas and products that *actually work* with "smart queueing",
>>>>> with a screenshot of the most common
>>>>> (eero's optimize for gaming and videoconferencing), and fq_codel and cake
>>>>> *by name*, and points folk at the best known solution available, openwrt.
>>>>>
>>>>>
>>>>>
>>>>> Bing, baddabang, boom. Also the comments were revealing. A goodly
>>>>> percentage already knew the problem, more than a few were inspired to take
>>>>> the test,
>>>>> there was a whole bunch of "Aha!" success stories and 360k views, which is
>>>>> more people than we've ever been able to reach in for example, a nanog
>>>>> conference.
>>>>>
>>>>>
>>>>>
>>>>> I loved that folk taking the test actually had quite a few A results,
>>>>> without having had to do anything. At least some ISPs are getting it more
>>>>> right now!
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> At this point I think gamers in particular know what "brands" we've tried
>>>>> to establish - "Smart queues", "SQM", "OpenWrt", fq_codel and now "cake"
>>>>> are "good" things to have, and are stimulating demand by asking for them,
>>>>> It's certainly working out better and better for evenroute, firewalla,
>>>>> ubnt and others, and I saw an uptick in questions about this on various
>>>>> user forums.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I even like that there's a backlash now of people saying "fixing
>>>>> bufferbloat doesn't solve everything" -
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> Describing the obvious simple-minded (wrong) solution that any normal
>>>>>> person would think of based on their personal human experience waiting in
>>>>>> grocery stores and airports, is not describing the solution to
>>>>>> bufferbloat. The solution to bufferbloat is not that if you are privileged
>>>>>> then you get to “skip to the front of the line”. The solution to
>>>>>> bufferbloat is that there is no line!
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I like the idea of a guru floating above a grocery cart with a better
>>>>> string of explanations, explaining
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> With grocery stores and airports people’s arrivals are independent and not
>>>>>> controlled. There is no way for a grocery store or airport to generate
>>>>>> backpressure to tell people to wait at home when a queue begins to form.
>>>>>> The key to solving bufferbloat is generating timely backpressure to
>>>>>> prevent the queue forming in the first place, not accepting a huge queue
>>>>>> and then deciding who deserves special treatment to get better service
>>>>>> than all the other peons who still have to wait in a long queue, just like
>>>>>> before.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I am not huge on the word "backpressure" here. Needs to signal the other
>>>>> side to slow down, is more accurate. So might say timely signalling rather
>>>>> than timely backpressure?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Other feedback I got was that the video was too smarmy (I agree),
>>>>> different audiences than gamers need different forms of outreach...
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> but to me, winning the gamers has always been one of the most important
>>>>> things, as they make a lot of buying decisions, and they benefit the most
>>>>> for
>>>>> fq and packet prioritization as we do today in gamer routers and in cake +
>>>>> qosify.
>>>>>
>>>>>
>>>>>
>>>>> maybe that gets in the way of more serious markets. Certainly I would like
>>>>> another video explaining what goes wrong with videoconferencing.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> Stuart Cheshire
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> This song goes out to all the folk that thought Stadia would work: https:/
>>>>>
>>>>> / www. linkedin. com/ posts/
>>>>> dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>> (
>>>>> https:/ / www. linkedin. com/ posts/ dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>> (
>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>> )
>>>>> ) Dave Täht CEO, TekLibre, LLC
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@ lists. bufferbloat. net ( Bloat@lists.bufferbloat.net ) https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> This song goes out to all the folk that thought Stadia would work: https:/
>>>
>>> / www. linkedin. com/ posts/
>>> dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> (
>>> https:/ / www. linkedin. com/ posts/ dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> (
>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> )
>>> ) Dave Täht CEO, TekLibre, LLC
>>>
>>>
>>
>>
>
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>
>
[-- Attachment #2: Type: text/html, Size: 24811 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-20 5:15 ` Sina Khanifar
@ 2022-10-20 9:01 ` Sebastian Moeller
2022-10-20 14:50 ` Jeremy Harris
0 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-20 9:01 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Sina Khanifar, Dave Taht, Cake List, Make-Wifi-fast, Rpm
Hi Sina,
On 20 October 2022 07:15:47 CEST, Sina Khanifar <sina@waveform.com> wrote:
>Hi Sebastian,
>
>>
>> [SM] Just an observation, using Safari I see large maximal delays (like a
>> small group of samples far out to the right of the bulk) for both down-
>> and upload that essentially disappear when I switch to firefox. Now I tend
>> to have a ton of tabs open in Safari while I only open firefox for
>> dedicated use-cases with a few tabs at most, so I do not intend to throw
>> shade on Safari here; my point is more browsers can and do affect the
>> reported latency numbers, of you want to be able to test this, maybe ask
>> users to use the OS browser (safari, edge, konqueror ;) ) as well as
>> firefox and chrome so you can directly compare across browsers?
>>
>
>I believe this is because we use the WebTiming APIs to get a more accurate latency numbers, but the API isn't fully supported on Safari. As such, latency measurements in Safari are much less accurate than in Firefox and Chrome.
>
>>
>> traceroute/mtr albeit not sure how well this approach works from inside
>> the browser, can you e.g. control TTL and do you receive error messages
>> via ICMP?
>>
>
>Unfortunately traceroutes via the browser don't really work :(. And I don't believe we can control TTL or see ICMP error messages either, though I haven't dug into this very deeply.
>
>>
>>
>>
>> Over in the OpenWrt forum we often see that server performance with
>> iperf2/3 or netperf on a router is not all that representative for its
>> routing performance. What do you expect to deduce from upload/download to
>> the router? (I might misunderstand your point by a mile, if so please
>> elaborate)
>>
>>
>>
>
>The goal would be to test the "local" latency, throughput, and bufferbloat between the user's device and the router, and then compare this with the latency, throughput, and bufferbloat when DL/ULing to a remote server.
>
>This would reveal whether the dominant source of increase in latency under load is at the router's WAN interface or somewhere between the router and the user (e.g. WiFi, ethernet, powerline, Moca devices, PtP connections, etc).
>
>Being able to test the user-to-router leg of the connection would be helpful more broadly beyond just bufferbloat. I often want to diagnose whether my connection issues or speed drops are happening due to an issue with my modem (and more generally the WAN connection) or if it's an issue with my wifi connection.
>
>I guess I don't quite understand this part though: "iperf2/3 or netperf on a router is not all that representative for its routing performance." What exactly do you mean here?
[SM] IIRC some router SoCs allow higher routing throughput than they can sink or source as bulk traffic with iperf. This is especially pronounced in routers that use software or hardware acceleration and where the access speed is already beyond what the CPU can deliver on its own: iperf, being not accelerated, will tend to show throughput below the routing capacity, making interpretation of problems somewhat hard. IIUC similar issues arise for big-iron routers that pair a potent routing ASIC with a somewhat measly CPU; all tasks that get relegated to the CPU make the router appear much slower.
I guess you would need a second iperf server in your network and configure your router such that it has to route packets between server and client to get an idea what routing limits your router actually has. And even that is somewhat incomplete if, say, the ISP uses additional costly layers like PPPoE on the access link that the router needs to terminate.
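For what it's worth, a minimal sketch of that two-leg comparison, assuming one iperf3 server runs on a host behind the router (LAN side) and another beyond it (remote/WAN), so the second leg actually exercises the router's forwarding path rather than its own CPU. The hostnames are placeholders and the JSON field names are taken from iperf3's -J output.

import json, subprocess

LAN_SERVER = "192.168.1.10"        # hypothetical host behind the router
WAN_SERVER = "iperf.example.net"   # hypothetical host beyond the router

def tcp_throughput_mbps(server, reverse=False, seconds=10):
    """Run one iperf3 TCP test and return received goodput in Mbit/s."""
    cmd = ["iperf3", "-c", server, "-t", str(seconds), "-J"]   # -J = JSON output
    if reverse:
        cmd.append("-R")           # server sends, client receives ("download")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

for name, host in (("LAN leg (device <-> local server)", LAN_SERVER),
                   ("WAN leg (device <-> remote server)", WAN_SERVER)):
    up = tcp_throughput_mbps(host)
    down = tcp_throughput_mbps(host, reverse=True)
    print(f"{name}: upload {up:.0f} Mbit/s, download {down:.0f} Mbit/s")

If the LAN leg is much faster than the WAN leg, the bottleneck (and most of the bloat) sits at or beyond the WAN interface; if both legs look similar, the WiFi/LAN path deserves the first look.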
Regards
Sebastian
>
>>
>> Most recent discussion moved over to https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379
>>
>>
>>
>
>Thanks! I have a lot of catching up to do on that thread, and some of it is definitely above my pay grade :).
>
>>
>> I think this ideally would be solved at the 3GPPP level
>>
>>
>
>Agreed. I wonder if there's anything we can do to encourage them to pay attention to this.
>
>Best regards,
>
>Sina.
>
> [full quote of the Oct 18 message trimmed; see the copy earlier in this thread]
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
2022-10-19 21:33 ` [Make-wifi-fast] [Bloat] " David Lang
2022-10-19 21:46 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat Michael Richardson
@ 2022-10-20 9:36 ` Sebastian Moeller
2022-10-20 18:32 ` Stuart Cheshire
2 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-20 9:36 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
Hi Stuart,
> On Oct 19, 2022, at 22:44, Stuart Cheshire via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire@apple.com> wrote:
>
>> Accuracy be damned. The analogy to common experience resonates more.
>
> I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
>
>> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
>
> Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
>
> If we want to struggle to make the grocery store analogy work, perhaps we show people checking some grocery store app on their smartphone before they leave home, and if they see that a long line is beginning to form they wait until later, when the line is shorter. The challenge is not how to deal with a long queue when it’s there, it is how to avoid a long queue in the first place.
[SM] That seems somewhat optimistic. We have been there before: short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues, some more and some less transient. So we can strive to minimize queue build-up, sure, but we cannot avoid queues (including long queues) completely, so we need methods to deal with them gracefully.
Also, not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think of an on-line reaction-time-gated game: the need is to distribute the current world state to all participating clients ASAP. That often means a bunch of packets that cannot reasonably be held back by the server to pace them out, as the world state IIUC needs to be transmitted completely for clients to be able to actually do the right thing. Such an application will continue to dump its world-state burst per client into the network, as that is the required mode of operation. I think that there are other applications with similar requirements which will make sure that traffic stays bursty, and that IMHO will cause transient queues to build up. (Probably short-duration ones, but still.)
>
>> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
>
> I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, and at best it partially masked the effects of bufferbloat. Having more queues does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
[SM] Yes and no. IMHO it is the FQ part that gets greedy traffic off the back of those flows that stay below their capacity share, as it (unless overloaded) will isolate the consequence of exceeding one's capacity share to the flow(s) doing so. The AQM part then helps greedy traffic not congest itself unduly.
So for quite a lot of application classes (e.g. my world-state distribution example above) FQ (or any other type of competent scheduling) will already solve most of the problem; heck, if ubiquitous it would even allow greedy traffic to switch to delay-based CC methods that can help keep queues small even without competent AQM at the bottlenecks (not that I recommend/endorse that, I am all for competent AQM/scheduling at the bottlenecks*).
>
>> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
>>
>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>
> That is the kind of thing I had in mind. Or a similar quote from The Matrix. While everyone is debating ways to live with long queues, the guru asks, “What if there were no queues?” That is the “mind blown” realization.
[SM] However, the "no queues" state is generally not achievable, nor would it be desirable; queues have utility as "shock absorbers" and help keep a link busy***. I admit though that "no oversized queues" is far less snappy.
Regards
Sebastian
*) Which is why I am vehemently opposed to L4S: it offers neither competent scheduling nor competent AQM; in both regimes it is admittedly better than the current status quo of having neither, but it falls short of the state of the art in both so much that deploying L4S today seems indefensible on technical grounds. And lo and behold, one of L4S's biggest proponents does so mainly on ideological grounds (just read "Flow rate fairness: dismantling a religion" https://dl.acm.org/doi/10.1145/1232919.1232926 and then ask yourself whether you should trust such an author to make objective design/engineering choices after already tying himself to the mast that strongly**), but I digress...
**) I even have some sympathy for his goal of equalizing "cost" and not just simple flow rate, but I fail to see any way at all of supplying intermediate hops with sufficient and reliable enough information to do anything better than "aim to starve no flow". As I keep repeating, flow-queueing is (almost) never optimal, but at the same time it is almost never pessimal as it avoids picking winners and losers as much as possible (which in turn makes it considerably harder to abuse than other unequal rate distribution methods that rely on some characteristics of packet data).
***) I understand that one way to avoid queues is to keep ample capacity reserves so a link "never" gets congested, but that has some issues:
a) to keep a link at, say, at most 80% capacity there needs to be some admission control (or the aggregate ingress capacity needs to be smaller than the link capacity), which really just moves around the position where a queue will form.
b) even then, most link technologies are either 100% busy or 0% busy, so if two packets from two different ingress interfaces arrive simultaneously a micro-queue builds up, as one packet needs to wait for the other to pass the link.
c) many internet access links for end users are still small enough that congestion can and will reliably happen under normal use-cases and traffic conditions; so as a user of such a link I need to deal with the queueing and cannot just wish it away.
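Point b) can be put in numbers; the link rates below are arbitrary examples, chosen only to show that the collision-induced micro-queue shrinks with link speed but never disappears:

# Even on an uncongested link, if two 1500-byte packets arrive at the same
# instant from two ingress ports, one of them waits one serialization time.
# Link rates below are arbitrary examples.

packet_bits = 1500 * 8
for rate_bps in (10e6, 100e6, 1e9, 10e9):
    wait_us = packet_bits / rate_bps * 1e6
    print(f"{rate_bps/1e6:>7.0f} Mbit/s: second packet waits {wait_us:8.1f} us")
# 10 Mbit/s -> 1200 us, 1 Gbit/s -> 12 us: the queue is tiny but never zero.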
>
> Stuart Cheshire
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] [Bloat] Traffic analogies (was: Wonderful video)
2022-10-19 23:36 ` Stephen Hemminger
@ 2022-10-20 14:26 ` Rich Brown
0 siblings, 0 replies; 70+ messages in thread
From: Rich Brown @ 2022-10-20 14:26 UTC (permalink / raw)
To: bloat, Rpm, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 1963 bytes --]
> On Oct 19, 2022, at 7:36 PM, Stephen Hemminger via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> Grocery store analogies also breakdown because packets are not "precious"
> it is okay to drop packets. A lot of AQM works by doing "drop early and often"
> instead of "drop late and collapse".
Another problem is that grocery store customers are individual flows in their own right - not correlated with each other. Why is my grocery cart any more (or less) important than all the others who're waiting?
I continue to cast about for intuitive analogies (and getting skunked each time). But I'm going to try again...
Imagine a company with a bunch of employees. (Or a sports venue, or a UPS depot - any location where a bunch of vehicles with similar interests all decide to travel at once.) At quitting time, everyone leaves the parking lot where a traffic cop controls entry onto a two-lane road.
If there isn't any traffic on that road, the traffic cop keeps people coming out of the driveway "at the maximum rate".
If a car approaches on the road, what's the fair strategy for letting that single car pass? Wait 'til the parking lot empties? Make them wait 5 minutes? Make them wait one minute? It seems clear to me that it's fairest to stop traffic right away, let the car pass, then resume the driveway traffic.
This has the advantage of distinguishing between new flows (the single car) and bulk flows (treating vehicles in the driveway as a single flow). But it also feels like QoS prioritization or a simple two-queue model, neither of which leads to the proper intuition.
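For reference, fq_codel-style schedulers realize the "let the single car through promptly" behaviour without strict priority via roughly the new-flows/old-flows trick sketched below (a simplification for illustration, not the actual kernel code): a flow that has been idle re-enters on the new-flows list and is served first, but once it keeps a backlog it queues with everyone else.

from collections import deque

# Sketch of the new-flows / old-flows idea used by fq_codel-like schedulers.
# A flow with no backlog re-enters via new_flows and gets a brief boost; a
# flow that keeps a backlog lives on old_flows. Packet sizes are assumed.

new_flows, old_flows = deque(), deque()
backlog = {"bulk": deque([1500] * 20)}   # an established bulk flow
old_flows.append("bulk")

def enqueue(flow, pkt):
    backlog.setdefault(flow, deque()).append(pkt)
    if flow not in new_flows and flow not in old_flows:
        new_flows.append(flow)           # idle flow re-enters with a boost

def dequeue_one():
    lst = new_flows if new_flows else old_flows
    flow = lst.popleft()
    pkt = backlog[flow].popleft()
    if backlog[flow]:
        old_flows.append(flow)           # still backlogged: back of old_flows
    return flow, pkt

enqueue("single_car", 200)               # the lone approaching car
print(dequeue_one())                     # -> ('single_car', 200): served ahead of the backlog
print(dequeue_one())                     # -> ('bulk', 1500): bulk resumes immediately after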
Any "traffic" analogy also ignores people's very real (and correct) intuition that "cars have mass". They can't stop in an instant and need to maintain space between them. This also ignores the recently-stated reality (for routers, at least) that "The best queue is no queue at all..."
Is there any hope of tweaking this analogy? :-)
Thanks.
Rich
[-- Attachment #2: Type: text/html, Size: 5010 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-20 9:01 ` Sebastian Moeller
@ 2022-10-20 14:50 ` Jeremy Harris
2022-10-20 15:56 ` Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Jeremy Harris @ 2022-10-20 14:50 UTC (permalink / raw)
To: make-wifi-fast
On 20/10/2022 10:01, Sebastian Moeller via Make-wifi-fast wrote:
> [SM] IIRC some router SoC allow higher routing throughput than they can sink or source bulk traffic with iperf. This is especially pronounced in routers that use soft- and especially hardware acceleration
Why do you think that iperf traffic will not be accelerated?
--
Cheers,
Jeremy
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-20 14:50 ` Jeremy Harris
@ 2022-10-20 15:56 ` Sebastian Moeller
2022-10-20 17:59 ` Bob McMahon
0 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-20 15:56 UTC (permalink / raw)
To: Jeremy Harris, Jeremy Harris via Make-wifi-fast, make-wifi-fast
Hi Jeremy,
On 20 October 2022 16:50:43 CEST, Jeremy Harris via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>On 20/10/2022 10:01, Sebastian Moeller via Make-wifi-fast wrote:
>> [SM] IIRC some router SoC allow higher routing throughput than they can sink or source bulk traffic with iperf. This is especially pronounced in routers that use soft- and especially hardware acceleration
>
>Why do you think that iperf traffic will not be accelerated?
[SM] Because for sending and receiving data in the iperf application the CPU has to handle the traffic, while accelerators mainly work by having the CPU touch the data as little as possible, or not at all.
Regards
Sebastian
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] A quick report from the WISPA conference
2022-10-20 15:56 ` Sebastian Moeller
@ 2022-10-20 17:59 ` Bob McMahon
0 siblings, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-20 17:59 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Jeremy Harris, Jeremy Harris via Make-wifi-fast
[-- Attachment #1.1: Type: text/plain, Size: 2445 bytes --]
Iperf on the Router/AP CPU sources and sinks traffic. The Router/AP
hardware acceleration forwards traffic. These can be very different logic
subsystems.
One can sometimes connect a computer to the wired LAN port and measure
traffic between WiFi and the wired LAN port to get forwarding performance.
Also, the iperf 2 <https://sourceforge.net/projects/iperf2/> bounceback
feature, using low duty cycle traffic, may be sufficient to give WiFi
responsiveness metrics even with the Router CPU sourcing and sinking the
traffic. "Your mileage may vary" applies.
Bob
On Thu, Oct 20, 2022 at 8:57 AM Sebastian Moeller via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
> Hi Jeremy,
>
> On 20 October 2022 16:50:43 CEST, Jeremy Harris via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >On 20/10/2022 10:01, Sebastian Moeller via Make-wifi-fast wrote:
> >> [SM] IIRC some router SoC allow higher routing throughput than they can
> sink or source bulk traffic with iperf. This is especially pronounced in
> routers that use soft- and especially hardware acceleration
> >
> >Why do you think that iperf traffic will not be accelerated?
>
> [SM] Because for sending and receiving data in the iperf application the
> CPU has to handle the traffic, while accelerator mainly work by having the
> CPU touch the data as little as possible/not at all.
>
> Regards
> Sebastian
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
[-- Attachment #1.2: Type: text/html, Size: 3160 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 9:36 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
@ 2022-10-20 18:32 ` Stuart Cheshire
2022-10-20 19:04 ` Bob McMahon
` (2 more replies)
0 siblings, 3 replies; 70+ messages in thread
From: Stuart Cheshire @ 2022-10-20 18:32 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Stuart,
>
> [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
> Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
I’m afraid you are wrong about this. If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network can carry them. One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out. Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower. The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination. Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
Please watch this video, which explains it better than I can in a written email:
<https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
Stuart Cheshire
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 18:32 ` Stuart Cheshire
@ 2022-10-20 19:04 ` Bob McMahon
2022-10-20 19:12 ` Dave Taht
2022-10-20 19:33 ` Sebastian Moeller
2022-10-20 19:33 ` Dave Taht
2022-10-26 20:38 ` Sebastian Moeller
2 siblings, 2 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-20 19:04 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Sebastian Moeller, Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1.1: Type: text/plain, Size: 4087 bytes --]
Intel has a good analogous video on this with their CPU video here
<https://youtu.be/o_WXTRS2qTY?t=316> going over branches and failed
predictions. And to Stuart's point, the longer pipelines make the forks
worse in the amount of in-process bytes that need to be thrown away.
Interactivity, in my opinion, suggests shrinking the pipeline because, with
networks, there is no quick way to throw away stale data; rather, every
forwarding device continues forwarding invalid data. That's bad for the
network too, spending resources on something that's no longer valid. We in
the test & measurement community never measure this.
There have been a few requests that iperf 2 measure the "bytes thrown away"
per a fork (user moves a video pointer, etc.) I haven't come up with a good
test yet. I'm still trying to get basic awareness about existing latency,
OWD and responsiveness metrics. I do think measuring the amount of
resources spent on stale data is sorta like food waste, few really pay
attention to it.
Bob
FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
--tcp-write-prefetch n[kmKM]
Set TCP_NOTSENT_LOWAT on the socket and use event based writes per select()
on the socket.
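For anyone who wants to experiment with the same mechanism outside of iperf 2, a minimal Python sketch of the pattern (set the option, then only hand data to the kernel when select() reports the socket writable) is below. The value 25 is the Linux TCP_NOTSENT_LOWAT option number; the 16 kB threshold, host and port are arbitrary placeholder choices.

import select
import socket

# Sketch: with TCP_NOTSENT_LOWAT set, select() only reports the socket as
# writable once the not-yet-sent backlog in the kernel drops below the
# threshold, so the application hands over fresh data "just in time" instead
# of parking a large stale backlog in the socket buffer.

TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)   # 25 on Linux

def send_latest(sock, next_update, lowat=16 * 1024):
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, lowat)
    while True:
        select.select([], [sock], [])    # wait until un-sent backlog < lowat
        payload = next_update()          # fetch the freshest data only now
        if payload is None:
            break
        sock.sendall(payload)

# Example use (host, port and the frame generator are placeholders):
# s = socket.create_connection(("192.0.2.1", 5001))
# send_latest(s, make_next_frame)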
On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> > Hi Stuart,
> >
> > [SM] That seems to be somewhat optimistic. We have been there before,
> short of mandating actually-working oracle schedulers on all end-points,
> intermediate hops will see queues some more and some less transient. So we
> can strive to minimize queue build-up sure, but can not avoid queues and
> long queues completely so we need methods to deal with them gracefully.
> > Also not many applications are actually helped all that much by letting
> information get stale in their own buffers as compared to an on-path queue.
> Think an on-line reaction-time gated game, the need is to distribute
> current world state to all participating clients ASAP.
>
> I’m afraid you are wrong about this. If an on-line game wants low delay,
> the only answer is for it to avoid generating position updates faster than
> the network carry them. One packet giving the current game player position
> is better than a backlog of ten previous stale ones waiting to go out.
> Sending packets faster than the network can carry them does not get them to
> the destination faster; it gets them there slower. The same applies to
> frames in a screen sharing application. Sending the current state of the
> screen *now* is better than having a backlog of ten previous stale frames
> sitting in buffers somewhere on their way to the destination. Stale data is
> not inevitable. Applications don’t need to have stale data if they avoid
> generating stale data in the first place.
>
> Please watch this video, which explains it better than I can in a written
> email:
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
>
> Stuart Cheshire
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
[-- Attachment #1.2: Type: text/html, Size: 4845 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 19:04 ` Bob McMahon
@ 2022-10-20 19:12 ` Dave Taht
2022-10-20 19:31 ` Bob McMahon
2022-10-20 19:40 ` Sebastian Moeller
2022-10-20 19:33 ` Sebastian Moeller
1 sibling, 2 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-20 19:12 UTC (permalink / raw)
To: Bob McMahon; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
<make-wifi-fast@lists.bufferbloat.net> wrote:
>
> Intel has a good analogous video on this with their CPU video here going over branches and failed predictions. And to Stuart's point, the longer pipelines make the forks worse in the amount of in-process bytes that need to be thrown away. Interactivity, in my opinion, suggests shrinking the pipeline because, with networks, there is no quick way to throw away stale data rather every forwarding device continues forward with invalid data. That's bad for the network too, spending resources on something that's no longer valid. We in the test & measurement community never measure this.
One of my all time favorite demos was of stuart's remote desktop
scenario, where he moved the mouse and the window moved with it.
> There have been a few requests that iperf 2 measure the "bytes thrown away" per a fork (user moves a video pointer, etc.) I haven't come up with a good test yet. I'm still trying to get basic awareness about existing latency, OWD and responsiveness metrics. I do think measuring the amount of resources spent on stale data is sorta like food waste, few really pay attention to it.
>
> Bob
>
> FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
>
> --tcp-write-prefetch n[kmKM]
> Set TCP_NOTSENT_LOWAT on the socket and use event based writes per select() on the socket.
>
>
> On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>>
>> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> > Hi Stuart,
>> >
>> > [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
>> > Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
>>
>> I’m afraid you are wrong about this. If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network carry them. One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out. Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower. The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination. Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
>>
>> Please watch this video, which explains it better than I can in a written email:
>>
>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
>>
>> Stuart Cheshire
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>
> This electronic communication and the information and any files transmitted with it, or attached to it, are confidential and are intended solely for the use of the individual or entity to whom it is addressed and may contain information that is confidential, legally privileged, protected by privacy laws, or otherwise restricted from disclosure to anyone else. If you are not the intended recipient or the person responsible for delivering the e-mail to the intended recipient, you are hereby notified that any use, copying, distributing, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited. If you received this e-mail in error, please return the e-mail to the sender, delete it from your computer, and destroy any printed copy of it._______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 19:12 ` Dave Taht
@ 2022-10-20 19:31 ` Bob McMahon
2022-10-20 19:40 ` Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-20 19:31 UTC (permalink / raw)
To: Dave Taht; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1.1: Type: text/plain, Size: 6368 bytes --]
The demo is nice but a way to measure this with full statistics can be
actionable by engineers. I did add support for tcp write time with
histograms, where setting TCP_NOTSENT_LOWAT, can give a sense of the
network responsiveness as the writes will await the select() indicating the
pipeline has drained. Nobody really uses this much.
Also, there is a suggestion for the server to generate branches so-to-speak
by sending an event back to the client, e.g. move the video pointer, but
how does the test tool decide when to create the user events that need to
be sent back? How long does it wait between events, etc?
Bob
On Thu, Oct 20, 2022 at 12:12 PM Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
> <make-wifi-fast@lists.bufferbloat.net> wrote:
> >
> > Intel has a good analogous video on this with their CPU video here going
> over branches and failed predictions. And to Stuart's point, the longer
> pipelines make the forks worse in the amount of in-process bytes that need
> to be thrown away. Interactivity, in my opinion, suggests shrinking the
> pipeline because, with networks, there is no quick way to throw away stale
> data rather every forwarding device continues forward with invalid data.
> That's bad for the network too, spending resources on something that's no
> longer valid. We in the test & measurement community never measure this.
>
> One of my all time favorite demos was of stuart's remote desktop
> scenario, where he moved the mouse and the window moved with it.
>
> > There have been a few requests that iperf 2 measure the "bytes thrown
> away" per a fork (user moves a video pointer, etc.) I haven't come up with
> a good test yet. I'm still trying to get basic awareness about existing
> latency, OWD and responsiveness metrics. I do think measuring the amount of
> resources spent on stale data is sorta like food waste, few really pay
> attention to it.
> >
> > Bob
> >
> > FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
> >
> > --tcp-write-prefetch n[kmKM]
> > Set TCP_NOTSENT_LOWAT on the socket and use event based writes per
> select() on the socket.
> >
> >
> > On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >>
> >> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>
> >> > Hi Stuart,
> >> >
> >> > [SM] That seems to be somewhat optimistic. We have been there before,
> short of mandating actually-working oracle schedulers on all end-points,
> intermediate hops will see queues some more and some less transient. So we
> can strive to minimize queue build-up sure, but can not avoid queues and
> long queues completely so we need methods to deal with them gracefully.
> >> > Also not many applications are actually helped all that much by
> letting information get stale in their own buffers as compared to an
> on-path queue. Think an on-line reaction-time gated game, the need is to
> distribute current world state to all participating clients ASAP.
> >>
> >> I’m afraid you are wrong about this. If an on-line game wants low
> delay, the only answer is for it to avoid generating position updates
> faster than the network carry them. One packet giving the current game
> player position is better than a backlog of ten previous stale ones waiting
> to go out. Sending packets faster than the network can carry them does not
> get them to the destination faster; it gets them there slower. The same
> applies to frames in a screen sharing application. Sending the current
> state of the screen *now* is better than having a backlog of ten previous
> stale frames sitting in buffers somewhere on their way to the destination.
> Stale data is not inevitable. Applications don’t need to have stale data if
> they avoid generating stale data in the first place.
> >>
> >> Please watch this video, which explains it better than I can in a
> written email:
> >>
> >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
> >>
> >> Stuart Cheshire
> >>
> >> _______________________________________________
> >> Make-wifi-fast mailing list
> >> Make-wifi-fast@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
> >
> > This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of
> it._______________________________________________
> > Make-wifi-fast mailing list
> > Make-wifi-fast@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
>
[-- Attachment #1.2: Type: text/html, Size: 7724 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 19:04 ` Bob McMahon
2022-10-20 19:12 ` Dave Taht
@ 2022-10-20 19:33 ` Sebastian Moeller
1 sibling, 0 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-20 19:33 UTC (permalink / raw)
To: Bob McMahon; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
Hi Bob,
I think I agree; I also agree with the goal of keeping queues small to non-existent. All I am saying is that this is fine as a goal, but unrealistic as a reachable end-point. Queues in the network serve a purpose (actually multiple) and are not pure bloat. The trick is to keep the good properties while minimizing the bad. The way I put it is:
over-sized and under-managed buffers/queues are bad; the solution is not to get rid of queues but to size them better and, more importantly, manage them better. That will result in overall less queue delay, but critically not zero queue delay.
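As a rough illustration of "size them better": a classic rule of thumb is to keep standing buffering in the neighborhood of one bandwidth-delay product rather than the hundreds of milliseconds many defaults allow. The rates and the 40 ms RTT below are example values, not recommendations:

# Bandwidth-delay product as a rough upper bound for a sensible buffer,
# versus a typical bloated default. Rates and RTT below are example values.

def bdp_bytes(rate_bps, rtt_s):
    return rate_bps * rtt_s / 8

for rate_mbps in (10, 100, 1000):
    bdp = bdp_bytes(rate_mbps * 1e6, 0.040)        # assume a 40 ms path RTT
    bloated = bdp_bytes(rate_mbps * 1e6, 0.500)    # 500 ms worth of buffer
    print(f"{rate_mbps:>5} Mbit/s: ~{bdp/1e3:7.0f} kB (1 BDP)  vs  {bloated/1e6:5.1f} MB (500 ms of buffer)")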
Regards
Sebastian
> On Oct 20, 2022, at 21:04, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
>
> Intel has a good analogous video on this with their CPU video here going over branches and failed predictions. And to Stuart's point, the longer pipelines make the forks worse in the amount of in-process bytes that need to be thrown away. Interactivity, in my opinion, suggests shrinking the pipeline because, with networks, there is no quick way to throw away stale data rather every forwarding device continues forward with invalid data. That's bad for the network too, spending resources on something that's no longer valid. We in the test & measurement community never measure this.
>
> There have been a few requests that iperf 2 measure the "bytes thrown away" per a fork (user moves a video pointer, etc.) I haven't come up with a good test yet. I'm still trying to get basic awareness about existing latency, OWD and responsiveness metrics. I do think measuring the amount of resources spent on stale data is sorta like food waste, few really pay attention to it.
>
> Bob
>
> FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
>
> --tcp-write-prefetch n[kmKM]
> Set TCP_NOTSENT_LOWAT on the socket and use event based writes per select() on the socket.
>
>
> On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> > Hi Stuart,
> >
> > [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
> > Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
>
> I’m afraid you are wrong about this. If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network carry them. One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out. Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower. The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination. Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
>
> Please watch this video, which explains it better than I can in a written email:
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
>
> Stuart Cheshire
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
> This electronic communication and the information and any files transmitted with it, or attached to it, are confidential and are intended solely for the use of the individual or entity to whom it is addressed and may contain information that is confidential, legally privileged, protected by privacy laws, or otherwise restricted from disclosure to anyone else. If you are not the intended recipient or the person responsible for delivering the e-mail to the intended recipient, you are hereby notified that any use, copying, distributing, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited. If you received this e-mail in error, please return the e-mail to the sender, delete it from your computer, and destroy any printed copy of it.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 18:32 ` Stuart Cheshire
2022-10-20 19:04 ` Bob McMahon
@ 2022-10-20 19:33 ` Dave Taht
2022-10-26 20:38 ` Sebastian Moeller
2 siblings, 0 replies; 70+ messages in thread
From: Dave Taht @ 2022-10-20 19:33 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Sebastian Moeller, Rpm, Make-Wifi-fast, Cake List, bloat
On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire <cheshire@apple.com> wrote:
>
> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> > Hi Stuart,
> >
> > [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
> > Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
>
> I’m afraid you are wrong about this. If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network carry them. One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out. Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower. The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination. Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
The core of what you describe is that transports and applications are
evolving towards being delay-aware, which is the primary outcome you
get from an FQ'd environment, be the FQs physical (VoQs, LAGs,
multiple channels or subcarriers in wireless technologies) or virtual
(fq-codel, cake, fq-pie), so that the only source of congestion is
self-harm.
Everything from BBR to Google's gcc for videoconferencing, to recent
work on Swift ( https://research.google/pubs/pub49448/ ) seems to be
pointing this way.
I'm also loving the work on reliable FQ detection for QUIC.
> Please watch this video, which explains it better than I can in a written email:
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
>
> Stuart Cheshire
>
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 19:12 ` Dave Taht
2022-10-20 19:31 ` Bob McMahon
@ 2022-10-20 19:40 ` Sebastian Moeller
2022-10-21 17:48 ` Bob McMahon
1 sibling, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-20 19:40 UTC (permalink / raw)
To: Dave Täht; +Cc: Bob McMahon, Rpm, Cake List, bloat, Make-Wifi-fast
Hi Dave,
> On Oct 20, 2022, at 21:12, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
> <make-wifi-fast@lists.bufferbloat.net> wrote:
>>
>> Intel has a good analogous video on this with their CPU video here going over branches and failed predictions. And to Stuart's point, the longer pipelines make the forks worse in the amount of in-process bytes that need to be thrown away. Interactivity, in my opinion, suggests shrinking the pipeline because, with networks, there is no quick way to throw away stale data rather every forwarding device continues forward with invalid data. That's bad for the network too, spending resources on something that's no longer valid. We in the test & measurement community never measure this.
>
> One of my all time favorite demos was of stuart's remote desktop
> scenario, where he moved the mouse and the window moved with it.
[SM] Fair enough. However, in 2015 I had been using NX's remote X11 desktop solution, which even from Central Europe to California allowed me to remote-control graphical applications far better than the first demo with its multi-second delay between mouse movement and resulting screen updates. (This was over a 6/1 Mbps ADSL link, admittedly using HTB-fq_codel, but since it did not saturate the link I attribute the usability to NX's better design.) I will make an impolite suggestion here: the demonstrated screen-sharing program simply had not yet been optimized/designed for longer, slower paths...
Regards
Sebastian
>
>> There have been a few requests that iperf 2 measure the "bytes thrown away" per a fork (user moves a video pointer, etc.) I haven't come up with a good test yet. I'm still trying to get basic awareness about existing latency, OWD and responsiveness metrics. I do think measuring the amount of resources spent on stale data is sorta like food waste, few really pay attention to it.
>>
>> Bob
>>
>> FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
>>
>> --tcp-write-prefetch n[kmKM]
>> Set TCP_NOTSENT_LOWAT on the socket and use event based writes per select() on the socket.
>>
>>
>> On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>>>
>>> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>>> Hi Stuart,
>>>>
>>>> [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
>>>> Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
>>>
>>> I’m afraid you are wrong about this. If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network carry them. One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out. Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower. The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination. Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
>>>
>>> Please watch this video, which explains it better than I can in a written email:
>>>
>>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
>>>
>>> Stuart Cheshire
>>>
>>> _______________________________________________
>>> Make-wifi-fast mailing list
>>> Make-wifi-fast@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>>
>>
>> This electronic communication and the information and any files transmitted with it, or attached to it, are confidential and are intended solely for the use of the individual or entity to whom it is addressed and may contain information that is confidential, legally privileged, protected by privacy laws, or otherwise restricted from disclosure to anyone else. If you are not the intended recipient or the person responsible for delivering the e-mail to the intended recipient, you are hereby notified that any use, copying, distributing, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited. If you received this e-mail in error, please return the e-mail to the sender, delete it from your computer, and destroy any printed copy of it._______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 19:40 ` Sebastian Moeller
@ 2022-10-21 17:48 ` Bob McMahon
0 siblings, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-10-21 17:48 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Dave Täht, Rpm, Cake List, bloat, Make-Wifi-fast
[-- Attachment #1.1: Type: text/plain, Size: 6863 bytes --]
Hi All,
I just wanted to say thanks for this discussion. I always learn from each
and all of you.
Bob
On Thu, Oct 20, 2022 at 12:40 PM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
>
> > On Oct 20, 2022, at 21:12, Dave Taht via Rpm <rpm@lists.bufferbloat.net>
> wrote:
> >
> > On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
> > <make-wifi-fast@lists.bufferbloat.net> wrote:
> >>
> >> Intel has a good analogous video on this with their CPU video here
> going over branches and failed predictions. And to Stuart's point, the
> longer pipelines make the forks worse in the amount of in-process bytes
> that need to be thrown away. Interactivity, in my opinion, suggests
> shrinking the pipeline because, with networks, there is no quick way to
> throw away stale data rather every forwarding device continues forward with
> invalid data. That's bad for the network too, spending resources on
> something that's no longer valid. We in the test & measurement community
> never measure this.
> >
> > One of my all time favorite demos was of stuart's remote desktop
> > scenario, where he moved the mouse and the window moved with it.
>
> [SM] Fair enough. However in 2015 I had been using NX's remote X11
> desktop solution which even from Central Europe to California allowed me to
> remote control graphical applications way better than the first demo with
> the multi-second delay between mouse movement and resulting screen updates.
> (This was over a 6/1 Mbps ADSL link, admittedly using HTB-fq_codel, but
> since it did not saturate the link I assign the usability to NX's better
> design). I will make an impolite suggestion here, that the demonstrated
> screen sharing program simply had not yet been optimized/designed for
> longer slower paths...
>
> Regards
> Sebastian
>
>
> >
> >> There have been a few requests that iperf 2 measure the "bytes thrown
> away" per a fork (user moves a video pointer, etc.) I haven't come up with
> a good test yet. I'm still trying to get basic awareness about existing
> latency, OWD and responsiveness metrics. I do think measuring the amount of
> resources spent on stale data is sorta like food waste, few really pay
> attention to it.
> >>
> >> Bob
> >>
> >> FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.
> >>
> >> --tcp-write-prefetch n[kmKM]
> >> Set TCP_NOTSENT_LOWAT on the socket and use event based writes per
> select() on the socket.
> >>
> >>
> >> On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >>>
> >>> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>>
> >>>> Hi Stuart,
> >>>>
> >>>> [SM] That seems to be somewhat optimistic. We have been there before,
> short of mandating actually-working oracle schedulers on all end-points,
> intermediate hops will see queues some more and some less transient. So we
> can strive to minimize queue build-up sure, but can not avoid queues and
> long queues completely so we need methods to deal with them gracefully.
> >>>> Also not many applications are actually helped all that much by
> letting information get stale in their own buffers as compared to an
> on-path queue. Think an on-line reaction-time gated game, the need is to
> distribute current world state to all participating clients ASAP.
> >>>
> >>> I’m afraid you are wrong about this. If an on-line game wants low
> delay, the only answer is for it to avoid generating position updates
> faster than the network carry them. One packet giving the current game
> player position is better than a backlog of ten previous stale ones waiting
> to go out. Sending packets faster than the network can carry them does not
> get them to the destination faster; it gets them there slower. The same
> applies to frames in a screen sharing application. Sending the current
> state of the screen *now* is better than having a backlog of ten previous
> stale frames sitting in buffers somewhere on their way to the destination.
> Stale data is not inevitable. Applications don’t need to have stale data if
> they avoid generating stale data in the first place.
> >>>
> >>> Please watch this video, which explains it better than I can in a
> written email:
> >>>
> >>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
> >>>
> >>> Stuart Cheshire
> >>>
> >>> _______________________________________________
> >>> Make-wifi-fast mailing list
> >>> Make-wifi-fast@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >>
> >>
> >> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of
> it._______________________________________________
> >> Make-wifi-fast mailing list
> >> Make-wifi-fast@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
> >
> >
> > --
> > This song goes out to all the folk that thought Stadia would work:
> >
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > Dave Täht CEO, TekLibre, LLC
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
>
>
[-- Attachment #1.2: Type: text/html, Size: 8660 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-20 18:32 ` Stuart Cheshire
2022-10-20 19:04 ` Bob McMahon
2022-10-20 19:33 ` Dave Taht
@ 2022-10-26 20:38 ` Sebastian Moeller
2022-10-26 20:42 ` Dave Taht
2 siblings, 1 reply; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-26 20:38 UTC (permalink / raw)
To: Stuart Cheshire; +Cc: Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
Hi Stuart,
> On Oct 20, 2022, at 20:32, Stuart Cheshire <cheshire@apple.com> wrote:
>
> On 20 Oct 2022, at 02:36, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Hi Stuart,
>>
>> [SM] That seems to be somewhat optimistic. We have been there before, short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues some more and some less transient. So we can strive to minimize queue build-up sure, but can not avoid queues and long queues completely so we need methods to deal with them gracefully.
>> Also not many applications are actually helped all that much by letting information get stale in their own buffers as compared to an on-path queue. Think an on-line reaction-time gated game, the need is to distribute current world state to all participating clients ASAP.
>
> I’m afraid you are wrong about this.
[SM] Well possible, would not be a first. ;)
> If an on-line game wants low delay, the only answer is for it to avoid generating position updates faster than the network carry them.
[SM] +1; it seems I misconstrued the argument I wanted to make when bringing up gaming, though. If you allow, I will try to lay out why I believe that for some applications, like some forms of gaming, a competent scheduler can be leaps and bounds more helpful than the best AQM.
Let's start with me conceding that when the required average rate of an application exceeds the network's capacity (for too much of the time), that application and that network path are not going to become/stay friends.
That out of the way, the application profile I wanted to describe with the "gaming" tag is an application that on average sends relatively little, albeit in a clocked/bursty way, where every X milliseconds it wants to send a bunch of packets to each client; and where the fidelity of the predictive "simulation" performed by the clients critically depends on not deviating from the server-managed "world-state" for too long. (The longer the simulation runs without server updates, the larger the expected deviation becomes and the more noticeable any actions that need to be taken later once world-updates arrive, so the goal is to send world-state-relevant updates as soon as possible after the server has calculated the authoritative state.)
These bursts will likely be sent close to the server's line rate and hence will create a (hopefully) transient queue at all places where the capacity gets smaller along the path. However, the end result is that these packets arrive at the client as fast as possible.
> One packet giving the current game player position is better than a backlog of ten previous stale ones waiting to go out.
[SM] Yes! In a multiplayer game each client really needs to be informed about all other players'/entities' actions. If this information is often sent in multiple packets (either because the aggregate size exceeds a packet's MTU/MSS, or because implementation-wise sending one packet per individual entity (players, NPCs, "bullets", ...) is preferable), then all packets need to be received to appropriately update the world-state... the faster this goes, the less clients go out of "sync".
> Sending packets faster than the network can carry them does not get them to the destination faster; it gets them there slower.
[SM] Again I fully agree. Although in the limit case, on an otherwise idle network, sending our hypothetical bunch of packets from the server either at line rate or paced out at the bottleneck rate of the path should deliver the bunch equally fast. That is, sending the bunch as a bunch is IMHO a rational and defensible strategy for the server, relieving it from having to keep per-client capacity state.
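For completeness, a sketch of what pacing that bunch would look like is below; the per-client bottleneck rate, packet sizes and destination are all illustrative assumptions. The pacing itself is trivial; the hard part is knowing the per-client rate to pace at, which is exactly the state the server would have to keep:

import socket
import time

# Sketch of pacing a world-state "bunch" at an assumed per-client bottleneck
# rate instead of dumping it at line rate. The bottleneck estimate, packet
# sizes and destination are all illustrative assumptions.

def send_paced(sock, dest, packets, bottleneck_bps=20e6):
    for payload in packets:
        sock.sendto(payload, dest)
        time.sleep(len(payload) * 8 / bottleneck_bps)   # one serialization time per packet

# Example: ten 500-byte updates paced for an assumed 20 Mbit/s client link.
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_paced(s, ("192.0.2.7", 9000), [b"\x00" * 500] * 10)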
> The same applies to frames in a screen sharing application. Sending the current state of the screen *now* is better than having a backlog of ten previous stale frames sitting in buffers somewhere on their way to the destination.
[SM] I respectfully argue that a screen sharing application that will send for prolonged durations well above a path's capacity is either not optimally designed or mis-configured. As I wrote before, I used (the free version of nomachine's) NX remote control across the Atlantic to southern California, and while not all that enjoyable it was leaps and bounds more usable than what you demonstrated in the video below. (I did however make concessions, like configuring NX to expect a slow WAN link manually, and did not configure full window dragging on the remote host).
> Stale data is not inevitable. Applications don’t need to have stale data if they avoid generating stale data in the first place.
[SM] Alas no application using an internet path is in full control of avoiding queueing. Queues have a reason to exist (I personally like Nichols/Jacobsen description of queues acting as shock absorbers), especially over shared path with cross traffic (at least until we finally roll-out these fine oracle schedulers that I encounter sometimes in the literature to all endpoints ;) ).
I do agree that applications generally should try to avoid dumping excessive amounts of data into the network.
>
> Please watch this video, which explains it better than I can in a written email:
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=892>
[SM] Argh, not a pleasant sight. But also not illustrating the case I was trying to make.
To come back to my point: for an application profile like the game traffic (that does not exceed capacity except in very short timeframes) a flow-queueing scheduler helps a lot, independent of whether the greedy flows sharing the same path are well behaved or not (let's ignore active malicious DOS traffic for this discussion). Once you have a competent scheduler, the queueing problem moves from "one unfriendly application can ruin the access link for all other flows" to "unfriendly applications can mostly only make their own lives miserable". To be clear, I think both competent AQM and competent scheduling are desirable features that complement each other.*
Regards
Sebastian
*) It goes without much saying that I consider L4S an unfortunate combination of a not-competent-enough AQM with an inadequate scheduler; this is IMHO "too little, too late". The best I want to say about L4S is that I think trying to signal more fine-grained queueing information from the network to the endpoints is a decent idea. L4S however fails to implement this idea in an acceptable fashion, in multiple ways; bit-banging the queueing state into a multi-packet stream appears at best sub-optimal, compared to giving, say, each packet even a few-bit accumulative queue-filling-state counter. Why? Because such a counter can be used to deduce the queue-filling rate quickly enough to have a fighting chance of actually tackling the "when to exit slow-start" question, something that L4S essentially punted on (or did I miss a grand announcement of paced chirping making it into a deployed network stack?).
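To make that counter idea a little more tangible, one purely hypothetical encoding is sketched below; this is my own illustration of the suggestion, not any deployed or proposed wire format: each hop adds its quantized queue occupancy to a small saturating per-packet counter, so the receiver can estimate aggregate path queueing and, across consecutive packets, how fast it is changing.

# Hypothetical illustration of a few-bit accumulative queue-fill counter:
# every hop adds its quantized queue occupancy (0..3) to a saturating 4-bit
# field carried in the packet. Not a real protocol field, just a sketch.

COUNTER_MAX = 15                      # 4-bit saturating counter

def hop_update(counter, queue_fill_fraction):
    level = min(3, int(queue_fill_fraction * 4))   # quantize 0.0-1.0 into 0..3
    return min(COUNTER_MAX, counter + level)

counter = 0
for fill in (0.1, 0.6, 0.9):          # example queue fill at three hops
    counter = hop_update(counter, fill)
print(counter)                        # receiver sees an aggregate path-queue signal (here: 5)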
>
> Stuart Cheshire
>
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-26 20:38 ` Sebastian Moeller
@ 2022-10-26 20:42 ` Dave Taht
2022-10-26 20:53 ` Sebastian Moeller
0 siblings, 1 reply; 70+ messages in thread
From: Dave Taht @ 2022-10-26 20:42 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
I loved paced chirping.
I also loved packet subwindows. I wish we could all agree to get
cracking on working on those two things for cubic and reno rather than
whinging all the time about the stuff we will never agree on.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat
2022-10-26 20:42 ` Dave Taht
@ 2022-10-26 20:53 ` Sebastian Moeller
0 siblings, 0 replies; 70+ messages in thread
From: Sebastian Moeller @ 2022-10-26 20:53 UTC (permalink / raw)
To: Dave Täht; +Cc: Stuart Cheshire, Rpm, Make-Wifi-fast, Cake List, bloat
Hi Dave,
> On Oct 26, 2022, at 22:42, Dave Taht <dave.taht@gmail.com> wrote:
>
> I loved paced chirping.
[SM] Yes, it sounded like a clever idea (however, I would prefer a clearer signal from the network about queue filling). But I have heard precious little about paced chirping actually working in the real internet, which IMHO means the following questions are still open:
a) does it actually work under any realistic conditions?
b) under which condition will it fail?
c) how likely are these conditions over the existing internet?
IIRC it uses packet spacing to deduce whether capacity has been reached, and since packet spacing is a known unreliable source of information, it needs to average and aggregate to make up for operating on questionable data. IMHO it would be preferable to solve the "questionable data" problem itself, as my gut feeling is that with better data would come simpler solutions to the same challenge.
Or I am simply misremembering the whole thing and barking up the wrong tree ;)
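
For context, the general chirp idea as I understand it (only my reading, not the actual paced-chirping algorithm or code): send a short train of packets with progressively shrinking gaps and watch, at the receiver, where the gaps stop shrinking, because from that point on the bottleneck is dictating the spacing.

# Toy illustration of chirp-style capacity probing through an idealized,
# noise-free bottleneck. A sketch of the general idea only.

PKT_SIZE = 1500 * 8      # packet size in bits
CAPACITY = 50e6          # bottleneck rate in bits/s (unknown to the sender)
SERIALIZATION = PKT_SIZE / CAPACITY   # seconds per packet at the bottleneck

def send_chirp(first_gap=2e-3, ratio=0.8, n=12):
    """Sender: inter-packet gaps shrink geometrically, so each successive
    gap probes a higher rate than the one before it."""
    gaps, gap = [], first_gap
    for _ in range(n):
        gaps.append(gap)
        gap *= ratio
    return gaps

def bottleneck(send_gaps):
    """Idealized FIFO bottleneck: a packet cannot depart sooner than one
    serialization time after the previous departure."""
    t_send, departures = 0.0, [0.0]      # packet 0 forwarded at t = 0
    for gap in send_gaps:
        t_send += gap
        departures.append(max(t_send, departures[-1] + SERIALIZATION))
    return [b - a for a, b in zip(departures, departures[1:])]

send_gaps = send_chirp()
recv_gaps = bottleneck(send_gaps)
for s_gap, r_gap in zip(send_gaps, recv_gaps):
    probed_rate = PKT_SIZE / s_gap
    stretched = r_gap > 1.05 * s_gap     # gap grew: a queue started to form
    print(f"probed {probed_rate / 1e6:5.1f} Mbit/s   gap stretched: {stretched}")

The first gap that comes out stretched brackets the capacity (here between roughly 45 and 56 Mbit/s for a 50 Mbit/s bottleneck); the practical difficulty, as noted above, is that on a real path cross traffic, aggregation and jitter perturb exactly these gaps.
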
> I also loved packet subwindows.
That was allowing "rates" below one packet per RTT?
> I wish we could all agree to get
> cracking on working on those two things for cubic and reno rather than
> whinging all the time about the stuff we will never agree on.
;). You might have seen some of my code, which should indicate that maybe I am not going to be all that helpful in the "get cracking" department ;)
Regards
Sebastian
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat
2022-10-19 21:46 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat Michael Richardson
@ 2022-12-06 19:17 ` Bob McMahon
0 siblings, 0 replies; 70+ messages in thread
From: Bob McMahon @ 2022-12-06 19:17 UTC (permalink / raw)
To: Michael Richardson
Cc: Stuart Cheshire, Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
[-- Attachment #1.1: Type: text/plain, Size: 3068 bytes --]
Stuart's analogy seems better to me, as it allows people to do something
else while waiting for an under-provisioned resource, and they may decide
that the wait isn't worth it at all. If the constraint moves to "entering
the store" or "arrival rate at the grocery store doors", then the queue
just builds up in the parking lot instead of at the cashiers' lines. No real
difference.
Bob
On Tue, Dec 6, 2022 at 10:45 AM Michael Richardson via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
>
> Stuart Cheshire via Bloat <bloat@lists.bufferbloat.net> wrote:
> >> I think the person with the cheetos pulling out a gun and shooting
> >> everyone in front of him (AQM) would not go down well.
>
> > Which is why starting with a bad analogy (people waiting in a grocery
> > store) inevitably leads to bad conclusions.
>
> > If we want to struggle to make the grocery store analogy work, perhaps
> > we show people checking some grocery store app on their smartphone
> > before they leave home, and if they see that a long line is beginning
> > to form they wait until later, when the line is shorter. The challenge
> > is not how to deal with a long queue when it’s there, it is how to
> > avoid a long queue in the first place.
>
> Maybe if we regard the entire grocery store as the "pipe", then we would
> realize that the trick to reducing checkout lines is to move the constraint
> from exiting, to entering the store :-)
>
> Then the different amounts of time you spend in the store (because you have
> different amounts of shopping to do, etc.), and the text messages you get from
> your spouse to remember to pick up X, somehow become an analogy to the various
> "PowerBoost" cable and LTE/5G systems that provide inconsistent
> bandwidth.
>
> (There are various pushes to actually do this, as the experience from COVID
> was that having fewer people in the store pleased many people.)
>
>
> --
> Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works
> -= IPv6 IoT consulting =-
>
>
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
[-- Attachment #1.2: Type: text/html, Size: 3912 bytes --]
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
end of thread, other threads:[~2022-12-06 19:17 UTC | newest]
Thread overview: 70+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-09 13:14 [Make-wifi-fast] The most wonderful video ever about bufferbloat Dave Taht
2022-10-09 13:23 ` [Make-wifi-fast] [Bloat] " Nathan Owens
2022-10-10 5:52 ` Taraldsen Erik
2022-10-10 9:09 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
2022-10-10 9:33 ` Taraldsen Erik
2022-10-10 9:40 ` Sebastian Moeller
2022-10-10 11:46 ` [Make-wifi-fast] [Bloat] [Cake] " Taraldsen Erik
2022-10-10 20:23 ` Sebastian Moeller
2022-10-11 6:08 ` [Make-wifi-fast] [Cake] [Bloat] " Taraldsen Erik
2022-10-11 6:35 ` Sebastian Moeller
2022-10-11 6:38 ` [Make-wifi-fast] [Bloat] [Cake] " Dave Taht
2022-10-11 11:34 ` Taraldsen Erik
2022-10-10 16:45 ` [Make-wifi-fast] [Cake] [Bloat] " Bob McMahon
2022-10-10 22:57 ` [Make-wifi-fast] [Bloat] [Cake] " David Lang
2022-10-11 0:05 ` Bob McMahon
2022-10-11 7:15 ` Sebastian Moeller
2022-10-11 16:58 ` Bob McMahon
2022-10-11 17:00 ` [Make-wifi-fast] [Rpm] " Dave Taht
2022-10-11 17:26 ` [Make-wifi-fast] " Sebastian Moeller
2022-10-11 17:47 ` Bob McMahon
2022-10-11 13:57 ` [Make-wifi-fast] [Rpm] " Rich Brown
2022-10-11 14:43 ` Dave Taht
2022-10-11 17:05 ` Bob McMahon
2022-10-11 18:44 ` Rich Brown
2022-10-11 22:24 ` Dave Taht
2022-10-12 17:39 ` Bob McMahon
2022-10-12 21:44 ` [Make-wifi-fast] [Cake] [Rpm] [Bloat] " David P. Reed
2022-10-13 17:45 ` [Make-wifi-fast] [Bloat] [Rpm] [Cake] " Livingood, Jason
2022-10-13 17:49 ` [Make-wifi-fast] [Rpm] [Bloat] " Dave Taht
2022-10-11 6:28 ` [Make-wifi-fast] [Cake] [Bloat] " Sebastian Moeller
2022-10-18 0:02 ` [Make-wifi-fast] " Stuart Cheshire
2022-10-18 2:44 ` Dave Taht
2022-10-18 2:51 ` [Make-wifi-fast] [Bloat] " Sina Khanifar
2022-10-18 3:15 ` [Make-wifi-fast] A quick report from the WISPA conference Dave Taht
2022-10-18 17:17 ` Sina Khanifar
2022-10-18 19:04 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
2022-10-20 5:15 ` Sina Khanifar
2022-10-20 9:01 ` Sebastian Moeller
2022-10-20 14:50 ` Jeremy Harris
2022-10-20 15:56 ` Sebastian Moeller
2022-10-20 17:59 ` Bob McMahon
2022-10-18 19:17 ` Sebastian Moeller
2022-10-18 2:58 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat David Lang
2022-10-18 17:03 ` Bob McMahon
2022-10-18 18:19 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
2022-10-18 19:30 ` Bob McMahon
2022-10-19 7:09 ` David Lang
2022-10-19 19:18 ` Bob McMahon
2022-10-19 19:23 ` David Lang
2022-10-19 21:26 ` [Make-wifi-fast] [Cake] " David P. Reed
2022-10-19 21:37 ` David Lang
2022-10-19 20:44 ` [Make-wifi-fast] " Stuart Cheshire
2022-10-19 21:33 ` [Make-wifi-fast] [Bloat] " David Lang
2022-10-19 23:36 ` Stephen Hemminger
2022-10-20 14:26 ` [Make-wifi-fast] [Rpm] [Bloat] Traffic analogies (was: Wonderful video) Rich Brown
2022-10-19 21:46 ` [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat Michael Richardson
2022-12-06 19:17 ` Bob McMahon
2022-10-20 9:36 ` [Make-wifi-fast] [Rpm] " Sebastian Moeller
2022-10-20 18:32 ` Stuart Cheshire
2022-10-20 19:04 ` Bob McMahon
2022-10-20 19:12 ` Dave Taht
2022-10-20 19:31 ` Bob McMahon
2022-10-20 19:40 ` Sebastian Moeller
2022-10-21 17:48 ` Bob McMahon
2022-10-20 19:33 ` Sebastian Moeller
2022-10-20 19:33 ` Dave Taht
2022-10-26 20:38 ` Sebastian Moeller
2022-10-26 20:42 ` Dave Taht
2022-10-26 20:53 ` Sebastian Moeller
2022-10-18 18:07 ` [Make-wifi-fast] [Bloat] " Sebastian Moeller
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox