[Bloat] the future belongs to pacing
Daniel Sterling
sterling.daniel at gmail.com
Sat Jul 4 13:52:16 EDT 2020
On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
<bloat at lists.bufferbloat.net> wrote:
"pacing is inevitable, because it saves large content providers money
(more efficient use of the most expensive silicon in the data center,
the switch buffer memory), however to use pacing we walk away from 30
years of experience with TCP self clock"
At the risk of asking without doing any research: could someone
explain this to a layperson, or point to a doc that discusses it in
more depth?
What does BBR do that's different from other congestion-control
algorithms? Why does it break the self-clock? Before BBR, was the
self-clock the only way TCP did congestion control?
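For what it's worth, here's my naive mental model of the difference
as a runnable toy (all the names and numbers below are mine, purely
illustrative, not any real stack's logic):

    # Toy contrast between ACK-clocking and pacing. Numbers made up.
    MSS = 1500 * 8          # segment size in bits
    RATE = 40e6             # sender's estimate of bottleneck rate, bit/s

    # Self-clocked: a segment goes out the instant an ACK returns, so
    # if the network compresses the ACKs into a bunch, the sends come
    # out as a burst that some switch buffer has to absorb.
    ack_arrivals = [0.1000, 0.1001, 0.1002, 0.1003,  # compressed bunch
                    0.2000, 0.2001, 0.2002, 0.2003]
    selfclock_sends = ack_arrivals

    # Paced: segments are spaced MSS / RATE apart on a timer,
    # regardless of when the ACKs happened to arrive.
    interval = MSS / RATE                            # 0.3 ms apart
    paced_sends = [0.1 + i * interval for i in range(len(ack_arrivals))]

    print("self-clocked:", ["%.4f" % t for t in selfclock_sends])
    print("paced:       ", ["%.4f" % t for t in paced_sends])

If that's roughly right, I can see why pacing smooths out the bursts
that eat switch buffer, but not what we lose by giving up the ACK
clock.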
Also,
I have UBNT "Amplifi" HD wifi units in my house (HD units only, none
of the "mesh" units; just HD units connected either via wifi or
wired). Empirically, I've found that to keep latency down I need to
set cake to about 1/4 of the total possible wifi speed; otherwise,
when a large download comes in over my internet link, that one flow
adds latency for everything else.
That is, if I'm using 5 GHz at 20 MHz channel width, I need to set
cake's bandwidth argument to 40 Mbit to prevent video streams and
downloads from impacting latency for any other stream. This is
without any categorization at all: no packet marking based on port or
anything else; cake is set to "best effort".
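Concretely, the shaper is just something like this (the interface
name here is illustrative):

    tc qdisc replace dev eth0 root cake bandwidth 40Mbit besteffort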
Anything higher, and when a large amount of data comes through,
something (presumably the buffer in the Amplifi HD units) adds
hundreds of milliseconds of latency.
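Back of the envelope, using my own numbers (nothing measured): if 40
Mbit is a quarter of the link, the AP drains at roughly 160 Mbit/s,
so a few hundred milliseconds of added delay implies several
megabytes sitting in its buffer:

    # Rough queue-size estimate: delay = buffered bytes / drain rate.
    drain_rate = 160e6 / 8          # bytes/s the AP moves over the air
    delay = 0.3                     # ~300 ms of induced latency
    buffered = drain_rate * delay
    print("%.1f MB (~%d full-size packets)" % (buffered / 1e6,
                                               buffered / 1500))
    # -> 6.0 MB (~4000 full-size packets)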
Can anyone speak to how BBR would react to this? My ISP link is full
gigabit, but cake is going to drop a lot of packets as it shapes that
down to 40 Mbit before handing them to the wifi AP.
Thanks,
Dan