[Bloat] What is fairness, anyway? was: Re: finally... winning on wired!
Dave Taht
dave.taht at gmail.com
Wed Jan 11 02:26:06 EST 2012
On Mon, Jan 9, 2012 at 6:38 AM, Bob Briscoe <bob.briscoe at bt.com> wrote:
> Dave,
> You're conflating removal of standing queues with bandwidth allocation. The
> former is a problem in HGs and hosts. The latter isn't a problem in HGs and
> hosts.
I've been trying to understand your point here more fully.
1) removal of standing queues is a problem in HGs and hosts
On the host side, the standing queue problem shows up most often on
wireless in particular.
Additionally, wireless ships packets around in aggregates, and those
aggregates can be delayed, lost, or rescheduled.
FQ reduces the damage done when packets are being bulk shipped in this
way.
http://www.teklibre.com/~d/bloat/ping_log.ps
(This graph also shows that the uncontrolled
device driver queue depth totals about 50ms in this case)
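To make the FQ part concrete, here's a toy sketch of the idea in python
(purely my own illustration, not the actual qdisc code): hash packets into
per-flow queues and dequeue round-robin, so one flow's burst or aggregate
can't hog the link while everything else waits behind it.

# Toy sketch of the FQ idea (not any particular qdisc implementation):
# bucket packets into per-flow queues, then dequeue round-robin, so one
# flow's burst or aggregate can't monopolize the link while other flows wait.

from collections import defaultdict, deque

class TinyFQ:
    def __init__(self):
        self.queues = defaultdict(deque)   # flow id -> packets for that flow
        self.active = deque()              # round-robin order of backlogged flows

    def enqueue(self, flow_id, packet):
        if not self.queues[flow_id]:       # flow was idle, add it to the rotation
            self.active.append(flow_id)
        self.queues[flow_id].append(packet)

    def dequeue(self):
        if not self.active:
            return None
        flow = self.active.popleft()
        packet = self.queues[flow].popleft()
        if self.queues[flow]:              # still backlogged: back of the line
            self.active.append(flow)
        return packet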
In the benchmarks I've been running against wireless, even on fairly light
loads, FQ reduces bursty packet loss, TCP resets, and the like. Statistically
it's difficult to 'see', and I'm trying to come up with better methods for
showing it besides double-blind A/B testing and - *most importantly* -
trying to convince more people to discard their biases and
actually try the code, or at least take a look at some packet captures.
As for the AP side, you have both a bandwidth allocation problem and an FQ
problem with wireless, compounded by the packet aggregation problem.
Still, a big problem in either wireless case is the need to expire old packets
and to manage the depth of the queue based on the bandwidth actually
available between the two devices at that instant. Otherwise you
get nonsense like 10+ second ping times.
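As a toy illustration of what I mean by sizing the queue in time rather than
in packets (again my own sketch; the 50ms target and the rate numbers are
made up):

# Toy sketch (not real driver code): keep the queue bounded in *time*
# at the link rate observed right now, and expire packets that have
# already waited longer than the target instead of transmitting them.

import time

TARGET_MS = 50.0                        # illustrative: at most ~50ms of standing queue

class TimeBoundedQueue:
    def __init__(self, link_bps=1000000):
        self.q = []                     # list of (enqueue_time, nbytes, packet)
        self.link_bps = link_bps        # updated from rate-control feedback

    def update_rate(self, bits_per_second):
        self.link_bps = max(bits_per_second, 1)

    def enqueue(self, packet, nbytes):
        # If the backlog already exceeds TARGET_MS of transmit time at the
        # current rate, drop from the head (the oldest data) to make room.
        backlog_bits = sum(n * 8 for _, n, _ in self.q)
        while self.q and (backlog_bits * 1000.0) / self.link_bps > TARGET_MS:
            _, n, _ = self.q.pop(0)
            backlog_bits -= n * 8
        self.q.append((time.monotonic(), nbytes, packet))

    def dequeue(self):
        now = time.monotonic()
        while self.q:
            t, n, packet = self.q.pop(0)
            if (now - t) * 1000.0 > TARGET_MS:
                continue                # stale packet: expire it, try the next
            return packet
        return None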
So as for managing the overall length of the standing queues,
conventional AQM techniques such as RED, BLUE, etc. apply, but
for coping with the bursty nature of wireless in particular (and TSO'd
streams), FQ helps break the container-loads up into manageable pieces.
2) Bandwidth allocation isn't a problem in HGs and hosts.
On hosts, on wired, it is not a problem. On wireless, see above.
On home gateways, which run uplinks anywhere from 128KB/sec
in parts of the world to 1Mbit in others, with 4Mbit fairly typical on cable,
it's a huge problem. Regardless of whatever queue management
you apply as above (FQ and AQM), the only decent known way to deal with
'bufferbloat' in bad devices beyond the HG is to limit your own bandwidth
to somewhat below what you've measured the messed-up edge provider
as actually providing,
and to manage it from there across the needs of your site, where the various
AQM and FQ technologies can make a dent in your own problems.
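To be concrete about that 'limit yourself to below the ISP rate' step, a
simplified token-bucket style shaper looks something like this in python
(the 4Mbit figure and the 5% margin are illustrative assumptions, not
recommendations):

# Simplified token-bucket shaper: release packets no faster than a rate
# set a bit below the measured ISP rate, so the queue builds up here,
# where FQ/AQM can manage it, instead of in the bloated device upstream.
# The 4Mbit figure and the 5% margin are illustrative only.

import time

MEASURED_ISP_BPS = 4000000                  # whatever you measured the link at
SHAPE_BPS = int(MEASURED_ISP_BPS * 0.95)    # stay just under it

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes=3000):
        self.rate = rate_bps / 8.0          # bytes per second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def can_send(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False                        # caller keeps the packet queued locally

shaper = TokenBucket(SHAPE_BPS)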
So perhaps I misunderstood your point here?
Certainly the use model of the internet has changed significantly,
and TCP is in dire need of viable complementary protocols such
as LEDBAT, etc. I also happen to like HIPL, and I enjoy, but am befuddled
by, the ongoing CCN work.
And certainly I look forward to seeing fewer edge devices misbehaving
with excessive buffering and a lack of AQM.
I'd like, in particular, a contractual model - I mean, you are typically buying
'x bandwidth' as part of your ISP contract -
made available to correctly and automatically provision downstream devices.
Something as simple as an extension to DHCP would be nice,
or something like parsable data at 'http://whatsmydarnbandwidth.myisp.com'
would help.
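As a sketch of what I mean (the JSON format and field names here are made
up, and the URL is the joke one above), the HG could do something like:

# Hypothetical: the field names, the JSON format, and the URL itself are
# all made up; the point is just that the HG could fetch contracted rates
# at boot and set its shaper slightly below them, automatically.

import json
import urllib.request

PROVISIONING_URL = "http://whatsmydarnbandwidth.myisp.com"   # hypothetical

def fetch_contracted_rates(url=PROVISIONING_URL):
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)        # e.g. {"down_kbit": 4096, "up_kbit": 1024}
    return data["down_kbit"], data["up_kbit"]

def shaper_rates(down_kbit, up_kbit, margin=0.95):
    # Shape a little below the contracted rates so queues form where we
    # control them, not in the ISP's equipment.
    return int(down_kbit * margin), int(up_kbit * margin)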
Having to infer the available bandwidth and amount of buffering
with tools such as shaperprobe is useful but a poor second to
a contractual model for a baseline and tighter feedback loops
for ongoing management.
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net