[Bloat] Large buffers: deliberate optimisation for HTTP?

Steve Davies steve at connection-telecom.com
Fri Feb 4 11:08:48 PST 2011


Hi,

I'm a new subscriber and hardly a hardcore network programmer.  But I have
been working with IP for years and even have Douglas Comer's books...

I was thinking about this issue of excess buffers.  It occurred to me that
the large buffers could be a deliberate strategy to optimise HTTP-style
traffic.  Having 1/2 MB or so of buffering towards the edge means that a
typical web page, images and all, can likely be "dumped" into those buffers
en bloc.

Or maybe it's not so deliberate, but rather that testing has become fixated
on "throughput" while the impact on latency and jitter has been ignored.  If
you have a spanking new Gb NIC, the first thing you do is try some scps and
see how close to line speed you can get.  And lots of buffering helps that
test in the absence of real packet loss on the actual link.
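
For what it's worth, a rough bandwidth-delay-product sketch (the 1 Gbit/s
rate and 10 ms RTT below are assumed figures, not from any particular setup)
shows why extra buffering flatters that kind of test:

    # Bandwidth-delay product: data that must be in flight to keep the pipe full.
    # The 1 Gbit/s link and 10 ms RTT are assumed figures for illustration.
    link_bps = 1e9        # 1 Gbit/s NIC
    rtt_seconds = 0.010   # assumed 10 ms round-trip time

    bdp_bytes = link_bps * rtt_seconds / 8
    print("bandwidth-delay product: %.2f MB" % (bdp_bytes / 1e6))
    # ~1.25 MB -- buffering up to roughly the BDP helps an scp hit line rate,
    # but buffering well beyond it only adds queueing delay, not throughput.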

Perhaps the reality is that our traffic patterns are not typical of
broadband link usage, and that these large buffers, which mess up our usage
pattern (interactive traffic mixed with bulk data), actually benefit the
majority usage pattern, which is "chunky bursts".

Would you agree with my logic?

Steve