[Bloat] Large buffers: deliberate optimisation for HTTP?

Richard Scheffenegger rscheff at gmx.at
Fri Feb 4 17:35:27 EST 2011


No.
  ----- Original Message ----- 
  From: Steve Davies 
  To: bloat at lists.bufferbloat.net 
  Sent: Friday, February 04, 2011 8:08 PM
  Subject: [Bloat] Large buffers: deliberate optimisation for HTTP?


  Hi,


  I'm a new subscriber and hardly a hardcore network programmer.  But I have been working with IP for years and even have Douglas Comer's books...  


  I was thinking about this issue of excess buffers.  It occurred to me that the large buffers could be a deliberate strategy to optimise HTTP-style traffic.  Having 1/2 MB or so of buffering towards the edge does mean that a typical web page and its images can likely be "dumped" into those buffers en bloc.
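
  As a rough back-of-the-envelope check of that claim, the sketch below works out whether a modest page burst fits in roughly 1/2 MB of buffering and how long such a buffer takes to drain; the page size, buffer size and link rate are illustrative assumptions of mine, not figures from the thread.

# Back-of-the-envelope sketch (illustrative numbers, not from the thread):
# does a typical page burst fit in ~512 KB of buffering, and how long would
# a full buffer take to drain at an assumed edge-link rate?

BUFFER_BYTES = 512 * 1024     # assumed edge buffer, ~1/2 MB
PAGE_BYTES   = 300 * 1024     # assumed page plus images, ~300 KB
LINK_BPS     = 8_000_000      # assumed 8 Mbit/s edge link

fits = PAGE_BYTES <= BUFFER_BYTES
drain_seconds = BUFFER_BYTES * 8 / LINK_BPS

print(f"page fits in buffer: {fits}")
print(f"time to drain a full buffer: {drain_seconds:.2f} s")
# -> page fits in buffer: True
# -> time to drain a full buffer: 0.52 s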


  Or maybe it's not so deliberate, but just that testing has become fixated on "throughput" and the impact on latency and jitter has been ignored.  If you have a spanking new Gb NIC, the first thing you do is try some scps and see how close to line speed you can get.  And lots of buffering helps that test in the absence of real packet loss on the actual link.
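
  For a sense of why generous buffering flatters that kind of throughput test, the sketch below applies the textbook rule of thumb that a buffer sized near the bandwidth-delay product keeps a single bulk flow busy; the link rate and round-trip time are illustrative assumptions, not measurements.

# Rough illustration of why a single bulk transfer (e.g. an scp) benefits
# from buffering: the classic rule of thumb sizes a buffer near the
# bandwidth-delay product. Link rate and RTT are assumed values.

LINK_BPS = 1_000_000_000   # assumed 1 Gbit/s NIC
RTT_S    = 0.050           # assumed 50 ms round-trip time

bdp_bytes = LINK_BPS * RTT_S / 8
print(f"bandwidth-delay product: {bdp_bytes / 1024 / 1024:.1f} MiB")
# -> bandwidth-delay product: 6.0 MiB of data can be in flight on such a
#    path, so a large buffer helps the single flow hit line speed even
#    though it adds queueing delay for everything else.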


  Perhaps the reality is that our traffic patterns are not typical of broadband link usage, and that these large buffers that mess up our usage patterns (interactive traffic mixed with bulk data) actually benefit the majority usage pattern, which is "chunky bursts".


  Would you agree with my logic?


  Steve



