Hi,<div><br></div><div>I'm a new subscriber and hardly a hardcore network programmer, but I have been working with IP for years and even have Douglas Comer's books... </div><div><br></div><div>I was thinking about this issue of excess buffers. It occurred to me that the large buffers could be a deliberate strategy to optimise HTTP-style traffic. Having 1/2 MB or so of buffering towards the edge means that a typical web page, images and all, can likely be "dumped" into those buffers "en bloc".</div>
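For scale (my own back-of-envelope, not a measurement): the delay a full buffer adds is just its size divided by the link's drain rate, so the same 1/2 MB is almost invisible on a gigabit link but crippling on a slow uplink. A quick sketch, assuming a 512 KB buffer:

```python
# Back-of-envelope: the queueing delay behind a full buffer is
# buffer size (in bits) divided by link rate (in bits per second).
BUFFER_BYTES = 512 * 1024  # assumed ~1/2 MB edge buffer

def full_buffer_delay_ms(link_mbps: float) -> float:
    """Worst-case delay (ms) a packet sees queued behind a full buffer."""
    bits = BUFFER_BYTES * 8
    return bits / (link_mbps * 1e6) * 1000.0

for rate in (1, 10, 100, 1000):
    print(f"{rate:>4} Mbit/s link: {full_buffer_delay_ms(rate):8.1f} ms")
# 1 Mbit/s -> ~4194 ms; 100 Mbit/s -> ~42 ms; 1 Gbit/s -> ~4 ms
```

So the same buffer that is harmless at NIC testing speeds adds whole seconds of latency at typical broadband uplink rates.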
<div><br></div><div>Or maybe it's not so deliberate; perhaps testing has become fixated on "throughput" while the impact on latency and jitter has been ignored. If you have a spanking new Gb NIC, the first thing you do is try some scps and see how close to line speed you can get. And lots of buffering helps that test in the absence of real packet loss on the link.</div>
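To make that concrete (my numbers, purely illustrative): the classic rule of thumb says a single TCP flow needs roughly one bandwidth-delay product of buffering to stay at line speed through a loss event, which is a lot of memory on a fast path:

```python
# Rule-of-thumb buffer sizing for one long-lived TCP flow:
# buffer ~= bandwidth-delay product (BDP) keeps the pipe full
# while the sender recovers from a loss.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes for a given rate and RTT."""
    return rate_bps * rtt_s / 8.0

# Assumed figures for illustration: 1 Gbit/s NIC, 100 ms RTT path.
print(bdp_bytes(1e9, 0.1) / 1e6, "MB")  # 12.5 MB
```

On that arithmetic, a vendor tuning for a single-flow scp benchmark has every incentive to provision megabytes of buffer, even though a mixed interactive/bulk workload pays for it in latency.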
<div><br></div><div>Perhaps the reality is that our traffic patterns are not typical of broadband link usage, and these large buffers that mess up our usage pattern (interactive traffic mixed with bulk data) actually benefit the majority usage pattern, which is "chunky bursts".</div>
<div><br></div><div>Would you agree with my logic?</div><div><br></div><div>Steve</div><div><br></div>