On 5 May 2011, at 17:49, Neil Davies <Neil.Davies@pnsol.com> wrote:

> On the issue of loss: we did a study of the UK's ADSL access network back in 2006, over several weeks, looking at the loss and delay introduced into bi-directional traffic.
>
> We found that the delay variability (the part left over after you've taken out the effects of geography and line sync rates) was broadly the same across the half-dozen locations we studied. It was there all the time, at the same level of variance; what did vary by time of day was the loss rate.
>
> We also found, at the time much to our surprise (though we understand why now), that loss was broadly independent of the offered load. We used a constant data rate, with either fixed or variable packet sizes.
>
> We found that loss rates were in the range 1% to 3%, which is what would be expected from a large number of TCP streams contending for a limiting resource.
>
> As for burst loss: yes, it does occur, but it could be argued that this is more the fault of the sending TCP stack than of the network.
>
> This phenomenon was well covered in the academic literature in the '90s (if I remember correctly, folks at INRIA led the way). It all comes down to the nature of random processes and how you observe them.
>
> Back-to-back packets see higher loss rates than packets more spread out in time. Consider a pair of packets, back to back, arriving over a 1 Gbit/s link into a queue being serviced at 34 Mbit/s. The first packet being 'lost' is equivalent to saying that the first packet 'observed' the queue full: the system's state is no longer a random variable, it is known to be full. The second packet (let's assume it is also a full-size one) 'makes an observation' of the state of that queue about 12 µs later, but that is only about 3% of the time it takes to service such a large packet at 34 Mbit/s. The system has had nowhere near enough time to 'relax' back towards its steady state, so it is highly likely that it is still full.
>
> Fixing this makes a phenomenal difference to goodput (with the usual delay effects that implies). We've even built and deployed systems with this sort of engineering embedded (deployed as a network 'wrap') that let end users sustainably (days on end) achieve effective throughput better than 98% of the (transmission-media-imposed) maximum. What we had done was make the network behave closer to the underlying statistical assumptions made in TCP's design.
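Just to check I follow the arithmetic (a rough sketch in Python; the 1500-byte full-size packet is my assumption, the link rates are yours):

    # Spacing of back-to-back packets vs. the bottleneck's service time.
    packet_bits = 1500 * 8             # assumed full-size packet, in bits
    arrival_rate = 1e9                 # 1 Gbit/s access link
    service_rate = 34e6                # 34 Mbit/s bottleneck

    gap = packet_bits / arrival_rate       # second packet arrives this soon
    service = packet_bits / service_rate   # time to drain one packet

    print(f"inter-arrival gap: {gap * 1e6:.1f} us")      # ~12.0 us
    print(f"service time:      {service * 1e6:.1f} us")  # ~352.9 us
    print(f"gap / service:     {gap / service:.1%}")     # ~3.4%

So the second packet samples the queue only about 3% of one service time after the first packet found it full; the queue can barely have drained, which is why the losses come in bursts.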
How did you fix this? What alters the packet spacing? The network or the host?

Sam