Hi Fred,

On Dec 28, 2013, at 12:09, Fred Stratton wrote:

> The UK consensus fudge factor has always been 85 per cent of the rate achieved, not 95 or 99 per cent.

I know that the recommendations have been lower in the past; I think this is partly because, before Jesper Brouer's and Russell Stuart's work to properly account for ATM "quantization", people typically had to deal with a ~10% rate tax for the 5-byte per-cell overhead (48 bytes of payload in 53-byte cells gives a 90.57% usable rate), plus an additional 5% to stochastically account for the padding of the last cell and for the per-packet overhead, both of which hurt the effective goodput far more for small packets than for large ones; so the 85% never worked well for all packet sizes. My hypothesis is that since we now can and do properly account for these effects of ATM framing, we can afford to start with a fudge factor of 90% or even 95%. As far as I know, the recommended fudge factors have never been explained by more than "this works empirically"...

> Devices express 2 values: the sync rate - or 'maximum rate attainable' - and the dynamic value of 'current rate'.

The actual data rate is the relevant information for shaping. DSL modems often report the link capacity as "maximum rate attainable" or some such, while the actual bandwidth is limited by contract to a rate below what the line would support (often this bandwidth reduction is performed on the PPPoE link to the BRAS).

> As the sync rate is fairly stable for any given installation - ADSL or Fibre - this could be used as a starting value, decremented by the traditional 15 per cent of 'overhead', and the 85 per cent fudge factor applied to that.

I would like to propose using the "current rate" as the starting point, as 'maximum rate attainable' >= 'current rate'.

> Fibre - FTTC - connections can suffer quite large download speed fluctuations over the 200 - 500 metre link to the MSAN. This phenomenon is not confined to ADSL links.

On the actual xDSL link? As far as I know no telco actually uses SRA (seamless rate adaptation), so the current link speed will only ever get lower, not higher; I would therefore expect a relatively stable current rate (it might take a while, a few days, to slowly degrade to the highest link speed supported under all conditions, but I hope you still get my point).

> An alternative speed test is something like this
>
> http://download.bethere.co.uk/downloadMeter.html
>
> which, as Be has been bought by Sky, may not exist after the end of April 2014.

But if we recommend running speed tests, we really need to advise our users to start several concurrent up- and downloads to independent servers to actually measure the bandwidth of our bottleneck link; often a single server connection will not saturate the link (I seem to recall that with TCP it is guaranteed to only reach 75% or so averaged over time, is that correct?). But I think this is not the proper way to set the bandwidth for the shaper anyway, because upstream of our link to the ISP we have no guaranteed bandwidth at all and can only hope the ISP is doing the right thing AQM-wise.

> • [What is the proper description here?] If you use PPPoE (but not over ADSL/DSL link), PPPoATM, or bridging that isn’t Ethernet, you should choose [what?] and set the Per-packet Overhead to [what?]
>
> For a PPPoA service, the PPPoA link is treated as PPPoE on the second device, here running ceroWRT.

This still means you should specify the PPPoA overhead, not the PPPoE one.

> The packet overhead values are written in the dubious man page for tc_stab.

The only real flaw in that man page, as far as I know, is that it claims the kernel will account for the 18-byte ethernet header automatically, while the kernel does no such thing (which I hope to change).
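To make that per-packet accounting (and the quantization argument above) concrete, here is a small Python sketch of the arithmetic; the 40-byte per-packet overhead below is just an assumed example value for illustration, not a recommendation for any particular encapsulation:

#!/usr/bin/env python3
# Sketch: how ATM cell framing inflates a packet's on-wire size.
import math

ATM_CELL = 53        # bytes per cell on the wire
ATM_PAYLOAD = 48     # usable payload bytes per cell

def wire_bytes(ip_packet_len, per_packet_overhead=40):
    """Bytes actually transmitted on the ATM link for one IP packet."""
    cells = math.ceil((ip_packet_len + per_packet_overhead) / ATM_PAYLOAD)
    return cells * ATM_CELL          # every cell costs 53 bytes, used or not

if __name__ == "__main__":
    for size in (64, 128, 576, 1500):
        wb = wire_bytes(size)
        print(f"{size:5d} byte packet -> {wb:5d} bytes on the wire"
              f" ({100.0 * size / wb:5.1f}% goodput)")

With these assumed numbers a 1500-byte packet still sees roughly 86% goodput, while a 64-byte packet drops to about 40%, which is exactly why a single fixed fudge factor cannot fit all packet sizes and why the shaper needs to do this calculation per packet.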
> Sebastian has a potential alternative method of formal calculation.

So, I have no formal calculation method available, but rather an empirical way of detecting ATM quantization and of measuring the per-packet overhead of an ATM link. The idea is to measure the RTT of ICMP packets of increasing length and then look at the distribution of RTTs by ICMP packet length: on an ATM carrier we expect to see a step function with steps 48 bytes apart, while on a non-ATM carrier we expect a smooth ramp. We decide by comparing the residuals of a linear fit of the data with the residuals of the best step-function fit; the fit with the lower residuals "wins". Attached you will find an example of this approach: ping data in red (median of NNN repetitions for each ICMP packet size), linear fit in blue, and best staircase fit in green. You can see that the data starts somewhere inside a 48-byte ATM cell. Since the ATM encapsulation overhead is at most 44 bytes and we know the IP and ICMP overhead of the ping probe, we can calculate the overhead preceding the IP header, which is what needs to be put into the overhead field in the GUI. (Note where the green line intersects the y-axis at 0 bytes packet size? That is where the IP header starts; the "missing" part of that ATM cell is the overhead.)
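For clarity, here is a condensed Python sketch of that linear-versus-staircase comparison. It is not the actual measurement script: the helper names and the toy data are made up for illustration, and the per-size median RTTs are assumed to have been collected already (e.g. with repeated pings at each payload size).

#!/usr/bin/env python3
# Sketch: decide between "smooth ramp" (no ATM) and "48-byte staircase" (ATM)
# by comparing least-squares residuals, then recover the pre-IP overhead.
import numpy as np

CELL = 48  # ATM payload bytes per cell

def linear_residual(sizes, rtts):
    """Sum of squared residuals of an ordinary least-squares line."""
    A = np.column_stack([sizes, np.ones_like(sizes)])
    coef = np.linalg.lstsq(A, rtts, rcond=None)[0]
    return float(np.sum((rtts - A @ coef) ** 2))

def staircase_residual(sizes, rtts):
    """Best residual over all 48 possible cell offsets.

    Model: RTT = a + b * ceil((size + offset) / 48); the best offset is
    (IP/ICMP header + pre-IP overhead) mod 48 when `sizes` are ICMP
    payload lengths.
    """
    best_res, best_off = np.inf, None
    for offset in range(CELL):
        steps = np.ceil((sizes + offset) / CELL)
        A = np.column_stack([steps, np.ones_like(steps)])
        coef = np.linalg.lstsq(A, rtts, rcond=None)[0]
        res = float(np.sum((rtts - A @ coef) ** 2))
        if res < best_res:
            best_res, best_off = res, offset
    return best_res, best_off

if __name__ == "__main__":
    # Toy data: a noiseless ATM-like link with 16 bytes of pre-IP overhead,
    # 28 bytes of IP+ICMP headers, and 1 ms of extra delay per ATM cell.
    sizes = np.arange(16, 1473, dtype=float)          # ICMP payload lengths
    rtts = 20.0 + 1.0 * np.ceil((sizes + 28 + 16) / CELL)

    lin = linear_residual(sizes, rtts)
    stair, offset = staircase_residual(sizes, rtts)
    print("linear residual   :", lin)
    print("staircase residual:", stair, "at offset", offset)
    print("ATM quantization  :", stair < lin)
    # Subtract the known 20 + 8 bytes of IP/ICMP header; the result is unique
    # because the ATM encapsulation overhead is at most 44 (< 48) bytes.
    print("pre-IP overhead   :", (offset - 28) % CELL)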