[Cake] Fwd: cake in dd-wrt

Sebastian Moeller moeller0 at gmx.de
Wed Aug 21 12:56:31 EDT 2019


This went private by accident...


> On Aug 21, 2019, at 14:04, Sebastian Gottschall <s.gottschall at newmedia-net.de> wrote:
> 
> 
> On Aug 21, 2019, at 13:27, Sebastian Moeller wrote:
>> 
>>> On Aug 21, 2019, at 13:19, Sebastian Gottschall <s.gottschall at newmedia-net.de> wrote:
>>> 
>>> 
>>>> 	Nah, barely good enough. breitbandmessung.de might be suitable (they use access control to avoid overwhelming their test servers), but other speedtests are notoriously inaccurate* (I am looking at you, Ookla...) and occasionally report "measured" goodput in excess of the actual goodput achievable over the given gross access rate.
>>>> 	IMHO the real challenge is that to set our shaper correctly we need both information about the gross rate of the link (which can be either physically bound, by say a DSL link's sync rates, or set in software, say in a BRAS/BNG-level traffic shaper) and about the worst applicable overhead between user and ISP (which again might be a physical link property or might come from the configuration of the ISP's traffic shaper). Most ISPs will give very little information about the precise value of either of the two. So we need to solve for two unknowns (per direction, even though the per-packet overhead is likely going to be identical for both directions), making the whole endeavor way more complicated than it should be. If we knew at least one of the values precisely, gross limit rates or per-packet overhead, we could "simply" try different values for the other and measure the resulting bufferbloat; plotting the bloat versus the variable should show a more or less step-like increase once we exceed the parameter's true value. But I digress, we do not know either of the two...
>>>> 
>>>> 
>>>> 
>>>> *) I guess they are precise and accurate enough for their intended use-case, but they are somewhat problematic for precise measurements of real maximum goodput.
>>> Another way is using an adaptive algorithm, changing values in a binary-search fashion to find the best matching value; some sort of calibration phase.
>> 	Not a bad idea, but IMHO also problematic with our two unknowns: if the overhead is set too low, we can, for the typical close-to-full-MTU 1500-byte packets, make up for that by setting the shaper rate a bit lower; conversely, if the overhead is set too high, we can adapt by setting the shaper gross rate a bit higher. But that only works as long as we are talking large packets. Once a link carries a sufficient number of small packets, our adjustments get worse and worse and bufferbloat begins to rear its ugly head again... So any adaptive method would need to probe with packets of differing sizes. I guess I am making a mountain out of a molehill, but empirically deducing these two numbers (or four, if we count up- and downstream separately) seems much harder than it should be ;)
> Of course the real MTU size must be detected first, but it isn't a big deal to find out where the fragmentation limit is.

	Well, that by itself, while important to know, does not help much. The issue I see is that if you find a combination of shaper rate and accounted-for overhead that saturates the link at a given packet size without causing bufferbloat, these settings are not guaranteed to work just as well for smaller packets. You basically need to find a combination of rate and overhead that keeps bufferbloat under control both for small and large packets (or better, independent of packet size) without sacrificing (too much) bandwidth. The math is not terribly tricky, but it helps to put things into formulas.

So, let me illustrate conceptually one possible way to tackle this issue with some silly calculations:

	[1] GrossRate * PayloadSize / (PayloadSize + PerPacketOverheads) = Goodput

It is obvious that Goodput for a fixed GrossRate is now a function of PayloadSize, approaching GrossRate if
PayloadSize >> PerPacketOverheads
and approaching 0 if
PayloadSize << PerPacketOverheads
(well, there is a lower limit of 1 byte, below which measuring goodput becomes futile).
This dependence of Goodput on payload size is also the reason why no sane ISP actually promises/guarantees unqualified goodput, and why ISP shapers mostly work with a fixed GrossRate.

With PayloadSize depending on PacketSize:

	[2] PayloadSize = PacketSize - PayloadOverhead

(MTU sets an upper limit for PacketSize, but packets can be smaller) and 

	[3] PerPacketOverheads = PayloadOverhead + "TransportOverhead"

Expressed as a formula for GrossRate, we get

	[4] GrossRate = Goodput / (PayloadSize / (PayloadSize + PerPacketOverheads)) = Goodput * ((PayloadSize + PerPacketOverheads) / PayloadSize)

Now, since the real GrossRate and PerPacketOverheads are constants, they will not depend on the payload size (but Goodput will):

	[5] Goodput(small) * ((PayloadSize(small) + PerPacketOverheads) / PayloadSize(small)) = Goodput(big) * ((PayloadSize(big) + PerPacketOverheads) / PayloadSize(big))

with small < big <= MTU. Now PayloadSize is known (or can easily be deduced from a packet capture), and Goodput(small) and Goodput(big) can be empirically measured; that in turn allows us to calculate PerPacketOverheads, and from that the "TransportOverhead" we are actually after:

	[6] PerPacketOverheads = ((Goodput(big) - Goodput(small)) * (PayloadSize(small) * PayloadSize(big))) / ((Goodput(small) * PayloadSize(big)) - (Goodput(big) * PayloadSize(small)))


and armed with that knowledge, either side of [5] will give us GrossRate.
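To make this concrete, here is a minimal Python sketch (my own addition, the function and variable names are purely illustrative) that implements [6] and [4] from two (PayloadSize, Goodput) measurements:

def per_packet_overhead(payload_small, goodput_small, payload_big, goodput_big):
    """Formula [6]: per-packet overhead (bytes) from two goodput
    measurements taken at different payload sizes over the same link."""
    numerator = (goodput_big - goodput_small) * (payload_small * payload_big)
    denominator = (goodput_small * payload_big) - (goodput_big * payload_small)
    return numerator / denominator

def gross_rate(payload, goodput, overheads):
    """Formula [4]: back out the link's gross rate from one measurement,
    once the per-packet overheads are known."""
    return goodput * (payload + overheads) / payload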

All fine and dandy; the issue is that this really needs precise Goodput measurements that veridically reflect the capacity of the link.


Let's just play with a few real numbers to get a handle on the permissible measurement error (and I am going for simple Gigabit Ethernet TCP/IPv4 data here, rates in Mbit/s):

big: "MTU1500"
Goodput(big): 1000 * (1500 - 20 - 20) / (1500 + 38) = 949.284785436
PayloadSize(big) = 1500 - 20 - 20 = 1460
PerPacketOverheads = 20 + 20 + 38 = 78 (IPv4 header + TCP header + Ethernet framing)

small: "MTU150"
Goodput(small): 1000 * (150 - 20 - 20) / (150 + 38) = 585.106382979
PayloadSize(small) = 150 - 20 - 20 = 110

If we put these numbers back into [5], things just work:
585.106382979 *  ((110  + 78) / (110)) = 1000 = 949.284785436 *  ((1460  + 78) / (1460))

Now plugging these into [6] gives:
PerPacketOverheads = ((949.284785436 - 585.106382979) * (110 * 1460)) / ((585.106382979 * 1460) - ( 949.284785436 * 110)) = 78

Let's now reduce the Goodput for the big packet size by 1% from the theoretical value (and if speedtests were reliably correct to 1%, I would be amazed):
PerPacketOverheads = (((949.284785436 * 0.99) - (585.106382979 * 1)) * (110 * 1460)) / (((585.106382979 * 1) * 1460) - ((949.284785436 * 0.99) * 110)) = 75.8611711098 ~ 76

As a saving grace, if both speedtests suffer equal "underreporting", things are just fine again:
PerPacketOverheads = (((949.284785436 * 0.5) - (585.106382979 * 0.5)) * (110 * 1460)) / (((585.106382979 * 0.5) * 1460) - ((949.284785436 * 0.5) * 110)) = 78
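The same three cases as a self-contained Python check (my own addition, using the ideal Gigabit Ethernet numbers assumed above):

# Reproduce the three cases above with formula [6].
def per_packet_overhead(ps, gs, pb, gb):
    return (gb - gs) * (ps * pb) / (gs * pb - gb * ps)

ps, pb = 110.0, 1460.0                 # payload sizes in bytes
gs = 1000.0 * ps / (150.0 + 38.0)      # ideal goodput at "MTU150", Mbit/s
gb = 1000.0 * pb / (1500.0 + 38.0)     # ideal goodput at "MTU1500", Mbit/s

print(per_packet_overhead(ps, gs, pb, gb))              # exact: 78.0
print(per_packet_overhead(ps, gs, pb, gb * 0.99))       # 1% low on one probe: ~75.9
print(per_packet_overhead(ps, gs * 0.5, pb, gb * 0.5))  # equal error cancels: 78.0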


Mmmh, that is actually better than I realized. In theory we can now use any speedtest we want, together with MSS clamping, to measure goodput at our desired payload sizes and actually have a fighting chance of this getting somewhere... This could probably also be improved by measuring at more than just two packet sizes...
Time to move this from the drawing board to the real world... (where Linux just got a recommendation to increase the minimum allowed MSS to work around remotely triggerable resource exhaustion, which limits how small we can clamp; but in theory the above formulas should still apply).
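As a small illustration (my own sketch, assuming plain TCP/IPv4 without IP or TCP options, i.e. 20-byte IP and 20-byte TCP headers), the mapping from a clamped MSS to the PayloadSize and PacketSize used in [2] and [3]:

# Map an MSS clamp value to the sizes used in formulas [2]/[3],
# assuming plain TCP/IPv4 with no IP or TCP options.
def sizes_for_mss(mss, ip_header=20, tcp_header=20):
    payload_size = mss                          # TCP payload per segment
    packet_size = mss + ip_header + tcp_header  # IP packet size on the wire
    return payload_size, packet_size

print(sizes_for_mss(110))   # (110, 150)   -> the "MTU150" probe above
print(sizes_for_mss(1460))  # (1460, 1500) -> the "MTU1500" probe above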


In theory, the above approach should not care about the delta in packet size between the two probes:
small: "MTU1499"
Goodput(small): 1000 * (1499 - 20 - 20) / (1499 + 38) = 949.2517892
PayloadSize(small) = 1499 - 20 - 20 = 1459
PerPacketOverheads = (((949.284785436 * 1) - (949.2517892 * 1)) * (1459 * 1460)) / (((949.2517892 * 1) * 1460) - ((949.284785436 * 1) * 1459)) = 78.0000002717
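That holds in exact arithmetic, though; a quick numeric check (my own addition, same formula [6] as above) suggests that with closely spaced probe sizes even a small measurement error gets amplified dramatically, so widely spaced probes seem preferable in practice:

# Same formula [6], probes only one byte apart ("MTU1499" vs "MTU1500").
def per_packet_overhead(ps, gs, pb, gb):
    return (gb - gs) * (ps * pb) / (gs * pb - gb * ps)

ps, pb = 1459.0, 1460.0
gs = 1000.0 * ps / (1499.0 + 38.0)   # ideal goodput, Mbit/s
gb = 1000.0 * pb / (1500.0 + 38.0)

print(per_packet_overhead(ps, gs, pb, gb))         # exact: ~78.0000002717
print(per_packet_overhead(ps, gs, pb, gb * 0.99))  # 1% error: ~-1366, useless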

But again, time to test this in the real world.



