[Cake] Thinking about ingress shaping & cake
moeller0 at gmx.de
Sun Apr 12 05:47:46 EDT 2020
> On Apr 12, 2020, at 10:23, Kevin Darbyshire-Bryant <kevin at darbyshire-bryant.me.uk> wrote:
>> On 10 Apr 2020, at 15:14, Jonathan Morton <chromatix99 at gmail.com> wrote:
>> No. If the dequeue rate is never less than the enqueue rate, then the backlog remains at zero pretty much all the time. There are some short-term effects which can result in transient queuing of a small number of packets, but these will all drain out promptly.
>> For Cake to actually gain control of the bottleneck queue, it needs to *become* the bottleneck - which, when downstream of the nominal bottleneck, can only be achieved by shaping to a slower rate. I would try 79Mbit for your case.
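For readers following along at home, Jonathan's suggestion corresponds to the usual IFB-based ingress setup (a configuration sketch only; interface names are examples, and on OpenWrt the sqm-scripts package performs these steps for you):

```
# Sketch: redirect ingress traffic from the WAN interface (here eth0, an
# example name) into an IFB device, then let cake shape it there.
tc qdisc add dev eth0 handle ffff: ingress
ip link add name ifb0 type ifb
ip link set dev ifb0 up
tc filter add dev eth0 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb0
# Shape below the ~80 Mbit line rate so cake, not the ISP's buffer,
# becomes the bottleneck.
tc qdisc add dev ifb0 root cake bandwidth 79Mbit
```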
>> - Jonathan Morton
> Thanks for correcting my erroneous thinking Jonathan!
I cannot see any erroneous thinking here at all; you just independently discovered the approximate nature of post-bottleneck traffic shaping ;) As I see it, this theoretical concern made people ignore ingress shaping for far too long. As the bufferbloat effort demonstrated, even approximate traffic shaping helps considerably. And Jonathan's "ingress" mode for cake makes that approximation less dependent on the number of concurrent flows, so, as so often, cake turns it up to eleven ;)
> As I was typing it I was thinking “how does that actually work?” I should have thought more.
You identified the core issue of ingress shaping quite succinctly; for the rest, there is the mailing list.
> I typically run ingress rate as 97.5% of modem sync rate (78000 of 80000), which gives me a little wiggle room when the modem doesn’t quite make the 80000 target (often 79500ish). Egress is easy, 99.5% of 20000, i.e. 19900; all is wonderful.
Those were the good old days, when I could just assume the sync rate would be the limiting factor. Over on this side of the Channel, ISPs have pretty much all switched to traffic shapers at their end (typically not in the DSLAM, which seems to be more or less just an L2 switch with fancy media converters for the individual subscriber lines). Getting reliable information about the settings of these shapers is near impossible... (And yes, the ISPs also seem to shape my egress, so I have to deal with approximate shaping at their end as well.) My current approach is to run a few speedtests without SQM enabled and take my best estimate of the applicable maximum speed (this is harder than it looks, as a number of speedtests are really imprecise and try to produce instantaneous rate estimates, which suffer from windowing effects, resulting in reported speeds higher than the DSL link could theoretically carry). Then I simply plug this net rate into SQM as the gross shaper rate (plus my best theoretical estimate of the per-packet overhead, PPO), which should give a decent starting point.
Since I usually cannot help myself, I then take my PPO estimate and reverse the gross rate from the net rate, e.g. for VDSL2/PTM/PPPoE/IPv4/TCP with RFC 1323 timestamps:

gross shaper rate = net speedtest result * 65/64 * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12))

and compare this with my sync rate. If this is <= my sync rate, I then set:

egress gross shaper rate = egress net speedtest result * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12)) * 0.995
ingress gross shaper rate = ingress net speedtest result * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12)) * 0.95

If the calculated rate is > my sync rate, I repeat with a different speedtest, while mumbling and cursing like a sailor...
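The arithmetic above can be sketched as a small script (helper names are my own, not part of sqm-scripts; the constants assume VDSL2/PTM 64/65 encapsulation, 26 bytes per-packet overhead at 1500 B MTU, and IPv4/TCP with RFC 1323 timestamps, i.e. 8 + 20 + 20 + 12 bytes of headers):

```python
# Sketch of the overhead arithmetic above; values in Mbit/s.
PTM = 65 / 64                      # PTM 64b/65b encoding expansion
MTU = 1500
OVERHEAD = 26                      # assumed per-packet overhead on the link
PAYLOAD = MTU - 8 - 20 - 20 - 12   # PPPoE + IPv4 + TCP + RFC 1323 timestamps

def gross_from_net(net_mbit):
    """Reverse a net (goodput) speedtest result into a gross link rate,
    including the PTM factor, for comparison against the sync rate."""
    return net_mbit * PTM * (MTU + OVERHEAD) / PAYLOAD

def shaper_rates(egress_net, ingress_net):
    """Shaper settings as in the email: egress at 99.5%, ingress at 95%
    of the estimated gross rate (PTM factor handled by the link-layer
    setting, hence omitted here, mirroring the formulas above)."""
    factor = (MTU + OVERHEAD) / PAYLOAD
    return egress_net * factor * 0.995, ingress_net * factor * 0.95

# Hypothetical 80/20 Mbit/s line where speedtests report ~71.5/17.9 net:
egress, ingress = shaper_rates(17.9, 71.5)
```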
> I’m wondering what the relationship between actual incoming rate vs shaped rate and latency peaks is?
Good question! My mental image is bound to the water-and-pipes model of the internet (a series of tubes ;)): if the inrush is too high for the current bottleneck element/pipe, there is going to be "back-spill" into the buffers upstream of the bottleneck. So bursts and DoS traffic will flood back into the ISP's typically under-managed and over-sized buffers, increasing latency.
> My brain can’t compute that but I suspect is related to the rtt of the flow/s and hence how quickly the signalling manages to control the incoming rate.
> I guess ultimately we’re dependent on the upstream (ISP) shaper configuration, ie if that’s a large buffer and we’ve an unresponsive flow incoming then no matter what we do, we’re stuffed, that flow will fill the buffer & induce latency on other flows.
Yes, but this is where cake's ingress mode helps: by aiming its rate target at the ingress side of the shaper, it effectively sends stronger signals, so the endpoints react faster, reducing the likelihood of back-spill. But in the end, it would be a great help if the ISPs' shapers had acceptable buffer management...
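For reference, this behaviour is selected with cake's `ingress` keyword (a configuration sketch; the IFB device name, rate, and overhead value are examples matching the arithmetic earlier in the thread):

```
# Sketch: with "ingress", cake accounts for packets at the rate they arrive
# (before any drops), tightening the feedback loop toward the senders.
tc qdisc replace dev ifb0 root cake bandwidth 72Mbit overhead 26 ingress
```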
> Kevin D-B
> gpg: 012C ACB2 28C6 C53E 9775 9123 B3A2 389B 9DE2 334A