Cake - FQ_codel the next generation
* [Cake] Thinking about ingress shaping & cake
From: Kevin Darbyshire-Bryant @ 2020-04-10 13:16 UTC
  To: Cake List


This is pretty much a thinking-out-loud experiment/meander about how I understand cake to work.

I have an 80/20Mbit FTTC line into the house.  Egress shaping/control with cake is simple, easy, beautiful, and it works: tell it to use 19900Kbit, set a minimum packet size and a bit of overhead, and off you go.  Ingress has more problems:

Assuming I actually get 80Mbit incoming, the naive bandwidth setting for cake would be 80Mbit.  Cake internally dequeues at that 80Mbit rate, and therefore the only way any flows can accumulate backlog is when they’re competing with each other in terms of fairness (Tin/Host) and quantums become involved… I think.  The backlog is controlled by the cake egress rate.  There’s an ‘ingress’ mode within cake that, as far as I understand it, says ‘even though you dropped a packet, still include it in the bandwidth-occupied count, because the data still arrived over the link even though we dropped it’.  BUT we’re still operating at the output/egress side of cake and not looking at all at how much data is arriving on the queue input side… the upstream ISP shaper is doing that for us.

I’ve been wondering how to control the rate on the input side of cake, and an ingress policer is available under Linux.  If that policer is set a little below the ISP rate then it, in theory, will shoot packets first and harder than the ISP’s policer, so the congestion/control point moves to us.  We’d then have the potential of running cake in ‘unlimited’ mode, i.e. it wouldn’t have to do the shaping (at the wrong point, egress); it would just provide flow/host-fairness ‘backpressure’.
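
For concreteness, something like this is what I have in mind (only a sketch of the idea, untested; “eth0”, the IFB name and the 78Mbit figure are placeholders, not a working config):

  # Police incoming traffic just below the ISP rate at the ingress hook,
  # then hand the surviving packets to cake on an IFB for fairness only:
  ip link add name ifb4eth0 type ifb
  ip link set ifb4eth0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol all matchall \
      action police rate 78mbit burst 100k conform-exceed drop/pipe \
      action mirred egress redirect dev ifb4eth0
  # cake does flow/host fairness only -- no shaper:
  tc qdisc add dev ifb4eth0 root cake unlimited besteffort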

Does any of that make sense?

Cheers,

Kevin D-B

gpg: 012C ACB2 28C6 C53E 9775  9123 B3A2 389B 9DE2 334A



* Re: [Cake] Thinking about ingress shaping & cake
From: Jonathan Morton @ 2020-04-10 14:14 UTC
  To: Kevin Darbyshire-Bryant; +Cc: Cake List



> On 10 Apr, 2020, at 4:16 pm, Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk> wrote:
> 
> I have an 80/20Mbit FTTC line into the house.  Egress shaping/control with cake is simple, easy, beautiful, and it works: tell it to use 19900Kbit, set a minimum packet size and a bit of overhead, and off you go.  Ingress has more problems:
> 
> Assuming I actually get 80Mbit incoming, the naive bandwidth setting for cake would be 80Mbit.  Cake internally dequeues at that 80Mbit rate, and therefore the only way any flows can accumulate backlog is when they’re competing with each other in terms of fairness (Tin/Host) and quantums become involved… I think.

No.  If the dequeue rate is never less than the enqueue rate, then the backlog remains at zero pretty much all the time.  There are some short-term effects which can result in transient queuing of a small number of packets, but these will all drain out promptly.

For Cake to actually gain control of the bottleneck queue, it needs to *become* the bottleneck - which, when downstream of the nominal bottleneck, can only be achieved by shaping to a slower rate.  I would try 79Mbit for your case.

 - Jonathan Morton



* Re: [Cake] Thinking about ingress shaping & cake
From: Kevin Darbyshire-Bryant @ 2020-04-12 8:23 UTC
  To: Jonathan Morton; +Cc: Cake List




> On 10 Apr 2020, at 15:14, Jonathan Morton <chromatix99@gmail.com> wrote:
> 
> 
> No.  If the dequeue rate is never less than the enqueue rate, then the backlog remains at zero pretty much all the time.  There are some short-term effects which can result in transient queuing of a small number of packets, but these will all drain out promptly.
> 
> For Cake to actually gain control of the bottleneck queue, it needs to *become* the bottleneck - which, when downstream of the nominal bottleneck, can only be achieved by shaping to a slower rate.  I would try 79Mbit for your case.
> 
> - Jonathan Morton
> 

Thanks for correcting my erroneous thinking, Jonathan!  As I was typing it I was thinking “how does that actually work?”; I should have thought more.  I typically run the ingress rate at 97.5% of the modem sync rate (78000 of 80000), which gives me a little wiggle room when the modem doesn’t quite make the 80000 target (often 79500ish).  Egress is easy: 99.5% of 20000, i.e. 19900, and all is wonderful.

I’m wondering what the relationship is between the actual incoming rate vs the shaped rate and latency peaks?  My brain can’t compute that, but I suspect it’s related to the RTT of the flow(s) and hence how quickly the signalling manages to control the incoming rate.

I guess ultimately we’re dependent on the upstream (ISP) shaper configuration, i.e. if that’s a large buffer and we have an unresponsive flow incoming then, no matter what we do, we’re stuffed: that flow will fill the buffer and induce latency on other flows.


Cheers,

Kevin D-B

gpg: 012C ACB2 28C6 C53E 9775  9123 B3A2 389B 9DE2 334A



* Re: [Cake] Thinking about ingress shaping & cake
From: Sebastian Moeller @ 2020-04-12 9:47 UTC
  To: Kevin Darbyshire-Bryant; +Cc: Jonathan Morton, Cake List

Hi Kevin.

> On Apr 12, 2020, at 10:23, Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk> wrote:
> 
> 
> 
>> On 10 Apr 2020, at 15:14, Jonathan Morton <chromatix99@gmail.com> wrote:
>> 
>> 
>> No.  If the dequeue rate is never less than the enqueue rate, then the backlog remains at zero pretty much all the time.  There are some short-term effects which can result in transient queuing of a small number of packets, but these will all drain out promptly.
>> 
>> For Cake to actually gain control of the bottleneck queue, it needs to *become* the bottleneck - which, when downstream of the nominal bottleneck, can only be achieved by shaping to a slower rate.  I would try 79Mbit for your case.
>> 
>> - Jonathan Morton
>> 
> 
> Thanks for correcting my erroneous thinking, Jonathan!

	I cannot see erroneous thinking here at all; you just independently discovered the approximate nature of post-bottleneck traffic shaping ;) As I see it, this theoretical concern made people ignore ingress shaping for far too long.  As the bufferbloat effort demonstrated, even approximate traffic shaping helps considerably.  And Jonathan’s ‘ingress’ mode for cake makes that approximation less dependent on the number of concurrent flows, so, as so often, cake turns it up to eleven ;)


> As I was typing it I was thinking “how does that actually work?”  I should have thought more.

	You identified the core issue of ingress shaping quite succinctly; for the rest, there is the mailing list.


>  I typically run the ingress rate at 97.5% of the modem sync rate (78000 of 80000), which gives me a little wiggle room when the modem doesn’t quite make the 80000 target (often 79500ish).  Egress is easy: 99.5% of 20000, i.e. 19900, and all is wonderful.

	Those were the good old days, when I could just assume the sync rate would be the limiting factor.  Over on this side of the Channel, ISPs pretty much all switched to using traffic shapers at their end (typically not in the DSLAM, which seems to be more or less just an L2 switch with fancy media converters for the individual subscriber lines).  Getting reliable information about the settings of these shapers is near impossible... (And yes, the ISPs also seem to shape my egress, so I have to deal with approximate shaping at their end as well.)  My current approach is to run a few speedtests without SQM enabled and take my best estimate of the applicable maximum speed (this is harder than it looks, as a number of speedtests are really imprecise and try to get instantaneous rate estimates, which suffer from windowing effects, resulting in reported speeds higher than a DSL link can theoretically carry).  Then I plug this net rate into SQM as the gross shaper rate, plus my best theoretical estimate of the per-packet overhead (PPO), and that should be a decent starting point.
Since I usually cannot help it, I then take my PPO estimate and recover the gross rate from the net rate, e.g. for VDSL2/PTM/PPPoE/IPv4/TCP with RFC1323 timestamps:

gross shaper rate = net speedtest result * 65/64 * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12))

and compare this with my sync rate.  If this is <= my sync rate, I then set:

egress gross shaper rate  = egress net speedtest result  * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12)) * 0.995
ingress gross shaper rate = ingress net speedtest result * ((1500 + 26) / (1500 - 8 - 20 - 20 - 12)) * 0.95

If the calculated rate is > my sync rate, I repeat with a different speedtest, while mumbling and cursing like a sailor...
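
	In shell terms the arithmetic looks like this (a sketch; the 74000 kbit/s net result is a made-up example, not a measurement):

  net=74000   # ingress speedtest result in kbit/s -- assumed value
  # comparison value, including the 65/64 PTM encoding factor:
  echo "$net * 65/64 * (1500 + 26) / (1500 - 8 - 20 - 20 - 12)" | bc -l
  # -> ~79645 kbit/s, which is <= an 80000 sync rate, so:
  echo "$net * (1500 + 26) / (1500 - 8 - 20 - 20 - 12) * 0.95" | bc -l
  # -> ~74498 kbit/s ingress gross shaper rate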


> 
> I’m wondering what the relationship is between the actual incoming rate vs the shaped rate and latency peaks?

	Good question!  My mental image is bound to the water-and-pipe model of the internet (series of tubes ;)): if the inrush is too high for the current bottleneck element/pipe, there is going to be "back-spill" into the buffers upstream of the bottleneck.  So bursts and DOS traffic will flood back into the ISP’s typically under-managed and over-sized buffers, increasing the latency.


>  My brain can’t compute that, but I suspect it’s related to the RTT of the flow(s) and hence how quickly the signalling manages to control the incoming rate.

	I agree.

> 
> I guess ultimately we’re dependent on the upstream (ISP) shaper configuration, i.e. if that’s a large buffer and we have an unresponsive flow incoming then, no matter what we do, we’re stuffed: that flow will fill the buffer and induce latency on other flows.

	Yes, but this is where cake’s ingress mode helps: by applying its rate target at the ingress side it effectively sends a stronger signal, so the endpoints react faster, reducing the likelihood of back-spill.  But in the end, it would be a great help if the ISP’s shaper had acceptable buffer management...

Best Regards
	Sebastian


> 
> 
> Cheers,
> 
> Kevin D-B
> 
> gpg: 012C ACB2 28C6 C53E 9775  9123 B3A2 389B 9DE2 334A
> 



* Re: [Cake] Thinking about ingress shaping & cake
From: Jonathan Morton @ 2020-04-12 11:02 UTC
  To: Kevin Darbyshire-Bryant; +Cc: Cake List

> On 12 Apr, 2020, at 11:23 am, Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk> wrote:
> 
> I’m wondering what the relationship is between the actual incoming rate vs the shaped rate and latency peaks?  My brain can’t compute that, but I suspect it’s related to the RTT of the flow(s) and hence how quickly the signalling manages to control the incoming rate.

There are two important cases to consider here: the slow-start and congestion-avoidance phases of TCP.  But in general, the bigger the difference between the link rate and Cake's shaped rate, the smaller the latency peaks you will notice.

Slow-start basically doubles the send rate every RTT until terminated by a congestion signal.  It's therefore likely that you'll get a full RTT of queued data at the moment of slow-start exit, which then has to drain - and most of this will occur in the dumb FIFO upstream of you.  Typical Internet RTTs are about 80ms.  You should expect a slow-start related latency spike every time you start a bulk flow, although some of them will be avoided by the HyStart algorithm, which uses increases in latency as a congestion signal specifically for governing slow-start exit.
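
As a rough worked figure for your line (the 80Mbit and 80ms values are assumptions carried over from above):

  # one RTT of queued data at slow-start exit, at the full link rate:
  echo "80 * 1000 * 1000 * 0.080 / 8" | bc -l   # -> 800000 bytes, ~0.8 MB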

In congestion avoidance, TCP typically adds one segment to the congestion window per RTT.  If you assume the shaper is saturated, you can calculate the excess bandwidth caused by this "Reno linear growth" as 8 bits per byte * 1500 bytes * flow count / RTT seconds.  For a single flow at 80ms, that's 150 Kbps.  At 20ms it would be 600 Kbps.  If that number totals less than the margin you've left, then the peaks of the AIMD sawtooth should not collect in the dumb FIFO and will be handled entirely by Cake.
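
The same sum in shell, for checking a margin against an assumed flow count and RTT (the four-flow figure is only an example):

  flows=4; rtt=0.080   # example values
  echo "8 * 1500 * $flows / $rtt" | bc -l
  # -> 600000 bit/s = 600 Kbps of AIMD overshoot; keep the shaping margin
  #    (link rate minus shaped rate) above this total.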

 - Jonathan Morton


* Re: [Cake] Thinking about ingress shaping & cake
From: Kevin Darbyshire-Bryant @ 2020-04-12 13:12 UTC
  To: Jonathan Morton; +Cc: Cake List




> On 12 Apr 2020, at 12:02, Jonathan Morton <chromatix99@gmail.com> wrote:
> 
>> On 12 Apr, 2020, at 11:23 am, Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk> wrote:
>> 
>> I’m wondering what the relationship is between the actual incoming rate vs the shaped rate and latency peaks?  My brain can’t compute that, but I suspect it’s related to the RTT of the flow(s) and hence how quickly the signalling manages to control the incoming rate.
> 
> There are two important cases to consider here: the slow-start and congestion-avoidance phases of TCP.  But in general, the bigger the difference between the link rate and Cake's shaped rate, the smaller the latency peaks you will notice.
> 
> Slow-start basically doubles the send rate every RTT until terminated by a congestion signal.  It's therefore likely that you'll get a full RTT of queued data at the moment of slow-start exit, which then has to drain - and most of this will occur in the dumb FIFO upstream of you.  Typical Internet RTTs are about 80ms.  You should expect a slow-start related latency spike every time you start a bulk flow, although some of them will be avoided by the HyStart algorithm, which uses increases in latency as a congestion signal specifically for governing slow-start exit.
> 
> In congestion avoidance, TCP typically adds one segment to the congestion window per RTT.  If you assume the shaper is saturated, you can calculate the excess bandwidth caused by this "Reno linear growth" as 8 bits per byte * 1500 bytes * flow count / RTT seconds.  For a single flow at 80ms, that's 150 Kbps.  At 20ms it would be 600 Kbps.  If that number totals less than the margin you've left, then the peaks of the AIMD sawtooth should not collect in the dumb FIFO and will be handled entirely by Cake.

Thank you.  That is really useful.

In case you all fancied a laugh at my expense, and to show what state of stir-crazy I’m in due to lockdown, here’s the analogy of queueing I came up with that explained to me why my queue departure rate must be less than the inbound rate.

So I imagined a farmer with a single-cow milking machine and a transporter that moves cows from the field to the milking machine(!)  As Mr Farmer turns up at the field, the cows saunter over to the gate.  The gate opens when there’s space for a cow on the transporter.  The transporter can move a single cow to the milking machine at an arbitrary 1 cow per 10 seconds (6 cows a minute).  The cows are interested in the thought of being milked, so they arrive at the gate from around the field faster than 6 cows a minute.  So the cows naturally form a queue and wait their turn to go through the gate.

Mr Farmer has some special cows that must be milked in preference to standard cows.  So he installs some fencing and arranges them into two funnel shapes arriving at the gate.  The gate has been upgraded too and it can choose from which funnel to accept a cow.  If a cow is available in the special queue then it takes that cow, else it takes a standard cow.  A helper assists in directing the cows to the correct queue.

It’s at this point I realised that, for the special/standard cow preference to make any difference, the cows must be arriving faster than they can depart; otherwise there’s never a case where a standard cow has to wait for a special cow, they all just walk on through.  I have to have a queue.

I won’t take the analogy any further, since I’m aware of the ‘special cow’ queue starving access to the ‘normal cow’ queue, and I’m not sure that controlling queue length when they all come running over (cow burst!) by culling cows is exactly ideal either :-)

Anyway welcome to my Easter madness :-)

Cheers,

Kevin D-B

gpg: 012C ACB2 28C6 C53E 9775  9123 B3A2 389B 9DE2 334A


