Question: what's the "lag under load" experienced when these two loads are
filling the capacity of the bottleneck router (the DSL link)? I'm wondering
whether your cake setup is deliberately building up a big queue within the
router for any of the 10 bulk/best-effort flows.

On Saturday, April 25, 2020 4:34pm, "Kevin Darbyshire-Bryant" said:

> > On 25 Apr 2020, at 16:25, Jonathan Morton wrote:
> >
> >> On 25 Apr, 2020, at 2:07 pm, Kevin Darbyshire-Bryant wrote:
> >>
> >> Download from ‘onedrive’ from 1 box, using 5 flows, classified as
> >> Bulk. Little other traffic going on, sits there at circa 70Mbit, no
> >> problem.
> >>
> >> If I started another download on another box, say 5 flows, classified
> >> as Best Effort, what rates would you expect the Bulk & Best Effort
> >> tins to flow at?
> >
> > Approximately speaking, Cake should give the Best Effort traffic
> > priority over Bulk, until the latter is squashed down to its tin's
> > capacity. So you may see 5-10Mbps of Bulk and 65-70Mbps of Best
> > Effort, depending on some short-term effects.
> >
> > This assumes that the Diffserv marking actually reaches Cake, of course.
>
> Thanks Jonathan. I can assure you diffserv markings are reaching cake on
> both egress & ingress, thanks to my pet ‘act_ctinfo/connmark -savedscp’
> project. Amongst other monitoring methods I use a simple
> 'watch -t tc -s qdisc show dev $1', albeit with a slightly modified cake
> module & tc that report per-tin traffic as a percentage of the total and
> per-tin traffic as a % of the tin threshold.
>
> eg:
>               Bulk   Best Effort         Video         Voice
>   thresh  4812Kbit        77Mbit     38500Kbit     19250Kbit
>   target     5.0ms         5.0ms         5.0ms         5.0ms
>   interval 100.0ms       100.0ms       100.0ms       100.0ms
>   pk_delay   961us         167us         311us         164us
>   av_delay   453us          78us         141us          75us
>   sp_delay    51us          12us          17us           9us
>   backlog    9084b            0b            0b            0b
>   pkts    60618617       2006708        460725         11129
>   bytes 91414263264    2453185010     636385583       5205008
>   traffic%      89             0             0             0
>   traftin%    1435             0             0             0
>   way_inds 2703134          8957           169           111
>   way_miss     922          6192           104           525
>   way_cols       0             0             0             0
>   drops       8442           230            37             0
>   marks          5             0             0             0
>   ack_drop       0             0             0             0
>   sp_flows       2             3             1             3
>   bk_flows       1             0             0             0
>   un_flows       0             0             0             0
>   max_len    66616         12112          9084          3360
>   quantum      300          1514          1174           587
>
> Your expectation is that Best Effort would exert downward pressure on
> Bulk traffic, reducing it to about the Bulk threshold level, which is my
> expectation also. Tin priority first, then host (fairness), then flow.
>
> As you may have guessed, that's not quite what I'm seeing, but since
> I've managed to reproduce the issue when using ‘flowblind’ I'm now much
> less inclined to point the finger at host fairness & friends. I remain
> confused why ‘bulk’ exceeds its allocation under what should be pressure
> from Best Effort; instead the rates go all over the place and are a bit
> unstable. Odd.
>
> BTW: The ‘onedrive’ client box is actually running Linux.
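For reference, the per-tin thresholds in the table above are what cake's
diffserv4 mode derives from a 77Mbit shaper rate: Bulk = rate/16 (4812Kbit),
Best Effort = the full rate (77Mbit), Video = rate/2 (38500Kbit), Voice =
rate/4 (19250Kbit). A minimal sketch of a setup along these lines follows;
the interface names eth0/ifb0 are placeholders, and the egress CONNMARK step
that saves the DSCP into the conntrack mark (the patched ‘connmark -savedscp’
target mentioned above) is out of tree and not shown here:

  # Egress: shape at 77Mbit, classifying packets by DSCP into four tins.
  tc qdisc replace dev eth0 root cake bandwidth 77Mbit diffserv4

  # Ingress: redirect through an IFB device so cake can shape inbound
  # traffic too. act_ctinfo restores the DSCP previously saved in the
  # conntrack mark (DSCP in the top six bits, 0x01000000 as the "valid"
  # flag) before the packet reaches cake on ifb0.
  ip link add name ifb0 type ifb
  ip link set ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol all matchall \
      action ctinfo dscp 0xfc000000 0x01000000 \
      action mirred egress redirect dev ifb0
  tc qdisc replace dev ifb0 root cake bandwidth 77Mbit diffserv4

  # Per-tin stats; the table in the message is this output, locally
  # patched to add the traffic% / traftin% rows.
  watch -t 'tc -s qdisc show dev ifb0'

If traftin% is read as the recent per-tin rate relative to the tin's
threshold, then 1435% of the 4812Kbit Bulk threshold works out to roughly
69Mbit, consistent with the "circa 70Mbit" figure reported when Bulk had
the link to itself.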