Cake - FQ_codel the next generation
From: Jonathan Morton <chromatix99@gmail.com>
To: "Toke Høiland-Jørgensen" <toke@toke.dk>
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] A few puzzling Cake results
Date: Wed, 18 Apr 2018 18:03:26 +0300
Message-ID: <FCA9E4F3-EA5C-4223-BA23-771C993867AF@gmail.com>
In-Reply-To: <87r2nc1taq.fsf@toke.dk>

>>> So if there is one active bulk flow, we allow each flow to queue four
>>> packets. But if there are ten active bulk flows, we allow *each* flow to
>>> queue *40* packets.
>> 
>> No - because the drain rate per flow scales inversely with the number
>> of flows, we have to wait for 40 MTUs' serialisation delay to get 4
>> packets out of *each* flow.
> 
> Ah right, yes. Except it's not 40 MTUs, it's 40 quanta (as each flow
> will only dequeue a packet every MTU/quantum rounds of the scheduler).

The maximum quantum in Cake is equal to the MTU, and obviously you can't increase the drain rate by decreasing the quantum below the packet size.
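
To make the arithmetic concrete, here's a back-of-the-envelope sketch (not Cake code; the 1 Mbit/s link rate and 1514-byte MTU are assumed purely for illustration) showing how long it takes to drain four full-size packets from every flow as the flow count grows:

    # Rough sketch: time to drain 4 MTU-sized packets from each of N
    # flows through one bottleneck, given that DRR hands out at most
    # one quantum (= one MTU) per flow per round.  Numbers are only
    # illustrative assumptions, not measurements.
    MTU_BYTES = 1514
    LINK_BPS = 1_000_000              # assumed 1 Mbit/s bottleneck

    def drain_time(flows, packets_per_flow=4):
        total_bytes = flows * packets_per_flow * MTU_BYTES
        return total_bytes * 8 / LINK_BPS   # seconds

    for n in (1, 10):
        print(f"{n:2d} flows: {drain_time(n)*1000:6.1f} ms "
              f"to get 4 packets out of each flow")
    # 1 flow: ~48 ms; 10 flows: ~484 ms.  The per-flow drain rate
    # scales inversely with the flow count, so each flow waits for
    # 40 MTUs' worth of serialisation, not 4.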

>> Without that, we can end up with very high drop rates which, in
>> ingress mode, don't actually improve congestion on the bottleneck link
>> because TCP can't reduce its window below 4 MTUs, and it's having to
>> retransmit all the lost packets as well.  That loses us a lot of
>> goodput for no good reason.
> 
> I can sorta, maybe, see the point of not dropping packets that won't
> cause the flow to decrease its rate *in ingress mode*. But this is also
> enabled in egress mode, where it doesn't make sense.

I couldn't think of a good reason to switch it off in egress mode.  Switching it off would improve a metric that few people care about or can even measure, while severely increasing packet loss and retransmissions in some situations, which is something that people *do* care about and measure.

> Also, the minimum TCP window is two packets including those that are in
> flight but not yet queued; so allowing four packets at the bottleneck is
> way excessive.

You can only hold the effective congestion window in NewReno down to 2 packets if you have a 33% AQM signalling rate (dropping one packet per RTT), which is hellaciously high if the hosts aren't using ECN.  If they *are* using ECN, then goodput in ingress mode doesn't depend inversely on the signalling rate anyway, so it doesn't matter.  At 4 packets the required signalling rate is still pretty high (one packet per 3 RTTs, if the window really does go down to 2 MTUs in between), but it's a lot more manageable - in particular, it's comfortably within the margin required by ingress mode - and it gets a lot more goodput through.
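
As a sanity check on those figures, here's a toy NewReno-style sawtooth in Python (an idealised model with assumed parameters, not a packet-level simulation): the window grows by one packet per RTT, halves on a signal but never drops below 2, and we count how often the AQM has to signal to hold the standing queue at a given per-flow allowance:

    # Toy AIMD model: the sender adds one packet per RTT and halves
    # (never below 2) whenever the AQM signals.  The AQM signals as
    # soon as the flow's standing queue exceeds `allowance` packets -
    # the ingress-shaping case where the whole window sits in the
    # queue.  Purely illustrative.
    def signalling_rate(allowance, rtts=3000):
        cwnd, sent, signals = 2, 0, 0
        for _ in range(rtts):
            cwnd += 1                     # additive increase each RTT
            sent += cwnd
            if cwnd > allowance:          # queue over the per-flow floor
                signals += 1
                cwnd = max(2, cwnd // 2)  # multiplicative decrease
        return signals / rtts, signals / sent

    for allowance in (2, 4):
        per_rtt, per_pkt = signalling_rate(allowance)
        print(f"allow {allowance} pkts: {per_rtt:.2f} signals/RTT, "
              f"{per_pkt:.0%} of packets signalled")
    # allow 2 -> a signal every RTT, roughly a third of all packets;
    # allow 4 -> roughly one signal per three RTTs, under 10% of packets.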

We did actually measure the effect this had in a low-inherent-latency, low-bandwidth environment.  Goodput went up significantly, and peak inter-flow latency went *down* due to upstream queuing effects.

>> So I do accept the increase in intra-flow latency when the flow count
>> grows beyond the link's capacity to cope.
> 
> TCP will always increase its bandwidth above the link's capacity to
> cope. That's what TCP does.
> 
>> It helps us keep the inter-flow induced latency low
> 
> What does this change have to do with inter-flow latency?
> 
>> while maintaining bulk goodput, which is more important.
> 
> No, it isn't! Accepting a factor of four increase in latency to gain a
> few percents' goodput in an edge case is how we got into this whole
> bufferbloat mess in the first place...

Perhaps a poor choice of wording; I consider *inter-flow latency* to be the most important factor.  But users also consider goodput relative to link capacity to be important, especially on slow links.  Intra-flow latency, by contrast, is practically invisible except for traffic types that are usually sparse.

As I noted during the thread Kevin linked, Dave originally asserted that the AQM target should *not* depend on the flow count, and that the total number of packets in the queue should instead be held constant.  I found that assertion had to be challenged once cases emerged where it was clearly detrimental.  So now I assert the opposite: the queue must be able to accept a minimum number of packets *per flow*, and not just transiently, as long as the link's inherent latency doesn't already exceed what corresponds to the optimal BDP for TCP.
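
In scheduler terms that assertion amounts to something like the sketch below.  This is not the sch_cake source; the names (aqm_may_drop, codel_says_drop) and the exact 4-MTU floor are stand-ins for whatever the real implementation uses:

    # Hedged sketch of the per-flow floor described above: the AQM is
    # not allowed to shoot at a flow whose standing queue is already
    # down to a few MTUs, because the sender can't usefully reduce its
    # window any further.  Names and the floor value are illustrative.
    MTU = 1514
    MIN_FLOW_BACKLOG = 4 * MTU            # per-flow floor, in bytes

    def aqm_may_drop(flow_backlog_bytes, codel_says_drop):
        """Combine the CoDel-style verdict with the per-flow floor."""
        if flow_backlog_bytes <= MIN_FLOW_BACKLOG:
            return False                  # dropping can't shrink cwnd usefully
        return codel_says_drop            # otherwise defer to the AQM

    # e.g. a flow holding only two full-size packets is left alone
    # even if its sojourn time is over target:
    assert aqm_may_drop(2 * MTU, codel_says_drop=True) is False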

This tweak has zero theoretical effect on inter-flow latency (which is guaranteed by the DRR++ scheme, not the AQM), but it can improve goodput and reduce sender load at the expense of some intra-flow latency.  The practical effect on inter-flow latency can actually be positive in some scenarios.
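
To see why the per-flow backlog doesn't enter into it: under a DRR++-style scheduler, a packet of one flow waits for at most roughly one quantum from each of the other active flows before its own flow is served again, no matter how deep those flows' queues are (and sparse-flow priority usually does better than that bound).  With the same assumed 1 Mbit/s link and 1514-byte quantum:

    # Rough upper bound on inter-flow delay under a DRR++-style
    # scheduler: about one quantum from each other active flow,
    # independent of how many packets those flows have queued.
    MTU_BYTES = 1514
    LINK_BPS = 1_000_000                  # assumed, as above

    def worst_case_interflow_delay(other_active_flows):
        return other_active_flows * MTU_BYTES * 8 / LINK_BPS   # seconds

    print(f"~{worst_case_interflow_delay(10)*1000:.0f} ms with 10 bulk flows")
    # ~121 ms at 1 Mbit/s - the same whether each bulk flow holds 4
    # packets or 40, which is why the tweak doesn't touch inter-flow
    # latency in theory.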

Feel free to measure.  Just be aware of what this is designed to handle.

And obviously I need to write about this in the paper...

 - Jonathan Morton


