Cake - FQ_codel the next generation
From: moeller0 <moeller0@gmx.de>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] triple flow isolation
Date: Mon, 18 Jan 2016 10:21:18 +0100
Message-ID: <079A86FB-D2A6-4787-BC42-D2E200AD3290@gmx.de>
In-Reply-To: <A6C928E5-AD1B-4D42-8251-45E15CB5692F@gmail.com>

Hi Jonathan,


> On Jan 16, 2016, at 10:05 , Jonathan Morton <chromatix99@gmail.com> wrote:
> 
> I’ve just committed and pushed the fixes required for triple-isolation to actually work.  They are small.  My tests now pass.  This is a good feeling.
> 
>> On 15 Jan, 2016, at 10:05, moeller0 <moeller0@gmx.de> wrote:
>> 
>>>> I do not claim I understand what triple-iso intends to accomplish in detail.
>>> 
>>> The short version is that, in theory at least, I’ve found a way to ensure fairness without needing to know which side of the interface is which.  
>> 
>> Could you please explain the fairness you are talking about here?

	First off, thanks for taking the time to explain…

> 
> As you’re already aware, Cake uses DRR++ to ensure flow-to-flow fairness.  This chiefly means keeping a deficit counter per flow, skipping over flows that are in deficit, and ensuring that the counters get incremented at a mutually equal rate, so that they eventually leave deficit.

	Okay, I am with you so far...
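
	To check that I am parsing this correctly, here is the rough C sketch I have in my head of the per-flow bookkeeping (all names and structure here are my own guesses, not your actual code):

#include <stddef.h>

struct flow {
	int deficit;   /* bytes this flow may still send this round */
	int backlog;   /* bytes currently queued for this flow */
};

/* Skip over flows that are in deficit, but top each skipped (backlogged)
 * flow up by one quantum, so every counter climbs at the same byte rate
 * and eventually leaves deficit.  The caller would then dequeue one packet
 * from the returned flow and subtract its length from f->deficit. */
static struct flow *next_flow(struct flow *flows, int n, int quantum)
{
	for (;;) {
		int backlogged = 0;
		int i;

		for (i = 0; i < n; i++) {
			struct flow *f = &flows[i];

			if (!f->backlog)
				continue;
			backlogged = 1;
			if (f->deficit <= 0) {
				f->deficit += quantum;  /* equal increment for all */
				continue;
			}
			return f;
		}
		if (!backlogged)
			return NULL;  /* nothing queued at all */
	}
}

	Is that roughly the shape of it?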

> 
> Triple isolation also keeps such counters on a per-source-host and per-destination-host basis (NB: *not* on a per-host-pair basis).  

	Am I right to assume that dst and src host isolation work with the same counters but simply ignore one of them? Or will only the relevant counter be kept and updated?

> The host deficits are checked first, and unless *both* of the relevant hosts are out of deficit, the per-flow counters are not even examined.  However, the number of flows deferred due to host deficits is tracked, which allows the host deficit counters to be batch-incremented at the correct times.  (This proved to be the main source of subtleties.)
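
	If I try to turn that paragraph into code, I end up with something like the following (again purely my own sketch and naming, extending the struct flow from my sketch above with host pointers, just to check that I have the dequeue order right):

struct host {
	int deficit;         /* per-host byte deficit */
	int deferred_flows;  /* flows skipped while this host was in deficit */
};

struct flow {
	int deficit;
	int backlog;
	struct host *src_host;
	struct host *dst_host;
};

/* Host deficits are checked first; the per-flow counter is only consulted
 * once *both* hosts are out of deficit.  Deferred flows are counted so the
 * host counters can later be topped up in one batch (by something like
 * deferred_flows * quantum, though I am only guessing at that detail). */
static int flow_may_send(struct flow *f)
{
	if (f->src_host->deficit <= 0 || f->dst_host->deficit <= 0) {
		if (f->src_host->deficit <= 0)
			f->src_host->deferred_flows++;
		if (f->dst_host->deficit <= 0)
			f->dst_host->deferred_flows++;
		return 0;
	}
	return f->deficit > 0;
}

	And presumably the plain src-host / dst-host isolation modes would just drop one of the two host checks here?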
> 
> Where two sets of flows have a common source or destination, but separate hosts at the opposite end, this ensures fair bandwidth sharing between the separate hosts, no matter which side of the link they happen to lie on or the relative numbers of flows involved.  Within each such set, per-flow fairness is also ensured.  This is easy to understand and to demonstrate.
> 
> It’s also easy to see that fairness between disjoint host-pairs is maintained in the same manner.
> 
> More complex situations require more thought to analyse.  As you suggest, the typical situation is where a small number of local hosts talk to an arbitrary combination of remote hosts through the link.  To keep matters simple, I’ll assume there are two local hosts (A and B) and that all flows are bulk in the same direction, and assert that the principles generalise to larger numbers and more realistic traffic patterns.
> 
> The simplest such case might be where A talks to a single remote host (C), but B talks to two (D and E).  This could be considered competition between a single webserver and a sharded website.  After stepping through the algorithm with some mechanical aid, I think what will happen in this case is that C-D-E fairness will govern the system, giving B more bandwidth than A.  In general, I think the side with the larger number of hosts will tend to govern.

	Okay.
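
	Just to check my arithmetic on that example, assuming one bulk flow per host pair: if C-D-E fairness governs, each of C, D and E ends up with roughly 1/3 of the link, so A (talking only to C) gets about 1/3 while B (talking to D and E) gets about 2/3. In that particular case the split happens to match what pure per-flow fairness would give, whereas strict A-B fairness would give each local host 1/2.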

> 
> The opposite sense would be to have the side with the smaller number of hosts govern the system.  This would, I think, handle both the swarm and shard cases better than the above, so I’ll see if I can think of a way to adapt the algorithm to do that.

	So if all internal hosts talk to one external host, does this scheme then reduce to pure per-flow fairness? I am trying to understand how robust triple-iso is going to be against attempts at shenanigans by unruly machines on the internal/external networks...

Best Regards
	Sebastian

> 
> - Jonathan Morton
> 

