[Bloat] ipspace.net: "QUEUING MECHANISMS IN MODERN SWITCHES"

Dave Taht dave.taht at gmail.com
Thu May 29 12:58:08 EDT 2014


I am really enjoying this thread. There was a video and presentation
out of Stanford last (?) year which concluded that the "right" number
of buffers at really high rates (10 Gbit+) was really small - like 20
packets - and it used tens of thousands of flows to make its point.

I think it came out of the optical networking group... anybody remember the
paper/preso/video I'm talking about? It seemed like a pretty radical conclusion
at the time.

On Thu, May 29, 2014 at 12:20 AM, Neil Davies <neil.davies at pnsol.com> wrote:
>
> On 28 May 2014, at 12:00, Jonathan Morton <chromatix99 at gmail.com> wrote:
>
>>
>> On 28 May, 2014, at 12:39 pm, Hal Murray wrote:
>>
>>>> in non discarding scheduling total delay is conserved,
>>>> irrespective of the scheduling discipline
>>>
>>> Is that true for all backplane/switching topologies?
>>
>> It's a mathematical truth for any topology that you can reduce to a black box with one or more inputs and one output, which you call a "queue" and which *does not discard* packets.  Non-discarding queues don't exist in the real world, of course.
>>
>> The intuitive proof is that every time you promote a packet to be transmitted earlier, you must demote one to be transmitted later.  A non-FIFO queue tends to increase the maximum delay and decrease the minimum delay, but the average delay will remain constant.
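
As a quick sanity check of that conservation claim, here is a minimal
simulation sketch - my own illustration, assuming fixed-size packets
and a single work-conserving link that never drops - where the mean
delay comes out the same whether the scheduler serves the oldest,
newest, or a random queued packet:

import random
from itertools import accumulate

def mean_delay(arrivals, service, pick):
    # One work-conserving, non-discarding link; fixed-size packets, so
    # every transmission takes 'service' time units.  'pick' is the
    # scheduling decision: which queued packet goes next.
    queue, now, total, i, done = [], 0.0, 0.0, 0, 0
    n = len(arrivals)
    while done < n:
        while i < n and arrivals[i] <= now:   # admit everything that has arrived
            queue.append(arrivals[i]); i += 1
        if not queue:                         # link idle: jump to next arrival
            now = arrivals[i]
            continue
        a = queue.pop(pick(queue))            # the scheduling decision
        now += service                        # transmit one packet
        total += now - a                      # its queueing + transmission delay
        done += 1
    return total / n

random.seed(1)
arr = list(accumulate(random.expovariate(0.9) for _ in range(10000)))  # ~90% load
print(mean_delay(arr, 1.0, lambda q: 0),                         # FIFO
      mean_delay(arr, 1.0, lambda q: len(q) - 1),                # LIFO
      mean_delay(arr, 1.0, lambda q: random.randrange(len(q))))  # random
# All three means come out identical: reordering only shuffles which
# packet gets which departure slot, never the set of departure times.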

There are two cases here, under congestion, that are of interest. One
is X into 1, where figuring out what to shoot at, and when, is
important.

The other is where X into 1 at one rate is ultimately being stepped
down end to end, say from 10 Gbit to 10 Mbit. In the latter case I'm
reasonably confident that stochastic fair queueing, with the number of
flow queues proportional to the ultimate step-down ratio, is a win
(and you still have to decide what to shoot at) - and it makes tons of
sense for hosts serving a limited number of users to also spread
their packet payloads out at a similar ratio.
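
To make the step-down idea concrete, here is a toy sketch - my own
illustration, not the Linux sfq or fq_codel code - of flows hashed
into a number of buckets sized to an assumed 1000:1 step-down
(10 Gbit -> 10 Mbit), dequeued round-robin, with per-bucket tail drop
standing in for the "what to shoot at" decision:

from collections import deque
import zlib

class ToySFQ:
    # Illustrative only: 1000 buckets assumes the 10 Gbit -> 10 Mbit
    # step-down ratio above; the per-bucket limit of 64 packets is arbitrary.
    def __init__(self, n_buckets=1000, per_bucket_limit=64):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.limit = per_bucket_limit
        self.rr = 0                                    # round-robin pointer

    def _bucket(self, flow):
        # flow is e.g. a (src, dst, sport, dport, proto) tuple
        return zlib.crc32(repr(flow).encode()) % len(self.buckets)

    def enqueue(self, pkt, flow):
        q = self.buckets[self._bucket(flow)]
        if len(q) >= self.limit:     # the "what to shoot at" decision,
            return False             # here just a per-flow tail drop
        q.append(pkt)
        return True

    def dequeue(self):
        for _ in range(len(self.buckets)):   # one packet from the next
            q = self.buckets[self.rr]        # non-empty flow bucket
            self.rr = (self.rr + 1) % len(self.buckets)
            if q:
                return q.popleft()
        return None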

In either case, as rates and numbers of flows get insanely high, my
gut (which has been wrong before!) agrees with the Stanford result
(short queues, drop tail), and that conflicts with the observation
that breaking up high-speed clumps into highly mixed packets is a
good thing.

I wish it were possible to experiment with a 10+ Gbit, congested,
Internet backbone link and observe the results of these lines of
thought...

>
> Jonathan - there is a mathematical underpinning for this: when you (mathematically) construct queueing systems that differentially allocate both delay and loss, you find that the underlying state space has a certain property - it has "lumpability". This lumpability (apart from making the state space dramatically smaller) has another, profound, implication: a set of states that are in a "lump" have an interesting equivalence - it doesn't matter how you leave the "lump", the overall system properties are unaffected.

The papers at http://www.pnsol.com/publications.html invent several
terms that I don't fully understand.
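
For what it's worth, the textbook (Kemeny & Snell) notion of
"lumpability" can be checked mechanically: a partition of a Markov
chain's states is (ordinarily) lumpable if every state in a block has
the same total transition probability into each other block. A tiny
made-up example - nothing here is taken from the PNSol papers:

import numpy as np

# Made-up 3-state chain; the partition lumps states 1 and 2 together.
P = np.array([[0.5, 0.3, 0.2],
              [0.5, 0.1, 0.4],
              [0.5, 0.2, 0.3]])
partition = [[0], [1, 2]]

def is_lumpable(P, partition):
    # For every pair of blocks (A, B), the total probability of jumping
    # from a state in A into B must not depend on which state of A you
    # start from.
    for block in partition:
        for target in partition:
            sums = [P[s, target].sum() for s in block]
            if not np.allclose(sums, sums[0]):
                return False
    return True

print(is_lumpable(P, partition))  # True: rows 1 and 2 each send 0.5 to
                                  # state 0 and 0.5 in total into {1, 2}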


> In the systems we studied (in which there was a ranking for order of service (delay/urgency) and a ranking for discarding (loss/cherish)) this basically implied that the overall system properties (the total "amount" of loss and delay) were independent of that choice. The "quality attenuation" (the loss and delay) was thus conserved.
>
>>
>>>> The question is whether (CoDel/PIE/whatever) AQM makes sense at all for 10G/40G
>>>> hardware and higher-performance irons. Ingress/egress bandwidth is nearly
>>>> identical, so larger/longer buffering should not happen. Line-card memory is
>>>> limited, so larger buffering is de facto excluded.
>>>
>>> The simplest interesting case is where you have two input lines feeding the
>>> same output line.
>>>
>>> AQM may not be the best solution, but you have to do something.  Dropping any
>>> packet that won't fit into the buffer is probably simplest.
>>
>> The relative bandwidths of the input(s) and output(s) is also relevant.  You *can* have a saturated 5-port switch with no dropped packets, even if one of them is a common uplink, provided the uplink port has four times the bandwidth and the traffic coming in on it is evenly distributed to the other four.
>>
>> Which yields you the classic tail-drop FIFO, whose faults are by now well documented.  If you have the opportunity to do something better than that, you probably should.  The simplest improvement I can think of is a *head*-drop FIFO, which gets the congestion signal back to the source quicker.  It *should* I think be possible to do Codel at 10G (if not 40G) by now; whether or not it is *easy* probably depends on your transistor budget.
>
> Caveat: this is probably the best strategy for networks that consist solely of long-lived, non-service-critical TCP flows - for other networking requirements, think carefully. There are several real-world scenarios where this is not the best strategy and, where you are looking to make any form of "safety" case (be it fiscal or safety of life), it does create new performance-related attack vectors. We know this because we've been asked this and we've done the analysis.
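
For reference, the head-drop FIFO Jonathan describes is only a few
lines; this is just a toy sketch (the capacity of 20 packets echoes
the small-buffer number mentioned up-thread, not a recommendation),
and Neil's caveat above applies to it as much as to anything else:

from collections import deque

class HeadDropFIFO:
    # Toy fixed-capacity FIFO that, when full, drops the *oldest* packet
    # rather than refusing the new arrival, so the loss signal reflects
    # the front of the standing queue instead of its tail.
    def __init__(self, capacity=20):
        self.q = deque()
        self.capacity = capacity
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.q.popleft()          # head drop
            self.drops += 1
        self.q.append(pkt)            # the new arrival is always admitted

    def dequeue(self):
        return self.q.popleft() if self.q else None
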
>
>>
>> - Jonathan Morton
>>
>
> ---------------------------------------------------
> Neil Davies, PhD, CEng, CITP, MBCS
> Chief Scientist
> Predictable Network Solutions Ltd
> Tel:   +44 3333 407715
> Mob: +44 7974 922445
> neil.davies at pnsol.com
>
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article


