General list for discussing Bufferbloat
From: Bob Briscoe <ietf@bobbriscoe.net>
To: Matthias Tafelmeier <matthias.tafelmeier@gmx.net>
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] DETNET
Date: Mon, 13 Nov 2017 03:58:26 +0800	[thread overview]
Message-ID: <796aa11e-9e35-cf34-e456-6ae98d1875d6@bobbriscoe.net> (raw)
In-Reply-To: <79f4d92c-74f4-8cd0-9d38-e51a668cb9b6@gmx.net>


Matthias, Dave,

The sort of industrial control applications that detnet is targeting 
require far lower queuing delay and jitter than fq_CoDel can give. They 
have thrown around numbers like 250us jitter and 1E-9 to 1E-12 packet 
loss probability.

However, like you, I just sigh when I see the behemoth detnet is building.

Nonetheless, it's important to have a debate about where to go to next. 
Personally I don't think fq_CoDel alone has legs to get (that) much better.

I prefer the direction that Mohammad Alizadeh's HULL pointed in:
Less is More: Trading a little Bandwidth for Ultra-Low Latency in the 
Data Center <https://people.csail.mit.edu/alizadeh/papers/hull-nsdi12.pdf>

HULL combines: i) a virtual queue that models what the queue would be 
if the link were slightly slower, and marks with ECN based on that; 
ii) a much better-behaved TCP (HULL uses DCTCP with hardware pacing 
in the NICs).
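To make the virtual-queue idea concrete, here is a toy sketch of VQ-based ECN marking, not HULL's actual implementation; the rates and threshold are illustrative assumptions. The virtual queue drains at a fraction of the real link rate, so the virtual backlog grows (and marking starts) before any significant real queue builds:

```python
# Toy virtual-queue ECN marker. The virtual link runs at GAMMA times the
# real link rate; packets are marked when the *virtual* backlog exceeds a
# threshold, so congestion is signalled while the real FIFO stays short.

LINK_RATE = 10e9 / 8        # real link rate in bytes/s (10 Gb/s, illustrative)
GAMMA = 0.95                # virtual link runs at 95% of the real rate
MARK_THRESH = 15_000        # virtual backlog (bytes) that triggers marking

class VirtualQueue:
    def __init__(self):
        self.vq_bytes = 0.0   # current virtual backlog in bytes
        self.last_ts = 0.0    # time of last packet arrival (seconds)

    def on_packet(self, size_bytes, now):
        # Drain the virtual queue at GAMMA * line rate since the last arrival.
        drained = (now - self.last_ts) * GAMMA * LINK_RATE
        self.vq_bytes = max(0.0, self.vq_bytes - drained)
        self.last_ts = now
        # Add this packet to the virtual backlog and decide whether to mark.
        self.vq_bytes += size_bytes
        return self.vq_bytes > MARK_THRESH   # True => set ECN CE on the packet

vq = VirtualQueue()
# Back-to-back 1500 B packets at full line rate overload the 95% virtual
# link, so the virtual backlog grows by 5% of each packet and marking
# eventually kicks in, even though the real queue never builds.
marks = [vq.on_packet(1500, i * 1500 / LINK_RATE) for i in range(200)]
print(not marks[0], marks[-1])
```

The key design point is that the sender is throttled to just below the real line rate, trading a small fraction of bandwidth for a near-empty real FIFO.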

I would love to be able to demonstrate that HULL can achieve the same 
extremely low latency and loss targets as detnet, but with a fraction of 
the complexity.

*Queuing latency?* This keeps the real FIFO queue in the tens to low 
hundreds of microseconds.

*Loss prob?* Mohammad doesn't recall seeing a loss during the entire 
period of the experiments, but he doubted their measurement 
infrastructure was accurate enough (or ran long enough) to be sure 
they could have detected one loss per 10^12 packets.

For their research prototype, HULL used a dongle they built, plugged 
into each output port to constrict the link in order to shift the AQM 
out of the box. However, Broadcom mid-range chipsets already contain 
virtual queue hardware (courtesy of a project we did with them when I 
was at BT:
How to Build a Virtual Queue from Two Leaky Buckets (and why one is not 
enough) <http://bobbriscoe.net/pubs.html#vq2lb> ).

*For public Internet, not just for DCs?* You might have seen the work 
we've done (L4S <https://riteproject.eu/dctth/>) to get queuing delay 
over the regular public Internet and broadband down to a mean of about 
500us (90%-ile 1ms), by making DCTCP deployable alongside existing 
Internet traffic (unlike HULL, pacing at the source is in Linux, not 
hardware).
My personal roadmap for that is to introduce virtual queues at some 
future stage, to get down to the sort of delays that detnet wants, but 
over the public Internet with just FIFOs.
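The better-behaved response that makes DCTCP (and hence L4S) work with shallow marking thresholds can be sketched as follows; this follows the sender-side algorithm in RFC 8257, with the gain value from that spec, but simplified to one update per observation window:

```python
# Sketch of the DCTCP sender response (per RFC 8257): keep an EWMA of the
# fraction of ECN-marked packets, and cut cwnd in proportion to it rather
# than halving on any mark. Light marking => gentle backoff, which is what
# lets the AQM (or virtual queue) mark early and keep the queue tiny.

G = 1.0 / 16.0                 # EWMA gain from RFC 8257

def dctcp_update(cwnd, alpha, marked, total):
    """One observation window: `marked` of `total` packets carried CE."""
    frac = marked / total
    alpha = (1 - G) * alpha + G * frac        # smooth the marked fraction
    cwnd = max(1.0, cwnd * (1 - alpha / 2))   # proportional, not halving
    return cwnd, alpha

# With 2% of packets marked, the reduction is far milder than Reno's cwnd/2:
cwnd, alpha = dctcp_update(cwnd=100.0, alpha=0.0, marked=2, total=100)
print(cwnd)
```

Under heavy marking (all packets CE, alpha near 1) the response converges to the classic halving, so the behaviour degrades gracefully.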

Radio links are harder, of course, but a lot of us are working on that too.



Bob

On 12/11/2017 22:58, Matthias Tafelmeier wrote:
> On 11/07/2017 01:36 AM, Dave Taht wrote:
>>> Perceived that as shareworthy/entertaining ..
>>>
>>> https://tools.ietf.org/html/draft-ietf-detnet-architecture-03#section-4.5
>>>
>>> without wanting to belittle it.
>> Hope springs eternal that they might want to look over the relevant
>> codel and fq_codel RFCS at some point or another.
>
> Not sure, appears like juxtaposing classical mechanics to nanoscale 
> physics.
>
> -- 
> Best regards
>
> Matthias Tafelmeier
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



