[Bloat] DETNET
Ken Birman
kpb3 at cornell.edu
Wed Nov 15 14:45:40 EST 2017
I'm missing context. Can someone tell me why I'm being cc'ed on these? The topic seems germane to me (Derecho uses RDMA on RoCE), but I'm unclear what this thread is "about".
Ken Birman
-----Original Message-----
From: Dave Taht [mailto:dave at taht.net]
Sent: Wednesday, November 15, 2017 2:32 PM
To: Matthias Tafelmeier <matthias.tafelmeier at gmx.net>
Cc: Bob Briscoe <ietf at bobbriscoe.net>; ken at cs.cornell.edu; bloat at lists.bufferbloat.net
Subject: Re: [Bloat] DETNET
Matthias Tafelmeier <matthias.tafelmeier at gmx.net> writes:
> However, like you, I just sigh when I see the behemoth detnet is building.
>
> Does it? Well, so far the scope seems justifiable for what
> they want to achieve, at least judging from these
> still rather abstract concepts.
>
> The sort of industrial control applications that detnet is targeting
> require far lower queuing delay and jitter than fq_CoDel can give. They
> have thrown around numbers like 250us jitter and 1E-9 to 1E-12 packet
> loss probability.
>
> Nonetheless, it's important to have a debate about where to go to next.
> Personally I don't think fq_CoDel alone has legs to get (that) much better.
The place where Bob and I always disconnect is that I generally care more about inter-flow latencies than queuing latencies, and prefer strong incentives for non-queue-building flows in the first place. With flow queuing, each flow gets roughly 1/flows of the link bandwidth, and a flow's worst-case wait is one packet per competing flow. At 100Mbit, a single 1500-byte packet takes 120us to deliver; at 1Gbit, 12us; at 10Gbit, 1.2us.
So for 2, 20, and 200 flows at those respective bandwidths, one scheduler round is about 240us, and we meet this detnet requirement. As for queuing, if you constrain the network diameter and use ECN, fq_codel can scale down quite a lot, but I agree there is other promising work in this area.
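[Editor's note: a quick sketch of the arithmetic above; the helper names are mine, not anything in fq_codel itself.]

```python
# Back-of-envelope check of the inter-flow latency claim: with flow
# queuing, a flow's worst-case wait is one full-size packet per
# competing flow, i.e. flows * serialization time.

PACKET_BYTES = 1500

def serialization_us(bandwidth_bps: float, size_bytes: int = PACKET_BYTES) -> float:
    """Time to put one packet on the wire, in microseconds."""
    return size_bytes * 8 / bandwidth_bps * 1e6

def fq_round_us(bandwidth_bps: float, flows: int) -> float:
    """Worst-case wait for one scheduler round: one packet per competing flow."""
    return flows * serialization_us(bandwidth_bps)

for bw, flows in [(100e6, 2), (1e9, 20), (10e9, 200)]:
    print(f"{bw/1e6:>6.0f} Mbit/s, {flows:>3} flows: "
          f"{serialization_us(bw):5.1f} us/pkt, round = {fq_round_us(bw, flows):5.1f} us")
```

All three cases come out at roughly 240us per round, which is how the flow counts of 2, 20, and 200 line up against detnet's ~250us jitter figure.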
However, underneath this, unless a shaper like HTB or cake is used, there is additional unavoidable buffering in the device driver at 1Gbit and above, managed by BQL. We've successfully used sch_cake, soft-shaping, to hold things down to a single packet at speeds of around 15Gbit on high-end hardware.
Now, I honestly don't know enough about detnet to say whether any part of this discussion applies to what they are trying to solve, and I don't plan to look into it until the next IETF meeting.
I've been measuring the overlying latencies elsewhere in the Linux kernel at 2-6us, with a long tail out to about 2ms, for years now. There is a lot of passionate work trying to get latencies down for small packets above 10Gbit in the Linux world, but there it's locking and routing - not queueing - that are the dominant factors.
>
>
>
> Certainly, all you said is valid - as I stated, I mostly wanted to
> share the digest/the existence of the initiative without
> judging/reproaching/preaching.
>
> I prefer the direction that Mohammad Alizadeh's HULL pointed in:
> Less is More: Trading a little Bandwidth for Ultra-Low Latency in the Data
> Center
I have adored all his work: DCTCP, HULL, one other paper... What's he doing now?
> In HULL you have i) a virtual queue that models what the queue would be if
> the link were slightly slower, then marks with ECN based on that. ii) a much
> more well-behaved TCP (HULL uses DCTCP with hardware pacing in the NICs).
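[Editor's note: for illustration, the virtual-queue idea in (i) can be sketched roughly like this. This is a simplified rendering, not HULL's actual implementation; the class name, gamma, and threshold values are made up for the example.]

```python
# Phantom (virtual) queue ECN marker in the spirit of HULL: a byte
# counter is drained at gamma * link rate (i.e. as if the link were
# slightly slower), and packets are ECN-marked whenever the virtual
# backlog exceeds a threshold. All names and parameters are illustrative.

class PhantomQueue:
    def __init__(self, link_bps: float, gamma: float = 0.95,
                 mark_threshold_bytes: int = 3000):
        self.drain_bps = gamma * link_bps   # virtual link is slightly slower
        self.backlog_bytes = 0.0
        self.mark_threshold = mark_threshold_bytes
        self.last_time = 0.0

    def on_packet(self, now_s: float, size_bytes: int) -> bool:
        """Account one departing packet; return True if it should be ECN-marked."""
        # Drain the virtual queue for the elapsed time.
        drained = (now_s - self.last_time) * self.drain_bps / 8
        self.backlog_bytes = max(0.0, self.backlog_bytes - drained)
        self.last_time = now_s
        self.backlog_bytes += size_bytes
        return self.backlog_bytes > self.mark_threshold

# A flow running at the real line rate fills the phantom queue and
# starts drawing ECN marks well before the real queue builds:
pq = PhantomQueue(link_bps=1e9)
t, marks = 0.0, 0
for _ in range(100):            # 100 full-size packets back to back at 1 Gbit/s
    t += 1500 * 8 / 1e9         # arrivals spaced at the real line rate
    marks += pq.on_packet(t, 1500)
print(marks)
```

The point of the slightly-slower virtual drain rate is that a sender pacing at line rate accumulates virtual backlog and gets marked, so the congestion controller backs off while the real queue is still near-empty.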
I do keep hoping that more folk will look at cake... it's a little crufty right now, but we just added ack filtering to it and are starting up a major set of test runs in December/January.
> I would love to be able to demonstrate that HULL can achieve the same
> extremely low latency and loss targets as detnet, but with a fraction of the
> complexity.
>
> Well, if it's already for specific HW, then I'd prefer to see RDMA in
> place right away, getting rid of IRQs and other TCP/IP-specific rust
> along the way, at least in DC realms :) Although this HULL might have
> a spin to it from an economics perspective.
It would be good for more people to read the proceedings of the recent netdev
conference:
https://lwn.net/Articles/738912/
>
> For public Internet, not just for DCs? You might have seen the work we've
> done (L4S) to get queuing delay over regular public Internet and broadband
> down to about mean 500us; 90%-ile 1ms, by making DCTCP deployable alongside
> existing Internet traffic (unlike HULL, pacing at the source is in Linux,
> not hardware). My personal roadmap for that is to introduce virtual queues
> at some future stage, to get down to the sort of delays that detnet wants,
> but over the public Internet with just FIFOs.
My personal goal is to just apply what we've got to incrementally reduce all delays from seconds to milliseconds, across the Internet and on every device I can fix. Stuff derived from the sqm-scripts is now universally available in third-party firmware and in many devices.
Also: I'm really, really happy with what we've done for wifi so far. I think we can cut peak latencies by another factor of 3, maybe even 5, with what we've got coming up next from the make-wifi-fast project.
And that's mostly *driver* work, not abstract queuing theory.