[Ecn-sane] FQ in the core
Luca Muscariello
luca.muscariello at gmail.com
Mon Mar 25 05:17:06 EDT 2019
We've had this discussion multiple times.
- You do not need FQ everywhere, and depending on the case you can use
approximations of it.
- What I believe you always need is flow-awareness.
- There are already implementations of dual-queue systems in DC switches,
such as the Cisco Nexus 9k.
- The dualQ system in DOCSIS is the wrong way to implement a flow-aware
system with two queues,
for many reasons including, but not limited to, the fact that dualQ only
works if the end-points behave as it assumes.
A flow-aware queuing system should not depend on the end-points being
compliant.
You cannot trust the end-points, and the protection system should not
discriminate good from bad based on a badge carried by the packets.
- I also believe dualQ fails to achieve its goal under current and future
traffic patterns.
- The approach described in the paper below also works with just two queues
and makes no assumptions about the end-points.
It does not require marking at all (a toy sketch of the idea follows the
reference).
James Roberts et al.,
"Minimizing the overhead in implementing flow-aware networking,"
in Proceedings of the 2005 ACM Symposium on Architecture for Networking and
Communications Systems (ANCS '05).
DOI: https://doi.org/10.1145/1095890.1095912
https://team.inria.fr/rap/files/2013/12/KMOR05a.pdf
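To make the two-queue, flow-aware idea concrete, here is a minimal Python
sketch in that spirit. It is not the algorithm from the paper: the class
name, the fixed fair-rate estimate and the window length are illustrative
simplifications (the paper's scheme, unlike this sketch, keeps only a small
active-flow list and measures the fair rate on the fly). The point is only
that classification uses locally measured per-flow byte counts, so no
marking is needed and nothing carried in the packets has to be trusted.

from collections import deque
import time

class FlowAwareDualQueue:
    """Toy two-queue, flow-aware scheduler (illustrative only)."""

    def __init__(self, fair_rate_bps=1_000_000, window_s=0.1):
        self.fair_rate_bps = fair_rate_bps   # fixed fair-rate estimate (a simplification)
        self.window_s = window_s             # measurement window in seconds
        self.flow_bytes = {}                 # flow_id -> bytes seen in the current window
        self.window_start = time.monotonic()
        self.priority_q = deque()            # flows currently below the fair rate
        self.bulk_q = deque()                # flows exceeding the fair rate

    def _maybe_reset_window(self):
        now = time.monotonic()
        if now - self.window_start >= self.window_s:
            self.flow_bytes.clear()
            self.window_start = now

    def enqueue(self, flow_id, packet, size_bytes):
        self._maybe_reset_window()
        self.flow_bytes[flow_id] = self.flow_bytes.get(flow_id, 0) + size_bytes
        # A flow goes to the priority queue iff its measured volume in this
        # window stays under the fair-rate budget; the packet header is not
        # consulted at all.
        budget_bytes = self.fair_rate_bps / 8 * self.window_s
        if self.flow_bytes[flow_id] <= budget_bytes:
            self.priority_q.append((flow_id, packet))
        else:
            self.bulk_q.append((flow_id, packet))

    def dequeue(self):
        # Strict priority between the two queues; fairness among bulk flows
        # would come from the drop/AQM policy on bulk_q (omitted here).
        if self.priority_q:
            return self.priority_q.popleft()
        if self.bulk_q:
            return self.bulk_q.popleft()
        return None

The design point of the sketch is that the good/bad decision is an
observation the queue makes itself, not a badge the packets carry.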
- There is not one TCP; there is no single transport in the wild.
There are many and there will be many more.
The DOCSIS specs assume that to be a good guy you must belong to one
Church: the one specified.
This is a very religious assumption about end-points' behaviour.
Everyone else is a bad guy. Am I the only one who sees this as deeply wrong?
- Almost 10 years ago I built an FQ prototype, with Alcatel-Lucent, on a
Cavium-based NPU (the team was based in Paris).
That was supposed to be an ALU7750 target. The problem was that not a
single ISP was even aware of the topic; nobody was asking for it.
It's a chicken-and-egg problem, Mikael: if you do not ask loudly, nobody
builds it.
- In 2010 I also built, with Ikanos, an FQ prototype in their MIPS-based SoC:
32 queues for the France Telecom Livebox,
and it worked. Ikanos was later acquired by Qualcomm and that SoC is no
longer used, in favour of Broadcom.
Hope this helps to make progress in the discussion.
Luca
On Mon, Mar 25, 2019 at 8:55 AM Dave Taht <dave.taht at gmail.com> wrote:
> I don't really have time to debate this today.
>
> Since you forked this conversation back to FQ I need to state a few things.
>
> 1) SCE is (we think) compatible with existing single-queue AQMs. CE
> should not be asserted in this case, just drop. Note that this is also
> what L4S wants to do with the "normal" queue (I refuse to call it
> classic).
>
> 2) SCE is optional. A transport that has a more aggressive behavior,
> like DCTCP, should fall back to being TCP-friendly if it
> sees no SCE marks and only CE or drop.
>
> 3) At 100Gbit speeds some form of multi-queue often seems needed (and
> this is in part why folk want to relax ordering requirements), so some
> form of multiple queuing is generally already the case. At the higher
> speeds, DCs usually overprovision anyway.
>
> 4) The biggest CPU overhead for any of this stuff is per-tenant (in
> the DC) or per-customer shaping. This benefits a lot from a hardware
> assist (see SENIC). I've done quite a bit of DC work in the past 2
> years (rather than home routers), and have had a hard look at the
> underlying substrates for a few multi-tenant implementations...
>
> 4) "dualq" hasn't tried to address the fact that most 10Gbit and
> higher cards have 8 or more hardware queues in the first place.
>
> 6) Companies like Preseem are shipping transparent bridges that do
> fq_codel/cake on customer traffic.
>
> I've long been in periodic negotiations with makers of "big iron", for
> example the new 128-core Huawei box and others I cannot talk about
> at the moment, to get as far as an existence proof.
>
> So I'd like to kill the meme that SCE requires FQ, at least for now,
> until after we do more tests.
>
> As for FQ everywhere, well, I'd like that, but it's not needed in
> devices that already have sufficient multiplexing.
>
>
>
>
>
> On Mon, Mar 25, 2019 at 8:16 AM Mikael Abrahamsson <swmike at swm.pp.se>
> wrote:
> >
> > On Sun, 24 Mar 2019, Sebastian Moeller wrote:
> >
> > > From my layman's perspective this is the killer argument against the
> > > dualQ approach and for fair-queueing, IMHO only fq will be able to
> >
> > Do people on this email list think we're trying to trick you when we're
> > saying that FQ won't be available anytime soon on a lot of platforms that
> > need this kind of AQM?
> >
> > Since there is always demand for implementations, can we get an ASIC/NPU
> > implementation of FQ_CODEL done by someone who claims it's no problem?
> >
> > Personally I believe we need both. FQ is obviously superior to anything
> > else most of the time, but FQ is not making its way into the kind of
> > devices it needs to get into for the bufferbloat situation to improve, so
> > now what?
> >
> > If a proposal claims a superior solution but is too expensive to go into
> > the relevant devices, is it still relevant as an alternative to a
> > different solution that actually is making its way into silicon?
> >
> > Again, FQ is superior, but what good is it if it's not being used?
> >
> > We need to have this discussion and come up with a joint understanding of
> > the world, otherwise we're never going to get anywhere.
> >
> > --
> > Mikael Abrahamsson email: swmike at swm.pp.se
>
>
>
> --
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-205-9740