[Ecn-sane] Fwd: my backlogged comments on the ECT(1) interim call

Bob Briscoe ietf at bobbriscoe.net
Wed Apr 29 05:31:12 EDT 2020


Dave,

Please don't tar everything with the same brush. Inline...

On 27/04/2020 20:26, Dave Taht wrote:
> just because I read this list more often than tsvwg.
>
> ---------- Forwarded message ---------
> From: Dave Taht <dave.taht at gmail.com>
> Date: Mon, Apr 27, 2020 at 12:24 PM
> Subject: my backlogged comments on the ECT(1) interim call
> To: tsvwg IETF list <tsvwg at ietf.org>
> Cc: bloat <bloat at lists.bufferbloat.net>
>
>
> It looks like the majority of what I say below is not related to the
> fate of the "bit". The push to take the bit was
> strong with this one, and me... can't we deploy more of what we
> already got in places where it matters?
>
> ...
>
> so: A) PLEA: After 10 years now of working on bufferbloat, working
> on real end-user and wifi traffic and real networks....
>
> I would like folk here to stop benchmarking two flows that run for a long time
> and in one direction only... and thus exclusively in TCP congestion
> avoidance mode.

[BB] All the results that the L4S team has ever published include short 
flow mixes either with or without long flows.
     2020: http://folk.uio.no/asadsa/ecn-fbk/results_v2.2/full_heatmap_rrr/
     2019: http://bobbriscoe.net/projects/latency/dctth_journal_draft20190726.pdf#subsection.4.2
     2019: https://www.files.netdevconf.info/f/febbe8c6a05b4ceab641/?dl=1
     2015: http://bobbriscoe.net/projects/latency/dctth_preprint.pdf#subsection.7.2

I think this implies you have never actually looked at our data, which 
would be highly concerning if true.

Regarding asymmetric links: as you will see in the 2015 and 2019 papers, 
our original tests were conducted over Al-Lu's broadband testbed with 
real ADSL lines, real home routers, etc. When we switched to a Linux 
testbed, we checked that we were getting identical results to the testbed 
that used real broadband kit, but I admit we neglected to emulate the 
asymmetric upstream. As I said, we can add asymmetric tests back again, 
and we should.

Nonetheless, when testing Accurate ECN feedback specifically, we have 
been watching the reverse path: AccECN is designed to handle ACK 
thinning, so we have to test that, especially over WiFi.

>
> Please. just. stop. Real traffic looks nothing like that. The internet
> looks nothing like that.

[BB] Right from the start, we also tested L4S with numerous real 
applications on the same real broadband equipment testbed. Here's a 
paper that accompanied the demo we did at the Multimedia Systems 
conference in 2015 (remote camera in racing car over cloud-rendered VR 
goggles, cloud-rendered sub-view from a panoramic camera at a football 
match controlled by finger-gestures, web sessions, game traffic, and 
video streaming sessions all in parallel over a 40Mb/s broadband link):

https://riteproject.files.wordpress.com/2015/10/uld4all-demo_mmsys.pdf

We had tested that demo on the Al-Lu testbed with real equipment, but 
obviously the testbed we took to the conference had to be portable.


> The netops folk I know just roll their eyes at benchmarks like this
> that prove nothing and tell me to go to RIPE meetings instead.
> When y'all talk about "not looking foolish for not mandating ecn now",
> you've already lost that audience with benchmarks like these.
>
> Sure, set up background flow(s) like that, but then hit the result
> with a mix of far more normal traffic? Please? Networks are never used
> unidirectionally, and congestion in both directions at once is frequent.

[BB] You may not be aware of the following work going on in the IETF at 
the moment to do ACK thinning in the transport layer, in order to address 
reverse-path congestion:

https://github.com/quicwg/base-drafts/issues/1978
https://tools.ietf.org/html/draft-iyengar-quic-delayed-ack
https://tools.ietf.org/html/draft-fairhurst-quic-ack-scaling
https://tools.ietf.org/html/draft-gomez-tcpm-delack-suppr-reqs


> To illustrate that problem...
>
> I have a really robust benchmark that we have used throughout the bufferbloat
> project and that I would like everyone to run in their environments: the flent
> "rrul" test. Everybody on both sides has big enough testbeds set up that a few
> hours spent running it - and please add in asymmetric networks especially -
> and perusing the results ought to be enlightening to everyone as to the kind
> of problems real people have on real networks.
>
> Can the L4S and SCE folk run the rrul test some day soon? Please?

[BB] Does this measure the delay of every packet, so we can measure 
delay percentiles? I've asked you this a couple of times on these lists 
over the years. It looks like it still uses ping. You will see that all 
our results measure the delay of /every/ data packet.

Real-time applications are sensitive to the higher percentiles of delay. 
If anyone is extracting delay percentiles from data that is so sparsely 
sampled, their results will be meaningless.
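To make the sampling point concrete: with one sample per data packet you 
can pull the tail straight out of the log. A minimal nearest-rank sketch, 
assuming a hypothetical delays.txt holding one delay sample (in ms) per 
line:

     # 99th-percentile delay, nearest-rank method
     sort -n delays.txt | awk '{v[NR]=$1} END {print v[int(NR*0.99 + 0.5)]}'

A ping every 200ms over a 60s run gives only ~300 samples, so a P99 
estimate from that rests on just the top handful of values; per-packet 
measurement rests on thousands.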


>
> I rather liked this benchmark that tested another traffic mix,
>
> ( https://www.cablelabs.com/wp-content/uploads/2014/06/DOCSIS-AQM_May2014.pdf )
>
> Although it had many flaws (like not doing DNS lookups), I wish it
> could be dusted off and used to compare this newfangled ECN-enabled
> stuff with the kind of results you can get merely with packet loss and
> RTT awareness. It would be so great to be able to directly compare all
> these new algorithms against this benchmark.
>
> Adding in a non-ECN'd, UDP-based routing protocol on a heavily
> oversubscribed 100Mbit link is also enlightening.
>
> I'd rather like to see that benchmark improved for a more modern home
> traffic mix, since it is projected there may be 30 devices on the
> average home network in a few years.

[BB] That was the idea of the MMSYS demo above (we didn't bother with 
the devices that would have low data rates, 'cos we had other game-like 
traffic that stood in for them).

And incidentally, that's where we discovered and fixed problems with DNS 
requests and SYNs.

A benchmark for this sort of scenario would certainly be useful.

>
> If there is any one thing y'all can do to reduce my blood pressure and
> keep me engaged here whilst you
> debate the end of the internet as I understand it, it would be to run
> the rrul test as part of all your benchmarks.

PS. Links to all the above are off the L4S landing page:
     https://riteproject.eu/dctth/

Cheers



Bob
>
> thank you.
>
> B) Stuart Cheshire regaled us with several anecdotes - one concerning
> his problems with comcast's 1Gbit/35Mbit service being unusable, under
> load, for videoconferencing. This is true. The overbuffering at the
> CMTSes still has to be seen to be believed, at all rates. At lower rates
> it's possible to shape this with another device (which is what
> the entire SQM deployment does in self-defense, and why cake has a
> specific docsis ingress mode), but it is CPU-intensive
> and presently requires x86 hardware to do well at rates above 500Mbit/s.
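
[BB] For readers who haven't seen the SQM recipe Dave is referring to, a 
rough, untested sketch of it on Linux is below. The interface names and 
shaping rate are placeholders; cake's docsis and ingress keywords are 
what account for DOCSIS framing overhead and for shaping on the receive 
side:

     # redirect downstream traffic through an IFB device, then shape it with cake
     ip link add ifb0 type ifb && ip link set ifb0 up
     tc qdisc add dev eth0 handle ffff: ingress
     tc filter add dev eth0 parent ffff: protocol all matchall \
         action mirred egress redirect dev ifb0
     tc qdisc add dev ifb0 root cake bandwidth 900mbit docsis ingress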
>
> So I wish CMTS makers (Arris and Cisco) were in this room. Are they?
>
> (Stuart, if you'd like a box that can make your comcast link pleasurable
> under all workloads, whenever you get back to Los Gatos, I've got a few
> lying around. I was so happy to get a few IETFers this past week to apply
> what's off the shelf for end users today. :)
>
> C) I am glad Bob said that L4S is finally looking at asymmetric
> networks, and starting to tackle ack-filtering and AccECN issues
> there.
>
> But... I would have *started there*. Asymmetric access is the predominant form
> of all edge technologies.
>
> I would love to see flent rrul test results for 1Gig/35Mbit, 100/10,
> and 200/10 services in particular (from SCE also!). "Lifeline" service
> (11/2) would also be good to have results on. It would be especially
> good to have baseline comparison data from the measured, current
> deployment of the CMTSes at these rates: to start with, no queue
> management in play, then pie on the uplink, then fq_codel on the
> uplink, then this ecn stuff, and so on.
>
> D) The two CPE makers in the room have dismissed both FQ and SCE as
> being too difficult to implement. They did say that DualPI was
> actually implemented in software, not hardware.
>
> I would certainly like them to benchmark what they plan to offer in L4S
> vs. what is already available in the EdgeRouter X, as one low-end
> example among thousands.
>
> I also have to note, at higher speeds, all the buffering moves into
> the wifi and the results are currently ugly. I imagine
> they are exploring how to fix their wifi stacks also? I wish more folk
> were using RVR + latency benchmarks like this one:
>
> http://flent-newark.bufferbloat.net/~d/Airtime%20based%20queue%20limit%20for%20FQ_CoDel%20in%20wireless%20interface.pdf
>
> Same goes for the LTE folk.
>
> E) Andrew McGregor mentioned how great it would be for a closeted musician to
> be able to play in real time with someone across town. That has been my goal
> for nearly 30 years now!! And although I rather enjoyed his participation in
> my last talk on the subject (
> https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
> ), conflating a need for ECN and L4S signalling for low-latency audio
> applications with what I actually said in that talk kind of hurt. I
> achieved my "2ms fiber-based guitarist to fiber-based drummer dream" 4+
> years back with fq_codel and diffserv - no ECN required, no changes to
> the specs, no mandating that packets be undroppable - and would like to
> rip the opus codec out of that mix one day.
>
> F) I agree with Jana that changing the definition of RFC3168 to suit
> the RED algorithm (which is not PI or anything fancy) often present in
> network switches today, so as to suit DCTCP, works. But you should say
> "configuring RED to have L4S marking style" and document that.
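
[BB] For the record, on Linux that marking style can be approximated 
with a shallow RED step; a rough, untested sketch follows (the 
thresholds are placeholders that must be sized to the link rate, and 
switch vendors' CLIs will differ). With min and max nearly equal and 
probability 1.0, marking jumps to 100% at a step of about 30KB of 
average queue:

     # ECN-mark (not drop) once the average queue exceeds a shallow step
     tc qdisc add dev eth0 root red limit 400000 min 30000 max 31000 \
         avpkt 1000 burst 31 probability 1.0 bandwidth 1gbit ecn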
>
> Sometimes I try to point out that many switches have a form of DRR in
> them, and that it's helpful to use it in conjunction with whatever
> diffserv markings you trust in your network.
>
> To this day I wish someone would publish how much DCTCP-style
> signalling they use on a DC network relative to their other traffic.
>
> To this day I keep hoping that someone will publish a suitable set of
> RED parameters for the most common switches, routers, and ethernet
> chips, for correct DCTCP usage.
>
> Mellanox's example
> ( https://community.mellanox.com/s/article/howto-configure-ecn-on-mellanox-ethernet-switches--spectrum-x
> ) is not DCTCP-specific.
>
> As noted above, it's helpful to use that DRR support in conjunction
> with the diffserv markings you trust in your network and, as per the
> Mellanox example, segregate two RED queues that way. From what I see
> above, there is no way to differentiate ECT(0) from ECT(1) in that switch. (?)
>
> I do keep trying to point out the size of the end-user, ECN-enabled
> deployment, starting with the data I have from free.fr. Are we
> building a network for AIs or for people?
>
> G) Jana also made a point about 2 queues "being enough" (I might be
> mis-remembering the exact point). Mellanox's ethernet chips at 10Gig expose
> 64 hardware queues, and some new Intel hardware exposes 2000+. How do these
> queues work relative to these algorithms?
>
> We have generally found hardware multiqueue to be far less of a benefit
> than the manufacturers think, especially as regards lower latency or
> reduced CPU usage (as cache crossing is a bear). There is a lot of
> software work left to be done in this area; however, the queues are
> needed to match queues to CPUs (and tenants).
>
> Until sch_pie gained timestamping support recently, the rate estimator
> did not work correctly in a hardware multiqueue environment. I haven't
> looked over dualpi in this respect.
>
>
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
>
>

-- 
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/


