[Cerowrt-devel] RE: [Bloat] better business bufferbloat monitoring tools?
luca.muscariello at orange.com
Thu May 14 11:40:45 EDT 2015
Bill
I believe you have hit the limit of what you can do with AQM without FQ.
Something more can be achieved with paced sources, as noted earlier in this thread, but I do not see incentives for the ABR folks to do true pacing.
Doing partial pacing to fix the TSO/GSO burst problem is of course a must, but it won't solve the problem you mention.
See you on Monday at the conference; I'm giving a talk right before yours.
Luca
-------- Original message --------
From: "Bill Ver Steeg (versteb)"
Date: 2015/05/14 00:54 (GMT+01:00)
To: Dave Taht
Cc: cerowrt-devel at lists.bufferbloat.net, bloat
Subject: Re: [Bloat] better business bufferbloat monitoring tools?
Dave Taht wrote - That said, it has generally been my hope that most of the big movie streaming folk have moved to some form of pacing by now, but I have no data on it. (?)
Bill VerSteeg replies - Based on my recent tests, the production ABR flows are still quite bursty. There has been some work done in this area, but I do not think bloat is top-of-mind for the ABR folks, and I do not think it has made it into production systems. Some of the work is in the area of pacing TCP's micro-bursts using sch_fq-like methods. Some has been in the area of application rate estimation. Some of the IW10 pacing stuff may also be useful.
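For what it's worth, the per-socket knob that the sch_fq pacing work exposes is SO_MAX_PACING_RATE. Here is a minimal sketch of a sender using it, assuming a Linux box with fq on the egress interface; the pacing rate, payload and peer address are illustrative, not from any production player:

import socket

# Ask sch_fq to pace this socket instead of letting each ABR chunk go out
# as one line-rate burst. Linux only; requires fq (or fq as the default
# qdisc) on the egress interface.
SO_MAX_PACING_RATE = 47        # Linux option number, from asm-generic/socket.h
PACE_BYTES_PER_SEC = 750000    # ~6 Mbit/s, roughly 2x a 3 Mbit/s encode (illustrative)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, PACE_BYTES_PER_SEC)
s.connect(("192.0.2.10", 8080))    # placeholder (TEST-NET) address
s.sendall(b"x" * 2000000)          # one "chunk"; fq spreads the packets out
s.close()

Without the setsockopt, the same chunk leaves the host as fast as cwnd allows, which is exactly the burstiness seen in the tests above.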
I am actually giving a talk on AQM to a small ABR video conference next week. The executive summary of my talk is "AQM makes bursty ABR flows less impactful to the network buffers (and thus cross traffic), but the bursts still cause problems. The problems are really bad on legacy buffer management algorithms. The new AQM algorithms take care of most of the issues, but bursts of data make the new algorithms work harder and do cause some second-order problems."
The main problem that I have seen in my testing has been in the CoDel/PIE (as opposed to FQ_XXX) variants. When the bottleneck link drops packets as the elephant bursts, the mice flows suffer. Rather than completing in a handful of RTTs, it takes several times longer for the timeouts and rexmits to complete the transfer. When running FQ_Codel or FQ_PIE, the elephant flow only impacts itself, as the mice are on their own queues. There are also some corner cases when the offered load is extremely high, but these seem to be third order effects.
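For anyone who wants to reproduce that comparison, here is a minimal sketch of one way to set up such a bottleneck, shelling out to tc from Python. It assumes root on a Linux router; the interface name and the 4 Mbit/s rate are placeholders:

import subprocess

IFACE = "eth0"    # placeholder interface carrying the test traffic

def run(cmd):
    # Print and execute one tc command; requires root.
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

def bottleneck(aqm):
    # Rate-limit egress to 4 Mbit/s with HTB and hang the chosen AQM
    # ("codel" or "fq_codel") off the leaf class.
    run(f"tc qdisc replace dev {IFACE} root handle 1: htb default 10")
    run(f"tc class replace dev {IFACE} parent 1: classid 1:10 htb rate 4mbit")
    run(f"tc qdisc replace dev {IFACE} parent 1:10 {aqm}")

bottleneck("codel")      # run the elephant + mice test, then repeat with:
# bottleneck("fq_codel")

Running the same elephant-plus-mice load against both leaf qdiscs makes the difference in mice completion times easy to see.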
I will let the list know what the current state of the art on pacing is after next week's conference, but I suspect that the ABR folks are still on a learning curve here.
Bvs
-----Original Message-----
From: Dave Taht [mailto:dave.taht at gmail.com]
Sent: Wednesday, May 13, 2015 9:37 AM
To: Bill Ver Steeg (versteb)
Cc: bloat; cerowrt-devel at lists.bufferbloat.net
Subject: Re: [Bloat] better business bufferbloat monitoring tools?
On Wed, May 13, 2015 at 6:20 AM, Bill Ver Steeg (versteb) <versteb at cisco.com> wrote:
> Time scales are important. Any time you use TCP to send a moderately large file, you drive the link into congestion. Sometimes this is for a few milliseconds per hour and sometimes this is for 10s of minutes per hour.
>
> For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps link with no cross traffic can cause significant bloat, particularly on older tail drop middleboxes. The host code does an HTTP get every N seconds, and drives the link as hard as it can until it gets the video chunk. It waits a second or two and then does it again. Rinse and Repeat. You end up with a very characteristic delay plot. The bloat starts at 0, builds until the middlebox provides congestion feedback, then sawtooths around at about the buffer size. When the burst ends, the middlebox burns down its buffer and bloat goes back to zero. Wait a second or two and do it again.
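To make that fetch pattern concrete, the client side behaves roughly like the sketch below; the URL and the 2-second chunk interval are placeholders, and real players also adapt the bitrate, but the burst-then-idle shape is the same:

import time
import urllib.request

CHUNK_SECONDS = 2    # nominal playback duration of one chunk (placeholder)

for n in range(10):
    start = time.time()
    # The GET runs the link flat out until the chunk arrives: this is the burst.
    with urllib.request.urlopen(f"http://example.com/video/chunk{n}.ts") as r:
        r.read()
    fetched_in = time.time() - start
    # Then the player idles until the next chunk is due; bloat drains back to zero.
    time.sleep(max(0.0, CHUNK_SECONDS - fetched_in))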
The dslreports tests are opening 8 or more full rate streams at once.
Not pretty results.
Web browsers spend most of their flows entirely in slow start.
Etc.
I am very concerned with what 4K streaming looks like, and I just got an Amazon box to take a look at it (but have not put out the cash for a suitable monitor).
> You can't fix this by adding bandwidth to the link. The endpoint's TCP
> sessions will simply ramp up to fill the link. You will shorten the
> congested phase of the cycle, but TCP will ALWAYS FILL THE LINK (given
> enough time to ramp up)
It is important to keep stressing this point as the memes propagate outwards.
>
> The new AQM (and FQ_AQM) algorithms do a much better job of controlling the oscillatory bloat, but you can still see ABR video patterns in the delay figures.
It has generally been my hope that most of the big movie streaming folk have moved to some form of pacing by now, but I have no data on it. (?)
Certainly I'm happy with what I saw of QUIC, and I have hope that HTTP/2 will cut the number of simultaneous flows in progress.
But I return to my original point: I would like to continue to find more ways to make the sub-5-minute behaviors visible and comprehensible to more people...
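One cheap way to surface those sub-5-minute events is to poll the qdisc counters directly; drop and ECN-mark numbers like the ones quoted below are available from "tc -s qdisc" on an fq_codel interface. A minimal sketch, with the interface name and polling interval as placeholders and the counter names as printed by fq_codel:

import re
import subprocess
import time

IFACE = "eth0"       # placeholder; point this at the sqm interface
INTERVAL = 10        # seconds between samples (placeholder)

def fq_codel_counters(iface):
    # Parse the cumulative drop and ECN-mark counters from "tc -s qdisc"
    # (first match is fine when fq_codel is the root qdisc on the interface).
    out = subprocess.run(["tc", "-s", "qdisc", "show", "dev", iface],
                         capture_output=True, text=True, check=True).stdout
    drops = re.search(r"dropped (\d+)", out)
    marks = re.search(r"ecn_mark (\d+)", out)
    return (int(drops.group(1)) if drops else 0,
            int(marks.group(1)) if marks else 0)

last = fq_codel_counters(IFACE)
while True:
    time.sleep(INTERVAL)
    now = fq_codel_counters(IFACE)
    print("drops:", now[0] - last[0], "ecn_marks:", now[1] - last[1],
          "over the last", INTERVAL, "seconds")
    last = now

Feeding those deltas into smokeping/cacti-style graphs alongside the bandwidth curves would show the congestive events that the 5-minute averages hide.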
> Bvs
>
>
> -----Original Message-----
> From: bloat-bounces at lists.bufferbloat.net
> [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Dave Taht
> Sent: Tuesday, May 12, 2015 12:00 PM
> To: bloat; cerowrt-devel at lists.bufferbloat.net
> Subject: [Bloat] better business bufferbloat monitoring tools?
>
> One thread bothering me on dslreports.com is that some folk seem to think you only get bufferbloat if you stress test the network, whereas transient bufferbloat is happening all the time, everywhere.
>
> On one of my main sqm'd network gateways, day in, day out, it reports about 6000 drops or ECN marks on ingress, and about 300 on egress.
> Before I doubled the bandwidth on that main box, the drop rate was much higher, and a great deal of the bloat, drops, etc. has now moved into the wifi APs deeper into the network, where I am not monitoring it effectively.
>
> I would love to see tools like mrtg, cacti, nagios and smokeping[1] be more closely integrated, with bloat-related plugins, and in particular, as things like fq_codel and other ECN-enabled AQMs deploy, start also tracking congestive events like loss and ECN CE markings on the bandwidth tracking graphs.
>
> This would counteract to some extent the classic 5-minute bandwidth summaries everyone looks at, which hide real traffic bursts, latency and loss at sub-5-minute timescales.
>
> mrtg and cacti rely on SNMP. While loss statistics are a core part of SNMP, I am not aware of there being a MIB for CE events, and a quick Google search was unrevealing (?).
>
> There is also a need for more cross-network monitoring, using tools such as those behind this excellent paper:
>
> http://www.caida.org/publications/papers/2014/measurement_analysis_internet_interconnection/measurement_analysis_internet_interconnection.pdf
>
> [1] The network monitoring tools market is quite vast and has many commercial applications, like intermapper, forks of nagios, vendor-specific products from cisco, etc, etc. Far too many to list, and so far as I know, none are reporting ECN-related stats, nor combining latency and loss with bandwidth graphs. I would love to know if any products, commercial or open source, did....
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67