[Cerowrt-devel] [Bloat] better business bufferbloat monitoring tools?
Bill Ver Steeg (versteb)
versteb at cisco.com
Wed May 13 09:20:43 EDT 2015
Time scales are important. Any time you use TCP to send a moderately large file, you drive the link into congestion. Sometimes this is for a few milliseconds per hour and sometimes this is for 10s of minutes per hour.
For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps link with no cross traffic can cause significant bloat, particularly on older tail-drop middleboxes. The host code does an HTTP GET every N seconds and drives the link as hard as it can until it gets the video chunk. It waits a second or two and then does it again. Rinse and repeat. You end up with a very characteristic delay plot. The bloat starts at zero, builds until the middlebox provides congestion feedback, then sawtooths around at about the buffer size. When the burst ends, the middlebox burns down its buffer and bloat goes back to zero. Wait a second or two and do it again.
You can't fix this by adding bandwidth to the link. The endpoint's TCP sessions will simply ramp up to fill the link. You will shorten the congested phase of the cycle, but TCP will ALWAYS FILL THE LINK (given enough time to ramp up).
The new AQM (and FQ_AQM) algorithms do a much better job of controlling the oscillatory bloat, but you can still see ABR video patterns in the delay figures.
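To make the shape of that delay plot concrete, here is a minimal toy model in Python - an illustration, not a measurement. It models a tail-drop bottleneck buffer fed by periodic ABR-style chunk fetches; the rates, chunk size, buffer size, and fetch period are made-up example values, and it ignores real TCP congestion control and retransmission.

# Toy model of transient bufferbloat from ABR video behind a tail-drop
# bottleneck. All numbers are illustrative; no TCP dynamics are modeled.

LINK_RATE = 4e6 / 8      # bottleneck drain rate in bytes/sec (a 4 Mbps link)
BURST_RATE = 20e6 / 8    # rate the sender bursts at while fetching a chunk
CHUNK_BYTES = 1.5e6      # one video chunk (~4 s worth of 3 Mbps video)
CHUNK_PERIOD = 4.0       # seconds between HTTP GETs
BUFFER_BYTES = 256e3     # tail-drop buffer size at the middlebox
TICK = 0.01              # simulation step in seconds

queue = 0.0              # bytes currently queued at the bottleneck
remaining = 0.0          # bytes of the current chunk still being pushed
period_steps = round(CHUNK_PERIOD / TICK)

for step in range(round(20.0 / TICK)):
    t = step * TICK
    if step % period_steps == 0:
        remaining = CHUNK_BYTES                   # client issues the next GET
    arriving = min(remaining, BURST_RATE * TICK)  # sender bursts the chunk
    remaining -= arriving
    queue = min(queue + arriving, BUFFER_BYTES)   # tail drop: excess is lost
    queue = max(queue - LINK_RATE * TICK, 0.0)    # link drains the buffer
    if step % 50 == 0:                            # report twice per second
        delay_ms = queue / LINK_RATE * 1000
        print(f"t={t:5.2f}s  queue={queue/1e3:6.1f} kB  delay={delay_ms:6.1f} ms")

Running it prints the queuing delay climbing from zero to roughly a buffer's worth of delay during each chunk fetch, then draining back to zero in between - the sawtooth pattern described above.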
Bvs
-----Original Message-----
From: bloat-bounces at lists.bufferbloat.net [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Dave Taht
Sent: Tuesday, May 12, 2015 12:00 PM
To: bloat; cerowrt-devel at lists.bufferbloat.net
Subject: [Bloat] better business bufferbloat monitoring tools?
One thread bothering me on dslreports.com is that some folk seem to think you only get bufferbloat if you stress test the network, whereas transient bufferbloat is happening all the time, everywhere.
On one of my main sqm'd network gateways, day in, day out, it reports about 6000 drops or ecn marks on ingress, and about 300 on egress.
Before I doubled the bandwidth that main box got, the drop rate used to be much higher, and a great deal of the bloat, drops, etc. has now moved into the wifi APs deeper in the network, where I am not monitoring it effectively.
I would love to see tools like mrtg, cacti, nagios and smokeping[1] be more closely integrated, with bloat-related plugins, and in particular, as things like fq_codel and other ECN-enabled AQMs deploy, start also tracking congestive events like loss and ECN CE markings on the bandwidth tracking graphs.
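As a rough sketch of what such a plugin could poll, assuming a Linux gateway running fq_codel: the per-qdisc drop and ECN-mark counters are already exposed by "tc -s qdisc", and a small script can scrape them for a graphing tool. The interface name and parsing below are illustrative; the exact tc output format varies by iproute2 version.

#!/usr/bin/env python3
# Sketch: scrape fq_codel drop / ECN-mark counters from `tc -s qdisc`
# so they can be graphed alongside the usual bandwidth counters.
import re
import subprocess

IFACE = "eth0"   # example interface; adjust for your gateway

def fq_codel_stats(iface: str) -> dict:
    out = subprocess.run(
        ["tc", "-s", "qdisc", "show", "dev", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = {}
    # Cumulative packets dropped by the qdisc ("dropped N" in tc output).
    m = re.search(r"dropped (\d+)", out)
    if m:
        stats["dropped"] = int(m.group(1))
    # fq_codel-specific ECN CE mark counter ("ecn_mark N").
    m = re.search(r"ecn_mark (\d+)", out)
    if m:
        stats["ecn_mark"] = int(m.group(1))
    return stats

if __name__ == "__main__":
    print(fq_codel_stats(IFACE))

A cron job or collector could call this periodically and graph the deltas of those counters next to the bandwidth graphs.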
This would counteract, to some extent, the classic 5-minute bandwidth summaries everyone looks at, which hide real traffic bursts, latency and loss at sub-5-minute timescales.
mrtg and cacti rely on SNMP. While loss statistics are deeply part of SNMP, I am not aware of there being a MIB for CE events, and a quick google search was unrevealing.
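For reference, the loss statistics SNMP does expose include the standard IF-MIB discard counters. A minimal sketch of polling them with net-snmp's snmpget from Python follows; the host address, community string and ifIndex are placeholder example values, not anything from a real deployment.

#!/usr/bin/env python3
# Sketch: poll the standard IF-MIB discard counters (loss stats SNMP already
# has) via net-snmp's snmpget CLI. There is no equivalent standard object for
# ECN CE marks. Host, community and ifIndex are example values.
import subprocess

HOST = "192.0.2.1"      # example gateway address
COMMUNITY = "public"    # example community string
IF_INDEX = 2            # example interface index

def snmp_counter(oid: str) -> str:
    return subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

if __name__ == "__main__":
    for name in ("ifInDiscards", "ifOutDiscards"):
        oid = f"IF-MIB::{name}.{IF_INDEX}"
        print(oid, "=", snmp_counter(oid))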
There is also a need for more cross-network monitoring, along the lines of what was done in this excellent paper:
http://www.caida.org/publications/papers/2014/measurement_analysis_internet_interconnection/measurement_analysis_internet_interconnection.pdf
[1] the network monitoring tools market is quite vast and has many commercial applications, like intermapper, forks of nagios, vendor-specific products from cisco, etc, etc. Far too many to list, and so far as I know, none are reporting ECN-related stats, nor combining latency and loss with bandwidth graphs. I would love to know if any products, commercial or open source, do....
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
_______________________________________________
Bloat mailing list
Bloat at lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat