On Wed, May 13, 2015 at 9:20 AM, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote:

> Time scales are important. Any time you use TCP to send a moderately large file, you drive the link into congestion. Sometimes this is for a few milliseconds per hour, and sometimes for tens of minutes per hour.
>
> For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps link with no cross traffic can cause significant bloat, particularly on older tail-drop middleboxes. The host code does an HTTP GET every N seconds and drives the link as hard as it can until it gets the video chunk. It waits a second or two and then does it again. Rinse and repeat. You end up with a very characteristic delay plot: the bloat starts at 0, builds until the middlebox provides congestion feedback, then sawtooths around at about the buffer size. When the burst ends, the middlebox burns down its buffer and bloat goes back to zero. Wait a second or two and do it again.

It's time to do some packet traces to see what the video providers are actually doing. In YouTube's case, I believe the traffic is using the new sched_fq qdisc, which does packet pacing; but exactly how this plays out by the time packets reach the home isn't entirely clear to me. Other video providers/CDNs may or may not have started generating clues.

Also note that, so far, no one is trying to pace the IW transmission at all.
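To make that concrete, here is roughly the check I have in mind, as a sketch only: the capture file name and port are placeholders, and it assumes scapy is installed. A pacing sender (sched_fq) should show fairly even inter-packet gaps within a chunk, while an unpaced sender dumps whole windows back-to-back at line rate.

# Sketch: measure the inter-packet gaps of one video flow in a capture
# taken at the home gateway, to see whether the sender paces its packets.
# "home-gateway.pcap" and the port are placeholders, not real values.
from scapy.all import rdpcap, IP, TCP

SERVER_PORT = 443  # the video flow of interest

pkts = rdpcap("home-gateway.pcap")
times = sorted(float(p.time) for p in pkts
               if IP in p and TCP in p and p[TCP].sport == SERVER_PORT)
gaps = sorted(b - a for a, b in zip(times, times[1:]))

if gaps:
    median = gaps[len(gaps) // 2]
    p99 = gaps[int(len(gaps) * 0.99)]
    print("gaps: median %.0f us, 99th pct %.0f us" % (median * 1e6, p99 * 1e6))
    # A paced flow keeps most gaps near the median; an unpaced flow shows
    # many near-zero gaps (back-to-back segments) separated by long idles.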

> You can't fix this by adding bandwidth to the link. The endpoint's TCP sessions will simply ramp up to fill the link. You will shorten the congested phase of the cycle, but TCP will ALWAYS FILL THE LINK (given enough time to ramp up).

That has been the behavior in the past, but it is no longer safe to tar everyone with the same brush. Rather, we should do a bit of science, and then hold people's feet to the fire when they do not "play nice" with the network.

Some packet captures in the home can easily sort this out.
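For instance (same placeholder capture and port as the sketch above), binning a flow's bytes into small time buckets should make the on/off chunk pattern, or its absence, jump right out:

# Sketch: bin one flow's bytes into 100 ms buckets. Long runs at link
# rate followed by near-idle buckets are the chunked-ABR signature Bill
# describes; steady sub-link-rate buckets suggest a paced or
# application-limited sender.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

BIN = 0.1  # bucket width, seconds
bins = Counter()
for p in rdpcap("home-gateway.pcap"):
    if IP in p and TCP in p and p[TCP].sport == 443:
        bins[int(float(p.time) / BIN)] += len(p)

for b in sorted(bins):
    print("t=%7.1fs  %6.2f Mbit/s" % (b * BIN, bins[b] * 8 / BIN / 1e6))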
Jim

> The new AQM (and FQ_AQM) algorithms do a much better job of controlling the oscillatory bloat, but you can still see ABR video patterns in the delay figures.
>
> Bvs
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Dave Taht
> Sent: Tuesday, May 12, 2015 12:00 PM
> To: bloat; cerowrt-devel@lists.bufferbloat.net
> Subject: [Bloat] better business bufferbloat monitoring tools?
>
> One thread bothering me on dslreports.com is that some folk seem to think you only get bufferbloat if you stress-test the network, when in fact transient bufferbloat is happening all the time, everywhere.
>
> On one of my main sqm'd network gateways, day in, day out, it reports about 6000 drops or ECN marks on ingress, and about 300 on egress. Before I doubled that box's bandwidth, the drop rate used to be much higher, and a great deal of the bloat, drops, etc. has now moved into the wifi APs deeper in the network, where I am not monitoring it effectively.
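(For anyone who wants to pull the same numbers off their own gateway, they are in the qdisc stats that tc prints; here is a sketch, with the device name as a placeholder. Note that with sqm the ingress qdisc usually sits on an ifb device, e.g. ifb4eth0, rather than on the physical interface.)

# Sketch: scrape drop and ECN-mark counters from "tc -s qdisc show".
# "eth0" is a placeholder device; fq_codel prints "dropped N" and
# "ecn_mark N" in its statistics block.
import re
import subprocess

def qdisc_stats(dev):
    out = subprocess.check_output(
        ["tc", "-s", "qdisc", "show", "dev", dev]).decode()
    dropped = re.search(r"dropped (\d+)", out)
    marked = re.search(r"ecn_mark (\d+)", out)
    return {"dropped": int(dropped.group(1)) if dropped else 0,
            "ecn_mark": int(marked.group(1)) if marked else 0}

print(qdisc_stats("eth0"))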
>
> I would love to see tools like mrtg, cacti, nagios and smokeping[1] become more closely integrated, with bloat-related plugins, and in particular, as things like fq_codel and other ECN-enabled AQMs deploy, start tracking congestive events such as loss and ECN CE marks on the bandwidth-tracking graphs.
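(As a strawman for that integration: the counters from the sketch above drop straight into an RRD next to the usual byte counters, so drops and CE marks ride on the same graphs mrtg and cacti already draw. The database layout below is invented for illustration, assumes rrdtool is installed, and reuses qdisc_stats() from the previous sketch.)

# Sketch: log drops and CE marks into a round-robin database so that
# congestion events can be graphed alongside bandwidth. The step,
# heartbeat and RRA choices here are arbitrary.
import os
import subprocess
import time

RRD = "bloat.rrd"

if not os.path.exists(RRD):
    subprocess.check_call(
        ["rrdtool", "create", RRD, "--step", "10",
         "DS:dropped:DERIVE:60:0:U",   # monotonic counters, so DERIVE
         "DS:ecn_mark:DERIVE:60:0:U",
         "RRA:AVERAGE:0.5:1:8640",     # 10 s points for one day
         "RRA:MAX:0.5:30:2880"])       # 5 min maxima for ten days

while True:
    s = qdisc_stats("eth0")            # from the sketch above
    subprocess.check_call(
        ["rrdtool", "update", RRD,
         "N:%d:%d" % (s["dropped"], s["ecn_mark"])])
    time.sleep(10)

(The MAX consolidation is the point: it preserves the sub-5-minute bursts that the averaged graphs flatten out.)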
>
> This would counteract, to some extent, the classic 5-minute bandwidth summaries everyone looks at, which hide real traffic bursts, latencies and loss at sub-5-minute timescales.
>
> mrtg and cacti rely on SNMP. While loss statistics are deeply embedded in SNMP, I am not aware of any MIB for CE events, and a quick google search was unrevealing.
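(A stopgap: the standard IF-MIB discard counters do exist today, so those at least can go on the graphs until someone writes a MIB covering CE marks. The host, community string and ifIndex below are placeholders; this needs net-snmp's snmpget installed.)

# Sketch: poll the standard IF-MIB discard counter over SNMP as a
# stand-in until a MIB exists for ECN CE events.
import subprocess

def if_in_discards(host, community, if_index):
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv",
         host, "IF-MIB::ifInDiscards.%d" % if_index]).decode()
    return int(out.strip())

print(if_in_discards("gateway.example", "public", 2))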
>
> There is also a need for more cross-network monitoring, using tools such as those behind this excellent paper:
>
> http://www.caida.org/publications/papers/2014/measurement_analysis_internet_interconnection/measurement_analysis_internet_interconnection.pdf
>
> [1] The network monitoring tools market is vast, with many commercial offerings: intermapper, forks of nagios, vendor-specific products from cisco, etc. Far too many to list, and so far as I know, none report ECN-related stats, nor combine latency and loss with bandwidth graphs. I would love to know if any products, commercial or open source, do...
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67