<br><br><div class="gmail_quote">On Sat, Feb 9, 2013 at 11:10 AM, this_is_not_my_name nor_is_this <span dir="ltr"><<a href="mailto:moeller0@gmx.de" target="_blank">moeller0@gmx.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Jonathan,<br>
<div class="im"><br>
<br>
On Feb 9, 2013, at 10:15, Jonathan Morton wrote:<br>
<br>
> Latency caused by bufferbloat always appears at the bottleneck device. Usually that is the modem, and you've given no plausible alternative. The modems you mention have slightly different model numbers, but that can hide substantial differences in internal configuration.<br>
><br>
> For a typical drop-tail queue, the induced latency under load is the size of the buffer divided by the speed of the link draining it. Assuming both modems have a 4Mbit uplink, 550ms is consistent with a buffer of roughly 256KB, and 220ms with one of roughly 110KB - neither of which would seem excessively large to a modem builder who hasn't heard of bufferbloat. However, with a shared cable infrastructure it is possible that the uplink is constrained by other users on the same segment, which will skew this calculation.<br>
><br>
> To cure it without modifying the modem, you need to move the bottleneck to a point where you can control the buffer. You do this by introducing traffic shaping at slightly below the advertised modem uplink speed on one of your own machines and directing all upstream traffic through it.<br>
<br>
</div> This is great advice. I just want to mention that both OpenWrt's QoS settings and CeroWrt's simple_qos.sh still show some buffering in netalyzr (in my case coincidentally also 550ms on the uplink, with a 4000Kbit/s cable connection shaped down to 3880Kbit/s using CeroWrt's (3.7.4-3) simple_qos script). </blockquote>
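<div><br>To make the arithmetic and the shaping recipe above concrete, here is a minimal sketch (this is not the actual simple_qos.sh script; the interface name, rates, and qdisc layout are assumptions to adjust for your own link):<br>
<br>
# Induced latency of a drop-tail queue is roughly buffer size / drain rate:<br>
# at 4 Mbit/s (~500 KB/s), a 256 KB buffer drains in about half a second,<br>
# i.e. in the ballpark of the 550ms netalyzr reports.<br>
#<br>
# Move the bottleneck into the router by shaping just below the modem rate,<br>
# then let fq_codel manage the (now local) queue. Assumes the WAN is eth0.<br>
tc qdisc del dev eth0 root 2>/dev/null<br>
tc qdisc add dev eth0 root handle 1: htb default 10<br>
tc class add dev eth0 parent 1: classid 1:10 htb rate 3880kbit ceil 3880kbit<br>
tc qdisc add dev eth0 parent 1:10 fq_codel<br>
</div>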
<div><br>A big problem with netalyzr (at present) is that it measures a single UDP flow in its buffering analysis. It is unaware of packet scheduling (such as what fq_codel, SFQ and QFQ do). So it will show large buffers in a rate-limited fq_codel case... *but they don't matter*, because of the fair queuing component.<br>
<br>I would rather like them to change their test to explicitly identify where some form of fair queuing is actually in use. SFQ, for example, is widely deployed in <a href="http://free.fr">free.fr</a>'s ISP network, and I would hope in a few other places as well.<br>
<br>The way to do that is easy: send one high-rate flow and one low-rate flow, with the two flows using different address/port tuples, and observe what happens to both streams.<br><br>We do the same thing with high-rate flows + ping in the rrul tests.<br>
<br>You do observe a large buffer for a while with a single UDP flood against fq_codel, but all other flows get through very rapidly. So it's totally ok to have a large buffer for a single flow for a while!<br>
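<br>As a rough sketch of that kind of check (not the rrul suite itself, which is built on netperf; the tool, ports, and rates here are assumptions), run the following concurrently:<br>
<br>
# One high-rate UDP flood plus one low-rate UDP flow on a different port<br>
# (hence a different tuple), plus ping, all at the same time. Behind a dumb<br>
# drop-tail queue the ping and the low-rate flow suffer along with the flood;<br>
# behind fq_codel or SFQ they stay largely unaffected.<br>
iperf -s -u -p 5001      # on the remote host, and likewise for port 5002<br>
iperf -c $REMOTE -u -p 5001 -b 10M -t 60      # high-rate flood<br>
iperf -c $REMOTE -u -p 5002 -b 100K -t 60     # low-rate probe flow<br>
ping -c 60 $REMOTE                            # latency seen under load<br>
<br>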
<br>I tried to speak to this in the Stanford talk; I need to work on describing this behavior better.<br><br><a href="http://netseminar.stanford.edu/">http://netseminar.stanford.edu/</a><br><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
But real measurements with real TCP traffic (opening 60 webpages simultaneously while saturating the uplink with a big upload) showed that ping times are only marginally affected, even with the cable connection running constantly close to full saturation. My point is that netalyzr's worst-case scenario is quite a bit detached from real-world performance: good queue management (using one of the CoDel derivatives) will give excellent real-world performance while still showing excessive buffering in netalyzr. Make of that what you will… (My not-even-started-yet pet project is to separate TCP from UDP traffic (or, even better, responsive/elastic from unresponsive/inelastic traffic) and treat each to an appropriate dropping strategy, i.e. drop UDP much harder, but given my time constraints that will happen in the next decade….)<br>
<br>
best<br>
Sebastian<br>
<div class="HOEnZb"><div class="h5"><br>
><br>
> - Jonathan Morton<br>
> On Feb 9, 2013 7:27 PM, "Forums1000" <<a href="mailto:forums1000@gmail.com">forums1000@gmail.com</a>> wrote:<br>
> Hi Jonathan and Dave<br>
><br>
> My entire LAN is gigabit. My cable subscription is 60 megabit down and 4 megabit up.<br>
> Now, both my router's WAN port and the cable modem's LAN port are also gigabit. The router can route LAN to WAN and the other way around (with NAT and connection tracking enabled) in excess of 100 megabit.<br>
><br>
> Now my cable modem is a Motorola Surfboard SV6120E and hers is a Motorola Surfboard CV6181E. My upload lag is 550ms and hers is only 220ms. Moreover, at her place there are Powerplugs in the path limiting her download to 30 megabit instead of 60 megabit. Yet her upload lag is much lower than mine. There, it also did not matter where I ran Netalyzr; the result was always 220ms of bufferbloat.<br>
><br>
> Could this still be only the modem?<br>
><br>
><br>
> On Sat, Feb 9, 2013 at 10:52 AM, Forums1000 <<a href="mailto:forums1000@gmail.com">forums1000@gmail.com</a>> wrote:<br>
> Hi everyone,<br>
><br>
> Can anyone give some tips on how to diagnose the sources of bufferbloat? According to the Netalyzr test at <a href="http://netalyzr.icsi.berkeley.edu/" target="_blank">http://netalyzr.icsi.berkeley.edu/</a>, I have 550ms of upload bufferbloat. I tried all kinds of stuff on my Windows 7 laptop:<br>
><br>
> - For the Intel(R) 82567LF Gigabit Network Connection, I set the receive and transmit buffers to the lowest value of 80 (80 bytes? 80 packets? I don't know). I also disabled interrupt moderation.<br>
> Result? Still 550ms.<br>
> - Then I connected my laptop directly to my cable modem, bypassing my Mikrotik 450G router. Result? Still 550ms of bufferbloat.<br>
> - Then I put a 100 megabit switch between the cable modem and the laptop (as both the cable modem and the Intel NIC are gigabit). Result? Still 550ms of upload bufferbloat.<br>
><br>
> I'm out of ideas now. It seems I can't do anything at all to lower bufferbloat. Or is the Netalyzr test broken? :-)<br>
><br>
> many thanks for your advice,<br>
> Jeroen<br>
><br>
><br>
><br>
<br>
_______________________________________________<br>
Bloat mailing list<br>
<a href="mailto:Bloat@lists.bufferbloat.net">Bloat@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/bloat" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Dave Täht<br><br>Fixing bufferbloat with cerowrt: <a href="http://www.teklibre.com/cerowrt/subscribe.html" target="_blank">http://www.teklibre.com/cerowrt/subscribe.html</a>