<div dir="ltr"><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Apr 1, 2014 at 8:21 PM, Dave Taht <span dir="ltr"><<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Thu, Mar 27, 2014 at 3:21 AM, Aaron Wood <<a href="mailto:woody77@gmail.com">woody77@gmail.com</a>> wrote:<br>
> Dave had asked about results from .32-12 on DSL, and in particular how pie<br>
> was faring on dsl. I finally was able to set up a clean test environment yesterday,<br>
> and ran a bunch of tests.<br>
><br>
> Results:<br>
> <a href="http://burntchrome.blogspot.com/2014/03/cerowrt-31032-12-sqm-comparison-on.html" target="_blank">http://burntchrome.blogspot.com/2014/03/cerowrt-31032-12-sqm-comparison-on.html</a><br>
><br>
> Takeaways:<br>
><br>
> - I'm still dropping a lot of "small-flow" packets when heavily loaded, and I'm<br>
> not sure why. Free.fr's freebox doesn't do this, so it's definitely in cero.<br>
<br>
</div>Try the simplest.qos model.</blockquote><div><br></div><div>Will do. </div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">
> I feel like _something_ is misconfigured, and that's why I'm dropping<br>
> packets the way that I am. I really would like to solve that. Free.fr's<br>
> sfq implementation behaves quite nicely, and by comparison, doesn't drop UDP<br>
> packets under load.<br>
<br>
</div>So you are not behind a Freebox Revolution v6? (that's the Linux<br>
3.6 fq_codel version)<br></blockquote><div><br></div><div>No, I am behind the v6 box; I keep getting sfq and fq_codel swapped in my head.</div><div> </div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
post-3.6 fq_codel will drop some packets from all flows in order to reduce<br>
latency. I wish the netperf benchmark had better behavior for the udp<br>
flows rather than stopping after the first loss... some non-bursty<br>
packet loss really isn't a problem for things like voip and dns<br>
traffic. That said, I liked the 3.6 behavior of codel, and<br>
sometimes wish we had a better solution for sparser flows under heavy<br>
load than what is in there now.<br></blockquote><div><br></div><div>Why was that change made in post-3.6 fq_codel?</div><div> </div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Pie has sprouted a very large estimation window in the Linux release,<br>
and it's not a surprise to me that it doesn't work well below 10Mbit...</blockquote><div><br></div><div>Most people still don't have uplinks >10Mb, right? My DOCSIS 3 modem at my last apartment here in Paris (Numericable) was 30Mb down, 1Mb up.</div>
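For reference, PIE's core is a periodic drop-probability update driven by the estimated queueing delay; at low link rates the delay estimate gets noisy, which is part of why a long estimation window hurts below 10Mbit. A minimal sketch of that control law (the function name and constant values are illustrative, loosely following the published PIE algorithm description, not the Linux qdisc source):

```python
# Sketch of PIE's drop-probability update (illustrative constants;
# loosely based on the published PIE algorithm, not the Linux qdisc).
def pie_update(p, qdelay, qdelay_old, target=0.015, alpha=0.125, beta=1.25):
    """One periodic update of the drop probability p (delays in seconds).

    alpha weights the error against the delay target; beta weights the
    trend (is the delay growing or shrinking since the last update?).
    """
    p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability
```

With a 15ms target, a queue sitting at 30ms and still growing pushes the drop probability up each interval; an empty queue pulls it back toward zero.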
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">A 3 way intercontinental call was pretty much perfect with fq_codel<br>
while running a rrul test:<br>
<br>
<a href="http://snapon.lab.bufferbloat.net/~d/mike/webrtc.svg" target="_blank">http://snapon.lab.bufferbloat.net/~d/mike/webrtc.svg</a><br>
<br>
(the dip in the data was when I ALSO transferred a large file)<br></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
And the pie result, while not horrible, did show some observable<br>
problems when I hit it with traffic in slow start, which I didn't<br>
manage to capture...<br>
<br>
<a href="http://snapon.lab.bufferbloat.net/~d/mike/pie.svg" target="_blank">http://snapon.lab.bufferbloat.net/~d/mike/pie.svg</a></blockquote><div><br></div><div>That's a pretty substantial difference (at least in how it looks graphed).</div>
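For context on the codel side of that comparison: codel schedules drops on an inverse-square-root control law, so while the measured delay stays above target, successive drops get closer and closer together. A small sketch of that scheduling (names and the 100ms default interval follow the codel description; this is illustrative, not the kernel source):

```python
import math

# Sketch of codel's inverse-sqrt drop scheduling: while measured delay
# stays above target, the gap between successive drops shrinks as
# interval / sqrt(count). (Illustrative; not the kernel implementation.)
INTERVAL = 0.100  # seconds; initial drop interval

def next_drop_times(n, interval=INTERVAL):
    """Return time offsets of the first n scheduled drops in dropping state."""
    t, times = 0.0, []
    for count in range(1, n + 1):
        t += interval / math.sqrt(count)
        times.append(t)
    return times
```

The shrinking gaps are what gently ramp up drop pressure on a persistently full queue, which is also why all flows (including sparse udp ones) can see the occasional drop under sustained load.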
<div> </div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">One thing I've noticed is the academic community now seemingly defines<br>
"bufferbloat" as something that happens with >200ms of induced queuing<br>
delay, where I (we) define it as "excessive queueing", and in<br>
this group we are "settling" for <20ms of delay in both directions in most<br>
cases. This really changes the mental model a lot. TCP reacts very<br>
differently at a 200ms RTT than at 5ms.<br>
<br>
I keep seeing papers that put 100ms worth of buffering on one end of the link,<br>
an aqm of choice on the other end, and measure that... whereas here we<br>
artificially put the aqm on both sides, having little hope that the big<br>
DSLAM/CMTS vendors will do anything to fix it....</blockquote><div><br></div><div>Published papers, at good conferences? Or just "papers"?</div><div> </div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="">
> This was all ipv4-only (I haven't asked the apartment owner to turn on ipv6<br>
> with Free.fr).<br>
><br>
> In about 2 months, I'll be back in the Bay Area. With either <a href="http://sonic.net" target="_blank">sonic.net</a> DSL<br>
> (bonded channels for 30/2 service), or with Comcast. Comcast most likely.<br>
> It would be nice to have 3-5Mb upload again.<br>
<br>
</div>sonic has a higher-end service too, so far as I know.<br></blockquote><div><br></div><div>They can do bonded channels, so two channels bonded with Annex M could work (if the range to the CO were still short). Although I'd be happy with the download I have here (16Mb) if I could get 3Mb up with it using Annex M. I probably wouldn't even miss the 2Mb off the top if the download dropped to 14Mb.</div>
<div> </div><div>-Aaron</div></div></div></div>