<font face="arial" size="2"><p style="margin:0;padding:0;">Besides my diversionary ramble in previous post, let me observe this.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Until you realize that maintaining buffers inside the network never helps with congestion in a resource limited network, you don't really understand the problem.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">The only reason to have buffers at all is to deal with transient burst arrivals, and the whole goal is to stanch the sources quickly.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Therefore I suspect that commotion would probably work better by reducing buffering to fit into small machines' RAM constraints, and on larger machines, just letting the additional RAM be used for something else.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">For many "network experts" this idea is "heresy" because they've been told that maximum throughput is gained only by big buffers in the network. Buffers are thought of like processor cache memories - the more the better for throughput.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">That statement is generally not true. It is true in certain kinds of throughput tests (single flows between the same source and destination, where end-to-end packet loss rates are high and not mitigated by link-level retrying once or twice). But in those tests there is no congestion, just an arbitrarily high queueing delay, which does not matter for pure throughput tests.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Buffering *is* congestion, when a link is shared among multiple flows.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;"><br class="WM_COMPOSE_SIGNATURE_START" /><br class="WM_COMPOSE_SIGNATURE_END" /><br /><br />On Sunday, May 25, 2014 3:56pm, "Dave Taht" <dave.taht@gmail.com> said:<br /><br /></p>
<div id="SafeStyles1401048141">
<p style="margin:0;padding:0;">> meant to cc cerowrt-devel on this...<br />> <br />> <br />> ---------- Forwarded message ----------<br />> From: Dave Taht <dave.taht@gmail.com><br />> Date: Sun, May 25, 2014 at 12:55 PM<br />> Subject: qos in open commotion?<br />> To: andygunn@opentechinstitute.org, commotion-dev@lists.chambana.net<br />> <br />> <br />> Dear Andy:<br />> <br />> In response to your thread on qos in open commotion my list started a thread<br />> <br />> https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/003044.html<br />> <br />> summary:<br />> <br />> You can and should run packet scheduling/aqm/qos in routers with 32MB<br />> of memory or less. Some compromises are needed:<br />> <br />> https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/003048.html<br />> <br />> FIRST:<br />> <br />> We strongly recomend that your edge gateways have aqm/packet<br />> scheduling/qos on all their connections to the internet. See<br />> innumerable posting on bufferbloat and the fixes for it...<br />> <br />> http://gettys.wordpress.com/<br />> <br />> Feel free to lift cerowrt's SQM scripts and gui from the ceropackages<br />> repo for your own purposes. Openwrt barrier breaker qos-scripts are<br />> pretty good too but don't work with ipv6 at the moment...<br />> <br />> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_SQM_for_CeroWrt_310<br />> <br />> For the kind of results we get on cable:<br />> <br />> http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html<br />> <br />> Wifi has a built in QoS (802.11e) system but it doesn't work well in<br />> congested environments<br />> and optimizing wireless-n aggregation works better.<br />> <br />> As for fixing wifi, well, we know what to do, but never found any<br />> funding for it. Ath9k is still horribly overbuffered and while<br />> fq_codel takes some of the edge off of wifi (and recently we disabled<br />> 802.11e entirely in favor of fq_codel), and in cerowrt we reduce<br />> aggregation to get better latency also - much more work remains to<br />> truly make it scale down to levels of latency we consider reasonable<br />> while (In other words, wifi latencies suck horribly now no matter<br />> what yet we think we know how to improve that. Feel free to do<br />> measurements of your mesh with tools like netperf-wrapper. There are<br />> also a few papers out there now showing how bad wifi can get nowadays)<br />> <br />> As for replacing pfifo_fast, openwrt barrier breaker replaced<br />> pfifo_fast with fq_codel in barrier breaker a year ago.<br />> <br />> fq_codel by default is essentially zero cost (64k per interface*hw<br />> queues) and the default in openwrt on all interfaces by default now...<br />> <br />> but the typical router cpus are so weak it is rare it kicks in except<br />> at 100mbit and below. (where it can be wonderful) - and it's on a rate<br />> limited (eg dsl or cable) system where it's most obviously useful.<br />> <br />> Presently.<br />> <br />> Lastly, I've been running a deployed babel mesh network for 2 years<br />> with fq_codel in it, 2 SSIDs per nanostation m5 and picostation, and<br />> it runs pretty good. Recent tests on the ubnt edgerouter went well, as<br />> well...<br />> <br />> Please give this stuff a shot. 
Your users will love it.<br />> <br />> --<br />> Dave Täht<br />> <br />> NSFW:<br />> https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article<br />> <br />> <br />> --<br />> Dave Täht<br />> <br />> NSFW:<br />> https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article<br />> _______________________________________________<br />> Cerowrt-devel mailing list<br />> Cerowrt-devel@lists.bufferbloat.net<br />> https://lists.bufferbloat.net/listinfo/cerowrt-devel<br />></p>