[Bloat] [aqm] [iccrg] AQM deployment status?
Fred Baker (fred)
fred at cisco.com
Wed Nov 13 12:50:45 EST 2013
On Sep 25, 2013, at 12:24 PM, Mikael Abrahamsson <swmike at swm.pp.se> wrote:
> For higher-end platforms, for instance, all Cisco CPU-based routers (for some value of "all") can be configured with RED, fair-queue, or similar, but they come with FIFO as the default. As far as I know this has been the case since at least the mid-90s, going back as far as the Cisco 1600, etc.
>
> Higher-end Cisco equipment such as the ASR9k, 12000, CRS, etc., all support WRED, and here it makes sense since they all have ~50 ms worth of buffering or more. They also come with FIFO as the default setting.
Yes. There are two reasons that we don't deploy RED by default.
One is that we operate on a principle we call the "principle of least surprise": we try not to change our customers' configurations by changing the defaults, and at the time I wrote the RED/WRED code the default was FIFO. Yes, that was quite some time ago, and one could imagine newer equipment shipping with different defaults, but that's not what the various business units do.
The other is this matter of configuration. In the late 1990s, Cisco employed Van Jacobson and Kathy Nichols and asked them to figure out autoconfiguration values. That didn't work out. I don't say that as a slam on V+K; it's simply a fact. They started by recommending an alternative algorithm called RED-Lite (don't bother googling that; you get discothèques), and are now recommending CoDel. Now, there is probably a simplistic value one could throw in, such as setting max-threshold to the memory allocated to the queue and min-threshold to some function, logarithmically related to the bit rate, that estimates the number of bit-times of delay to tune to. That would probably be better than nothing, but it would be nice to have some science behind it.
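To make that concrete, here is a minimal sketch (Python; the log scale factor and the 12-kbit MTU are placeholders for illustration, not anything that ships):

    import math

    MTU_BITS = 12000  # one 1500-byte MTU

    def initial_wred_thresholds(queue_mem_bits, link_bps):
        # max-threshold: all of the memory allocated to the queue
        max_th = queue_mem_bits
        # min-threshold: a function logarithmically related to the bit
        # rate; the scale factor is a guess, which is exactly the
        # problem -- there is no science behind it yet
        min_th = max(MTU_BITS,
                     int(math.log2(max(link_bps / 1e6, 2.0)) * MTU_BITS))
        return min(min_th, max_th // 2), max_th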
That brings us back to the auto-tuning discussion.
To my way of thinking, a simplistic algorithm such as that logarithmic approach, or a lookup table keyed on (perhaps) bit rate and neighbor ping RTT, can come up with an initial set of parameters that the algorithm's own processes then adapt to ambient traffic. Kathy's simulations in the study I mentioned suggested that a 1.5 Mbps link might do well with min-threshold at 30 ms (roughly four MTU-sized messages, or a larger number of smaller ones), and higher rates at some single-digit number of ms, decreasing as the speed increases.
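A lookup-table version of that initializer might look like the sketch below; the 30 ms entry is from those simulations, while the other entries are my own interpolation, not published numbers:

    # (link rate floor in bits/s, initial min-threshold in ms)
    MIN_THRESHOLD_TABLE = [
        (0,     30.0),   # <= 1.5 Mb/s, per the simulations
        (10e6,   9.0),   # interpolated guesses, trending toward
        (100e6,  7.0),   # single digits as the rate rises
        (1e9,    5.0),
    ]

    def initial_min_threshold_ms(link_bps, neighbor_rtt_ms=None):
        ms = MIN_THRESHOLD_TABLE[0][1]
        for floor_bps, value in MIN_THRESHOLD_TABLE:
            if link_bps >= floor_bps:
                ms = value
        # optionally clamp to a measured neighbor ping RTT, so the
        # initial target never exceeds the delay already on the path
        if neighbor_rtt_ms is not None:
            ms = min(ms, neighbor_rtt_ms)
        return ms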
CoDel suggests a flat 5 ms value (at 1.5 Mbps, less than a single MTU; at one gigabit, roughly 417 12-kbit messages). I could imagine the initialization algorithm selecting 5 ms above a certain speed, and the equivalent of four MTU serialization times at lower line speeds, where 4*MTU exceeds 5 ms. I could imagine a related algorithm then adjusting that initial value to interact well with Google's IW=10 (initial window of ten segments) behavior or whatever else it finds on the link.
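That initialization rule reduces to one line of arithmetic: take the larger of 5 ms and four MTU serialization times at the line rate. A minimal sketch, again assuming a 12-kbit MTU:

    MTU_BITS = 12000  # one 1500-byte MTU

    def initial_codel_target_ms(link_bps):
        # four MTU serialization times, in milliseconds
        four_mtu_ms = 4 * MTU_BITS * 1000.0 / link_bps
        # 5 ms wherever the link is fast enough that 4*MTU fits within it
        return max(5.0, four_mtu_ms)

At 1.5 Mbps that gives 32 ms, in the neighborhood of the 30 ms above; at a gigabit it settles at 5 ms.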
PIE would probably start with a similar set of values, and tune its mark/drop probability until it works well with ambient traffic.
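For reference, the control law in the PIE draft adjusts the drop probability from the queueing-delay error and its trend; a minimal sketch, treating the gains as tunables rather than settled values:

    ALPHA = 0.125  # gain on the delay error (draft defaults; per-link
    BETA = 1.25    # tuning of these is part of the same open question)

    def pie_update(p, qdelay_s, old_qdelay_s, target_s):
        # probability rises while delay sits above target or keeps growing
        p += ALPHA * (qdelay_s - target_s) + BETA * (qdelay_s - old_qdelay_s)
        return min(max(p, 0.0), 1.0)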