[Bloat] ipspace.net: "QUEUING MECHANISMS IN MODERN SWITCHES"
Neil Davies
neil.davies at pnsol.com
Tue May 27 08:34:16 EDT 2014
Hagen
It comes down to the portion of the end-to-end quality attenuation budget (quality attenuation - ∆Q - incorporates both loss and delay) you want to assign to the device, and how you want it distributed amongst the competing flows. Distribution is all you can do: you can’t “destroy” loss or “destroy” delay, only differentially distribute them.
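The “can’t destroy delay, only distribute it” point can be illustrated with a toy model (my own sketch, not from the post): packets on a work-conserving link all arrive at once, and the scheduler only chooses the transmission order. The helper name `total_wait` and the flow labels are assumptions for illustration.

```python
def total_wait(order, service=1.0):
    """All packets arrive at t=0 on a work-conserving link; each takes
    `service` time to transmit. Returns (per-packet wait, total wait)
    for the given transmission order. Toy model for illustration --
    an assumption, not a model taken from the post."""
    waits = {pkt: i * service for i, pkt in enumerate(order)}
    return waits, sum(waits.values())

fifo = total_wait(["a1", "b1", "a2", "b2"])  # strict FIFO
prio = total_wait(["a1", "a2", "b1", "b2"])  # flow "a" prioritised
# Flow a's packets wait less under priority and flow b's wait more,
# but the total waiting time is identical in both cases: scheduling
# redistributes delay among flows, it cannot remove it.
```

Any work-conserving scheduler gives the same total wait here; the only engineering freedom is which flow absorbs which share of it, which is exactly the budget-assignment question above.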
As for ingress/egress capacity being almost the same, that *REALLY* depends on the deployment scenario….
You can’t do traffic performance engineering in a vacuum - you need to have objectives for the application outcomes - that makes the problem context dependent.
When we do this for people we often find that there are several locations in the architecture where FIFO is the best solution (where you can prove that the natural relaxation times of the queues, given the offered load pattern, are sufficiently small so as not to induce too much quality attenuation). In other places you need to do more analysis.
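One way to check whether a FIFO queue relaxes quickly enough under a given offered load is a small queueing simulation. The sketch below (my own illustration; the model, function name, and parameters are assumptions, not from the post) uses an M/D/1 queue: Poisson arrivals into a FIFO served at a fixed packet service time.

```python
import random

def fifo_mean_wait(load, n_packets=50_000, seed=1):
    """Simulate an M/D/1 FIFO queue at the given offered load
    (0 < load < 1) and return the mean queueing delay, in units of
    the deterministic packet service time. Illustrative sketch only."""
    rng = random.Random(seed)
    service = 1.0                  # one service time per packet
    mean_gap = service / load      # Poisson arrivals at rate = load
    t = 0.0                        # arrival clock
    free_at = 0.0                  # when the server next goes idle
    total_wait = 0.0
    for _ in range(n_packets):
        t += rng.expovariate(1.0 / mean_gap)
        start = max(t, free_at)    # FIFO: wait behind earlier packets
        total_wait += start - t
        free_at = start + service
    return total_wait / n_packets

# Theory says the M/D/1 mean wait is load / (2 * (1 - load)) service
# times: at 50% load the queue relaxes almost instantly (~0.5 service
# times of waiting), while at 95% load it balloons toward ~9.5 -- the
# regime where plain FIFO stops being sufficient and more analysis
# (or AQM) is needed.
```

The point of the exercise is the threshold behaviour: below some load the relaxation time contributes negligible quality attenuation and FIFO is provably fine; near saturation it is not.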
Neil
On 27 May 2014, at 13:20, Hagen Paul Pfeifer <hagen at jauu.net> wrote:
> The question is whether (codel/pie/whatever) AQM makes sense at all for
> 10G/40G hardware and higher-performance irons? Ingress/egress bandwidth
> is nearly identical, so larger/longer buffering should not happen. Line
> card memory is limited, so larger buffering is de facto excluded.
>
> Are there any documents/papers about high bandwidth equipment and
> bufferbloat effects?
>
> Hagen