[Bloat] when does the CoDel part of fq_codel help in the real world?

Luca Muscariello luca.muscariello at gmail.com
Tue Nov 27 08:37:29 EST 2018


Another bit to this.
A router queue is supposed to serve packets no matter what is running at
the controlled end-point: BBR, Cubic, or anything else.
So delay-based congestion controllers still get hurt in today's Internet
unless they can get their portion of buffer at the line card.
FQ creates incentives for end-points to send traffic in a smoother way,
because the reward to the application is immediate and measurable. But the
end-point does not know in advance whether FQ is there or not.
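
To make "smoother" concrete, here is a minimal sketch of what pacing buys
(illustrative numbers only, not any particular stack's implementation): a
paced sender spaces packets at roughly cwnd/RTT instead of emitting the
whole window as one back-to-back burst.

    # Sketch: paced transmission of one congestion window.
    # All numbers are illustrative assumptions, not measured values.
    cwnd_packets = 10
    mss = 1500                               # bytes per packet
    rtt = 0.100                              # 100 ms round-trip time
    pacing_rate = cwnd_packets * mss / rtt   # bytes/second
    gap = mss / pacing_rate                  # inter-packet spacing when pacing
    print(f"pacing rate: {pacing_rate * 8 / 1e6:.2f} Mbit/s")
    print(f"inter-packet gap: {gap * 1000:.1f} ms (a burst sends with 0 ms gaps)")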

So, going back to sizing the link buffer: the rule I mentioned applies, and
it gives the best completion times over a wide range of RTTs.
If you, Mikael, don't want more than 10 ms of buffer, how do you achieve that?
You change the behaviour of the source and hope flow isolation is available.
If you just cut the buffer down to 10 ms and do nothing else, all you get is
a short queue, and you may throw away half of your link capacity.
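
A back-of-the-envelope example of that last point (made-up numbers, single
Reno-like flow with a multiplicative decrease of 1/2): with buffer B the
window peaks at BDP + B and then halves, so the link goes idle whenever
(BDP + B) / 2 falls below one BDP.

    # Utilization right after a Reno-style backoff, for a few buffer sizes.
    link = 100e6 / 8               # 100 Mbit/s link, in bytes/second
    rtt = 0.200                    # 200 ms path
    bdp = link * rtt               # bandwidth-delay product, ~2.5 MB
    for buf_ms in (10, 200, 400):  # 10 ms vs. one BDP vs. 2xBDP of buffer
        buf = link * buf_ms / 1000
        trough = (bdp + buf) / 2   # cwnd right after the halving
        util = min(trough / bdp, 1.0)
        print(f"buffer {buf_ms:4d} ms -> post-backoff utilization {util:.0%}")

With a 10 ms buffer on that 200 ms path the post-backoff utilization is
about 52%, which is where the "half of your link capacity" comes from; one
BDP of buffer keeps the link fully busy.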



On Tue, Nov 27, 2018 at 1:17 PM Jonathan Morton <chromatix99 at gmail.com>
wrote:

> > On 27 Nov, 2018, at 1:21 pm, Mikael Abrahamsson <swmike at swm.pp.se>
> wrote:
> >
> > It's complicated. I've had people throw in my face that I need 2xBDP in
> buffer size to smooth things out. Personally I don't want more than 10ms
> buffer (max), and I don't see why I should need more than that even if
> transfers are running over hundreds of ms of light-speed-in-medium induced
> delay between the communicating systems.
>
> I think we can agree that the ideal CC algo would pace packets out
> smoothly at exactly the path capacity, neither building a queue at the
> bottleneck nor leaving capacity on the table.
>
> Actually achieving that in practice turns out to be difficult, because
> there's no general way to discover the path capacity in advance.  AQMs like
> Codel, in combination with ECN, get us a step closer by explicitly
> informing each flow when it is exceeding that capacity while the queue is
> still reasonably short.  FQ also helps, by preventing flows from
> inadvertently interfering with each other by imperfectly managing their
> congestion windows.
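
For reference, the signal CoDel injects has a characteristic shape: once the
per-packet sojourn time has exceeded a small target for at least one interval,
it marks or drops, and successive signals arrive at interval/sqrt(count). A
simplified sketch of that control law (after RFC 8289, with the default 5 ms
target and 100 ms interval; a real implementation measures sojourn time at
dequeue and tracks considerably more state):

    import math

    TARGET = 0.005      # 5 ms: tolerated standing-queue delay
    INTERVAL = 0.100    # 100 ms: how long delay must persist before acting

    def next_signal(prev_signal_time, count):
        # Marks/drops come closer together as sqrt(count) grows, pushing a
        # saturating flow back down toward TARGET.
        return prev_signal_time + INTERVAL / math.sqrt(count)

    # Spacing of the first few signals while the queue stays above TARGET:
    t = 0.0
    for count in range(1, 5):
        t = next_signal(t, count)
        print(f"signal {count} at t = {t * 1000:.0f} ms")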
>
> So with the presently deployed state of the art, we have cwnds oscillating
> around reasonably short queue lengths, backing off sharply in response to
> occasional signals, then probing back upwards when that signal goes away
> for a while.  It's a big improvement over dumb drop-tail FIFOs, but it's
> still some distance from the ideal.  That's because the information
> injected by the bottleneck AQM is a crude binary state.
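
A toy rendering of that oscillation (nothing Reno- or Cubic-accurate, just
the shape of additive increase / multiplicative decrease driven by a binary
signal, with invented values):

    # Toy AIMD sawtooth driven by a binary congestion signal.
    cwnd, capacity = 35.0, 40.0   # packets; illustrative values
    for rtt in range(12):
        signaled = cwnd > capacity            # crude binary AQM feedback
        cwnd = cwnd / 2 if signaled else cwnd + 1.0
        print(f"rtt {rtt:2d}: cwnd {cwnd:5.1f}" + (" (backed off)" if signaled else ""))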
>
> I do not include DCTCP in the deployed state of the art, because it is not
> deployable in the RFC-compliant Internet; it is effectively incompatible
> with Codel in particular, because it wrongly interprets CE marks and is
> thus noncompliant with the ECN RFC.
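
The incompatibility is easy to see in DCTCP's window update (a sketch after
RFC 8257; G is the EWMA gain the spec suggests). It scales the backoff by the
fraction of CE-marked packets, whereas RFC 3168 requires a sender to react to
any CE mark as it would to a loss:

    G = 1.0 / 16        # EWMA gain suggested in RFC 8257

    def dctcp_update(alpha, marked, acked, cwnd):
        frac = marked / acked                 # fraction of CE-marked packets
        alpha = (1 - G) * alpha + G * frac    # smoothed congestion estimate
        return alpha, cwnd * (1 - alpha / 2)  # back off in proportion to alpha

    # One mark in a hundred ACKs barely dents cwnd, where an RFC 3168
    # sender would have halved it.
    print(dctcp_update(alpha=0.0, marked=1, acked=100, cwnd=100.0))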
>
> However, I agree with DCTCP's goal of achieving finer-grained control of
> the cwnd, through AQMs providing more nuanced information about the state
> of the path capacity and/or bottleneck queue.  An implementation that made
> use of ECT(1) instead of changing the meaning of CE marks would remain
> RFC-compliant, and could get "sufficiently close" to the ideal described
> above.
>
>  - Jonathan Morton
>
>