[Bloat] quick review and rant of "Identifying and Handling Non Queue Building Flows in a Bottleneck Link"

Toke Høiland-Jørgensen toke at toke.dk
Thu Nov 1 09:25:01 EDT 2018


Greg White <g.white at CableLabs.com> writes:

> Hi Toke, thanks for the pointer to your paper, I had not seen it
> before.

You're welcome :)

> I agree that could be a candidate algorithm. It is certainly simple.
> It may not be the only (or perhaps even the best) solution for the
> dual-queue case though. I'm thinking in the context of an L4S TCP
> flow, which can respond quickly to "new" ECN markings and achieve link
> saturation with ultra low (but not zero) queuing delay. A good
> property for a queue protection algorithm would be that these L4S
> flows could be consistently placed in the NQB queue. I think that the
> simple approach you mentioned would result in many L4S flows being
> deemed Queue Building.

Yes, I think you are right (depending on traffic mix, of course). It
might be possible to tweak it to work better, though. E.g., by changing
the threshold (moving flows to QB if they end up with more than X ms of
queue). This would only work if you start out all flows at NQB, with the
associated aggressive marking behaviour; so I'm not sure if a normal TCP
flow would ever manage to get to QB state before getting clobbered by
the NQB markings...

> Additionally, we've observed applications that send variable sized
> "messages" at a fixed rate (e.g. 20 messages/second) where the message
> sometimes exceeds the MTU and results in two closely spaced (possibly
> back-to-back) packets. This is a flow that I think should be
> considered to be NQB, but would get flagged as QB by the simple
> approach. You described this case in your paper, where you state that
> the first Q bytes of each burst will be treated as NQB (the first
> packet in the case we're talking about here), but the rest will be
> treated as QB. Assuming that message latency is important for these
> sorts of applications, this is equivalent to saying that the entire
> burst is considered as QB. In the fq_codel case, the message latency
> would be something like Q(n-1)(N+1)/R (assuming no other sparse flow
> arrivals), i.e. roughly 1.3 ms using the example values in your paper
> (plus n=2, N=10), which may be ok. In the dual-queue case it is a
> bigger deal, because the remaining packets would be put at the end of
> the QB queue, which could have a latency of 10 or 20 ms.

Sure, it's by no means a perfect mechanism. But it makes up for that
with its simplicity, IMO. And it does work really well for *a lot* of
today's latency-sensitive traffic.

(In your case of two-MTU messages, you could tune the quantum to allow
those; but of course you can construct examples that won't work).
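
For reference, plugging in numbers makes that 1.3 ms figure easy to
check; I'm assuming a quantum of Q = 1514 bytes and a 100 Mbit/s link
as the example values here (which is what lands you at ~1.3 ms):

# Back-of-the-envelope check of the quoted Q(n-1)(N+1)/R message latency.
# Q = quantum (assumed 1514 bytes), R = link rate (assumed 100 Mbit/s),
# n = packets per message, N = competing backlogged flows.
Q = 1514      # bytes
R = 100e6     # bits per second
n = 2         # packets per message
N = 10        # backlogged flows
latency_ms = Q * 8 * (n - 1) * (N + 1) / R * 1000
print(latency_ms)   # ~1.33 ms

In this case "tuning the quantum" would just mean setting it large
enough to cover two full-size packets (2 * 1514 = 3028 bytes), so the
whole message fits within one quantum's worth of NQB treatment.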

> So, a queue protection function that provides a bit more (but still
> limited) allowance for a flow to have packets in queue would likely
> work better in the dual-queue case.

Yeah, that's tricky, especially if you want it to be very accurate in
its distinction, which I sort of gather that you do, right?
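
One direction might be some kind of per-flow queuing score with a small
allowance that drains over time; a very rough, untested sketch of that
idea (all names and constants below are made up for illustration, and
this is not meant to be anyone's actual queue protection algorithm):

# Hypothetical sketch: each flow accumulates the queueing delay its
# packets experience; the score ages out over time, and the flow is
# only treated as QB once the score exceeds an allowance.

ALLOWANCE_MS = 2.0      # accumulated queueing delay a flow may build up (assumed)
DRAIN_MS_PER_S = 50.0   # how fast the score ages out (assumed)

class FlowScore:
    def __init__(self):
        self.score_ms = 0.0
        self.last_update_s = 0.0

def on_packet(flow, now_s, pkt_queue_delay_ms):
    """Classify the flow as 'NQB' or 'QB' for this packet."""
    elapsed = now_s - flow.last_update_s
    flow.last_update_s = now_s
    flow.score_ms = max(0.0, flow.score_ms - DRAIN_MS_PER_S * elapsed)
    flow.score_ms += pkt_queue_delay_ms
    return "QB" if flow.score_ms > ALLOWANCE_MS else "NQB"

But picking the allowance and drain rate so that L4S flows reliably stay
NQB without letting regular TCP flows camp in the low-latency queue
seems to be exactly where the "very accurate" part gets hard.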

-Toke


