[Bloat] Bloat done correctly?

Benjamin Cronce bcronce at gmail.com
Fri Jun 12 17:19:07 EDT 2015


> Hi Jonathan,
>
> On June 12, 2015 9:14:02 PM GMT+02:00, Jonathan Morton <chromatix99 at
> gmail.com> wrote:
> >We have a test in Flent which tries to exercise this case: 50 flows in
> >one direction and 1 in the other, all TCP. Where the 50 flows are on the
> >narrow side of an asymmetric link, it is possible to see just what
> >happens when there isn't enough bandwidth for the acks of the single
> >opposing flow.
> >
> >What I see is that acks behave like an unresponsive flow in themselves,
> >but one that is reasonably tolerant to loss (more so than to delay). On a
> >standard AQM, the many flows end up yielding to the acks; on a
> >flow-isolating AQM, the acks are restricted to a fair (1/51) share, but
> >enough of them are dropped to (eventually) let the opposing flow get most
> >of the available bandwidth on its side. But on an FQ without AQM, acks
> >don't get dropped so they get delayed instead, and the opposing flow will
> >be ack clocked to a limited bandwidth until the ack queue overflows.
> >
> >Cake ends up causing odd behaviour this way. I have a suspicion about why
> >one of the weirder effects shows up - it has to get so aggressive about
> >dropping acks that the count variable for that queue wraps around.
> >Implementing saturating arithmetic there might help.
> >
> >There is a proposed TCP extension for ack congestion control, which
> >allows the ack ratio to be varied in response to ack losses. This would
> >be a cleaner way to achieve the same effect, and would allow enabling
> >ECN on the acks, but it's highly experimental.
>
>        This is reducing the ACK rate to make losses less likely, but at
> the same time it makes a single loss more costly, so whether this is a
> win depends on whether the sparser ACK flow has a much higher probability
> of passing through the congested link. I wonder what percentage of an ACK
> flow can be dropped without slowing the sender?
>
> >
> >- Jonathan Morton
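Jonathan's saturating-arithmetic suggestion is straightforward to sketch. Assuming the per-queue count is a fixed-width counter (the width and name below are hypothetical, not taken from cake's source), clamping instead of wrapping would look like:

```python
# Hedged sketch: a saturating increment for a fixed-width per-queue
# drop counter.  COUNT_MAX stands in for whatever width the real
# count variable has (hypothetical here; cake's actual field may differ).
COUNT_MAX = 0xFFFF  # e.g. a 16-bit counter

def saturating_increment(count):
    """Increment, but clamp at COUNT_MAX instead of wrapping to zero."""
    return count if count >= COUNT_MAX else count + 1
```

The point is that once the queue has been dropping acks aggressively enough to reach the ceiling, staying pinned there is far less surprising than wrapping back to a near-zero count.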

I also question the general usefulness of sparser ACK flows just to
accommodate hyper-asymmetrical connections. The main causes of these issues
are old DSL technologies and DOCSIS rollouts that haven't fully switched to
DOCSIS 3.0. Many cable companies claim to be DOCSIS 3.0 because their
downstream is 3.0, but their upstream is still 2.0, because 3.0 requires
old line filters to be replaced with newer ones that open up some
low-frequency channels. Once those channels are opened up, assuming fiber
doesn't get there first, there will be a lot more upstream bandwidth
available, assuming the ISPs provision it.
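A rough back-of-envelope (my numbers, not anything from the thread) shows why the asymmetry pinches the ACK path:

```python
# Back-of-envelope: upstream bandwidth consumed by ACKs for a given
# downstream TCP rate, assuming one ACK per two 1448-byte segments
# (standard delayed-ACK behaviour) and roughly 66 bytes on the wire
# per ACK (40 B TCP/IP headers plus Ethernet framing).  All numbers
# are illustrative assumptions, not measurements.
def ack_upstream_bps(downstream_bps, mss=1448, ack_bytes=66, segs_per_ack=2):
    segments_per_sec = downstream_bps / 8 / mss
    acks_per_sec = segments_per_sec / segs_per_ack
    return acks_per_sec * ack_bytes * 8
```

By this estimate a 100 Mbit/s download generates on the order of 2.3 Mbit/s of ACK traffic, which is painful on a 1 Mbit/s DSL or DOCSIS 2.0 upstream.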

Modern OSes already allow Nagle to be toggled on or off for a given TCP
flow, and the mechanism that actually coalesces ACKs, delayed ACK, can be
tuned as well. Maybe the algorithm could be changed to detect TCP RTTs
above a threshold, or certain patterns in lost ACKs, and trigger an
increase in the number of segments coalesced per ACK. My understanding of
delayed ACK is that it takes two parameters: a time window and a maximum
segment count. I think the default window is something like 100ms and the
default is to ACK every second segment, but either value can be modified.
So technically ACK rates can already be reduced; it's just not done
dynamically, but the feature exists. Instead of making further changes to
TCP, why not educate people on how to change their TCP settings?
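For reference, the per-socket knobs that exist today: TCP_NODELAY toggles Nagle on the send side, and on Linux TCP_QUICKACK temporarily disables delayed ACK (the stack re-enables it on its own). A minimal sketch:

```python
import socket

# Sketch of the per-flow knobs that exist today.  TCP_NODELAY disables
# Nagle's coalescing of small outgoing segments; TCP_QUICKACK
# (Linux-only, and reset by the stack after use) disables delayed ACK
# for the ACKs that follow.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
if hasattr(socket, "TCP_QUICKACK"):  # not available on every platform
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```

Note that neither knob lets you coalesce *more* ACKs per window, which is what a dynamic scheme would need; Windows exposes that via a registry setting (TcpAckFrequency), but there's no portable per-socket API for it.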

I could see this causing strange issues at really low RTTs, where the RTT
is lower than the delayed-ACK window, combined with a small receive window.
Because TCP implementations allow a minimum of two outstanding segments,
that matches the delayed-ACK default of coalescing two ACKs. If you
suddenly decided to combine three segments per ACK, two segments get sent,
the receiver gets them but does not ACK because it's waiting for a third,
and the sender does not send any more segments because it's waiting for an
ACK. You suddenly get these strange pulses paced by the delayed-ACK timer.

Because the delayed-ACK coalescing count matches the minimum number of
outstanding segments, this corner case does not exist today. But I'll leave
this to people more knowledgeable than me; just thinking out loud here.