[Bloat] Solving bufferbloat with TCP using packet delay

Maarten de Vries maarten at de-vri.es
Tue Mar 26 09:10:15 EDT 2013


I won't deny that there are problems with delay-based congestion control,
but at least some of the same problems also apply to AQM.

In the presence of greedy UDP flows, AQM is heavily biased in favour of
those flows. Sure, the more packets a flow sends, the more of its packets
get dropped. But only TCP will lower its congestion window; the greedy UDP
flow will happily continue filling the buffer. The effect is that the UDP
flow ends up with most of the available bandwidth. Yes, the queue will be
shorter, so the TCP flow will see a lower delay, but it will also see a
much lower throughput.
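
To make that concrete, here is a toy discrete-time simulation (all the
constants are made up for illustration, and a plain drop-tail buffer
stands in for the drop signal; a real AQM would drop earlier, but the
asymmetry between the two flows is the same). One AIMD flow that reacts
to loss shares a bottleneck with a constant-rate greedy flow:

import random

random.seed(1)  # deterministic run

QUEUE_LIMIT = 100   # bottleneck buffer size, in packets (made up)
LINK_RATE = 50      # packets the link drains per tick (made up)
UDP_RATE = 80       # greedy flow: blasts 80 packets every tick, no matter what
TICKS = 1000

cwnd = 10.0                        # AIMD flow's window, in packets per tick
queue = 0
delivered = {"tcp": 0, "udp": 0}

for _ in range(TICKS):
    arrivals = ["tcp"] * int(cwnd) + ["udp"] * UDP_RATE
    random.shuffle(arrivals)       # interleave the two flows' packets
    tcp_lost = False
    for flow in arrivals:
        if queue < QUEUE_LIMIT:
            queue += 1
            delivered[flow] += 1   # enqueued packets eventually drain out
        elif flow == "tcp":
            tcp_lost = True        # only the TCP flow hears the loss signal
    queue = max(0, queue - LINK_RATE)
    # AIMD reaction for the TCP flow; the UDP flow never backs off.
    cwnd = max(1.0, cwnd / 2) if tcp_lost else cwnd + 1.0

total = delivered["tcp"] + delivered["udp"]
print("TCP share: %.0f%%, UDP share: %.0f%%"
      % (100.0 * delivered["tcp"] / total, 100.0 * delivered["udp"] / total))

With these numbers the reactive flow ends up with a few percent of the
link and the greedy flow takes the rest. Swapping the loss reaction for a
delay reaction doesn't change the outcome; it only changes which signal
the greedy flow gets to ignore.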

Of course, the same applies to delay-based congestion control: the greedy
flow will still take most of the bandwidth, and in addition the delay will
be higher. The point remains that neither AQM nor delay-based congestion
control can provide a fair outcome when greedy or stupid flows are
present. Unless, of course, there are multiple queues for the different
types of flows, but yeah, where's that pink pony? Let's make it a
unicorn while we're at it.
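
For what it's worth, the mechanics of the queue-per-class idea are
simple; the pink pony part is the classifier. A minimal sketch (Python
again, hypothetical names, nothing taken from any real qdisc):

from collections import deque

LIMIT_PER_QUEUE = 50   # illustrative per-class buffer limit

# Two-class bottleneck: packets are classified by flow type and the
# classes are served round-robin, so a greedy class can only overflow
# (and delay) its own queue.
queues = {"delay_based": deque(), "loss_based": deque()}

def enqueue(flow_type, packet):
    q = queues[flow_type]
    if len(q) < LIMIT_PER_QUEUE:
        q.append(packet)
        return True
    return False               # tail drop, isolated to this class

def drain(budget):
    # Transmit up to `budget` packets, alternating between the classes.
    sent = []
    while budget > 0 and any(queues.values()):
        for q in queues.values():
            if q and budget > 0:
                sent.append(q.popleft())
                budget -= 1
    return sent

The open question, of course, is how the bottleneck is supposed to tell a
delay-based flow from a loss-based one in the first place.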

And yes, there are many more endpoints than switches/routers. But I
imagine they are also much more homogeneous. I could be wrong about this,
since I don't know much about the network equipment used by ISPs. Either
way, it seems to me that most endpoints run Windows, Linux (or Android),
a BSD variant, or something made by Apple. And I would imagine that most
embedded systems (the remaining endpoints that don't run a consumer OS)
aren't connected directly to the internet and can't wreak much havoc.
Consumer operating systems are regularly updated as it is, so sneaking in
a new TCP variant should be *relatively* easy. Again, I might be wrong,
but these are just my thoughts.

In short, I'm not saying there are no problems; I'm saying the idea
shouldn't be dismissed as ineffective too quickly.

Kind regards,
Maarten de Vries



>> On Thu, 21 Mar 2013 07:21:52 +1100
>> grenville armitage <garmitage at swin.edu.au> wrote:
>>
>> >
>> >
>> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
>> > > Hello Everyone,
>> > >
>> > > For a project for the Delft Technical University, 3 students and I
>> > > are writing a review paper on the bufferbloat problem and its
>> > > possible solutions.
>> >
>> > My colleagues have been dabbling with delay-based CC algorithms,
>> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
>> > if that's of any interest.
>> >
>> > Some thoughts:
>> >
>> >   - When delay-based TCPs share bottlenecks with loss-based TCPs,
>> >       the delay-based TCPs are punished. Hard. They back-off,
>> >       as queuing delay builds, while the loss-based flow(s)
>> >       blissfully continue to push the queue to full (drop).
>> >       Everyone sharing the bottleneck sees latency fluctuations,
>> >       bounded by the bottleneck queue's effective 'length' (set
>> >       by physical RAM limits or operator-configurable threshold).
>> >
>> >   - The previous point suggests perhaps a hybrid TCP which uses
>> >       delay-based control, but switches (briefly) to loss-based
>> >       control if it detects the bottleneck queue is being
>> >       hammered by other, loss-based TCP flows. Challenging
>> >       questions arise as to what triggers switching between
>> >       delay-based and loss-based modes.
>> >
>> >   - Reducing a buffer's length requires meddling with the
>> >       bottleneck(s) (new firmware or new devices). Deploying
>> >       delay-based TCPs requires meddling with endpoints (OS
>> >       upgrade/patch). Many more of the latter than the former.
>> >
>> >   - But yes, in self-contained networks where the end hosts can all
>> >       be told to run a delay-based CC algorithm, delay-based CC
>> >       can mitigate the impact of bloated buffers in your bottleneck
>> >       network devices. Such homogeneous environments do exist, but
>> >       the Internet is quite different.
>> >
>> >   - Alternatively, if one could classify delay-based CC flows into one
>> >       queue and loss-based CC flows into another queue at each
>> >       bottleneck, the first point above might not be such a problem.
>> >       I also want a pink pony ;)  (Of course, once we're considering
>> >       tweaking the bottlenecks with classifiers and multiple queues, we
>> >       might as well continue the surgery and reduce the bloated buffers too.)
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>