[Bloat] Solving bufferbloat with TCP using packet delay

Hagen Paul Pfeifer hagen at jauu.net
Thu Mar 21 04:26:32 EDT 2013


* Stephen Hemminger | 2013-03-20 16:16:22 [-0700]:

>Everyone has to go through the phase of thinking
>  "it can't be that hard, I can invent a better TCP congestion algorithm" 
>But it is hard, and the delay based algorithms are fundamentally
>flawed because they see reverse path delay and cross traffic as false
>positives.  The hybrid ones all fall back to loss under "interesting
>times" so they really don't buy much.
>
>Really not convinced that Bufferbloat will be solved by TCP.
>You can make a TCP algorithm that causes worse latency than Cubic or Reno
>very easily. But doing better is hard, especially since TCP really
>can't assume much about its underlying network. There may be random
>delays and packet loss (wireless), there may be spikes in RTT, and
>sessions may be long or short lived. And you can't assume the whole
>world is running your algorithm.

+1, plus: bufferbloat is a queue problem (say, at the link layer), so the right
way is to address it at that level. Sure, the network and transport layers are
involved and are a key factor. But a pure (probably delay-based) TCP congestion
control solution does not solve the problem: we also have to deal with greedy
UDP applications (in an ideal world, DCCP) as well.
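
To make that concrete, here is a minimal sketch of fixing it at the queue
level: swap the root qdisc of the bottleneck interface for fq_codel, so the
queue itself keeps standing delay low instead of hoping the transport layer
infers it from end-to-end delay. This assumes a Linux host with the fq_codel
qdisc available and root privileges; "eth0" is only a placeholder interface
name.

#!/usr/bin/env python3
# Sketch: install fq_codel as the root qdisc of the bottleneck interface.
import subprocess

IFACE = "eth0"  # hypothetical bottleneck interface

# "tc qdisc replace" installs fq_codel, or swaps it in for whatever root
# qdisc is currently attached, so queueing delay is bounded at the queue
# itself rather than by any particular TCP congestion control.
subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "fq_codel"],
               check=True)

# Show what is installed now.
subprocess.run(["tc", "qdisc", "show", "dev", IFACE], check=True)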

Imagine a pure UDP setup: one greedy UDP application (a media stream), and now
try to ping a host through the same bottleneck. You will experience exactly the
same bufferbloat problems.
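
As an illustration (not a benchmark), such a greedy sender can be as simple as
the sketch below: it pushes datagrams at a fixed rate with no feedback at all,
the way a naive media streamer might. Run it towards a host behind a slow link
and ping that host in parallel; the RTT climbs as the bottleneck buffer fills,
independent of the TCP congestion control in use. The address, port and rate
are hypothetical example values.

#!/usr/bin/env python3
# Sketch of a "greedy" UDP sender: constant bitrate, no congestion feedback.
import socket
import time

DEST = ("192.0.2.1", 5000)  # TEST-NET example address; replace with a real peer
PKT = b"\x00" * 1200        # roughly media-sized datagram
RATE_MBIT = 50              # deliberately above the bottleneck rate
INTERVAL = (len(PKT) * 8) / (RATE_MBIT * 1_000_000)  # seconds between packets

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
next_send = time.monotonic()
while True:
    sock.sendto(PKT, DEST)  # fire and forget: no ACKs, no back-off
    next_send += INTERVAL
    delay = next_send - time.monotonic()
    if delay > 0:
        time.sleep(delay)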

Hagen

-- 
http://protocollabs.com


