[Bloat] The wireless problem in a nutshell

Dave Täht d at taht.net
Sun Feb 6 09:41:59 PST 2011


Packet loss on wireless is *bursty* and causes TCP congestion-window
resets all over the place. Losing three ACKs in a row is commonplace.

With a long Round Trip Time (RTT), such as from your house to YouTube,
you can only rarely get anywhere close to good throughput over
wireless. Real packet loss rates can go as high as 100%. [1]

Even with a moderately lossy connection (3%), you can end up almost
permanently in slow start.
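
The well-known Mathis et al. estimate makes the arithmetic concrete (a
back-of-the-envelope sketch - the MSS and loss rate below are
illustrative assumptions, not measurements):

    throughput <= (MSS / RTT) * (1 / sqrt(p))

    MSS = 1460 bytes, p = 3% loss:
      RTT = 70ms: (1460*8 / 0.070) * (1/0.173) ~= 960 kbit/s
      RTT =  2ms: (1460*8 / 0.002) * (1/0.173) ~= 34 Mbit/s

Same loss rate, 35x the throughput, purely from shortening the RTT.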

This is why my original (1998)[2] (and current) wireless architecture
always includes a proxy like squid or polipo at the last mile/household,
between the wire and the wireless.
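
On a Linux gateway that can be as small as this (a minimal sketch - the
address, netblock and cache path are placeholders for a typical home
router, not a recommendation):

    # /etc/polipo/config - listen on the LAN side of the gateway
    proxyAddress = "192.168.1.1"      # the wired/wireless boundary
    proxyPort = 8123                  # polipo's default port
    allowedClients = 192.168.1.0/24   # the wireless LAN
    diskCacheRoot = "/var/cache/polipo/"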

A congestion-window reset doesn't hurt you if your RTT is 2ms. You ramp
back up quickly on either side of the wireless connection.

In the piece that I'm still struggling to write, I call this concept
the long “U” - where the U describes the huge amounts of available
bandwidth on either side of the choke point - usually at the home or
business gateway.

People think "oh - adding a cache is what you are doing with a proxy" -
yes, caching helps, but what you are also doing is dividing the TCP
stream into two pieces - the wireless piece is VERY short - and
congestion control then works correctly, using existing techniques, on
both the wired and wireless portions of the connection.
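
You can see the splitting effect with nothing fancier than socat (an
illustrative sketch - origin.example.com is a placeholder): each client
connection terminates at the relay, which opens a separate upstream
connection, so each leg runs its own congestion control over its own
RTT.

    # terminate TCP locally, open a fresh connection upstream per client
    socat TCP-LISTEN:8080,fork,reuseaddr TCP:origin.example.com:80

A real proxy like squid or polipo does the same splitting, plus caching
and the HTTP smarts.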

The problem is, I never noticed until recently that everyone (else) was
trying to make long RTT paths work over wireless allllll the way across
the Internet!

And... to compensate, they were adding sufficient buffering inside the
wireless device AND retries - to get around real, local packet loss
that can take out hundreds of packets all at once, in a burst.

Which, as we all now know, clobbers latency.

That's the wireless problem in a nutshell[3]. 

If your local path is only 2ms, wireless + any given TCP algorithm
recovers from a packet loss burst - GREAT. There is no confusion between
wireless interference and congestion on the LFN (Long Fat Network). You
have plenty of bandwidth on both sides of the connection to recover
with. The proxy smooths out the traffic on the wired AND wireless sides.

RTT of 70ms... not so much. International links, forget about it. [4]

One positive aspect of this is that many routers support proxying -
polipo is common. An even more positive aspect is that supplying a proxy
to browsers is well supported, via both DHCP and the WPAD
standard. (WPAD will even work over IPv6, and polipo can function as an
IPv4/IPv6 ALG gateway.)
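
For example, with dnsmasq as the DHCP server (a sketch - the addresses
and port are assumptions, and the PAC stanza is the format the WPAD
standard mandates):

    # /etc/dnsmasq.conf - advertise the autoconfig URL (DHCP option 252)
    dhcp-option=252,"http://192.168.1.1/wpad.dat"

    # wpad.dat, served from that URL:
    function FindProxyForURL(url, host) {
        return "PROXY 192.168.1.1:8123";
    }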

Most overseas providers already use some level of transparent proxying. 

It doesn't help the scp upload problem much - but there are ssh proxies
out there too.
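
For interactive ssh there's a workaround: tunnel through the local
proxy with HTTP CONNECT (a sketch - the host names are placeholders,
and the proxy must be configured to allow CONNECT to port 22):

    # uses the BSD netcat's HTTP CONNECT support
    ssh -o ProxyCommand='nc -X connect -x 192.168.1.1:8123 %h %p' \
        user@far.example.com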

I personally regard the wireless packet loss burst problem as completely
intractable for long TCP links without using proxies - or insane amounts
of buffering.

The irony?

If everybody were to go and turn up a web proxy tomorrow - we'd make
bufferbloat vastly worse on both sides of the connection.

On the other hand, proxies make the problem tractable on both sides of
the connection - the wired side can apply ECN/SACK/DSACK, AQM, and
QoS - and the wireless side can do lots of stuff from the MAC layer on
up, including the same techniques, on its side.
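
On a Linux gateway, the wired-side knobs look roughly like this (a
sketch of what to check, not a tuning guide - eth0 is an assumption):

    # negotiate ECN; make sure SACK/DSACK are on (they usually are)
    sysctl -w net.ipv4.tcp_ecn=1
    sysctl -w net.ipv4.tcp_sack=1
    sysctl -w net.ipv4.tcp_dsack=1

    # a simple fair-queueing qdisc on the wired interface
    tc qdisc add dev eth0 root sfq perturb 10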

Once you have a much smaller RTT, the wireless problem gets much simpler.

If there are those on this list who are not running a proxy, try one
out - you'll be amazed.
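
It costs nothing to test (substitute your gateway's address and your
proxy's port - the values below are placeholders):

    # point one shell's HTTP traffic at the proxy and compare
    export http_proxy=http://192.168.1.1:8123/
    wget -O /dev/null http://example.com/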

-- 
Dave Taht
http://nex-6.taht.net

[1] There are large numbers of compensation mechanisms from the MAC
layer up that I'm not going to go into. They universally induce latency.
[2] http://the-edge.blogspot.com/2010/10/who-invented-embedded-linux-based.html

We never documented the use of proxies. At the time, it was obvious -
everybody running a small business or home was using web proxies.

[3] Long-distance wireless links and mesh networks have similar, but
different, problems.

[4] Bursty packet loss doesn't affect non-TCP traffic as badly - UDP
VoIP, in particular, doesn't have the same issue.
