[Bloat] The wireless problem in a nutshell

Dave Täht d at taht.net
Fri Feb 11 06:50:38 PST 2011


d at taht.net (Dave Täht) writes:

> Juliusz Chroboczek <Juliusz.Chroboczek at pps.jussieu.fr> writes:
>
>>> what you are also doing is dividing the TCP streams into two pieces -
>>> the wireless piece is VERY short - and congestion control then works
>>> correctly using existing techniques on the wired and unwired portions
>>> of the connection.
>>
>> You've reinvented a useful technique, called "split-TCP".  It's commonly
>> used by the "WAN accelerators" that you can buy in order to get Microsoft
>> protocols working across the Internet.
>
> I'd been using this technique since 1996 or so, when I was working on a
> hybrid wireless broadcast system (on channel 13), with modem uplinks. At
> the time it was more "oh, proxies are useful to smooth and shorten the
> path" than any great revelation - and I have used the technique
> everywhere since.
>
> I'm pretty sure it existed before then, but it has special applicability
> to wireless. 

"Socks" - which appeared sometime before 1992 [1] - counts as "before
then".

I've put some thought into the history of my use of proxies over
wireless - re-analyzing something that I'd incorporated into my gut back
in 1996.

The "split TCP" smoothing effect on the last mile I'd noticed *then* was
minor; the time savings were dominated by the latencies in the modem of
100ms or so. [2]

The MosquitoNet research[3] had clearly shown the disastrous effects of
un-error-corrected wireless on TCP.

When we did the wireless router thing in '98, latencies over our 12km
links were in the 2-12ms range, and I specifically chose a newer
technology - the first almost-standardized version of 802.11 - because
it did a low level of link-layer error detection/correction.

Even then, we regarded 1-3% packet loss as acceptable.

Today, with 802.11n, we're trying to shove 6x as much data over the air
in the same timeslot. Bursty packet loss is a disaster for a device
advertised as having 300Mbit speeds across the internet. Thus: seconds
of buffering, attempts at QoS to channel the more timely packets, and a
net result of regularly moving users - in light-travel terms - past the
moon and back again.

AND - in the case of using a proxy instead - the 802.11 wireless
problem is now confined to the last 30 meters, the RTT is less than
1ms, and the smoothing effect is FAR more noticeable. Or it would be,
if the wireless device makers hadn't already added absurd amounts of
buffering, and if we had decent AQM.

It's amazing how changes in certain constants make certain formulas
more compelling. Let's divide C by 100, shall we? What happens?
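
Read C as either term of the bandwidth-delay product, B = capacity x
RTT: divide either one by 100 and the defensible buffer size drops by
100 with it. With the numbers already in this thread (300Mbit; 100ms of
modem-era latency vs. the ~1ms last 30 meters):

    # Buffers at a bottleneck are classically sized to the
    # bandwidth-delay product, B = rate * RTT.
    rate = 300e6 / 8            # the advertised 300Mbit, in bytes/sec
    for rtt in (0.100, 0.001):  # modem-era 100ms vs. ~1ms last 30 meters
        bdp = rate * rtt
        print("RTT %5.1f ms -> BDP %7.0f bytes (~%4.0f 1500-byte packets)"
              % (rtt * 1e3, bdp, bdp / 1500))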

I ran across Stuart Cheshire's work 4 times while time traveling.

Ruckus wireless has bloat[4] - in 2007 he was seeing 2.3 second
wireless latencies.

There's also a more formal version of his wonderful "It's the Latency,
Stupid" rant, with some good analogies[5]. To quote from his conclusion:

“As long as customers think that what they want is more throughput, and
 they don't care about latency, modem makers will continue to make design
 decisions that trade off worse latency for better throughput. 

 Modems are not the only problem here. In the near future we can expect
 to see big growth in areas such as ISDN, cable tv modems, ADSL modems
 and even wireless 'modems', all offering increases in bandwidth. If we
 don't also concentrate on improved latency, we're not going to get it.

 [Elided] If we don't start caring about latency, we're going to find
 ourselves in a marketplace offering nothing but appallingly useless
 hardware.”

Written in 1996. Ah, irony, before my first cup of coffee.

Things that concern me:

1) Wireless packet aggregation is now becoming more common, leading to
bursty behavior for short packets. What sorts of packets are being
aggregated? How much buffering is in place? I worry.

2) I theorize that starting 8 TCP connections in a browser, rather
than 2, also "smooths" out bursty packet loss on the wireless last hop
(toy arithmetic after this list).

3) Wild speculation: the packet loss that in-home wireless networks
have been experiencing has been a significant factor in keeping the
internet operational.

4) I am very encouraged by the various standards for enabling web
proxies by default: dnsmasq can supply one via DHCP, and browsers still
look for WPAD (a config sketch follows this list).
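
On point 2, toy arithmetic (my simplifying assumption: a loss burst
hits exactly one flow, halving that flow's congestion window):

    # With N parallel TCP flows, a burst that halves one flow's window
    # costs the aggregate roughly 1/(2N) of its rate, rather than the
    # 1/2 a lone flow would lose.  Purely illustrative.
    for n in (1, 2, 8):
        aggregate = (n - 1 + 0.5) / n   # N-1 untouched flows + 1 halved
        print("%d flow(s): aggregate rate after one burst ~ %.2f"
              % (n, aggregate))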
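
On point 4, the plumbing is small (host names here are placeholders,
not a recommendation): dnsmasq can hand out DHCP option 252, the de
facto WPAD pointer, and the PAC file it points at is a couple of lines
of JavaScript:

    # /etc/dnsmasq.conf - advertise a proxy autoconfig URL via DHCP
    dhcp-option=252,"http://gw.lan/wpad.dat"

    // wpad.dat - served from the gateway; clients fall back to DIRECT
    // if the proxy is unreachable
    function FindProxyForURL(url, host) {
        return "PROXY gw.lan:3128; DIRECT";
    }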

In combination with reduced buffer sizes, proxies, and AQM on the
gateway side, I think it would be possible to reduce the pressure on
the wireless client and AP makers to bloat their buffers in the first
place.
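
On Linux, the crude version of that is two commands today (interface
name and queue length are illustrative; SFQ is fair queuing rather
than true AQM, but it beats a kilopacket FIFO):

    # shrink the interface transmit queue (defaults are often 1000
    # packets; note this does not touch driver/firmware ring buffers)
    ip link set dev wlan0 txqueuelen 16
    # replace the default pfifo_fast qdisc with stochastic fair queuing
    tc qdisc replace dev wlan0 root sfq perturb 10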

What remains is to shift the market. [6]

I imagine that if, for example, Apple were to use these techniques on
their tightly integrated wireless gateways and Apple TV, it would
provide a competitive advantage.

The same goes for other manufacturers.

-- 
Dave Taht
http://nex-6.taht.net

[1] http://en.wikipedia.org/wiki/SOCKS
[2] Stuart Cheshire, Mary Baker, "Metricom wireless experiences",
    http://www.stuartcheshire.org/papers/wireless.ps
[3] The MosquitoNet project has vanished from the net.
[4] 2007, http://www.stuartcheshire.org/papers/Ruckus-WiFi-Evaluation.pdf
[5] Stuart Cheshire, "Latency and the Quest for Interactivity", 1996,
    http://www.stuartcheshire.org/papers/LatencyQuest.ps
[6] there is no rule 6

