From: Dave Taht
To: bloat
Date: Sun, 29 May 2011 07:23:24 -0600
Subject: [Bloat] tiny monsters: multicast packets

So after my experiments [1] yesterday with the wndr3700v2
hardware [2], I came away even more convinced that the wireless world and the wired world should not be bridged together.

All the AQMs out there assume that it takes the same period of time to deliver a packet of X bytes to the next hop. Wired links more or less honor that. Wireless breaks that assumption. It wasn't so bad back in the early days of 802.11b: 802.11b ran as fast as 11 Mbit, and multicast at 2, for a ratio of 5.5:1.

11g came around and runs at 54 Mbit, and (if you don't run in mixed mode, still supporting B) you can multicast at 6 Mbit. But nearly everyone does still run mixed mode, so multicast is stuck at 2 Mbit, for a ratio of 27:1 instead of a mere 9:1.

Now 11n has come around, and I shudder to think what the rate ratio of "normal" vs "multicast" packets does to the assumptions of the rest of the stack. So with just a little multicast going through your wireless network, any assumptions the higher-level portions of the stack might make are invalid. HTB? Hah, it uses fixed buckets. RED? A single multicast packet is a monster packet; how is it supposed to find it in the swamp?

Worse, most multicast and broadcast packets are statistically rare and needed for the network to actually continue to function. In my last 2 months of travel, I have seen multicast and broadcast packets, such as ARP, DHCP, MDNS, and now babel, all failing far, far, far more often than is desirable. I have seen DHCP fail completely for hours at a time; I've seen ARP take dozens of queries to resolve.

Next, it is trivial to trigger the symptoms of bufferbloat with a multicast stream.

Perhaps eBDP can handle multicast well, but certainly AQMs are going to have headaches that are difficult to solve at ratios between normal and multicast packets this poor.
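To put rough numbers on the airtime disparity above, here is a back-of-the-envelope sketch. It counts serialization time only; preambles, ACKs, contention, and retries all make the real gap worse:

```python
# Rough serialization airtime for a 1500-byte frame at various 802.11
# PHY rates. Ignores preambles, ACKs, and contention, so this is only
# a lower bound on the real disparity.
FRAME_BITS = 1500 * 8

def airtime_us(rate_mbps: float) -> float:
    """Microseconds to put FRAME_BITS on the air at the given rate."""
    return FRAME_BITS / rate_mbps  # Mbit/s == bits per microsecond

unicast_g = airtime_us(54)     # ~222 us at the top 11g rate
mcast_mixed = airtime_us(2)    # 6000 us at the 11b basic rate
print(round(mcast_mixed / unicast_g))  # -> 27
```

One multicast frame at the mixed-mode basic rate burns as much airtime as roughly 27 unicast frames at full 11g speed, which is exactly the disparity that wrecks any queue model assuming byte count is a proxy for transmission time.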
Lastly, most home router vendors bridge wired and wireless together, sort of like jamming together jet engines and VW bugs, and I finally broke them apart [3] to try to look at them separately, as even the switch is displaying 100+ ms of buffering when I slam it with 4 simultaneous iperf streams [4].

1: https://lists.bufferbloat.net/pipermail/bloat-devel/2011-May/000156.html
2: http://www.bufferbloat.net/projects/bismark/wiki/Capetown
3: http://www.bufferbloat.net/issues/186
4: http://www.bufferbloat.net/projects/bismark-testbed/wiki/Experiment_-_QoS

--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com