From: Dave Taht
To: "Bill Ver Steeg (versteb)"
Cc: cerowrt-devel@lists.bufferbloat.net, bloat
Date: Wed, 3 Sep 2014 17:33:00 -0700
Subject: Re: [Bloat] [Cerowrt-devel] Comcast upped service levels -> WNDR3800 can't cope...

On Wed, Sep 3, 2014 at 4:17 PM, Bill Ver Steeg (versteb) wrote:

> Speaking of IPv6 performance testing - in a recent FTTH field deployment, the network operator deployed an IPv6-only network and tunneled all subscriber IPv4 traffic over an IPv6 tunnel to the upstream network edge. It then unpacked the IPv4 traffic from the IPv6 tunnel and sent it on its merry way.

I tried to deploy IPv4-over-IPv6 encapsulation when I was in Nicaragua six years back (the alternative was triple NAT; IPv4 addresses were really scarce on the ground), and got beaten by the encapsulation overhead, poor performance, multiple bugs, and bufferbloat.

I figure most of that has improved since - in particular, I imagine their encapsulated traffic still presents a 1500-byte MTU to the IPv4 traffic? The original gear I had for the experiment could do a 2k MTU, which was very helpful in keeping the encapsulated IPv4 MTU at the 1500 bytes much of the internet expects, but a later version couldn't quite get past 1540 bytes without problems.
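To put numbers on that: plain IPv4-in-IPv6 encapsulation adds one 40-byte IPv6 header, which is exactly where the 1540 figure comes from. A minimal sketch of the arithmetic (assuming no extension headers or tunnel options):

    # Minimal sketch of 4over6 (IPv4-in-IPv6) MTU arithmetic, assuming a
    # single 40-byte IPv6 header and no extension headers.
    IPV6_HEADER = 40  # bytes

    def outer_size(inner_mtu: int) -> int:
        """Wire size of an encapsulated IPv4 packet of inner_mtu bytes."""
        return inner_mtu + IPV6_HEADER

    def inner_mtu(link_mtu: int) -> int:
        """Largest IPv4 packet that fits through a tunnel over link_mtu."""
        return link_mtu - IPV6_HEADER

    print(outer_size(1500))  # 1540: needs a >1500-byte MTU on the carrier link
    print(inner_mtu(1500))   # 1460: what IPv4 sees if the carrier stays at 1500

So gear that can pass 1540-byte (or 2k) frames on the carrier side preserves the 1500-byte IPv4 MTU end to end; a carrier path stuck at 1500 clamps the inner MTU to 1460, with the usual PMTUD and fragmentation headaches.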
> Long story short, the 4o6 tunneling code in the residential gateway was not nearly as performant as the IPv6 forwarding code. I actually got better IPv4 throughput running an IPv6 VPN on my end device, then sending my IPv4 traffic through that tunnel - thus avoiding the tunnel code on the gateway. If I recall correctly, the tunnel code capped out at about 20 Mbps and the IPv6 code went up to the 50 Mbps SLA rate. I stumbled into this while running some IPTV video tests alongside throughput benchmarks on my PC (with apparently pseudo-random results, until we figured out the various tunnels). Took me a while to figure it out. Delay also spiked when the gateway got bogged down...

I can believe it. I have seen many "bumps in the wire" do bad things when run past their limits. Notable were several PPPoE and PPPoA boxes. Older cable modems and last-generation access points are all going to have similar problems when hooked up at these higher speeds.

In the future, stuff that does this sort of tunneling or encapsulation, or that converts from one media type to another (say ethernet->cable, ethernet->gpon, etc.), may also run into it when the provider ups their access speeds from one band to another, as both Comcast and Verizon have.

This is, of course, both a problem and an opportunity: a problem because it will generate more support calls, and an opportunity to sell better gear into the marketplace as ISP speeds are upgraded. Some enterprising manufacturer could make a point of pitching their products as actually capable of modern transfer speeds on modern ISPs, doing benchmarks, etc.

Given the mass-delusional product naming in the home AP marketplace, where nearly every product is named and pitched above the base capability of the standards used, rather than the sordid reality, I don't think anything short of a Consumer Reports, or legal action, will result in sanity here. Gigabit "routers", indeed, when only the switch is capable of that! Nothing I've tried below 100 bucks can forward, well, at a gigabit, with a number of real-world firewall rules. Even x86 gear is kind of problematic thus far:

http://snapon.lab.bufferbloat.net/~cero2/nuc-to-puck/results.html

> More capable gateways were deployed in the latter stages of the deployment, and they seemed to keep up with the 50 Mbps SLA rate.

What was the measured latency under load?

> Bill Ver Steeg
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Dave Taht
> Sent: Wednesday, September 03, 2014 3:31 PM
> To: Sebastian Moeller
> Cc: cerowrt-devel@lists.bufferbloat.net; bloat
> Subject: Re: [Bloat] [Cerowrt-devel] Comcast upped service levels -> WNDR3800 can't cope...
>
> On Wed, Sep 3, 2014 at 12:22 PM, Sebastian Moeller wrote:
>> Hi Aaron,
>>
>> On Sep 3, 2014, at 17:12, Aaron Wood wrote:
>>
>>> On Wed, Sep 3, 2014 at 4:08 AM, Jonathan Morton wrote:
>>> Given that the CPU load is confirmed as high, the pcap probably isn't as useful. The rest would be interesting to look at.
>>>
>>> Are you able to test with smaller packet sizes? That might help to isolate packet-throughput (i.e. connection tracking) versus byte-throughput problems.
>>>
>>> - Jonathan Morton
>>>
>>> Doing another test setup will take a few days (maybe not until the weekend). But I can get the data uploaded, and do some preliminary crunching on it.
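As an aside on the smaller-packet point: connection tracking and firewall rules cost per packet, not per byte, so shrinking the packet multiplies the per-packet load at the same bit rate. A minimal sketch of that arithmetic (ignoring ethernet framing overhead):

    # Packets per second required to sustain a given bit rate at a fixed
    # packet size; ignores ethernet preamble/interframe-gap overhead.
    def pps(rate_mbps: float, packet_bytes: int) -> float:
        return rate_mbps * 1e6 / (8 * packet_bytes)

    for size in (1500, 256, 64):
        print(f"{size:5d}-byte packets at 50 Mbps: {pps(50, size):9.0f} pps")
    # 1500 bytes: ~4167 pps; 64 bytes: ~97656 pps - the same bit rate,
    # but ~23x the conntrack/firewall work per second.

That is why a small-packet run separates a pps ceiling from a byte-throughput ceiling.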
>> So the current SQM system allows shaping on multiple interfaces, so you could set up the shaper on se00 and test between sw10 and se00 (this should work if you reliably get a fast enough wifi connection; something like a combined shaped bandwidth <= 70% of the wifi rate should do). That would avoid the whole firewall and connection tracking logic.
>>
>> My home wifi environment is quite variable/noisy and not well-suited for this test: with rrul_be I got stuck at around 70 Mbps combined bandwidth, with different distributions of the up- and down-leg for no shaping, shaping to 50/10 Mbps, and shaping to 100/50 Mbps. SIRQ was pretty much pegged at 96-99% during all netperf-wrapper runs, so I assume this to be the bottleneck (the radio was in the >200 Mbps range during the test, with occasional drops to 150 Mbps). So my conclusion would be: it really is the shaping that is limited on my wndr3700v2 with cerowrt 3.10.50-1 - again, if I were confident about the measurement, which I am not (but EOUTOFTIME). That, or my RF environment might only allow for roughly 70-80 Mbps combined throughput. For what it is worth: tests were performed between a MacBook running Mac OS X 10.9.4 and an HP ProLiant N54L running 64-bit openSUSE 13.1, kernel 3.11.10-17 (AMD Turion with a tg3 gbit ethernet adapter, BQL enabled, running fq_codel on eth0), with shaping on the se00 interface.

A note on wifi throughput: CeroWrt routes, rather than bridges, between interfaces, so for simple benchmarks I would expect openwrt (which bridges) to show much better wifi<->ethernet behavior.

We route, rather than bridge, wifi because 1) it made it easier to debug, and 2) the theory is that multicast on busier networks messes up wifi far more than not-bridging slows it down. We have not accumulated a lot of proof of this, but this was kind of enlightening:

http://tools.ietf.org/html/draft-desmouceaux-ipv6-mcast-wifi-power-usage-00

I note that my regular benchmarking environment has mostly been 2 or more routers with NAT and firewalling disabled.

Given the trend towards looking at iptables and NAT overhead on this thread, an IPv6 benchmark on this box might be revealing. (A sketch of Sebastian's shape-on-se00 setup is appended below.)

>> Best Regards
>> Sebastian
>>
>>> -Aaron

--
Dave Täht

https://www.bufferbloat.net/projects/make-wifi-fast
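PS: a minimal sketch of the shape-on-se00 idea above, for reference - hypothetical rate, and a plain HTB + fq_codel egress shaper rather than the full SQM scripts:

    # Hypothetical sketch: egress shaper on the LAN interface (se00), so
    # sw10 <-> se00 test traffic bypasses the WAN firewall/NAT path.
    import subprocess

    def tc(cmd: str) -> None:
        """Run one tc command, raising on failure."""
        subprocess.run(["tc"] + cmd.split(), check=True)

    RATE = "50mbit"  # keep the combined shaped rate well under the wifi rate

    tc("qdisc replace dev se00 root handle 1: htb default 10")
    tc(f"class add dev se00 parent 1: classid 1:10 htb rate {RATE} ceil {RATE}")
    tc("qdisc add dev se00 parent 1:10 fq_codel")
    # This shapes only traffic flowing out of se00; mirror it on sw10 (or
    # use an ifb ingress shaper) to cover the other direction.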