From: Jonathan Morton
To: David Lang
Cc: bloat Mainlinglist
Date: Sun, 24 Aug 2014 09:26:33 +0300
Subject: Re: [Bloat] sigcomm wifi

On 24 Aug, 2014, at 8:12 am, David Lang wrote:

> On Sun, 24 Aug 2014, Jonathan Morton wrote:
>> I think multi-target MIMO is more useful than single-target MIMO for the congested case. It certainly helps that the client doesn't need to explicitly support MIMO for it to work.
>
> better yes, but at what price difference? :-)
>
> If the APs cost $1000 each instead of $100 each, you are better off with more of the cheaper APs.

...until you run out of channels to run them on. Then, if you still need more capacity, multi-target MIMO is probably still worth it. Hopefully, it won't be as much as a tenfold price difference.

>> However, I think there is a sliding scale on this. With the modern modulation schemes (and especially with wide channels), the handshake and preamble really are a lot of overhead. If you have the chance to triple your throughput for a 20% increase in channel occupation, you need a *really* good reason not to take it.
>
> if you can send 300% as much data in 120% the time, then the overhead of sending a single packet is huge (you spend _far_ more airtime on the overhead than on the packet itself)
>
> now, this may be true for small packets, which is why this should be configurable, and configurable in terms of data size, not packet count.
>
> by the way, the same effect happens on wired ethernet networks, see Jumbo Frames and the advantages of using them.
>
> the advantages are probably not 300% data in 120% time, but more like 300% data in 270% time, and at that point, the fact that you are 2.7x as likely to lose the packet to another transmission very quickly makes it the wrong thing to do.

The conditions are probably different in each direction. The AP is more likely to be sending large packets (DNS response, HTTP payload) while the client is more likely to send small packets (DNS request, TCP SYN, HTTP GET). The AP is also likely to want to aggregate a TCP SYN/ACK with another packet.

So yes, intelligence of some sort is needed. And I should probably look up just how big the handshake and preamble are in relative terms - but I do already know that under ideal conditions, recent wifi variants still get a remarkably small percentage of their theoretical data rate as actual throughput - and that's with big packets and aggregation. (I've sketched the rough arithmetic of David's numbers a little further down.)

>>> But even with that, doesn't TCP try to piggyback the ack on the next packet of data anyway? so unless it's a purely one-way dataflow, this still wouldn't help.
>>
>> Once established, a HTTP session looks exactly like that. I also see no reason in theory why a TCP ack couldn't be piggybacked on the *next* available link ack, which would relax the latency requirements considerably.
>
> I don't understand (or we are talking past each other again)
>
> laptop -- ap -- 50 hops -- server
>
> packets from the server to the laptop could have an ack piggybacked by the driver on the wifi link ack, but for packets in the other direction, the ap can't possibly know that the server will ever respond, so it can't reply with a TCP-level ack when it does the link-level ack.

Which is fine, because the bulk of the traffic will be from the AP to the client. Unless you're running servers wirelessly, which seems dumb, or you've got a bunch of journalists uploading copy and photos, which seems like a more reasonable use case.

But what I meant is that the TCP ack doesn't need to be piggybacked on the link-level ack for the same packet - it can go on a later one. Think VJ compression in PPP - there's a small lookup table which can be used to fill in some of the data.

>> If that were implemented and deployed successfully, it would mean that the majority of RTS/CTS handshakes initiated by clients would be to send DNS queries, TCP handshakes and HTTP request headers, all of which are actually important. It would, I think, typically reduce contention by a large margin.
>
> only if stand-alone ack packets are really a significant portion of the network traffic.

I think they are significant, in terms of the number of uncoordinated contentions for the channel.

Remember, the AP occupies a privileged position in the network. It transmits the bulk of the data, and the bulk of the number of individual packets. It knows when it's already busy itself, so the backoff algorithm never kicks in for the noise it makes itself. It can be a model citizen of the wireless spectrum.
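(An aside on David's numbers: here is the 300%-data-in-120%-or-270%-airtime trade-off as a few lines of Python, just so the assumptions are explicit. The data and airtime factors are the illustrative ones from this thread, and the 10% baseline collision chance is simply made up to stand in for a busy channel - this is a sketch, not a measurement.)

# Back-of-the-envelope for the aggregation trade-off discussed above.
# The 300%/120%/270% figures are the illustrative numbers from this
# thread; the 10% baseline collision chance is an assumption.

BASE_COLLISION = 0.10   # assumed chance that a 1x-airtime transmission collides

def goodput_per_airtime(data_factor, airtime_factor, base=BASE_COLLISION):
    # Assume collision probability grows roughly linearly with airtime,
    # and that a collision costs the whole (aggregated) transmission.
    p_collide = min(1.0, base * airtime_factor)
    return data_factor * (1.0 - p_collide) / airtime_factor

cases = [("no aggregation",         1.0, 1.0),
         ("300% data in 120% time", 3.0, 1.2),
         ("300% data in 270% time", 3.0, 2.7)]

for name, data, airtime in cases:
    print("%-24s goodput per unit airtime: %.2f" %
          (name, goodput_per_airtime(data, airtime)))

With those made-up numbers, tripling the data for 20% extra airtime is a clear win, while tripling it at 270% of the airtime actually loses once the channel is busy enough - which is David's point about when aggregation stops being worth it.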
By contrast, clients send much less on an individual basis, but they have to negotiate with the AP *and* every other client for airtime to do so. Every TCP ack disrupts the AP's flow of traffic. If the AP aggregates three HTTP payload packets into a single transmission, then it must expect to receive a TCP ack coming the other way - in other words, to be interrupted - for every such aggregate packet it has sent. The less often clients have to contend for the channel, the more time the AP can spend distributing its self-coordinated, useful traffic.

Let's suppose a typical HTTP payload is 45kB (including TCP/IP wrapping). That can be transmitted in 10 triples of 1500B packets. There would also be a DNS request and response, a TCP handshake (SYN, SYN/ACK), a HTTP request (ACK/GET), and a TCP close (FIN/ACK, ACK), which I'll assume can't be aggregated with other traffic, associated with the transaction.

So the AP must transmit 13 times to complete this small request. As things currently stand, the client must *also* transmit - 14 times. The wireless channel is therefore contended for 27 times, of which 10 (37%) are pure TCP acks that could piggyback on a subsequent link-layer ack.

I'd say 37% is significant, wouldn't you?

 - Jonathan Morton
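P.S. For anyone who wants to poke at the accounting above, here it is as a small Python sketch. The packet counts and the three-packet aggregation factor are the assumptions from this mail, not anything captured off the air.

# The 45kB HTTP transaction from above, with assumed (not measured) counts.
payload_packets = 30               # ~45kB of payload in 1500B packets
aggregate       = 3                # AP sends 3 payload packets per transmission

data_bursts = payload_packets // aggregate     # 10 aggregated data bursts

ap_tx     = data_bursts + 3        # plus DNS response, SYN/ACK, FIN/ACK
client_tx = 3 + data_bursts + 1    # DNS request, SYN, ACK+GET, 10 pure acks,
                                   # and the final ACK of the close

contentions = ap_tx + client_tx
pure_acks   = data_bursts          # the acks that could ride on link-layer acks

print("AP transmissions:      %d" % ap_tx)        # 13
print("client transmissions:  %d" % client_tx)    # 14
print("channel contentions:   %d" % contentions)  # 27
print("piggybackable acks:    %d (%.0f%%)" % (pure_acks,
                                              100.0 * pure_acks / contentions))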