From: Simon Barber
Date: Thu, 12 Dec 2019 16:59:30 -0800
To: Dave Taht
Cc: Johannes Berg, Make-Wifi-fast, linux-wireless, Netdev, Neal Cardwell
Subject: Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi

I'm currently adding ACK thinning to Linux's GRO code. Quite a simple
addition given the way that code works. (A rough sketch of the idea
appears further down in this message.)

Simon

> On Dec 12, 2019, at 3:42 PM, Dave Taht wrote:
>
> On Thu, Dec 12, 2019 at 1:12 PM Johannes Berg wrote:
>>
>> Hi Eric,
>>
>> Thanks for looking :)
>>
>>>> I'm not sure how to do headers-only, but I guess -s100 will work.
>>>>
>>>> https://johannes.sipsolutions.net/files/he-tcp.pcap.xz
>>>>
>>>
>>> Lack of GRO on receiver is probably what is killing performance,
>>> both for receiver (generating gazillions of acks) and sender
>>> (to process all these acks)
>>
>> Yes, I'm aware of this, to some extent. And I'm not saying we should see
>> even close to 1800 Mbps like we have with UDP...
>>
>> Mind you, the biggest thing that kills performance with many ACKs isn't
>> the load on the system - the sender system is only moderately loaded at
>> ~20-25% of a single core with TSO, and around double that without TSO.
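[Aside for readers following along: below is a minimal, self-contained
sketch of the kind of ACK thinning Simon mentions at the top of this
message, motivated by Eric's "gazillions of acks" point. It is purely
illustrative - the struct and helper names are invented, not the actual
GRO patch, which operates on skbs in the kernel's napi_gro_receive()
path - but the core decision is the same: a newer pure ACK for the same
flow supersedes an older one, which can then be dropped.]

/*
 * Illustrative sketch only (invented types/helpers, not the actual
 * GRO patch): decide whether an older pure ACK can be dropped
 * because a newer ACK on the same flow supersedes it.
 */
#include <stdbool.h>
#include <stdint.h>

struct ack_info {
	uint32_t saddr, daddr;      /* IPv4 source/destination */
	uint16_t sport, dport;      /* TCP ports */
	uint32_t ack_seq;           /* cumulative acknowledgment number */
	bool has_payload;           /* segment carries data */
	bool has_critical_opts;     /* SACK blocks, ECE/URG, window change */
};

static bool same_flow(const struct ack_info *a, const struct ack_info *b)
{
	return a->saddr == b->saddr && a->daddr == b->daddr &&
	       a->sport == b->sport && a->dport == b->dport;
}

/* true if 'older' may be dropped in favour of 'newer' */
static bool ack_superseded(const struct ack_info *newer,
			   const struct ack_info *older)
{
	if (!same_flow(newer, older))
		return false;
	/* never drop data, SACK/ECN state, or window updates */
	if (older->has_payload || older->has_critical_opts)
		return false;
	/* TCP serial-number compare: newer must ack at least as much */
	return (int32_t)(newer->ack_seq - older->ack_seq) >= 0;
}

[This is essentially the check cake's ack-filter makes per queue, as Dave
notes further down; doing it in GRO would presumably let the held ACK be
replaced within a single NAPI poll rather than later in a qdisc.]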
>> The thing that kills performance is eating up all the medium time with
>> small non-aggregated packets, due to the half-duplex nature of WiFi.
>> I know you know, but in case somebody else is reading along :-)
>
> I'm paying attention, but I pay attention faster if you cc make-wifi-fast.
>
> If you captured the air you'd probably see the sender winning the
> election for airtime 2 or more times in a row; it's random and often
> dependent on a variety of factors.
>
> Most WiFi is *not* "half" duplex, which implies it ping-pongs between
> send and receive.
>
>> But unless somehow you think processing the (many) ACKs on the sender
>> will cause it to stop transmitting, or something like that, I don't
>> think I should be seeing what I described earlier: we sometimes (have
>> to?) reclaim the entire transmit queue before TCP starts pushing data
>> again. That's less than 2MB split across at least two TCP streams; I
>> don't see why we should have to get to 0 (which takes about 7ms) until
>> more packets come in from TCP?
>
> Perhaps having a budget for ACK processing within a 1ms window?
>
>> Or put another way - if I free, say, 400kB worth of SKBs, what could be
>> the reason we don't see more packets sent out of the TCP stack within
>> the next few ms or so? I guess I have to correlate this somehow with
>> the ACKs so I know how much data is outstanding for ACKs. (*)
>
> Yes.
>
> It would be interesting to repeat this test in HT20 mode, and/or using
>
> flent --socket-stats --step-size=.04 --te=upload_streams=2 -t
> whatever_variant_of_test tcp_nup
>
> That will capture some of the TCP stats for you.
>
>> The sk_pacing_shift is set to 7, btw, which should give us 8ms of
>> outstanding data [worked numbers below]. For now in this setup that's
>> enough(**), and indeed bumping the limit up (setting sk_pacing_shift to
>> say 5) doesn't change anything. So I think this part we actually
>> solved - I get basically the same performance and behaviour with two
>> streams (needed due to GBit LAN on the other side) as with 20 streams.
>>
>>> I had a plan about enabling compressing ACKs as I did for SACK
>>> in commit
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5d9f4262b7ea41ca9981cc790e37cca6e37c789e
>>>
>>> But I have not done it yet.
>>> It is a pity, because this would tremendously help wifi, I am sure.
>>
>> Nice :-)
>>
>> But that is something the *receiver* would have to do.
>
> Well, it is certainly feasible to thin ACKs in the driver as we did in
> cake. More general, more CPU intensive. I'm happily just awaiting
> Eric's work instead.
>
> One thing Comcast inadvertently does to most flows is remark them CS1,
> which tosses big data into the BK queue and ACKs into the BE queue. It
> actually helps sometimes.
>
>> The dirty secret here is that we're getting close to 1700 Mbps TCP with
>> Windows in place of Linux in the setup, with the same receiver on the
>> other end (which is actually a single Linux machine with two GBit
>> network connections to the AP).
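[To put numbers on the sk_pacing_shift paragraph above: the TCP
small-queues pacing limit is roughly pacing_rate >> sk_pacing_shift
bytes, i.e. one 1/2^shift-second chunk of data at the current pacing
rate. The little program below is only illustrative; the 1800 Mbps
figure is the UDP number quoted earlier in the thread.]

/* Illustrative arithmetic for the pacing-shift discussion above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* ~1800 Mbps, the UDP figure mentioned earlier, in bytes/sec */
	uint64_t pacing_rate = 1800ULL * 1000 * 1000 / 8;

	for (int shift = 10; shift >= 5; shift--) {
		uint64_t bytes = pacing_rate >> shift;
		printf("shift %2d -> %6.2f ms of data, ~%4llu KB in flight\n",
		       shift, 1000.0 / (1 << shift),
		       (unsigned long long)(bytes / 1024));
	}
	/*
	 * shift 7 -> 7.81 ms and ~1716 KB: consistent with the "8ms of
	 * outstanding data" and "less than 2MB" figures quoted above.
	 */
	return 0;
}

[Mainline mac80211 lets drivers tune this per device via
hw.tx_sk_pacing_shift, applied to the socket with
sk_pacing_shift_update(); presumably that is the knob set to 7 here.]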
>> So if we had this I'm sure it'd increase performance, but it still
>> wouldn't explain why we're so much slower than Windows :-)
>>
>> Now, I'm certainly not saying that TCP behaviour is the only reason for
>> the difference; we already found an issue, for example, where due to a
>> small Windows driver bug some packet extension was always used, and the
>> AP is also buggy in that it needs the extension but didn't request it
>> ... so the two bugs cancelled each other out and things worked well, but
>> our Linux driver believed the AP ... :) Certainly there can be more
>> things like that still; I just started on the TCP side and ran into the
>> queueing behaviour that I cannot explain.
>>
>> In any case, I'll try to dig deeper into the TCP stack to understand the
>> reason for this transmit behaviour.
>>
>> Thanks,
>> johannes
>>
>> (*) Hmm. Now I have another idea. Maybe we have some kind of problem
>> with the medium access configuration, and we transmit all this data
>> without the AP having a chance to send back all the ACKs? Too bad I
>> can't put an air sniffer into the setup - it's a conducted setup.
>
> see above
>
>> (**) As another aside to this, the next-generation HW after this will
>> have 256 frames in a block-ack, so that means instead of up to 64 (we
>> only use 63 for internal reasons) frames aggregated together, we'll be
>> able to aggregate 256 (or maybe again only 255?).
>
> My fervent wish is to somehow be able to mark every frame we can as not
> needing a retransmit in future standards (I've lost track of what ax can
> do), and for block-ack retries to give up far sooner.
>
> You can safely drop all but the last three ACKs in a flow, and the txop
> itself provides a suitable clock.
>
> And, ya know, releasing packets out of order doesn't hurt as much as it
> used to, with RACK.
>
>> Each one of those frames may be an A-MSDU with ~11k content though (only
>> 8k in the setup I have here right now), which means we can get a LOT of
>> data into a single PPDU ... [back-of-envelope numbers at the end]
>
> Just wearing my usual hat, I would prefer to optimize for service time,
> not bandwidth, in the future: smaller txops with this much more data in
> them, rather than the biggest txops possible.
>
> If you constrain your max txop to 2ms in this test, you will see TCP in
> slow start ramp up faster, and the AP scale to way more devices, with
> way less jitter and fewer retries. Most flows never get out of slow
> start.
>
>> ... we'll probably have to bump the sk_pacing_shift to be able to fill
>> that with a single TCP stream, though since we run all our performance
>> numbers with many streams, maybe we should just leave it :)
>
> Please. Optimizing for single-flow performance is an academic's game.
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
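[Finally, for scale, a back-of-envelope on the aggregation aside (**)
above: the frame counts and A-MSDU sizes are the ones quoted there, the
1800 Mbps rate is from earlier in the thread, and the 2 ms txop is
Dave's suggested cap. The program itself is only illustrative.]

/* Back-of-envelope for the aggregation numbers discussed above. */
#include <stdio.h>

int main(void)
{
	int frames_now = 63, frames_next = 256;  /* block-ack window */
	int amsdu_now = 8 * 1024;                /* ~8k A-MSDU today */
	int amsdu_next = 11 * 1024;              /* ~11k next-gen */
	double rate = 1800e6 / 8;                /* bytes/sec at 1800 Mbps */

	printf("today:    %3d x %5d B = ~%4d KB per PPDU\n",
	       frames_now, amsdu_now, frames_now * amsdu_now / 1024);
	printf("next-gen: %3d x %5d B = ~%4d KB per PPDU\n",
	       frames_next, amsdu_next, frames_next * amsdu_next / 1024);
	/* how much data a 2 ms txop can carry at the quoted UDP rate */
	printf("2 ms txop @ 1800 Mbps: ~%.0f KB\n", rate * 0.002 / 1024);
	return 0;
}

[That works out to ~504 KB per PPDU today and ~2.8 MB next-gen, versus
the ~1.7 MB a shift-7 pacing limit keeps in flight at this rate - which
is presumably why filling one aggregate from a single stream would need
a smaller sk_pacing_shift - while Dave's 2 ms txop cap corresponds to
only ~440 KB on the air at a time.]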