From: Sebastian Moeller
Date: Thu, 21 Jun 2018 21:41:26 +0200
To: Dave Täht
Cc: Kathleen Nichols, bloat
Subject: Re: [Bloat] lwn.net's tcp small queues vs wifi aggregation solved
Message-Id: <6DCE29AF-5F91-4F06-93A8-0F5D197C9D06@gmx.de>
References: <8736xgsdcp.fsf@toke.dk> <838b212e-7a8c-6139-1306-9e60bfda926b@gmail.com> <8f80b36b-ef81-eadc-6218-350132f4d56a@pollere.com>

Hi All,

> On Jun 21, 2018, at 21:17, Dave Taht wrote:
>
> On Thu, Jun 21, 2018 at 9:43 AM, Kathleen Nichols wrote:
>> On 6/21/18 8:18 AM, Dave Taht wrote:
>>
>>> This is a case where inserting a teeny bit more latency to fill up the
>>> queue (ugh!), or a driver having some way to ask the probability of
>>> seeing more data in the next 10us, or... something like that, could help.
>>>
>>
>> Well, if the driver sees the arriving packets, it could infer that an
>> ack will be produced shortly and will need a sending opportunity.
>
> Certainly in the case of wifi and lte and other simplex technologies
> this seems feasible...
>
> 'cept that we're all busy finding ways to do ack compression this
> month and thus the two big tcp packets = 1 ack rule is going away.
> Still, an estimate, with a short timeout might help.

That short timeout seems essential: just because a link is wireless does not mean the ACKs for passing TCP packets will appear shortly; who knows what routing happens after the wireless link (think of a city-wide mesh network). Such a solution should first figure out whether waiting has any chance of being useful, by looking at the typical delay between data packets and the matching ACKs.
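To make that "is waiting worth it at all?" check concrete, here is a minimal sketch under stated assumptions: the helper names (ack_gap_estimator, should_wait_for_ack), the EWMA gain of 1/8, the factor-of-two headroom and the 10 ms budget are all made up for illustration and are not taken from any real driver. It keeps a smoothed estimate of the data-to-ACK gap and only recommends holding a transmit opportunity open when the typical ACK would arrive well inside the short timeout.

/* Hypothetical helper, not from any real driver: track the typical
 * delay between a forwarded TCP data packet and the matching
 * reverse-direction ACK, and only suggest waiting when that delay is
 * likely to fit inside a short timeout. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct ack_gap_estimator {
    int64_t ewma_gap_us;   /* smoothed data->ACK gap, microseconds */
    bool    valid;         /* have we seen at least one sample?    */
};

/* EWMA with gain 1/8, roughly in the style of TCP's SRTT smoothing. */
static void ack_gap_sample(struct ack_gap_estimator *e, int64_t gap_us)
{
    if (!e->valid) {
        e->ewma_gap_us = gap_us;
        e->valid = true;
    } else {
        e->ewma_gap_us += (gap_us - e->ewma_gap_us) / 8;
    }
}

/* Waiting is only worthwhile if the typical ACK shows up well inside
 * the timeout we are prepared to spend on it. */
static bool should_wait_for_ack(const struct ack_gap_estimator *e,
                                int64_t timeout_us)
{
    return e->valid && e->ewma_gap_us * 2 <= timeout_us;
}

int main(void)
{
    struct ack_gap_estimator e = { 0 };
    /* Pretend we measured these data->ACK gaps (in microseconds). */
    int64_t samples[] = { 800, 1200, 950, 40000, 1100 };

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        ack_gap_sample(&e, samples[i]);
        printf("gap=%6lld us  ewma=%6lld us  wait(10ms)? %s\n",
               (long long)samples[i], (long long)e.ewma_gap_us,
               should_wait_for_ack(&e, 10000) ? "yes" : "no");
    }
    return 0;
}

A real implementation would of course live in the driver's or qdisc's tx path and feed ack_gap_sample() from whatever per-flow state it already keeps; the point of the sketch is only that the "wait or not" decision can be driven by a measured gap rather than by hope.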
>
> Another thing I've longed for (sometimes) is whether or not an
> application like a web browser signalling the OS that it has a batch
> of network packets coming would help...

To make up for the fact that wireless unfortunately has a very high per-packet overhead, aggregation just tries to "hide" that overhead by amortizing it over more than one data packet. How about trying to find a better, less wasteful MAC instead ;) (and now we have two problems...) Really, from a latency perspective it clearly is better to avoid overhead than to use "batching" to amortize it better, since batching increases latency (I stipulate that there are conditions in which clever batching will not increase the noticeable latency, if it can hide inside another latency-increasing process). For the single-socket case something like Linux's TCP_CORK already exists; see the sketch at the end of this mail.

>
> web browser:
> setsockopt(batch_everything)
> parse the web page, generate all your dns, tcp requests, etc, etc
> setsockopt(release_batch)
>
>> Kathie
>>
>> (we tried this mechanism out for cable data head ends at Com21 and it
>> went into a patent that probably belongs to Arris now. But that was for
>> cable. It is a fact universally acknowledged that a packet of data must
>> be in want of an acknowledgement.)
>
> voip doesn't behave this way, but for recognisable protocols like tcp
> and perhaps quic...

I note that for voip, waiting does not make sense, as all packets carry information and keeping jitter low will noticeably increase a call's perceived quality (if only by allowing the application to use a smaller de-jitter buffer and hence less latency). There is a reason why wifi's voice access class both has the highest probability of getting the next tx-slot and is not allowed to send aggregates (whether that is fully sane is another question, which I do not feel competent to answer). I also think that on a DOCSIS system it is probably a decent heuristic to assume that the endpoints will be a few milliseconds away at most (and only due to the coarse DOCSIS grant-request clock).

Best Regards
	Sebastian

>
> --
>
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619
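As an aside on the batching idea quoted above: for a single TCP socket, Linux already offers a knob in that spirit, TCP_CORK, which holds back partially filled segments until the application uncorks the socket (or roughly 200 ms pass). The sketch below is purely illustrative and is not Dave's proposal (his hypothetical batch_everything/release_batch option would span multiple sockets and DNS lookups); the destination address 192.0.2.1 (a documentation address) and the toy request are made up for the example.

/* Illustration only: corking a TCP socket so several small writes go
 * out as fewer, fuller segments.  TCP_CORK is a real Linux socket
 * option; everything else here is an arbitrary example. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    int on = 1, off = 0;

    /* "batch_everything": hold back partial segments while we queue
     * several small writes. */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

    const char *parts[] = {
        "GET / HTTP/1.1\r\n",
        "Host: example.invalid\r\n",
        "Connection: close\r\n",
        "\r\n",
    };
    for (size_t i = 0; i < sizeof(parts) / sizeof(parts[0]); i++)
        (void)write(fd, parts[i], strlen(parts[i]));

    /* "release_batch": uncork, letting the kernel flush what it has
     * coalesced (it would flush after ~200 ms on its own anyway). */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));

    close(fd);
    return 0;
}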