From: Sebastian Moeller
Date: Tue, 6 Jul 2021 21:08:59 +0200
To: Christoph Paasch
Cc: Matt Mathis, bloat
Subject: Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat

Hello Christoph,

thanks for your detailed response!

> On Jul 6, 2021, at 20:54, Christoph Paasch wrote:
>
> Hello Sebastian,
>
> On 06/29/21 - 09:58, Sebastian Moeller wrote:
>> Hi Christoph,
>>
>> one question below:
>>
>>> On Jun 18, 2021, at 01:43, Christoph Paasch via Bloat wrote:
>>>
>>> Hello,
>>>
>>> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
>>>> Is there a paper or spec for RPM?
>>>
>>> We aim to publish an IETF draft on the methodology before the upcoming
>>> IETF in July.
>>>
>>> But in the meantime, please see inline:
>>>
>>>> There are at least two different ways to define RPM, both of which
>>>> might be relevant.
>>>>
>>>> At the TCP layer: it can be directly computed from a packet capture.
>>>> The trick is to time-reverse a trace and compute the critical path
>>>> backwards through the trace: what event triggered each segment or ACK,
>>>> and count round trips. This would be super robust but does not include
>>>> the queueing required in the kernel socket buffers. I need to think
>>>> some more about computing TCP RPM from tcp_info or other kernel
>>>> instrumentation - it might be possible.
>>>
>>> We explicitly opted against measuring purely TCP-level round-trip times,
>>> because there are countless transparent TCP proxies out there that would
>>> skew these numbers. Our goal with RPM/Responsiveness is to measure how
>>> an end-user would experience the network, which means DNS resolution,
>>> TCP handshake time, TLS handshake, and HTTP/2 request/response. Because,
>>> in the end, that's what actually matters to the users.
>>>
>>>> A different RPM can be done in the application, above TCP, for example
>>>> by ping-ponging messages. This would include the delays traversing the
>>>> kernel socket buffers, which have to be at least as large as a full
>>>> network RTT.
>>>>
>>>> This is perhaps an important point: due to the retransmit and
>>>> reassembly queues (which are required to implement robust data
>>>> delivery), TCP must be able to hold at least a full RTT of data in its
>>>> own buffers, which means that under some conditions the RTT as seen by
>>>> the application has to be at least twice the network's RTT, including
>>>> any bloat in the network.
>>>
>>> Currently, we measure RPM on separate connections (not the load-bearing
>>> ones). We are also measuring on the load-bearing connections themselves
>>> through H2 Ping frames, but for the reasons you described we haven't yet
>>> factored that into the RPM number.
>>>
>>> One way may be to inspect with TCP_INFO whether or not the connections
>>> had retransmissions, and then throw away the number.
>>> On the other hand,
>>> if the network becomes extremely lossy under working conditions, it does
>>> impact the user experience, and so it could make sense to take this into
>>> account.
>>>
>>> In the end, we realized how hard it is to accurately measure bufferbloat
>>> within a reasonable time-frame (our goal is to finish the test within
>>> ~15 seconds).
>>
>> [SM] I understand that 10-15 seconds is the amount of time users
>> have been trained to expect an on-line speedtest to take, but
>> experiments with flent/RRUL showed that there are latency-affecting
>> processes on slower timescales that are better visible if one can
>> also run a test for 60-300 seconds (e.g. cyclic WiFi channel
>> probing). Does your tool optionally allow one to specify a longer
>> run-time?
>
> Currently the tool does not have a "deep-dive" mode. There are a few things
> (besides running longer) that a "deep-dive" mode could provide. For example,
> traceroute-style probes during the test to identify the location of the
> bufferbloat.

[SM] Oh, shiny ;) To be useful/interpretable, such a traceroute-style
path traversal should be performed from both sides of a link (I am sure
you know, but my go-to slide deck is
https://archive.nanog.org/sites/default/files/10_Roisman_Traceroute.pdf).
But it would be sweet if there were a reliable way to get bi-directional
traceroutes over the path one actually uses.

> Use H3 for testing and/or run TCP on a different port to
> identify traffic-classifiers/transparent TCP proxies that treat things
> differently. Study the impact of TCP bulk transfer on UDP latency. And so
> on...
> Such a deep-dive mode would be possible in the command-line tool but very
> unlikely in the UI mode.

[SM] Fair enough, thanks.
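[Editor's note] The TCP_INFO retransmission check mentioned above can be
sketched as below. This is a speculative, Linux-only illustration, not
code from the tool: the byte offset assumes the classic struct tcp_info
layout from linux/tcp.h (field offsets are kernel-ABI specific), and the
helper names are made up for this example.

```python
import socket
import struct

# Byte offset of tcpi_total_retrans in Linux's classic struct tcp_info:
# eight one-byte fields, then 23 u32 fields, precede it. Offsets are
# kernel-ABI specific -- treat this as a sketch, not a portable API.
TCPI_TOTAL_RETRANS_OFFSET = 8 + 23 * 4  # 100

def parse_total_retrans(tcp_info: bytes) -> int:
    """Pull tcpi_total_retrans out of a raw struct tcp_info buffer."""
    return struct.unpack_from("=I", tcp_info, TCPI_TOTAL_RETRANS_OFFSET)[0]

def had_retransmissions(sock: socket.socket) -> bool:
    """True if a connected Linux TCP socket saw any retransmissions."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    return parse_total_retrans(buf) > 0
```

A measurement tool could call had_retransmissions() on each test
connection after the run and, as suggested above, discard (or flag) the
RPM sample if it returns True.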
> Our primary goal in this first iteration is to provide a tool that gives a
> quick insight into how bad/good the bufferbloat is on the network, in such
> a way that a non-expert user can run it and understand the result.

[SM] Worthy goal.

> We also want it to use standard protocols, so that any basic web server
> can be configured to serve as an endpoint for it, and because those are
> the protocols that the users are actually using in the end.

[SM] +1; yes, tests with the production protocols, ideally against the
"production" servers, seem like a great way forward.

Regards
	Sebastian

> Cheers,
> Christoph
>
>> Thinking of it, to keep everybody on their toes, how
>> about occasionally running a test with a longer run-time (maybe after
>> asking the user's consent) and storing the test duration as part of the
>> results?
>>
>> Best Regards
>>	Sebastian
>>
>>> We hope that with the IETF draft we can get the right people together to
>>> iterate over it and hammer out a very accurate measurement that
>>> represents what users would experience.
>>>
>>> Cheers,
>>> Christoph
>>>
>>>> Thanks,
>>>> --MM--
>>>> The best way to predict the future is to create it. - Alan Kay
>>>>
>>>> We must not tolerate intolerance; however our response must be
>>>> carefully measured: too strong would be hypocritical and risks
>>>> spiraling out of control; too weak risks being mistaken for tacit
>>>> approval.
>>>>
>>>> On Sat, Jun 12, 2021 at 9:11 AM Rich Brown wrote:
>>>>
>>>>>> On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
>>>>>> wrote:
>>>>>>
>>>>>> Some relevant talks / publicity at WWDC -- the first mentioning
>>>>>> CoDel, queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a
>>>>>> developer test for
>>>>>> loaded latency, reported in "RPM" or round-trips per minute.
>>>>>>
>>>>>> I ran it on my machine:
>>>>>>
>>>>>> nowens@mac1015 ~ % /usr/bin/networkQuality
>>>>>> ==== SUMMARY ====
>>>>>> Upload capacity: 90.867 Mbps
>>>>>> Download capacity: 93.616 Mbps
>>>>>> Upload flows: 16
>>>>>> Download flows: 20
>>>>>> Responsiveness: Medium (840 RPM)
>>>>>
>>>>> Does anyone know how to get the command-line version for current (not
>>>>> upcoming) macOS? Thanks.
>>>>>
>>>>> Rich
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
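[Editor's note] A footnote on the 840 RPM figure quoted in the thread:
"round-trips per minute" is just the working-conditions round-trip time
re-expressed per minute, so the two convert directly. The arithmetic
below is this simple definition only, not Apple's exact aggregation (the
real tool combines many probes measured under load).

```python
def rpm_to_rtt_ms(rpm: float) -> float:
    """Round-trips per minute -> milliseconds per round trip."""
    return 60_000.0 / rpm

def rtt_ms_to_rpm(rtt_ms: float) -> float:
    """Milliseconds per round trip -> round-trips per minute."""
    return 60_000.0 / rtt_ms

# The 840 RPM in the networkQuality output above corresponds to
# roughly 71 ms per round trip under load (60000 / 840 ~= 71.4).
```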