From: Dave Taht
To: Luca Muscariello
Cc: Mikael Abrahamsson, dpreed@reed.com, cerowrt-devel@lists.bufferbloat.net, bloat
Date: Tue, 12 Dec 2017 10:36:55 -0800
Subject: Re: [Bloat] [Cerowrt-devel] DC behaviors today

Luca Muscariello writes:

> I think everything is about response time, even throughput.
>
> If we compare the time to transmit a single packet from A to B, including
> propagation delay, transmission delay and queuing delay, to the time to
> move a much larger amount of data from A to B, we use throughput in this
> second case because it is a normalized quantity w.r.t. response time
> (bytes over delivery time). For a single transmission we tend to use
> latency.
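A minimal Python sketch (not from the thread; example numbers are made up) of the normalization Luca describes — throughput as bytes over delivery time, versus latency for a single transmission:

```python
def throughput_bps(bytes_moved: int, delivery_time_s: float) -> float:
    """Throughput = bytes over delivery time, reported in bits per second."""
    return 8 * bytes_moved / delivery_time_s

# One 1500-byte packet delivered in 20 ms: the latency (20 ms) is the
# useful number; the "throughput" (~600 kbit/s) is nearly meaningless.
single_packet = throughput_bps(1500, 0.020)

# 1 GB delivered in 80 s: here the normalized quantity (~100 Mbit/s)
# is what we actually care about.
bulk_transfer = throughput_bps(10**9, 80.0)

print(single_packet, bulk_transfer)
```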
> But in the end response time is what matters.
>
> Also, even instantaneous throughput is well defined only for a time scale
> which has to be much larger than the min RTT (propagation + transmission
> delays).
>
> Agree also that looking at video, latency and latency budgets are better
> quantities than throughput. At least more accurate.
>
> On Fri, Dec 8, 2017 at 8:05 AM, Mikael Abrahamsson wrote:
>
>     On Mon, 4 Dec 2017, dpreed@reed.com wrote:
>
>         I suggest we stop talking about throughput, which has been the
>         mistaken idea about networking for 30-40 years.
>
>     We need to talk both about latency and speed. Yes, speed is talked
>     about too much (relative to RTT), but it's not irrelevant.
>
>     Speed of light in fiber means RTT is approx 1ms per 100km, so from
>     Stockholm to SFO my RTT is never going to be significantly below
>     85ms (8625km great circle). It's currently twice that.
>
>     So we just have to accept that some services will never be
>     deliverable across the wider Internet, but have to be deployed
>     closer to the customer (as per your examples, some need 1ms RTT to
>     work well), and we need lower access latency and lower queuing
>     delay. So yes, agreed.
>
>     However, I am not going to concede that speed is a "mistaken idea
>     about networking". No amount of smarter queuing is going to fix the
>     problem if I don't have enough throughput available for my
>     application.

In terms of the bell curve here, throughput has increased much more rapidly
than latency has decreased for most people, and in an increasing majority
of human-interactive cases (like video streaming) we now have enough
throughput. The age-old argument of "just have overcapacity, always" tends
to work in these cases. I tend not to care as much about how long things
take when they do not need the real-time deadlines that humans and
steering wheels do.
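Mikael's rule of thumb above (~1 ms of RTT per 100 km) follows from light travelling at roughly two-thirds of c in fiber; a small Python sketch, taking the ~200,000 km/s figure as an assumption:

```python
# Light in fiber propagates at roughly 2/3 of c, i.e. about 200,000 km/s
# (an assumed round figure, which yields the ~1 ms RTT per 100 km rule).
C_FIBER_KM_PER_S = 200_000

def min_rtt_ms(path_km: float) -> float:
    """Physical lower bound on RTT over a fiber path of the given one-way length."""
    return 2 * path_km / C_FIBER_KM_PER_S * 1000

# Stockholm to SFO, 8625 km great circle: ~86 ms minimum RTT,
# matching the ~85 ms figure in the thread.
print(min_rtt_ms(8625))
```

Real paths are longer than the great circle (routing detours, non-straight fiber), which is part of why the observed RTT is about twice the bound.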
Propagation delay, while ultimately bound by the speed of light, is also
affected by the wires wrapping indirectly around the earth - much slower
than would be possible if we worked at it:

https://arxiv.org/pdf/1505.03449.pdf

Then there's inside the boxes themselves: a lot of my struggles of late
have been to get latencies and adequate sampling techniques down below 3ms
(my previous threshold for rejecting results as too noisy) - and despite
trying fairly hard, well... a process can't even sleep accurately much
below 1ms on bare-metal Linux. A dream of mine has been 8-channel
high-quality audio, with a video delay of not much more than 2.7ms, for AR
applications.

For comparison, an idle quad-core aarch64 and a dual-core x86_64:

root@nanopineo2:~# irtt sleep
Testing sleep accuracy...

Sleep Duration   Mean Error    % Error
           1ns     13.353µs  1335336.9
          10ns      14.34µs   143409.5
         100ns     13.343µs    13343.9
           1µs     12.791µs     1279.2
          10µs    148.661µs     1486.6
         100µs    150.907µs      150.9
           1ms    168.001µs       16.8
          10ms    131.235µs        1.3
         100ms    145.611µs        0.1
         200ms    162.917µs        0.1
         500ms    169.885µs        0.0

d@nemesis:~$ irtt sleep
Testing sleep accuracy...

Sleep Duration   Mean Error    % Error
           1ns        668ns    66831.9
          10ns        672ns     6723.7
         100ns        557ns      557.6
           1µs     57.749µs     5774.9
          10µs     63.063µs      630.6
         100µs     67.737µs       67.7
           1ms    153.978µs       15.4
          10ms    169.709µs        1.7
         100ms    186.685µs        0.2
         200ms    176.859µs        0.1
         500ms    177.271µs        0.0

>     --
>     Mikael Abrahamsson    email: swmike@swm.pp.se

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
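The sleep-accuracy tables above can be approximated on any box with a short
Python sketch (a rough stand-in for `irtt sleep`, not the tool itself):

```python
import time

def sleep_error(request_s: float, trials: int = 10) -> float:
    """Mean overshoot of time.sleep() versus the requested duration, in seconds."""
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(request_s)
        # How much longer we actually slept than we asked for.
        total += (time.perf_counter() - start) - request_s
    return total / trials

for req in (1e-4, 1e-3, 1e-2, 1e-1):
    err = sleep_error(req)
    print(f"requested {req * 1e3:g} ms -> "
          f"mean error {err * 1e6:.1f} µs ({100 * err / req:.1f}%)")
```

On a typical Linux box this reproduces the pattern in the tables: a roughly
constant wakeup overhead of tens to hundreds of microseconds, which dwarfs
any requested sleep much below 1 ms.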