From: Luca Muscariello
To: David Reed
Cc: Dave Taht, Mikael Abrahamsson, cerowrt-devel@lists.bufferbloat.net, bloat
Date: Wed, 13 Dec 2017 11:45:54 +0100
Subject: Re: [Bloat] [Cerowrt-devel] DC behaviors today

+1 on all. Except that Little's Law is very general, as it applies to any
ergodic process; it follows from the law of large numbers. And BTW, Little's
Law is a very powerful law. We use it unconsciously all the time.
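To make that concrete, here is a minimal numerical sketch of Little's Law,
L = lambda * W, with illustrative numbers only (nothing below is measured):
if a bottleneck sees 10,000 packets per second and each packet spends 50 ms
in the system on average, about 500 packets are resident on average, with no
assumption on the arrival process beyond stationarity.

/* Little's Law sketch: L = lambda * W.
 * The rate and delay below are illustrative, not measurements. */
#include <stdio.h>

int main(void) {
    double lambda = 10000.0;    /* arrival rate, packets per second     */
    double W      = 0.050;      /* mean time in system, seconds (50 ms) */
    double L      = lambda * W; /* mean number of packets in the system */

    printf("lambda = %.0f pkt/s, W = %.3f s  ->  L = %.0f packets\n",
           lambda, W, L);

    /* Read the other way: a queue holding L packets that drains at rate
     * lambda imposes a mean sojourn time of W = L / lambda. */
    printf("W = L / lambda = %.3f s\n", L / lambda);
    return 0;
}

The sketch only shows how the three quantities are tied together; the point
of the thread is what happens to W when queues are allowed to grow.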
On Tue, Dec 12, 2017 at 11:53 PM, <dpreed@reed.com> wrote:

> Luca's point tends to be correct - variable latency destroys the stability
> of flow control loops, which destroys throughput, even when there is
> sufficient capacity to handle the load.
>
> This is an indirect result of Little's Lemma (which is strictly true only
> for Poisson arrivals, but almost any arrival process will have a similar
> interaction between latency and throughput).
>
> However, the other reason I say what I say so strongly is this:
>
> Rant on.
>
> Peak/avg. load ratio always exceeds a factor of 10 or more, IRL. Only
> "benchmark setups" (or hot-rod races done for academic or marketing
> reasons to claim some sort of "title") operate at peak supportable load
> for any significant part of the time.
>
> The reason for this is not just "fat pipes are better", but that the
> bitrate of the underlying medium is an insignificant fraction of systems'
> operational and capital expense.
>
> SLAs are specified in "uptime", not "bits transported", and a clogged pipe
> is defined as down when latency exceeds a small number.
>
> Typical operating points of corporate networks where the users are happy
> are a single-digit percentage of max load.
>
> This is also true of computer buses, memory controllers and storage
> interfaces IRL. Again, latency is the primary measure, and the system
> never focuses on operating points anywhere near max throughput.
>
> Rant off.
>
> On Tuesday, December 12, 2017 1:36pm, "Dave Taht" <dave@taht.net> said:
>
> > Luca Muscariello <luca.muscariello@gmail.com> writes:
> >
> > > I think everything is about response time, even throughput.
> > >
> > > If we compare the time to transmit a single packet from A to B,
> > > including propagation delay, transmission delay and queuing delay,
> > > to the time to move a much larger amount of data from A to B, we use
> > > throughput in this second case because it is a normalized quantity
> > > w.r.t. response time (bytes over delivery time). For a single
> > > transmission we tend to use latency. But in the end response time is
> > > what matters.
> > >
> > > Also, even instantaneous throughput is well defined only for a time
> > > scale which has to be much larger than the min RTT (propagation +
> > > transmission delays). Agree also that looking at video, latency and
> > > latency budgets are better quantities than throughput. At least more
> > > accurate.
> > >
> > > On Fri, Dec 8, 2017 at 8:05 AM, Mikael Abrahamsson <swmike@swm.pp.se>
> > > wrote:
> > >
> > > On Mon, 4 Dec 2017, dpreed@reed.com wrote:
> > >
> > > I suggest we stop talking about throughput, which has been the
> > > mistaken idea about networking for 30-40 years.
> > >
> > > We need to talk both about latency and speed. Yes, speed is talked
> > > about too much (relative to RTT), but it's not irrelevant.
> > >
> > > Speed of light in fiber means RTT is approx 1 ms per 100 km, so from
> > > Stockholm to SFO my RTT is never going to be significantly below 85 ms
> > > (8625 km great circle). It's currently twice that.
> > >
> > > So we just have to accept that some services will never be deliverable
> > > across the wider Internet, but have to be deployed closer to the
> > > customer (as per your examples, some need 1 ms RTT to work well), and
> > > we need lower access latency and lower queuing delay. So yes, agreed.
> > >
> > > However, I am not going to concede that speed is a "mistaken idea
> > > about networking". No amount of smarter queuing is going to fix the
> > > problem if I don't have the throughput available that I need for my
> > > application.
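For reference, the floor Mikael quotes is easy to reproduce: light in fiber
propagates at roughly c/1.47, about 200 km per millisecond one way, so an
8625 km great-circle path gives a one-way delay of about 42 ms and an RTT of
about 85 ms; real routes are longer than the great circle, which is why he
sees roughly twice that. A small sketch of the arithmetic (the refractive
index of 1.468 is a typical value for glass, not a number taken from the
thread):

/* Minimum RTT imposed by propagation in fiber over a given distance.
 * Assumes a refractive index of about 1.468 (typical, assumed here), so
 * the signal travels at roughly c/1.468 ~ 204,000 km/s. Real paths are
 * longer than the great circle, so real RTTs are higher still. */
#include <stdio.h>

int main(void) {
    const double c_km_per_s  = 299792.458;          /* speed of light in vacuum   */
    const double v_fiber     = c_km_per_s / 1.468;  /* ~204,000 km/s in fiber     */
    const double distance_km = 8625.0;              /* Stockholm-SFO great circle */

    double one_way_ms = distance_km / v_fiber * 1000.0;
    printf("one-way >= %.1f ms, RTT >= %.1f ms\n", one_way_ms, 2.0 * one_way_ms);
    return 0;
}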
> > In terms of the bell curve here, throughput has increased much more
> > rapidly than latency has decreased, for most, and in an increasing
> > majority of human-interactive cases (like video streaming) we often
> > have enough throughput.
> >
> > And the age-old argument regarding "just have overcapacity, always"
> > tends to work in these cases.
> >
> > I tend not to care as much about how long it takes for things that do
> > not need R/T deadlines as humans and as steering wheels do.
> >
> > Propagation delay, while ultimately bound by the speed of light, is
> > also affected by the wires wrapping indirectly around the earth - much
> > slower than would be possible if we worked at it:
> >
> > https://arxiv.org/pdf/1505.03449.pdf
> >
> > Then there's inside the boxes themselves:
> >
> > A lot of my struggles of late have been to get latencies and adequate
> > sampling techniques down below 3 ms (my previous value for starting to
> > reject things due to having too much noise) - and despite trying fairly
> > hard, well... a process can't even sleep accurately much below 1 ms on
> > bare-metal Linux. A dream of mine has been 8-channel high-quality
> > audio, with a video delay of not much more than 2.7 ms for AR
> > applications.
> >
> > For comparison, an idle quad-core aarch64 and a dual-core x86_64:
> >
> > root@nanopineo2:~# irtt sleep
> >
> > Testing sleep accuracy...
> >
> > Sleep Duration   Mean Error    % Error
> >    1ns           13.353µs      1335336.9
> >   10ns           14.34µs       143409.5
> >  100ns           13.343µs      13343.9
> >    1µs           12.791µs      1279.2
> >   10µs           148.661µs     1486.6
> >  100µs           150.907µs     150.9
> >    1ms           168.001µs     16.8
> >   10ms           131.235µs     1.3
> >  100ms           145.611µs     0.1
> >  200ms           162.917µs     0.1
> >  500ms           169.885µs     0.0
> >
> > d@nemesis:~$ irtt sleep
> >
> > Testing sleep accuracy...
> >
> > Sleep Duration   Mean Error    % Error
> >    1ns           668ns         66831.9
> >   10ns           672ns         6723.7
> >  100ns           557ns         557.6
> >    1µs           57.749µs      5774.9
> >   10µs           63.063µs      630.6
> >  100µs           67.737µs      67.7
> >    1ms           153.978µs     15.4
> >   10ms           169.709µs     1.7
> >  100ms           186.685µs     0.2
> >  200ms           176.859µs     0.1
> >  500ms           177.271µs     0.0
> >
> > > --
> > > Mikael Abrahamsson    email: swmike@swm.pp.se
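For anyone who wants to reproduce Dave's sleep-accuracy numbers above
without installing irtt, here is a rough C sketch of the same measurement
idea: request a timed sleep with clock_nanosleep, measure what you actually
got against a monotonic clock, and report the mean overshoot. It is only a
stand-in for the technique, not irtt's implementation, and the durations and
repetition count are arbitrary.

/* Rough sleep-accuracy probe: request a relative sleep of req_ns[i]
 * nanoseconds, time it with CLOCK_MONOTONIC, and print the mean error.
 * Illustrative only; irtt's own implementation differs. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static int64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void) {
    const int64_t req_ns[] = { 1000, 10000, 100000, 1000000, 10000000 };
    const int reps = 100;

    for (size_t i = 0; i < sizeof req_ns / sizeof req_ns[0]; i++) {
        int64_t err_sum = 0;
        for (int r = 0; r < reps; r++) {
            struct timespec ts = { (time_t)(req_ns[i] / 1000000000LL),
                                   (long)(req_ns[i] % 1000000000LL) };
            int64_t start = now_ns();
            clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL); /* relative sleep */
            err_sum += (now_ns() - start) - req_ns[i];
        }
        printf("requested %9lld ns   mean error %8lld ns\n",
               (long long)req_ns[i], (long long)(err_sum / reps));
    }
    return 0;
}

(Compile with e.g. gcc -O2 sleep_err.c -o sleep_err; very old glibc versions
also need -lrt for the clock_* calls.)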