From: Jonathan Morton
Date: Wed, 13 Dec 2017 18:41:16 +0200
To: Neil Davies
Cc: "David P. Reed", cerowrt-devel@lists.bufferbloat.net, bloat
Subject: Re: [Cerowrt-devel] [Bloat] DC behaviors today

> Have you considered what this means for the economics of the operation
> of networks? What other industry that "moves things around" (i.e.
> logistics or similar) creates a solution in which it has 10x as much
> infrastructure as its peak requirement?

Ten times peak demand? No.
Ten times average demand estimated at time of deployment, and struggling
badly with peak demand a decade later, yes. And this is the
transportation industry, where a decade is a *short* time - like less
than a year in telecoms.

- Jonathan Morton

On 13 Dec 2017 17:27, "Neil Davies" wrote:

> > On 12 Dec 2017, at 22:53, dpreed@reed.com wrote:
> >
> > Luca's point tends to be correct - variable latency destroys the
> > stability of flow control loops, which destroys throughput, even
> > when there is sufficient capacity to handle the load.
> >
> > This is an indirect result of Little's Lemma (which is strictly true
> > only for Poisson arrivals, but almost any arrival process will have
> > a similar interaction between latency and throughput).
>
> Actually it is true for general arrival patterns (can't lay my hands
> on the reference for the moment - but it was a while back that it was
> shown) - what this points to is an underlying conservation law: that
> "delay and loss" are conserved in a scheduling process. This comes out
> of the M/M/1/K/K queueing system and associated analysis.
>
> There is a conservation law at work here (and Kleinrock refers to
> this - at least in terms of delay - in 1965:
> http://onlinelibrary.wiley.com/doi/10.1002/nav.3800120206/abstract).
>
> All scheduling systems can do is "distribute" the resulting "delay and
> loss" differentially amongst the (instantaneous set of) competing
> streams.
>
> Let me just repeat that - the "delay and loss" are a conserved
> quantity. Scheduling can't "destroy" it (it can influence higher-level
> protocol behaviour), but it cannot reduce the total amount of "delay
> and loss" being induced into the collective set of streams...
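[Editor's note: Kleinrock's conservation law - roughly, sum_i rho_i *
W_i is invariant across work-conserving scheduling disciplines - is
easiest to see with equal-size packets, where the server's departure
instants are fixed and the scheduler only chooses which packet takes
which slot. The following minimal Python sketch is not from the
original thread; the two-flow Poisson workload and both policies are
illustrative assumptions:

    import random

    def simulate(packets, service, pick):
        """One work-conserving server, equal service time per packet.
        packets: (arrival_time, flow) tuples sorted by arrival time.
        pick(queue): scheduling policy, returns index served next.
        Returns total delay (waiting + service) per flow."""
        clock, i, queue, delay = 0.0, 0, [], {0: 0.0, 1: 0.0}
        while i < len(packets) or queue:
            if not queue:                     # idle: jump to next arrival
                clock = max(clock, packets[i][0])
            while i < len(packets) and packets[i][0] <= clock:
                queue.append(packets[i])      # admit all arrivals so far
                i += 1
            arrival, flow = queue.pop(pick(queue))
            clock += service                  # serve exactly one packet
            delay[flow] += clock - arrival
        return delay

    random.seed(1)
    t, pkts = 0.0, []
    for _ in range(5000):
        t += random.expovariate(0.8)          # Poisson arrivals, load ~0.8
        pkts.append((t, random.randrange(2))) # two competing flows

    fifo = simulate(pkts, 1.0, lambda q: 0)   # first-come, first-served
    prio = simulate(pkts, 1.0,                # strict priority to flow 0
                    lambda q: min(range(len(q)),
                                  key=lambda j: (q[j][1], q[j][0])))
    print("FIFO:", fifo, "total:", round(sum(fifo.values())))
    print("PRIO:", prio, "total:", round(sum(prio.values())))

Both runs print the same total delay; only the split between the two
flows moves - exactly the "distribute, not destroy" point above.]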
> > However, the other reason I say what I say so strongly is this:
> >
> > Rant on.
> >
> > Peak/avg. load ratio always exceeds a factor of 10 or more, IRL.
> > Only "benchmark setups" (or hot-rod races done for academic reasons
> > or marketing reasons to claim some sort of "title") operate at peak
> > supportable load any significant part of the time.
>
> Have you considered what this means for the economics of the operation
> of networks? What other industry that "moves things around" (i.e.
> logistics or similar) creates a solution in which it has 10x as much
> infrastructure as its peak requirement?
>
> > The reason for this is not just "fat pipes are better", but because
> > bitrate of the underlying medium is an insignificant fraction of
> > systems operational and capital expense.
>
> Agree that (if you are the incumbent that 'owns' the low-level
> transmission medium) this is true (though the costs of lighting a new
> lambda are not trivial) - but that is not the experience of anyone
> else in the digital supply chain.
>
> > SLA's are specified in "uptime" not "bits transported", and a
> > clogged pipe is defined as down when latency exceeds a small number.
>
> Do you have any evidence you can reference for an SLA that treats a
> few ms as "down"? Most of the SLAs I've had dealings with use averages
> over fairly long time periods (e.g. a month) - and there is no quality
> in averages.
>
> > Typical operating points of corporate networks where the users are
> > happy are single-digit percentages of max load.
>
> Or less - they also detest the costs that they have to pay the network
> providers to try to de-risk their applications. There is also the
> issue that, because they measure averages (over 5 min to 15 min), they
> completely fail to capture (for example) the 15 seconds when delay and
> jitter were high and the CEO's video conference broke up.
>
> > This is also true of computer buses and memory controllers and
> > storage interfaces IRL. Again, latency is the primary measure, and
> > the system never focuses on operating points anywhere near max
> > throughput.
>
> Agreed - but wouldn't it be nice if they could? I've worked on h/w
> systems where we have designed systems to run near their limits (the
> set-top box market is pretty cut-throat; the closer to saturation you
> can run while still delivering an acceptable outcome, the cheaper the
> box and the greater the profit margin for the set-top box provider).
>
> > Rant off.
>
> Cheers
>
> Neil
>
> > On Tuesday, December 12, 2017 1:36pm, "Dave Taht" said:
> >
> > > Luca Muscariello writes:
> > >
> > > > I think everything is about response time, even throughput.
> > > >
> > > > If we compare the time to transmit a single packet from A to B,
> > > > including propagation delay, transmission delay and queuing
> > > > delay, to the time to move a much larger amount of data from A
> > > > to B, we use throughput in this second case because it is a
> > > > normalized quantity w.r.t. response time (bytes over delivery
> > > > time). For a single transmission we tend to use latency. But in
> > > > the end response time is what matters.
> > > >
> > > > Also, even instantaneous throughput is well defined only for a
> > > > time scale which has to be much larger than the min RTT
> > > > (propagation + transmission delays). Agree also that looking at
> > > > video, latency and latency budgets are better quantities than
> > > > throughput. At least more accurate.
> > > >
> > > > On Fri, Dec 8, 2017 at 8:05 AM, Mikael Abrahamsson wrote:
> > > >
> > > > On Mon, 4 Dec 2017, dpreed@reed.com wrote:
> > > >
> > > > I suggest we stop talking about throughput, which has been the
> > > > mistaken idea about networking for 30-40 years.
> > > >
> > > > We need to talk both about latency and speed. Yes, speed is
> > > > talked about too much (relative to RTT), but it's not
> > > > irrelevant.
> > > >
> > > > Speed of light in fiber means RTT is approx 1 ms per 100 km, so
> > > > from Stockholm to SFO my RTT is never going to be significantly
> > > > below 85 ms (8625 km great circle). It's currently twice that.
> > > >
> > > > So we just have to accept that some services will never be
> > > > deliverable across the wider Internet, but have to be deployed
> > > > closer to the customer (as per your examples, some need 1 ms RTT
> > > > to work well), and we need lower access latency and lower
> > > > queuing delay. So yes, agreed.
> > > >
> > > > However, I am not going to concede that speed is a "mistaken
> > > > idea about networking". No amount of smarter queuing is going to
> > > > fix the problem if I don't have enough throughput available to
> > > > me that I need for my application.
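[Editor's note: a quick sanity check of Mikael's 85 ms floor above,
using the haversine great-circle distance and the ~1 ms of RTT per
100 km that light in fibre (roughly 2/3 c, about 200,000 km/s) implies.
A sketch, not part of the original thread; the coordinates are
approximate:

    from math import radians, sin, cos, asin, sqrt

    def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
        """Haversine distance between two points on the earth, in km."""
        a = (sin(radians(lat2 - lat1) / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2))
             * sin(radians(lon2 - lon1) / 2) ** 2)
        return 2 * r * asin(sqrt(a))

    # Stockholm (~59.33N, 18.07E) to San Francisco (~37.62N, 122.38W)
    d = great_circle_km(59.33, 18.07, 37.62, -122.38)
    rtt_ms = 2 * d / 200000 * 1000    # out and back at ~200,000 km/s
    print(f"{d:.0f} km great circle -> physical floor ~{rtt_ms:.0f} ms RTT")

This prints roughly 8,600 km and ~86 ms, matching the quoted figure;
the observed doubling to ~170 ms comes from path indirection and
queueing rather than physics - which is Dave's next point.]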
> > > In terms of the bell curve here, throughput has increased much
> > > more rapidly than latency has decreased, for most, and in an
> > > increasing majority of human-interactive cases (like video
> > > streaming), we often have enough throughput.
> > >
> > > And the age-old argument regarding "just have overcapacity,
> > > always" tends to work in these cases.
> > >
> > > I tend not to care as much about how long it takes for things that
> > > do not need R/T deadlines as humans and as steering wheels do.
> > >
> > > Propagation delay, while ultimately bound by the speed of light,
> > > is also affected by the wires wrapping indirectly around the
> > > earth - much slower than would be possible if we worked at it:
> > >
> > > https://arxiv.org/pdf/1505.03449.pdf
> > >
> > > Then there's inside the boxes themselves:
> > >
> > > A lot of my struggles of late have been to get latencies and
> > > adequate sampling techniques down below 3 ms (my previous value
> > > for starting to reject things due to having too much noise) - and
> > > despite trying fairly hard, well... a process can't even sleep
> > > accurately much below 1 ms on bare-metal Linux. A dream of mine
> > > has been 8-channel high-quality audio, with a video delay of not
> > > much more than 2.7 ms for AR applications.
> > >
> > > For comparison, an idle quad-core aarch64 and a dual-core x86_64:
> > >
> > > root@nanopineo2:~# irtt sleep
> > >
> > > Testing sleep accuracy...
> > >
> > > Sleep Duration   Mean Error     % Error
> > >            1ns     13.353µs   1335336.9
> > >           10ns      14.34µs    143409.5
> > >          100ns     13.343µs     13343.9
> > >            1µs     12.791µs      1279.2
> > >           10µs    148.661µs      1486.6
> > >          100µs    150.907µs       150.9
> > >            1ms    168.001µs        16.8
> > >           10ms    131.235µs         1.3
> > >          100ms    145.611µs         0.1
> > >          200ms    162.917µs         0.1
> > >          500ms    169.885µs         0.0
> > >
> > > d@nemesis:~$ irtt sleep
> > >
> > > Testing sleep accuracy...
> > >
> > > Sleep Duration   Mean Error     % Error
> > >            1ns        668ns     66831.9
> > >           10ns        672ns      6723.7
> > >          100ns        557ns       557.6
> > >            1µs     57.749µs      5774.9
> > >           10µs     63.063µs       630.6
> > >          100µs     67.737µs        67.7
> > >            1ms    153.978µs        15.4
> > >           10ms    169.709µs         1.7
> > >          100ms    186.685µs         0.2
> > >          200ms    176.859µs         0.1
> > >          500ms    177.271µs         0.0
> > >
> > > > --
> > > > Mikael Abrahamsson    email: swmike@swm.pp.se
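[Editor's note: for readers without irtt to hand, the measurement above
is easy to approximate - ask the OS to sleep, time how long the sleep
actually took, and average the overshoot. A rough Python analogue of
`irtt sleep`, not the tool itself; the trial count and durations are
arbitrary choices:

    import time

    def mean_sleep_error(duration_s, trials=100):
        """Average overshoot of time.sleep() vs. the requested time."""
        total = 0.0
        for _ in range(trials):
            t0 = time.perf_counter()
            time.sleep(duration_s)
            total += (time.perf_counter() - t0) - duration_s
        return total / trials

    print("Sleep Duration   Mean Error   % Error")
    for d in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1):
        e = mean_sleep_error(d)
        print(f"{d * 1e6:>11.0f}us {e * 1e6:>11.1f}us {100 * e / d:>9.1f}")

On a stock (non-realtime) Linux kernel the error typically plateaus in
the tens-to-hundreds of microseconds - timer slack plus scheduler
wakeup latency - which is why sub-millisecond deadlines are so hard to
hit from user space, as Dave's numbers show.]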