From: Neil Davies <neil.davies@pnsol.com>
Subject: Re: [Bloat] [Cerowrt-devel] DC behaviors today
Date: Wed, 13 Dec 2017 19:55:29 +0000
To: dpreed@reed.com
Cc: Jonathan Morton <chromatix99@gmail.com>, cerowrt-devel@lists.bufferbloat.net, bloat <bloat@lists.bufferbloat.net>
In-Reply-To: <1513188494.316722195@apps.rackspace.com>
Message-Id: <34FB5FF9-1490-4355-B2F3-76E519479287@pnsol.com>

Please - my email was not intended to troll - I wanted to establish a dialogue; I am sorry if I've offended.
> On 13 Dec 2017, at 18:08, dpreed@reed.com wrote:
>
> Just to be clear, I have built and operated a whole range of network platforms, as well as diagnosing problems and planning deployments of systems that include digital packet delivery in real contexts where cost and performance matter, for nearly 40 years now. So this isn't only some kind of radical opinion, but hard-won knowledge across my entire career. I also have a very strong theoretical background in queueing theory and control theory -- enough to teach a graduate seminar, anyway.

I accept that - if we are laying out bona fides, I have acted as thesis advisor to people working in this area for over 20 years, and I continue to work with network operators, system designers and research organisations (mainly in the EU) in this area.

> That said, there are lots of folks out there who have opinions different than mine. But far too many (such as those who think big buffers are "good", who brought us bufferbloat) are not aware of how networks are really used or the practical effects of their poor models of usage.
>
> If it comforts you to think that I am just stating an "opinion", which must be wrong because it is not the "conventional wisdom" in the circles where you travel, fine. You are entitled to dismiss any ideas you don't like. But I would suggest you get data about your assumptions.
>
> I don't know if I'm being trolled, but a couple of comments on the recent comments:
>
> 1. Statistical multiplexing viewed as an averaging/smoothing as an idea is, in my personal opinion and experience measuring real network behavior, a description of a theoretical phenomenon that is not real (e.g. "consider a spherical cow") that is amenable to theoretical analysis. Such theoretical analysis can make some gross estimates, but it breaks down quickly. The same thing is true of common economic theory that models practical markets by linear models (linear systems of differential equations are common) and gaussian probability distributions (gaussians are easily analyzed, but wrong. You can read the popular books by Nassim Taleb for an entertaining and enlightening deeper understanding of the economic problems with such modeling).

I would fully accept that seeing statistical (or perhaps, better named, stochastic) multiplexing as an averaging process is a vast oversimplification of the complexity. However, I see the underlying mathematics as capturing a much richer description, for example of the transient behaviour - queueing theory (in its usual undergraduate formulation) tends to gloss over the edge/extreme conditions, as well as over dealing with non-stationary arrival phenomena (such as can occur in the presence of adaptive protocols).

For example - one approach to solving the underlying Markov chain systems (as the operational semantic representation of a queueing system) is to represent them as transition matrices and then "solve" those matrices for steady state [as you probably know - think of that as backstory for the interested reader].

We've used such transition matrices to examine "relaxation times" of queueing/scheduling algorithms - i.e. given that a buffer has filled, how quickly will the system relax back towards "steady state". There are assumptions behind this, of course, but viewing the buffer state as a probability distribution, and seeing how that distribution evolves after, say, an impulse change in load, helps a lot in generating new approaches.
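To make that concrete - and this is purely a toy sketch in Python with invented numbers (arrival rate, service rate, buffer size), not anything lifted from our tooling - the following builds the transition matrix for a small M/M/1/K-style buffer, solves it for the steady-state distribution, and then watches how an impulse (buffer full) relaxes back towards that steady state:

import numpy as np

lam, mu, K = 0.8, 1.0, 20          # toy arrival rate, service rate, buffer size
dt = 0.1 / (lam + mu)              # small time step so the discretisation stays valid

# Discrete-time transition matrix for the M/M/1/K birth-death chain;
# states 0..K are "packets held in the buffer".
P = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        P[n, n + 1] = lam * dt     # an arrival in this step
    if n > 0:
        P[n, n - 1] = mu * dt      # a departure in this step
    P[n, n] = 1.0 - P[n].sum()     # otherwise stay put

# "Solve" for steady state: the distribution pi with pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(K + 1), np.ones(K + 1)])
b = np.zeros(K + 2); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Impulse: start with the buffer full, and watch the state distribution
# relax back towards pi (total-variation distance as the yardstick).
state = np.zeros(K + 1); state[K] = 1.0
for step in range(2001):
    if step % 400 == 0:
        print(f"t = {step * dt:6.1f}   distance to steady state = "
              f"{0.5 * np.abs(state - pi).sum():.4f}")
    state = state @ P

The interesting thing to play with is how that relaxation stretches as the offered load approaches the service rate - the steady-state averages barely hint at it.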
Cards on the table - I don't see networks as a (purely) natural phenomenon (as, say, chemistry or physics) but as a more mathematical one. Queueing systems are (relatively) simple automata being pushed through their states by arrivals and departures that are non-stationary but broadly characterisable in stochastic terms (the departures less stochastically varied, as they are related to the actual packet sizes). There are rules to that mathematical game imposed by real-world physics, but there are other ways of constructing (and configuring) the actions of those automata to create "better" solutions (for various types of "better").

> One of the features well observed in real measurements of real systems is that packet flows are "fractal", which means that there is a self-similarity of rate variability at all time scales from micro to macro. As you look at smaller and smaller time scales, or larger and larger time scales, the packet request density per unit time never smooths out due to "averaging over sources". That is, there's no practical "statistical multiplexing" effect. There's also significant correlation among many packet arrivals - assuming they are statistically independent (which is required for the "law of large numbers" to apply) is often far from the real situation - flows that are assumed to be independent are usually strongly coupled.

I remember this debate and its evolution, Hurst parameters and all that. I also understand that a collection of on/off Poisson sources looks fractal - I found the "the universe is fractal - live with it" ethos of limited practical use (except to help people say the problem was not solvable). When I saw those results, the question I asked myself (because I don't see them as a "natural" phenomenon) was "what is the right way to interact with the traffic patterns to regain acceptable levels of mathematical understanding?" - i.e. what is the right intervention.
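Just to illustrate that "looks fractal" point - again a toy in Python, all parameters invented: aggregate a hundred on/off sources and look at how the variance of the aggregate rate decays as you average over longer and longer windows, once with exponential on-periods and once with heavy-tailed (Pareto) ones - the latter being the classic recipe for self-similar-looking aggregates. The exponential case smooths out the way the textbook multiplexing story promises; the heavy-tailed case does not:

import numpy as np

rng = np.random.default_rng(1)

def aggregate_rate(T, n_sources, heavy_tailed):
    """Aggregate rate of n_sources independent on/off sources over T slots."""
    total = np.zeros(T)
    for _ in range(n_sources):
        trace = np.zeros(T)
        t, on = 0, rng.random() < 0.5          # random initial phase
        while t < T:
            if on:
                # on-period: Pareto (infinite variance) vs exponential, mean ~10 slots
                length = int(rng.pareto(1.5) * 5) + 1 if heavy_tailed \
                    else int(rng.exponential(10)) + 1
                trace[t:t + length] = 1.0
            else:
                length = int(rng.exponential(10)) + 1
            t += length
            on = not on
        total += trace
    return total

def variance_by_scale(x, scales):
    """Variance of the block-averaged series at each averaging window m."""
    return [np.var(x[:len(x) // m * m].reshape(-1, m).mean(axis=1)) for m in scales]

T, N, scales = 100_000, 100, [1, 10, 100, 1000]
for heavy in (False, True):
    v = variance_by_scale(aggregate_rate(T, N, heavy), scales)
    kind = "heavy-tailed on-periods" if heavy else "exponential on-periods "
    print(kind, "relative variance:", ["%.3f" % (vi / v[0]) for vi in v])

The sketch isn't the point - the question it poses is: what intervention turns the second case back into something you can reason about?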
I agree that flows become coupled - every time two flows share a common path/resource they have that potential; the strength of that coupling, and how to decouple them, is what is useful to understand. It does not take much "randomness" (i.e. perturbation of streams' arrival patterns) to radically reduce that coupling - thankfully such randomness tends to occur due to issues of differential path length (hence delay).

Must admit I like randomness (in limited amounts) - it is very useful - CDMA is just one example of such.

> The one exception where flows average out at a constant rate is when there is a "bottleneck". Then, there being no more capacity, the constant rate is forced, not by statistical averaging but by a very different process. One that is almost never desirable.
>
> This is just what is observed in case after case. Designers may imagine that their networks have "smooth averaging" properties. There's a strong thread in networking literature that makes this pretty-much-always-false assumption the basis of protocol designs, thinking about "Quality of Service" and other sorts of things. You can teach graduate students about a reality that does not exist, and get papers accepted in conferences where the reviewers have been trained in the same tradition of unreal assumptions.

Agreed - there is a massive disconnect between a lot of the literature (and the people who make their living generating it - [to those people, please don't take offence, queueing theory is really useful, it is just that the real world is a lot more non-stationary than you model]) and reality.

> 2. I work every day with "datacenter" networking and distributed systems on 10 GigE and faster Ethernet fabrics with switches and trunking. I see the packet flows driven by distributed computing in real systems. Whenever the sustained peak load on a switch path reaches 100%, that's not "good", that's not "efficient" resource usage. That is a situation where computing is experiencing huge wasted capacity due to network congestion that is dramatically slowing down the desired workload.

Imagine that there were two flows - one that required low latency (e.g. a real-time response, as it was part of a large distributed computation) and other flows that could make useful progress even if they suffered the delay (and, to some extent, the loss effects) of the other traffic.

If the operational scenario you are working in consists of a "mono service" (as you describe above) then there is no room for any differential service - I would contend that (as important as data-centre-style systems are) they are not a universal phenomenon.

It is my understanding that Google uses this two-tier notion to get high utilisation from their network interconnects while still preserving the performance of their services. I see large scale (i.e. public internets) not as a mono-service but as a "poly service" - there are multiple demands for timeliness etc. that exist out there for "real services".

> Again this is because *real workloads* in distributed computation don't have smooth or averagable rates over interconnects. Latency is everything in that application too!

Yep - understand that - designed and built large-scale message-passing supercomputers in the '80s and '90s - even wrote a book on how to construct, measure and analyse their interconnects. Still have 70+ Inmos transputers (and the cross-bar switching infrastructure) in the garage.

> Yes, because one buys switches from vendors who don't know how to build or operate a server or a database at all, you see vendors trying to demonstrate their amazing throughput, but the people who build these systems (me, for example) are not looking at throughput or statistical multiplexing at all! We use "throughput" as a proxy for "latency under load". (and it is a poor proxy! Because vendors throw in big buffers, causing bufferbloat. See Arista Networks' attempts to justify their huge buffers as a "good thing" -- when it is just a case of something you have to design around by clocking the packets so they never accumulate in a buffer).

Again - we are in violent agreement - this is the (misguided) belief of product managers that "more is better" - so they put more and more buffering into their systems.

> So, yes, the peak transfer rate matters, of course. And sometimes it is utilized for very good reason (when the latency of a file transfer as a whole is the latency that matters).
> But to be clear, just because as a user I want to download a Linux distro update as quickly as possible when it happens does NOT imply that the average load at any time scale is "statistically averaged" for residential networking. Quite the opposite! I buy Gigabit service to my house because I cannot predict when I will need it, but I almost never need it. My average rate (except once a month or so) is minuscule. This is true even though my house is a heavy user of Netflix.

Again - violent agreement - what matters is "the outcome"; bulk data transport is just one case (and, unfortunately, the one that appears most frequently in those papers mentioned above); what the Netflix user is interested in is "probability of a buffering event per watched hour" or "time to first frame being displayed".

Take heart - you are really not alone here, there are plenty of people in the telecoms industry that understand this (engineering, not marketing or senior management). What has happened is that people have been sold "top speed", and others (like the Googles and Netflixes of this world) are _extremely_ worried that if the transport quality of their data suffers their business models disappear.

Capacity planning for this is difficult - understanding the behavioural dynamics of (application-level) demand is what is needed. This is a large weakness in the planning of the digital supply chains of today.

> The way that Gigabit residential service affects my "quality of service" is almost entirely that I get good "response time" to unpredictable demands. How quickly a Netflix stream can fill its play buffer is the measure. The data rate of any Netflix stream is, on average, much, much less than a Gigabit. Buffers in the network would ruin my Netflix experience, because the buffering is best done at the "edge" as the End-to-End argument usually suggests. It's certainly NOT because of statistical multiplexing.

Not quite as violent agreement here - Netflix (once streaming) is not that sensitive to delay - a burst of 100ms-500ms for a second or so does not put their key outcome (assuring that the playout buffer does not empty) at too much risk.

We've worked with people who have created risks for Netflix delivery (accidentally, I might add - they thought they were doing "the right thing") by increasing their network infrastructure to 100G delivery everywhere. That change (combined with others made by CDN people - TCP offload engines) created so much non-stationarity in the load as to cause delay and loss spikes that *did* cause VoD playout buffers to empty. This is an example of where "more capacity" produced worse outcomes.

This is still a pretty young industry - plenty of room for new original research out there (but for those paper creators reading this - step away from the TCP bulk streams, they are not the thing that is really interesting; the dynamic behavioural aspects are much more interesting to mine for new papers).

> So when you are tempted to talk about "statistical multiplexing" smoothing out traffic flow, take a pause and think about whether that really makes sense as a description of reality.

I see "trad" statistical multiplexing as the way that the industry has conned itself into creating (probably) unsustainable delivery models - it has put itself on a "keep building bigger" approach just to stand still - all because it doesn't face up to the issues of managing "delay and loss" coherently: the inherent two degrees of freedom, and the fact that such attenuation is conserved.
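To make the "conserved" part concrete - a toy event-driven sketch in Python, rates invented for the example: two Poisson streams share one non-preemptive server, once scheduled FIFO and once with strict priority given to stream A. The per-stream mean waits move around dramatically, but the load-weighted sum rho_A*W_A + rho_B*W_B comes out (to within simulation noise) the same either way - the scheduler redistributes the waiting, it does not remove it:

import heapq, random

random.seed(7)

def simulate(policy, lam_a=0.3, lam_b=0.5, mu=1.0, horizon=200_000.0):
    """One non-preemptive server shared by two Poisson streams A and B.
    policy: 'fifo' or 'priority' (strict non-preemptive priority to A).
    Returns the mean waiting time (time spent queued) per class."""
    events = []                       # (time, kind, class)
    heapq.heappush(events, (random.expovariate(lam_a), "arr", "A"))
    heapq.heappush(events, (random.expovariate(lam_b), "arr", "B"))
    queue = []                        # waiting jobs as (key, arrival_time, class)
    busy = False
    waits = {"A": [], "B": []}

    def start_service(now):
        nonlocal busy
        _, at, cls = heapq.heappop(queue)
        waits[cls].append(now - at)
        heapq.heappush(events, (now + random.expovariate(mu), "dep", cls))
        busy = True

    while events:
        now, kind, cls = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arr":
            lam = lam_a if cls == "A" else lam_b
            heapq.heappush(events, (now + random.expovariate(lam), "arr", cls))
            # queue key: FIFO = arrival time; priority = (class rank, arrival time)
            key = now if policy == "fifo" else (0 if cls == "A" else 1, now)
            heapq.heappush(queue, (key, now, cls))
            if not busy:
                start_service(now)
        else:                         # departure: serve the next waiting job, if any
            busy = False
            if queue:
                start_service(now)

    return {c: sum(w) / len(w) for c, w in waits.items()}

rho_a, rho_b = 0.3 / 1.0, 0.5 / 1.0   # offered load per class (toy values)
for policy in ("fifo", "priority"):
    w = simulate(policy)
    print(policy, {k: round(v, 3) for k, v in w.items()},
          " rho-weighted sum:", round(rho_a * w["A"] + rho_b * w["B"], 3))

(That invariant is the conservation law I refer to below via Kleinrock - for work-conserving, non-preemptive disciplines.)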
> fq_codel is a good thing because it handles the awkward behavior at "peak load". It smooths out the impact of running out of resources. But that impact is still undesirable - if many Netflix flows are adding up to peak load, a new Netflix flow can't start very quickly. That results in terrible QoS from a Netflix user's point of view.

I would suggest that there are other ways of dealing with the impact of "peak" (i.e. where instantaneous demand exceeds supply over a long enough timescale to start affecting the most delay/loss-sensitive application in the collective multiplexed stream). I would also agree that if all the streams have the same "bound on delay and loss" requirements (i.e. *all* Netflix), then at 100%+ of load (over, again, the appropriate timescale - which for Netflix VoD streaming is about 20s to 30s) end-user disappointment is the only thing that can occur.

Again, not intended to troll - I think we are agreeing that the current approaches (as per most literature / received wisdom) have just about run their course - my assertion is that the mathematics needed is out there (it is _not_ traditional queueing theory, but it does spring from similar roots).

Cheers

Neil

> On Wednesday, December 13, 2017 11:41am, "Jonathan Morton" <chromatix99@gmail.com> said:
>
> > Have you considered what this means for the economics of the operation of networks? What other industry that "moves things around" (i.e. logistical or similar) creates a solution in which they have 10x as much infrastructure as their peak requirement?
>
> Ten times peak demand? No.
> Ten times average demand estimated at time of deployment, and struggling badly with peak demand a decade later, yes. And this is the transportation industry, where a decade is a *short* time - like less than a year in telecoms.
>
> - Jonathan Morton
>
> On 13 Dec 2017 17:27, "Neil Davies" <neil.davies@pnsol.com> wrote:
>
> On 12 Dec 2017, at 22:53, dpreed@reed.com wrote:
>
> Luca's point tends to be correct - variable latency destroys the stability of flow control loops, which destroys throughput, even when there is sufficient capacity to handle the load.
>
> This is an indirect result of Little's Lemma (which is strictly true only for Poisson arrival, but almost any arrival process will have a similar interaction between latency and throughput).
>
> Actually it is true for general arrival patterns (can't lay my hands on the reference for the moment - but it was a while back that it was shown) - what this points to is an underlying conservation law - that "delay and loss" are conserved in a scheduling process. This comes out of the M/M/1/K/K queueing system and associated analysis.
>
> There is a conservation law (and Kleinrock refers to this - at least in terms of delay - in 1965 - http://onlinelibrary.wiley.com/doi/10.1002/nav.3800120206/abstract) at work here.
> All scheduling systems can do is "distribute" the resulting "delay and loss" differentially amongst the (instantaneous set of) competing streams.
>
> Let me just repeat that - the "delay and loss" is a conserved quantity - scheduling can't "destroy" it (it can influence higher-level protocol behaviour) but it cannot reduce the total amount of "delay and loss" that is being induced into the collective set of streams...
>
> However, the other reason I say what I say so strongly is this:
>
> Rant on.
>
> Peak/avg. load ratio always exceeds a factor of 10 or more, IRL. Only "benchmark setups" (or hot-rod races done for academic reasons or marketing reasons to claim some sort of "title") operate at peak supportable load any significant part of the time.
>
> Have you considered what this means for the economics of the operation of networks? What other industry that "moves things around" (i.e. logistical or similar) creates a solution in which they have 10x as much infrastructure as their peak requirement?
>
> The reason for this is not just "fat pipes are better", but because bitrate of the underlying medium is an insignificant fraction of systems operational and capital expense.
>
> Agree that (if you are the incumbent that 'owns' the low-level transmission medium) this is true (though the costs of lighting a new lambda are not trivial) - but that is not the experience of anyone else in the digital supply chain.
>
> SLA's are specified in "uptime" not "bits transported", and a clogged pipe is defined as down when latency exceeds a small number.
>
> Do you have any evidence you can reference for an SLA that treats a few ms as "down"? Most of the SLAs I've had dealings with use averages over fairly long time periods (e.g. a month) - and there is no quality in averages.
>
> Typical operating points of corporate networks where the users are happy are single-digit percentage of max load.
>
> Or less - they also detest the costs that they have to pay the network providers to try and de-risk their applications. There is also the issue that, because they measure averages (over 5 min to 15 min), they completely fail to capture (for example) the 15 seconds when delay and jitter were high so the CEO's video conference broke up.
>
> This is also true of computer buses and memory controllers and storage interfaces IRL. Again, latency is the primary measure, and the system never focuses on operating points anywhere near max throughput.
>
> Agreed - but wouldn't it be nice if they could? I've worked on h/w systems where we have designed systems to run near limits (the set-top box market is pretty cut-throat, and the closer to saturation you can run and still deliver the acceptable outcome, the cheaper the box and the greater the profit margin for the set-top box provider).
>
> Rant off.
>
> Cheers
> Neil
>
> On Tuesday, December 12, 2017 1:36pm, "Dave Taht" <dave@taht.net> said:
>
> >
> > Luca Muscariello <luca.muscariello@gmail.com> writes:
> >
> > > I think everything is about response time, even throughput.
> > >
> > > If we compare the time to transmit a single packet from A to B, including propagation delay, transmission delay and queuing delay, to the time to move a much larger amount of data from A to B, we use throughput in this second case because it is a normalized quantity w.r.t. response time (bytes over delivery time). For a single transmission we tend to use latency. But in the end response time is what matters.
> > >
> > > Also, even instantaneous throughput is well defined only for a time scale which has to be much larger than the min RTT (propagation + transmission delays). Agree also that looking at video, latency and latency budgets are better quantities than throughput. At least more accurate.
> > >
> > > On Fri, Dec 8, 2017 at 8:05 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> > >
> > > On Mon, 4 Dec 2017, dpreed@reed.com wrote:
> > >
> > > I suggest we stop talking about throughput, which has been the mistaken idea about networking for 30-40 years.
> > >
> > > We need to talk both about latency and speed. Yes, speed is talked about too much (relative to RTT), but it's not irrelevant.
> > >
> > > Speed of light in fiber means RTT is approx 1ms per 100km, so from Stockholm to SFO my RTT is never going to be significantly below 85ms (8625km great circle). It's currently twice that.
> > >
> > > So we just have to accept that some services will never be deliverable across the wider Internet, but have to be deployed closer to the customer (as per your examples, some need 1ms RTT to work well), and we need lower access latency and lower queuing delay. So yes, agreed.
> > >
> > > However, I am not going to concede that speed is "mistaken idea about networking". No amount of smarter queuing is going to fix the problem if I don't have enough throughput available to me that I need for my application.
> >
> > In terms of the bellcurve here, throughput has increased much more rapidly than latency has decreased, for most, and in an increasing majority of human-interactive cases (like video streaming), we often have enough throughput.
> >
> > And the age-old argument regarding "just have overcapacity, always" tends to work in these cases.
> >
> > I tend not to care as much about how long it takes for things that do not need R/T deadlines as humans and as steering wheels do.
> >
> > Propagation delay, while ultimately bound by the speed of light, is also affected by the wires wrapping indirectly around the earth - much slower than would be possible if we worked at it:
> >
> > https://arxiv.org/pdf/1505.03449.pdf
> >
> > Then there's inside the boxes themselves:
> >
> > A lot of my struggles of late have been to get latencies and adequate sampling techniques down below 3ms (my previous value for starting to reject things due to having too much noise) - and despite trying fairly hard, well... a process can't even sleep accurately much below 1ms, on bare metal linux. A dream of mine has been 8 channel high quality audio, with a video delay of not much more than 2.7ms for AR applications.
> >
> > For comparison, an idle quad core aarch64 and dual core x86_64:
> >
> > root@nanopineo2:~# irtt sleep
> >
> > Testing sleep accuracy...
> >
> > Sleep Duration   Mean Error   % Error
> >   1ns            13.353µs     1335336.9
> >   10ns           14.34µs      143409.5
> >   100ns          13.343µs     13343.9
> >   1µs            12.791µs     1279.2
> >   10µs           148.661µs    1486.6
> >   100µs          150.907µs    150.9
> >   1ms            168.001µs    16.8
> >   10ms           131.235µs    1.3
> >   100ms          145.611µs    0.1
> >   200ms          162.917µs    0.1
> >   500ms          169.885µs    0.0
> >
> > d@nemesis:~$ irtt sleep
> >
> > Testing sleep accuracy...
> >
> > Sleep Duration   Mean Error   % Error
> >   1ns            668ns        66831.9
> >   10ns           672ns        6723.7
> >   100ns          557ns        557.6
> >   1µs            57.749µs     5774.9
> >   10µs           63.063µs     630.6
> >   100µs          67.737µs     67.7
> >   1ms            153.978µs    15.4
> >   10ms           169.709µs    1.7
> >   100ms          186.685µs    0.2
> >   200ms          176.859µs    0.1
> >   500ms          177.271µs    0.0
> >
> > > --
> > > Mikael Abrahamsson    email: swmike@swm.pp.se
> > > _______________________________________________
> > > Bloat mailing list
> > > Bloat@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/bloat
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat