From: Bob McMahon <bob.mcmahon@broadcom.com>
Date: Wed, 14 Jul 2021 11:37:58 -0700
To: "David P. Reed" <dpreed@deepplum.com>
Cc: Amr Rizk, Ben Greear, starlink@lists.bufferbloat.net, Make-Wifi-fast, Leonard Kleinrock, Cake List, codel@lists.bufferbloat.net, cerowrt-devel, bloat
Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point

Thanks for this. I find it both interesting and useful. Learning from those who came before me reminds me of "standing on the shoulders of giants." I try to teach my kids that it's not so much us as the giants we choose - so choose judiciously and, more importantly, be grateful when they provide their shoulders from which to see.

One challenge I faced with iperf 2 was around flow control's effects on latency. I find that if iperf 2 rate limits on writes, then the end-to-end latencies (RTT) look good because the pipe is basically empty, while rate limiting reads to the same value fills the window and drives the RTT up.
One might conclude, from a network perspective, that the write side is better. But in reality, the write-side rate limiting is just pushing the delay into the application's logic, i.e. the relevant bytes may not be in the pipe, but they aren't at the receiver either; they're stuck somewhere in the "tx application space." It wasn't obvious to me how to address this. We added burst measurements (burst xfer time, and bursts/sec) which, I think, help.

Bob

On Tue, Jul 13, 2021 at 10:49 AM David P. Reed <dpreed@deepplum.com> wrote:

> Bob -
>
> On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <bob.mcmahon@broadcom.com> said:
>
> > "Control at endpoints benefits greatly from even small amounts of
> > information supplied by the network about the degree of congestion present
> > on the path."
> >
> > Agreed. The ECN mechanism seems like a shared thermostat in a building.
> > It's basically an on/off control where everyone is trying to set the
> > temperature. It does have an effect - a non-linear one, but still an
> > effect. Better than a thermostat set at infinity or 0 Kelvin, for sure.
> >
> > I find the assumption that congestion occurs "in network" to be not always
> > true. Taking OWD measurements with read-side rate limiting suggests that
> > equally important to mitigating bufferbloat-driven latency using congestion
> > signals is making sure apps read "fast enough," whatever that means. I
> > rarely hear about how important it is for apps to prioritize reads over
> > open sockets. Not sure why that's overlooked and bufferbloat gets all the
> > attention. I'm probably missing something.
>
> In the early days of the Internet protocol, and even the ARPANET Host-Host
> protocol, there were those who conflated host-level "flow control" (matching
> the production rate of data into the network to the destination *process's*
> consumption rate of data on a virtual circuit, with a source capable of
> variable and unbounded bit rate) with "congestion control" in the network.
> The term "congestion control" wasn't even used in the Internetworking
> project when it was discussing design in the late 1970's. I tried to use it
> in our working group meetings, and every time I said "congestion" the
> response would be phrased as "flow".
>
> The classic example was printing a file's contents from disk to an ASR33
> terminal on a TIP (Terminal IMP). There was flow control in the end-to-end
> protocol to avoid overflowing the TTY's limited buffer. But those who grew
> up with ARPANET knew that there was no way to accumulate queueing in the
> IMP network, because of RFNMs that required permission for each new packet
> to be sent. RFNMs implicitly prevented congestion from being caused by a
> virtual circuit. But a flow control problem remained, because at the higher
> level protocol, buffering would overflow at the TIP.
>
> TCP adopted a different end-to-end *flow* control, so it solved the flow
> control problem by creating a windowing mechanism. But it did not by itself
> solve the *congestion* control problem, even congestion built up inside the
> network by a wide-open window and a lazy operating system at the receiving
> end that just said, "I've got a lot of virtual memory, so I'll open the
> window to maximum size."
>
> There was a lot of confusion, because the guys who came from the ARPANET
> environment, with all links being the same speed and RFNM limits on rate,
> couldn't see why the Internet stack was so collapse-prone. I think Multics,
> for example, as a giant virtual memory system, caused congestion by opening
> up its window too much.
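The "wide-open window" failure mode described above can be put into numbers: any window in excess of the path's bandwidth-delay product (BDP) sits as a standing queue at the bottleneck. A minimal sketch, with assumed illustrative figures (not from the thread):

```python
# Back-of-envelope model (assumed figures): a sender that keeps a full
# window W in flight over a path whose bandwidth-delay product is BDP
# parks the excess W - BDP in the bottleneck queue, adding delay.

def standing_queue_delay(window_bytes, bottleneck_bps, base_rtt_s):
    """Extra queueing delay caused by a window larger than the BDP."""
    bdp_bytes = bottleneck_bps / 8 * base_rtt_s
    excess_bytes = max(0.0, window_bytes - bdp_bytes)
    return excess_bytes * 8 / bottleneck_bps  # seconds of standing queue

# 10 Mbit/s bottleneck, 40 ms base RTT -> BDP = 50 kB
bdp_delay = standing_queue_delay(50_000, 10e6, 0.040)     # window == BDP
big_delay = standing_queue_delay(1_000_000, 10e6, 0.040)  # "wide-open" 1 MB window

print(f"window == BDP: {bdp_delay * 1000:.0f} ms extra queueing delay")
print(f"1 MB window  : {big_delay * 1000:.0f} ms extra queueing delay")
```

On this (assumed) path, a receiver that opens its window to 1 MB adds roughly three-quarters of a second of queueing delay all by itself, with no gain in steady-state throughput.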
>
> This is where Van Jacobson discovered that dropped packets were a "good
> enough" congestion signal because of "fate sharing" among the packets that
> flowed on a bottleneck path, and that windowing (invented for flow control
> by the receiver, to protect itself from overflow if it couldn't receive
> fast enough) could be used to slow down the sender to match the rate of
> senders to the capacity of the internal bottleneck link. An elegant "hack"
> that actually worked really well in practice.
>
> Now we view it as a bug if the receiver opens its window too much, or
> otherwise doesn't translate dropped packets (or other incipient-congestion
> signals) into shutting down the source transmission rate as quickly as
> possible. Fortunately, the proper state of the internet - the one it should
> seek as its ideal state - is that there is at most one packet waiting for
> each egress link in the bottleneck path. This stable state ensures that the
> window-reduction or slow-down signal encounters no congestion, with high
> probability. [Excursions from the one-packet queue occur, but since one
> packet waiting is sufficient to fill the bottleneck link to capacity, they
> can't achieve higher throughput in steady state. In practice, noisy arrival
> distributions can reduce throughput, so allowing a small number of packets
> to be waiting in a bottleneck link's queue can slightly increase
> throughput. That's not asymptotically relevant, but as mentioned, the
> Internet is never near asymptotic behavior.]
>
> >
> > Bob
> >
> > On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:
> >
> >> Ben,
> >>
> >> it depends on what one tries to measure.
> >> Doing a rate scan using UDP
> >> (to measure latency distributions under load) is the best thing that we
> >> have, but without actually knowing how resources are shared (fair share
> >> as in WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> >> interpret the results or provide a proper argument on latency. You are
> >> right - TCP stats are a proxy for user experience, but I believe they
> >> are difficult to reproduce (we are always talking about very short TCP
> >> flows - the infinite TCP flow that converges to a steady behavior is
> >> purely academic).
> >>
> >> By the way, Little's law is a strong tool when it comes to averages. To
> >> be able to say more (e.g. that 1% of the delays are larger than x) one
> >> requires more information (e.g. the traffic's on-off pattern); see [1].
> >> I am not sure whether such information readily exists.
> >>
> >> Best
> >> Amr
> >>
> >> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or, if behind a
> >> paywall, https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
> >>
> >> --------------------------------
> >> Amr Rizk (amr.rizk@uni-due.de)
> >> University of Duisburg-Essen
> >>
> >> -----Original Message-----
> >> From: Bloat <bloat-bounces@lists.bufferbloat.net> On Behalf Of Ben Greear
> >> Sent: Monday, 12 July 2021 22:32
> >> To: Bob McMahon <bob.mcmahon@broadcom.com>
> >> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast
> >> <make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
> >> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>;
> >> codel@lists.bufferbloat.net; cerowrt-devel
> >> <cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> >> Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my
> >> main point
> >>
> >> UDP is better for getting actual packet latency, for sure. TCP is
> >> typical-user-experience latency though, so it is also useful.
> >>
> >> I'm interested in the test and visualization side of this.
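Amr's point above that Little's law speaks to averages can be checked numerically: L = lambda * W holds for a single-server FIFO queue under essentially any service-time distribution, while tail statements (e.g. the 1% quantile) require extra information about the traffic. A minimal simulation sketch, with assumed parameters:

```python
import random
import bisect

# Minimal single-server FIFO queue (illustrative, parameters assumed):
# verify Little's law L = lambda * W, i.e. the time-average number in
# system equals arrival rate times average time in system, independent
# of the service-time distribution.

random.seed(42)
lam, n = 0.5, 50_000                 # arrival rate, number of customers

t = 0.0
arrivals = []
for _ in range(n):
    t += random.expovariate(lam)     # Poisson arrivals
    arrivals.append(t)

departures = []
sojourns = []
free_at = 0.0
for a in arrivals:
    start = max(a, free_at)                      # FIFO: wait for the server
    free_at = start + random.uniform(0.2, 1.0)   # arbitrary service times
    departures.append(free_at)                   # non-decreasing under FIFO
    sojourns.append(free_at - a)                 # time in system, W_i

T = departures[-1]                   # observation horizon
W = sum(sojourns) / n                # average time in system
lam_eff = n / T                      # observed arrival rate

# Time-average number in system, estimated by sampling N(t) at random times.
samples = [random.uniform(0.0, T) for _ in range(5_000)]
L_sampled = sum(
    bisect.bisect_right(arrivals, s) - bisect.bisect_right(departures, s)
    for s in samples
) / len(samples)

print(f"L (sampled) = {L_sampled:.3f}  vs  lambda*W = {lam_eff * W:.3f}")
```

The two numbers agree (up to sampling noise) whatever service distribution is plugged in; what Little's law does not give is the shape of the delay distribution, which is Amr's point.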
> >> If there were
> >> a way to give engineers a good real-time look at a complex real-world
> >> network, then they would have something to go on while trying to tune
> >> various knobs in their network to improve it.
> >>
> >> I'll let others try to figure out how to build and tune the knobs, but
> >> the data acquisition and visualization is something we might try to
> >> accomplish. I have a feeling I'm not the first person to think of this,
> >> however... probably someone has already done such a thing.
> >>
> >> Thanks,
> >> Ben
> >>
> >> On 7/12/21 1:04 PM, Bob McMahon wrote:
> >> > I believe end hosts' TCP stats are insufficient, as seen per the
> >> > "failed" congestion control mechanisms over the last decades. I think
> >> > Jaffe pointed this out in 1979, though he was using what's been deemed
> >> > on this thread "spherical cow queueing theory":
> >> >
> >> > "Flow control in store-and-forward computer networks is appropriate
> >> > for decentralized execution. A formal description of a class of
> >> > "decentralized flow control algorithms" is given. The feasibility of
> >> > maximizing power with such algorithms is investigated. On the
> >> > assumption that communication links behave like M/M/1 servers it is
> >> > shown that no "decentralized flow control algorithm" can maximize
> >> > network power. Power has been suggested in the literature as a network
> >> > performance objective. It is also shown that no objective based only
> >> > on the users' throughputs and average delay is decentralizable.
> >> > Finally, a restricted class of algorithms cannot even approximate
> >> > power."
> >> >
> >> > https://ieeexplore.ieee.org/document/1095152
> >> >
> >> > Did Jaffe make a mistake?
> >> >
> >> > Also, it's been observed that latency is non-parametric in its
> >> > distributions, and computing Gaussians per the central limit theorem
> >> > for OWD feedback loops isn't effective.
> >> > How does one design a control
> >> > loop around things that are non-parametric? It also begs the question:
> >> > what are the feed-forward knobs that can actually help?
> >> >
> >> > Bob
> >> >
> >> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com> wrote:
> >> >
> >> >     Measuring one or a few links provides a bit of data, but it seems
> >> >     like if someone is trying to understand a large and real network,
> >> >     then the OWD between point A and B needs to just be input into
> >> >     something much more grand. Assuming real-time OWD data exists
> >> >     between 100 to 1000 endpoint pairs, has anyone found a way to
> >> >     visualize this in a useful manner?
> >> >
> >> >     Also, considering that something better than NTP may not really
> >> >     scale to 1000+ endpoints, maybe round-trip time is the only viable
> >> >     way to get this type of data. In that case, maybe clever logic
> >> >     could use things like traceroute to get some idea of how long it
> >> >     takes to get 'onto' the internet proper, and so estimate the
> >> >     last-mile latency. My assumption is that the last-mile latency is
> >> >     where most of the pervasive asymmetric network latencies would
> >> >     exist (or just ping 8.8.8.8, which is 20 ms from everywhere due to
> >> >     $magic).
> >> >
> >> >     Endpoints could also triangulate a bit if needed, using some
> >> >     anchor points in the network under test.
> >> >
> >> >     Thanks,
> >> >     Ben
> >> >
> >> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >> >     > iperf 2 supports OWD and gives full histograms for TCP write to
> >> >     > read, TCP connect times, latency of packets (with UDP), latency
> >> >     > of "frames" with simulated video traffic (TCP and UDP), xfer
> >> >     > times of bursts with low-duty-cycle traffic, and TCP RTT
> >> >     > (sampling based). It also has support for sampling (per-interval
> >> >     > reports) down to 100 usecs if configured with
> >> >     > --enable-fastsampling; otherwise the fastest sampling is 5 ms.
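The OWD histograms mentioned above are only as good as the clock synchronization behind them: a fixed receiver-clock offset shifts every one-way-delay reading by the full offset, while an RTT stamped on a single clock is immune. A toy model with assumed numbers (not iperf code):

```python
# Toy model (assumed numbers): a receiver clock running delta seconds
# ahead of the sender corrupts one-way-delay readings by exactly delta,
# while RTT measured entirely on the sender's clock cancels the offset.

def measured_owd(true_owd, clock_offset):
    # Receiver timestamps the arrival with its own (offset) clock.
    return true_owd + clock_offset

def measured_rtt(owd_fwd, owd_rev):
    # Send and receive times are read from the same sender clock,
    # so any receiver offset cancels out of the difference.
    return owd_fwd + owd_rev

true_fwd, true_rev, offset = 0.010, 0.012, 0.005  # 10 ms, 12 ms, +5 ms skew

owd = measured_owd(true_fwd, offset)    # reported 15 ms for a 10 ms path
rtt = measured_rtt(true_fwd, true_rev)  # 22 ms, unaffected by the offset
print(f"measured OWD = {owd * 1000:.0f} ms, RTT = {rtt * 1000:.0f} ms")
```

A 5 ms offset is a 50% error on a 10 ms one-way path, which is why sub-millisecond synchronization (e.g. PTP) is a prerequisite for trustworthy OWD numbers.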
> >> >     > We've released all this as open source.
> >> >     >
> >> >     > OWD only works if the end realtime clocks are synchronized
> >> >     > using a "machine level" protocol such as IEEE 1588 or PTP.
> >> >     > Sadly, *most data centers don't provide a sufficient level of
> >> >     > clock accuracy and the GPS pulse per second* to colo and VM
> >> >     > customers.
> >> >     >
> >> >     > https://iperf2.sourceforge.io/iperf-manpage.html
> >> >     >
> >> >     > Bob
> >> >     >
> >> >     > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed
> >> >     > <dpreed@deepplum.com> wrote:
> >> >     >
> >> >     >     On Monday, July 12, 2021 9:46am, "Livingood, Jason"
> >> >     >     <Jason_Livingood@comcast.com> said:
> >> >     >
> >> >     >     > I think latency/delay is becoming seen to be as important,
> >> >     >     certainly, if not a more direct proxy for end-user QoE. This
> >> >     >     is all still evolving, and I have to say it is a super
> >> >     >     interesting & fun thing to work on. :-)
> >> >     >
> >> >     >     If I could manage to sell one idea to the management
> >> >     >     hierarchy of communications industry CEOs (operators,
> >> >     >     vendors, ...) it is this one:
> >> >     >
> >> >     >     "It's the end-to-end latency, stupid!"
> >> >     >
> >> >     >     And I mean, by end-to-end, latency to complete a task at a
> >> >     >     relevant layer of abstraction.
> >> >     >
> >> >     >     At the link level, it's packet send to packet receive
> >> >     >     completion.
> >> >     >
> >> >     >     But at the transport level, including retransmission
> >> >     >     buffers, it's datagram (or message) origination until the
> >> >     >     acknowledgement arrives for that message being delivered
> >> >     >     after whatever number of retransmissions, freeing the
> >> >     >     retransmission buffer.
> >> >     >
> >> >     >     At the WWW level, it's mouse click to display update
> >> >     >     corresponding to completion of the request.
> >> >     >
> >> >     >     What should be noted is that lower-level latencies don't
> >> >     >     directly predict the magnitude of higher-level latencies.
> >> >     >     But longer lower-level latencies almost always amplify
> >> >     >     higher-level latencies, often non-linearly.
> >> >     >
> >> >     >     Throughput is very, very weakly related to these latencies,
> >> >     >     in contrast.
> >> >     >
> >> >     >     The amplification process has to do with the presence of
> >> >     >     queueing. Queueing is ALWAYS bad for latency, and throughput
> >> >     >     only helps if it is in exactly the right place (the
> >> >     >     so-called input queue of the bottleneck process, which is
> >> >     >     often a link, but not always).
> >> >     >
> >> >     >     Can we get that slogan into Harvard Business Review? Can we
> >> >     >     get it taught in Managerial Accounting at HBS? (which does
> >> >     >     address logistics/supply-chain queueing).
> >> >
> >> >     This electronic communication and the information and any files
> >> >     transmitted with it, or attached to it, are confidential and are
> >> >     intended solely for the use of the individual or entity to whom it
> >> >     is addressed and may contain information that is confidential,
> >> >     legally privileged, protected by privacy laws, or otherwise
> >> >     restricted from disclosure to anyone else. If you are not the
> >> >     intended recipient or the person responsible for delivering the
> >> >     e-mail to the intended recipient, you are hereby notified that any
> >> >     use, copying, distributing, dissemination, forwarding, printing,
> >> >     or copying of this e-mail is strictly prohibited. If you received
> >> >     this e-mail in error, please return the e-mail to the sender,
> >> >     delete it from your computer, and destroy any printed copy of it.
> >> > > >> > > >> > -- > >> > Ben Greear greearb@candelatech.com > >> >> > >> > Candela Technologies Inc http://www.candelatech.com > >> > > >> > > >> > This electronic communication and the information and any files > >> > transmitted with it, or attached to it, are confidential and are > >> > intended solely for the use of the individual or entity to whom it i= s > >> > addressed and may contain information that is confidential, legally > >> > privileged, protected by privacy laws, or otherwise restricted from > >> disclosure to anyone else. If you are not the intended recipient or th= e > >> person responsible for delivering the e-mail to the intended recipient= , > you > >> are hereby notified that any use, copying, distributing, dissemination= , > >> forwarding, printing, or copying of this e-mail is strictly prohibited= . > If > >> you received this e-mail in error, please return the e-mail to the > sender, > >> delete it from your computer, and destroy any printed copy of it. > >> > >> > >> -- > >> Ben Greear > >> Candela Technologies Inc http://www.candelatech.com > >> > >> _______________________________________________ > >> Bloat mailing list > >> Bloat@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/bloat > >> > >> > > > > -- > > This electronic communication and the information and any files > transmitted > > with it, or attached to it, are confidential and are intended solely fo= r > > the use of the individual or entity to whom it is addressed and may > contain > > information that is confidential, legally privileged, protected by > privacy > > laws, or otherwise restricted from disclosure to anyone else. If you ar= e > > not the intended recipient or the person responsible for delivering the > > e-mail to the intended recipient, you are hereby notified that any use, > > copying, distributing, dissemination, forwarding, printing, or copying = of > > this e-mail is strictly prohibited. 
If you received this e-mail in erro= r, > > please return the e-mail to the sender, delete it from your computer, a= nd > > destroy any printed copy of it. > > > > > --=20 This electronic communication and the information and any files transmitted= =20 with it, or attached to it, are confidential and are intended solely for=20 the use of the individual or entity to whom it is addressed and may contain= =20 information that is confidential, legally privileged, protected by privacy= =20 laws, or otherwise restricted from disclosure to anyone else. If you are=20 not the intended recipient or the person responsible for delivering the=20 e-mail to the intended recipient, you are hereby notified that any use,=20 copying, distributing, dissemination, forwarding, printing, or copying of= =20 this e-mail is strictly prohibited. If you received this e-mail in error,= =20 please return the e-mail to the sender, delete it from your computer, and= =20 destroy any printed copy of it. --000000000000f977e805c719a711 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
Thanks for this. I find it both interesting and useful. Le= arning=C2=A0from those who came before me reminds me of "standing on t= he shoulders=C2=A0of giants." I try to teach my kids that it's not= so much us as the giants we choose - so choose judiciously=C2=A0and, more = importantly, be grateful when they provide their shoulders from which to se= e.=C2=A0

One challenge I faced with iperf 2 was around flow control&= #39;s effects on latency. I find if iperf 2 rate limits on writes then the = end/end latencies, RTT look good because the pipe is basically empty, while= rate limiting reads to the same value fills the window and drives the RTT = up. One might conclude, from a network perspective, the write side is bette= r.=C2=A0 But in reality, the=C2=A0write rate limiting is just pushing the d= elay into the application's logic, i.e. the relevant bytes may not be i= n the pipe but they aren't at the receiver=C2=A0either, they're stu= ck somewhere in the "tx application space."

It wasn't = obvious=C2=A0to me how to address this. We added burst measurements (burst = xfer time, and bursts/sec) which, I think,=C2=A0helps.

Bob

=
On Tue, Ju= l 13, 2021 at 10:49 AM David P. Reed <dpreed@deepplum.com> wrote:
Bob -

On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <bob.mcmahon@broadcom.com> said:

> "Control at endpoints benefits greatly from even small amounts of=
> information supplied by the network about the degree of congestion pre= sent
> on the path."
>
> Agreed. The ECN mechanism seems like a shared thermostat in a building= .
> It's basically an on/off where everyone is trying to set the tempe= rature.
> It does affect, in a non-linear manner, but still an effect. Better th= an a
> thermostat set at infinity or 0 Kelvin for sure.
>
> I find the assumption that congestion occurs "in network" as= not always
> true. Taking OWD measurements with read side rate limiting suggests th= at
> equally important to mitigating bufferbloat driven latency using conge= stion
> signals is to make sure apps read "fast enough" whatever tha= t means. I
> rarely hear about how important it is for apps to prioritize reads ove= r
> open sockets. Not sure why that's overlooked and bufferbloat gets = all the
> attention. I'm probably missing something.

In the early days of the Internet protocol and also even ARPANET Host-Host = protocol there were those who conflated host-level "flow control"= (matching production rate of data into the network to the destination *pro= cess* consumption rate of data on a virtual circuit with a source capable o= f variable and unbounded bit rate) with "congestion control" in t= he network. The term "congestion control" wasn't even used in= the Internetworking project when it was discussing design in the late 1970= 's. I tried to use it in our working group meetings, and every time I s= aid "congestion" the response would be phrased as "flow"= ;.

The classic example was printing a file's contents from disk to an ASR3= 3 terminal on an TIP (Terminal IMP). There was flow control in the end-to-e= nd protocol to avoid overflowing the TTY's limited buffer. But those wh= o grew up with ARPANET knew that thare was no way to accumulate queueing in= the IMP network, because of RFNM's that required permission for each n= ew packet to be sent. RFNM's implicitly prevented congestion from being= caused by a virtual circuit. But a flow control problem remained, because = at the higher level protocol, buffering would overflow at the TIP.

TCP adopted a different end-to-end *flow* control, so it solved the flow co= ntrol problem by creating a Windowing mechanism. But it did not by itself s= olve the *congestion* control problem, even congestion built up inside the = network by a wide-open window and a lazy operating system at the receiving = end that just said, I've got a lot of virtual memory so I'll open t= he window to maximum size.

There was a lot of confusion, because the guys who came from the ARPANET en= vironment, with all links being the same speed and RFNM limits on rate, cou= ldn't see why the Internet stack was so collapse-prone. I think Multics= , for example, as a giant virtual memory system caused congestion by openin= g up its window too much.

This is where Van Jacobson discovered that dropped packets were a "goo= d enough" congestion signal because of "fate sharing" among = the packets that flowed on a bottleneck path, and that windowing (invented = for flow control by the receiver to protect itself from overflow if the rec= eiver couldn't receive fast enough) could be used to slow down the send= er to match the rate of senders to the capacity of the internal bottleneck = link. An elegant "hack" that actually worked really well in pract= ice.

Now we view it as a bug if the receiver opens its window too much, or other= wise doesn't translate dropped packets (or other incipient-congestion s= ignals) to shut down the source transmission rate as quickly as possible. F= ortunately, the proper state of the internet - the one it should seek as it= s ideal state - is that there is at most one packet waiting for each egress= link in the bottleneck path. This stable state ensures that the window-red= uction or slow-down signal encounters no congestion, with high probability.= [Excursions from one-packet queue occur, but since only one-packet waiting= is sufficient to fill the bottleneck link to capacity, they can't achi= eve higher throughput in steady state. In practice, noisy arrival distribut= ions can reduce throughput, so allowing a small number of packets to be wai= ting on a bottleneck link's queue can slightly increase throughput. Tha= t's not asymptotically relevant, but as mentioned, the Internet is neve= r near asymptotic behavior.]


>
> Bob
>
> On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <
amr@rizk.com.de> wrote:
>
>> Ben,
>>
>> it depends on what one tries to measure. Doing a rate scan using U= DP (to
>> measure latency distributions under load) is the best thing that w= e have
>> but without actually knowing how resources are shared (fair share = as in
>> WiFi, FIFO as nearly everywhere else) it becomes very difficult to=
>> interpret the results or provide a proper argument on latency. You= are
>> right - TCP stats are a proxy for user experience but I believe th= ey are
>> difficult to reproduce (we are always talking about very short TCP= flows -
>> the infinite TCP flow that converges to a steady behavior is purel= y
>> academic).
>>
>> By the way, Little's law is a strong tool when it comes to ave= rages. To be
>> able to say more (e.g. 1% of the delays is larger than x) one requ= ires more
>> information (e.g. the traffic - On-OFF pattern) see [1].=C2=A0 I a= m not sure
>> when does such information readily exist.
>>
>> Best
>> Amr
>>
>> [1] https://dl.acm.org/doi/10.1145/3341617.33= 26146 or if behind a paywall
>> https://www.dcs.warwick.ac.uk/~flori= n/lib/sigmet19b.pdf
>>
>> --------------------------------
>> Amr Rizk (amr.rizk@uni-due.de)
>> University of Duisburg-Essen
>>
>> -----Urspr=C3=BCngliche Nachricht-----
>> Von: Bloat <bloat-bounces@lists.bufferbloat.net> Im Auftra= g von Ben Greear
>> Gesendet: Montag, 12. Juli 2021 22:32
>> An: Bob McMahon <bob.mcmahon@broadcom.com>
>> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
>> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <= ;lk@cs.ucla.edu>= ;
>> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>= ;
>> c= odel@lists.bufferbloat.net; cerowrt-devel <
>> cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat= .net>
>> Betreff: Re: [Bloat] Little's Law mea culpa, but not invalidat= ing my main
>> point
>>
>> UDP is better for getting actual packet latency, for sure.=C2=A0 T= CP is
>> typical-user-experience-latency though, so it is also useful.
>>
>> I'm interested in the test and visualization side of this.=C2= =A0 If there were
>> a way to give engineers a good real-time look at a complex real-wo= rld
>> network, then they have something to go on while trying to tune va= rious
>> knobs in their network to improve it.
>>
>> I'll let others try to figure out how build and tune the knobs= , but the
>> data acquisition and visualization is something we might try to >> accomplish.=C2=A0 I have a feeling I'm not the first person to= think of this,
>> however....probably someone already has done such a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> > I believe end host's TCP stats are insufficient as seen p= er the
>> > "failed" congested control mechanisms over the last= decades. I think
>> > Jaffe pointed this out in
>> > 1979 though he was using what's been deemed on this threa= d as "spherical
>> cow queueing theory."
>> >
>> > "Flow control in store-and-forward computer networks is = appropriate
>> > for decentralized execution. A formal description of a class = of
>> > "decentralized flow control algorithms" is given. T= he feasibility of
>> > maximizing power with such algorithms is investigated. On the=
>> > assumption that communication links behave like M/M/1 servers= it is
>> shown that no "decentralized flow control algorithm" can= maximize network
>> power. Power has been suggested in the literature as a network per= formance
>> objective. It is also shown that no objective based only on the us= ers'
>> throughputs and average delay is decentralizable. Finally, a restr= icted
>> class of algorithms cannot even approximate power."
>> >
>> > https://ieeexplore.ieee.org/document/1095= 152
>> >
>> > Did Jaffe make a mistake?
>> >
>> > Also, it's been observed that latency is non-parametric i= n it's
>> > distributions and computing gaussians per the central limit t= heorem
>> > for OWD feedback loops aren't effective. How does one des= ign a control
>> loop around things that are non-parametric? It also begs the quest= ion, what
>> are the feed forward knobs that can actually help?
>> >
>> > Bob
>> >
>> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear
>> > <greearb@candelatech.com> wrote:
>> >
>> >     Measuring one or a few links provides a bit of data, but it seems
>> >     like if someone is trying to understand a large and real network,
>> >     then the OWD between point A and B needs to just be input into
>> >     something much more grand.  Assuming real-time OWD data exists
>> >     between 100 to 1000 endpoint pairs, has anyone found a way to
>> >     visualize this in a useful manner?
>> >
>> >     Also, considering something better than NTP may not really scale
>> >     to 1000+ endpoints, maybe round-trip time is the only viable way
>> >     to get this type of data.  In that case, maybe clever logic could
>> >     use things like traceroute to get some idea of how long it takes
>> >     to get 'onto' the internet proper, and so estimate the last-mile
>> >     latency.  My assumption is that the last-mile latency is where
>> >     most of the pervasive asymmetric network latencies would exist
>> >     (or just ping 8.8.8.8, which is 20 ms from everywhere due to
>> >     $magic).
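One cheap version of that traceroute idea (a hypothetical helper, not something from the thread): take the RTT to the first hop with a publicly routable address as the last-mile estimate:

```python
import ipaddress

# Hypothetical sketch: estimate last-mile latency from (hop_ip, rtt_ms)
# pairs such as a traceroute run would yield, by picking the RTT of the
# first hop that is "on the internet proper" (not a private address).
def last_mile_rtt_ms(hops):
    for ip, rtt in hops:
        if not ipaddress.ip_address(ip).is_private:
            return rtt
    return None  # trace never left the private network

hops = [("192.168.1.1", 1.2), ("10.0.0.1", 4.8), ("8.8.8.8", 18.5)]
print(last_mile_rtt_ms(hops))  # 18.5
```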
>> >
>> >     Endpoints could also triangulate a bit if needed, using some
>> >     anchor points in the network under test.
>> >
>> >     Thanks,
>> >     Ben
>> >
>> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
>> >      > iperf 2 supports OWD and gives full histograms for TCP write
>> >      > to read, TCP connect times, latency of packets (with UDP),
>> >      > latency of "frames" with simulated video traffic (TCP and
>> >      > UDP), xfer times of bursts with low duty cycle traffic, and
>> >      > TCP RTT (sampling based). It also has support for sampling
>> >      > (per interval reports) down to 100 usecs if configured with
>> >      > --enable-fastsampling; otherwise the fastest sampling is 5 ms.
>> >      > We've released all this as open source.
>> >      >
>> >      > OWD only works if the end realtime clocks are synchronized
>> >      > using a "machine level" protocol such as IEEE 1588 or PTP.
>> >      > Sadly, *most data centers don't provide a sufficient level of
>> >      > clock accuracy and the GPS pulse per second* to colo and VM
>> >      > customers.
>> >      >
>> >      > https://iperf2.sourceforge.io/iperf-manpage.html
>> >      >
>> >      > Bob
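Not iperf 2's actual code, just the arithmetic the OWD measurement depends on: a one-way delay is only meaningful after removing the receiver-vs-sender clock offset, which PTP/IEEE 1588 keeps near zero:

```python
# Sketch only: one-way delay from a transmit timestamp stamped by the
# sender's clock and a receive timestamp stamped by the receiver's clock.
# clock_offset_ns is the known receiver-minus-sender offset (assumed
# available from clock sync; without sync, OWD is meaningless).
def one_way_delay_ns(tx_ns, rx_ns, clock_offset_ns=0):
    """OWD = (receive timestamp - clock offset) - transmit timestamp."""
    return (rx_ns - clock_offset_ns) - tx_ns

# Perfectly synced clocks: sent at t=1_000_000 ns, received at
# t=1_750_000 ns gives a 750 us one-way delay.
print(one_way_delay_ns(1_000_000, 1_750_000))  # 750000
```

An uncorrected offset shows up one-for-one in the reported OWD, which is why unsynced NTP-grade clocks are not good enough at these scales.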
>> >      >
>> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed
>> >      > <dpreed@deepplum.com> wrote:
>> >      >
>> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason"
>> >      >     <Jason_Livingood@comcast.com> said:
>> >      >
>> >      >      > I think latency/delay is becoming seen to be as
>> >      >      > important certainly, if not a more direct proxy for
>> >      >      > end-user QoE. This is all still evolving and, I have
>> >      >      > to say, is a super interesting & fun thing to work
>> >      >      > on. :-)
>> >      >
>> >      >     If I could manage to sell one idea to the management
>> >      >     hierarchy of communications industry CEOs (operators,
>> >      >     vendors, ...) it is this one:
>> >      >
>> >      >     "It's the end-to-end latency, stupid!"
>> >      >
>> >      >     And I mean, by end-to-end, latency to complete a task at
>> >      >     a relevant layer of abstraction.
>> >      >
>> >      >     At the link level, it's packet send to packet receive
>> >      >     completion.
>> >      >
>> >      >     But at the transport level, including retransmission
>> >      >     buffers, it's datagram (or message) origination until the
>> >      >     acknowledgement arrives for that message being delivered
>> >      >     after whatever number of retransmissions, freeing the
>> >      >     retransmission buffer.
>> >      >
>> >      >     At the WWW level, it's mouse click to display update
>> >      >     corresponding to completion of the request.
>> >      >
>> >      >     What should be noted is that lower-level latencies don't
>> >      >     directly predict the magnitude of higher-level latencies.
>> >      >     But longer lower-level latencies almost always amplify
>> >      >     higher-level latencies, often non-linearly.
>> >      >
>> >      >     Throughput, in contrast, is very, very weakly related to
>> >      >     these latencies.
>> >      >
>> >      >     The amplification process has to do with the presence of
>> >      >     queueing. Queueing is ALWAYS bad for latency, and
>> >      >     throughput only helps if it is in exactly the right place
>> >      >     (the so-called input queue of the bottleneck process,
>> >      >     which is often a link, but not always).
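One worked number for the subject line (mine, not David's): Little's Law, L = lambda * W, ties a standing queue at the bottleneck directly to the latency it adds:

```python
# Little's Law: occupancy L = arrival rate lambda * waiting time W, so a
# standing queue of L bytes draining at lambda bytes/s adds W = L / lambda
# of delay to every packet passing through it. Numbers are illustrative.
link_bytes_per_s = 125_000_000      # 1 Gbit/s bottleneck
standing_queue_bytes = 1_250_000    # a 1.25 MB standing queue (bufferbloat)
queue_delay_s = standing_queue_bytes / link_bytes_per_s
print(f"{queue_delay_s * 1e3:.1f} ms of added latency")  # 10.0 ms
```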
>> >      >
>> >      >     Can we get that slogan into Harvard Business Review? Can
>> >      >     we get it taught in Managerial Accounting at HBS? (which
>> >      >     does address logistics/supply chain queueing).
>> >      >
>> >      > This electronic communication and the information and any
>> >      > files transmitted with it, or attached to it, are confidential
>> >      > and are intended solely for the use of the individual or
>> >      > entity to whom it is addressed and may contain information
>> >      > that is confidential, legally privileged, protected by privacy
>> >      > laws, or otherwise restricted from disclosure to anyone else.
>> >      > If you are not the intended recipient or the person
>> >      > responsible for delivering the e-mail to the intended
>> >      > recipient, you are hereby notified that any use, copying,
>> >      > distributing, dissemination, forwarding, printing, or copying
>> >      > of this e-mail is strictly prohibited. If you received this
>> >      > e-mail in error, please return the e-mail to the sender,
>> >      > delete it from your computer, and destroy any printed copy
>> >      > of it.
>> >
>> >
>> >     --
>> >     Ben Greear <greearb@candelatech.com>
>> >     Candela Technologies Inc  http://www.candelatech.com
>> >
>> >
>>
>> --
>> Ben Greear <greearb@candelatech.com>
>> Candela Technologies Inc  http://www.candelatech.com
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat

>>
>>
>


