From: Bob McMahon <bob.mcmahon@broadcom.com>
Date: Thu, 22 Jul 2021 09:30:47 -0700
Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
To: Leonard Kleinrock <lk@cs.ucla.edu>
Cc: "David P. Reed" <dpreed@deepplum.com>, Amr Rizk <amr@rizk.com.de>,
 Ben Greear <greearb@candelatech.com>, starlink@lists.bufferbloat.net,
 Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
 Cake List <cake@lists.bufferbloat.net>, codel@lists.bufferbloat.net,
 cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
 bloat <bloat@lists.bufferbloat.net>, Dave Taht

Thanks for this. I plan to purchase the second volume to go with my copy of
volume 1. There is (always) more to learn and your expertise is very helpful.

Bob

PS. As a side note, I've added support for TCP_NOTSENT_LOWAT in iperf 2.1.4
and it's proving interesting per WiFi/BT latency testing, including helping
to mitigate sender-side bloat.

--tcp-write-prefetch n[kmKM]
    Set TCP_NOTSENT_LOWAT on the socket and use event-based writes per
    select() on the socket.

I'll probably add measuring the select() delays to see if that correlates to
things like RF arbitrations, etc.
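For anyone who wants to see the shape of this outside of iperf, here is a
minimal sketch (not iperf 2's actual implementation; the 128 KB low-water
mark and the function name are illustrative) of setting TCP_NOTSENT_LOWAT on
a Linux TCP socket and gating writes on select() writability:

/*
 * Sketch: cap the unsent backlog with TCP_NOTSENT_LOWAT and only write
 * when select() reports the socket writable again.  How long each
 * select() call blocks is the delay one might histogram to look for RF
 * arbitration or scheduler stalls.
 */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/select.h>
#include <sys/socket.h>

int send_with_lowat(int sock, size_t total_bytes)
{
    int lowat = 128 * 1024;  /* wake the writer once < 128 KB remains unsent */
    if (setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                   &lowat, sizeof(lowat)) < 0) {
        perror("setsockopt(TCP_NOTSENT_LOWAT)");
        return -1;
    }

    char buf[16 * 1024];
    memset(buf, 0xA5, sizeof(buf));

    for (size_t sent = 0; sent < total_bytes; ) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);

        /* Blocks until the unsent queue drops below the low-water mark. */
        if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0) {
            perror("select");
            return -1;
        }

        ssize_t n = send(sock, buf, sizeof(buf), 0);
        if (n < 0) {
            perror("send");
            return -1;
        }
        sent += (size_t)n;
    }
    return 0;
}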
On Wed, Jul 21, 2021 at 4:20 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:

> Just a few comments following David Reed's insightful comments re the
> history of the ARPANET and its approach to flow control. I have attached
> some pages from my Volume II which provide an understanding of how we
> addressed flow control and its implementation in the ARPANET.
>
> The early days of the ARPANET design and evaluation involved detailed
> design of what we did call "Flow Control". In my "Queueing Systems, Volume
> II: Computer Applications", John Wiley, 1976, I documented much of what we
> designed and evaluated for the ARPANET, and focused on performance,
> deadlocks, lockups and degradations due to flow control design. Aspects of
> congestion control were considered, but this 2-volume book was mostly about
> understanding congestion. Of interest are the many deadlocks that we
> discovered in those early days as we evaluated and measured the network
> behavior. Flow control was designed into that early network, but it had a
> certain ad-hoc flavor and I point out the danger of requiring flows to
> depend upon the acquisition of multiple tokens that were allocated from
> different portions of the network at the same time in a distributed
> fashion. The attached relevant sections of the book address these issues;
> I thought it would be of value to see what we were looking at back then.
>
> On a related topic regarding flow and congestion control (as triggered by
> David's comment "at most one packet waiting for each egress link in the
> bottleneck path"), in 1978 I published a paper in which I extended the
> notion of Power (the ratio of throughput to response time) that had been
> introduced by Giessler, et al, and I pointed out the amazing properties
> that emerged when Power is optimized, e.g., that one should keep each hop
> in the pipe "just full", i.e., one message per hop. As it turns out, and
> as has been discussed in this email chain, Jaffe showed in 1981 that this
> optimization was not decentralizable and so no one pursued this optimal
> operating point (notwithstanding the fact that I published other papers on
> this issue, for example in 1979 and in 1981). So this issue of Power lay
> dormant for decades until Van Jacobson, et al, resurrected the idea with
> their BBR flow control design in 2016, when they showed that indeed one
> could decentralize power. Considerable research has since followed their
> paper, including another by me in 2018. (This was not the first time that
> a publication challenging the merits of a new idea negatively impacted
> that idea for decades - for example, the 1969 book "Perceptrons" by Minsky
> and Papert discouraged research into neural networks for many years until
> that idea was proven to have merit.) But the story is not over, as much
> work has yet to be done to develop the algorithms that can properly deal
> with congestion in the sense that this email chain continues to discuss it.
>
> Best,
> Len
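For what it's worth, the "one message per hop" operating point falls out of
a short calculation under the same M/M/1 link assumption Jaffe used; this is
only an illustration of the claim above, with mu the link's service rate and
lambda the offered load:

% Power for a single M/M/1 hop: throughput over mean response time
P(\lambda) = \frac{\lambda}{T(\lambda)} = \lambda(\mu - \lambda),
\qquad T(\lambda) = \frac{1}{\mu - \lambda}

% Maximizing: dP/d\lambda = \mu - 2\lambda = 0
% \Rightarrow \lambda^{*} = \mu/2, \quad \rho^{*} = 1/2

% Mean number in the hop at the optimum -- the pipe is "just full":
N^{*} = \frac{\rho^{*}}{1 - \rho^{*}} = 1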
>
> On Jul 13, 2021, at 10:49 AM, David P. Reed <dpreed@deepplum.com> wrote:
>
> Bob -
>
> On Tuesday, July 13, 2021 1:07pm, "Bob McMahon"
> <bob.mcmahon@broadcom.com> said:
>
> "Control at endpoints benefits greatly from even small amounts of
> information supplied by the network about the degree of congestion present
> on the path."
>
> Agreed. The ECN mechanism seems like a shared thermostat in a building.
> It's basically an on/off where everyone is trying to set the temperature.
> It does affect, in a non-linear manner, but still an effect. Better than a
> thermostat set at infinity or 0 Kelvin for sure.
>
> I find the assumption that congestion occurs "in network" as not always
> true. Taking OWD measurements with read-side rate limiting suggests that
> equally important to mitigating bufferbloat-driven latency using congestion
> signals is to make sure apps read "fast enough", whatever that means. I
> rarely hear about how important it is for apps to prioritize reads over
> open sockets. Not sure why that's overlooked and bufferbloat gets all the
> attention. I'm probably missing something.
>
> In the early days of the Internet protocol and also even the ARPANET
> Host-Host protocol there were those who conflated host-level "flow control"
> (matching production rate of data into the network to the destination
> *process* consumption rate of data on a virtual circuit with a source
> capable of variable and unbounded bit rate) with "congestion control" in
> the network. The term "congestion control" wasn't even used in the
> Internetworking project when it was discussing design in the late 1970's.
> I tried to use it in our working group meetings, and every time I said
> "congestion" the response would be phrased as "flow".
>
> The classic example was printing a file's contents from disk to an ASR33
> terminal on a TIP (Terminal IMP). There was flow control in the end-to-end
> protocol to avoid overflowing the TTY's limited buffer. But those who grew
> up with ARPANET knew that there was no way to accumulate queueing in the
> IMP network, because of RFNMs that required permission for each new packet
> to be sent. RFNMs implicitly prevented congestion from being caused by a
> virtual circuit. But a flow control problem remained, because at the higher
> level protocol, buffering would overflow at the TIP.
>
> TCP adopted a different end-to-end *flow* control, so it solved the flow
> control problem by creating a windowing mechanism. But it did not by itself
> solve the *congestion* control problem, even congestion built up inside the
> network by a wide-open window and a lazy operating system at the receiving
> end that just said, I've got a lot of virtual memory so I'll open the
> window to maximum size.
>
> There was a lot of confusion, because the guys who came from the ARPANET
> environment, with all links being the same speed and RFNM limits on rate,
> couldn't see why the Internet stack was so collapse-prone. I think Multics,
> for example, as a giant virtual memory system caused congestion by opening
> up its window too much.
>
> This is where Van Jacobson discovered that dropped packets were a "good
> enough" congestion signal because of "fate sharing" among the packets that
> flowed on a bottleneck path, and that windowing (invented for flow control
> by the receiver to protect itself from overflow if the receiver couldn't
> receive fast enough) could be used to slow down the sender to match the
> rate of senders to the capacity of the internal bottleneck link. An elegant
> "hack" that actually worked really well in practice.
>
> Now we view it as a bug if the receiver opens its window too much, or
> otherwise doesn't translate dropped packets (or other incipient-congestion
> signals) to shut down the source transmission rate as quickly as possible.
> Fortunately, the proper state of the internet - the one it should seek as
> its ideal state - is that there is at most one packet waiting for each
> egress link in the bottleneck path. This stable state ensures that the
> window-reduction or slow-down signal encounters no congestion, with high
> probability. [Excursions from one-packet queue occur, but since only
> one-packet waiting is sufficient to fill the bottleneck link to capacity,
> they can't achieve higher throughput in steady state. In practice, noisy
> arrival distributions can reduce throughput, so allowing a small number of
> packets to be waiting on a bottleneck link's queue can slightly increase
> throughput. That's not asymptotically relevant, but as mentioned, the
> Internet is never near asymptotic behavior.]
>
> Bob
>
> On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:
>
> Ben,
>
> it depends on what one tries to measure. Doing a rate scan using UDP (to
> measure latency distributions under load) is the best thing that we have,
> but without actually knowing how resources are shared (fair share as in
> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> interpret the results or provide a proper argument on latency. You are
> right - TCP stats are a proxy for user experience, but I believe they are
> difficult to reproduce (we are always talking about very short TCP flows -
> the infinite TCP flow that converges to a steady behavior is purely
> academic).
>
> By the way, Little's law is a strong tool when it comes to averages. To be
> able to say more (e.g. 1% of the delays is larger than x) one requires more
> information (e.g. the traffic - on-off pattern), see [1]. I am not sure
> when such information readily exists.
>
> Best
> Amr
>
> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>
> --------------------------------
> Amr Rizk (amr.rizk@uni-due.de)
> University of Duisburg-Essen
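To make the averages-only caveat concrete (numbers purely illustrative):
Little's law relates the mean number in the system L, the arrival rate
lambda, and the mean delay W, but says nothing about the tail.

% Little's law, with an illustrative bottleneck queue
L = \lambda W
\quad\Rightarrow\quad
W = \frac{L}{\lambda}
  = \frac{50\ \text{packets}}{10{,}000\ \text{packets/s}}
  = 5\ \text{ms}

Two on-off traffic patterns can produce that same 5 ms mean while their 99th
percentiles differ widely, which is why the extra traffic information in [1]
is needed for statements about the tail.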
> -----Original Message-----
> From: Bloat <bloat-bounces@lists.bufferbloat.net> On Behalf Of Ben Greear
> Sent: Monday, 12 July 2021 22:32
> To: Bob McMahon <bob.mcmahon@broadcom.com>
> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast
> <make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>;
> codel@lists.bufferbloat.net; cerowrt-devel
> <cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
>
> UDP is better for getting actual packet latency, for sure. TCP is
> typical-user-experience-latency though, so it is also useful.
>
> I'm interested in the test and visualization side of this. If there were
> a way to give engineers a good real-time look at a complex real-world
> network, then they have something to go on while trying to tune various
> knobs in their network to improve it.
>
> I'll let others try to figure out how to build and tune the knobs, but the
> data acquisition and visualization is something we might try to
> accomplish. I have a feeling I'm not the first person to think of this,
> however... probably someone already has done such a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
>
> I believe end hosts' TCP stats are insufficient as seen per the "failed"
> congestion control mechanisms over the last decades. I think Jaffe pointed
> this out in 1979, though he was using what's been deemed on this thread as
> "spherical cow queueing theory."
>
> "Flow control in store-and-forward computer networks is appropriate
> for decentralized execution. A formal description of a class of
> "decentralized flow control algorithms" is given. The feasibility of
> maximizing power with such algorithms is investigated. On the
> assumption that communication links behave like M/M/1 servers it is
> shown that no "decentralized flow control algorithm" can maximize network
> power. Power has been suggested in the literature as a network performance
> objective. It is also shown that no objective based only on the users'
> throughputs and average delay is decentralizable. Finally, a restricted
> class of algorithms cannot even approximate power."
>
> https://ieeexplore.ieee.org/document/1095152
>
> Did Jaffe make a mistake?
>
> Also, it's been observed that latency is non-parametric in its
> distributions, and computing gaussians per the central limit theorem for
> OWD feedback loops isn't effective. How does one design a control loop
> around things that are non-parametric? It also begs the question, what are
> the feed-forward knobs that can actually help?
>
> Bob
>
> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com> wrote:
>
> > Measuring one or a few links provides a bit of data, but seems like if
> > someone is trying to understand a large and real network, then the OWD
> > between point A and B needs to just be input into something much more
> > grand. Assuming real-time OWD data exists between 100 to 1000 endpoint
> > pairs, has anyone found a way to visualize this in a useful manner?
> >
> > Also, considering something better than ntp may not really scale to
> > 1000+ endpoints, maybe round-trip time is the only viable way to get
> > this type of data. In that case, maybe clever logic could use things
> > like trace-route to get some idea of how long it takes to get 'onto'
> > the internet proper, and so estimate the last-mile latency. My
> > assumption is that the last-mile latency is where most of the pervasive
> > asymmetric network latencies would exist (or just ping 8.8.8.8, which
> > is 20ms from everywhere due to $magic).
> >
> > Endpoints could also triangulate a bit if needed, using some anchor
> > points in the network under test.
> >
> > Thanks,
> > Ben
> >
> > On 7/12/21 11:21 AM, Bob McMahon wrote:
> >
> > iperf 2 supports OWD and gives full histograms for TCP write to read,
> > TCP connect times, latency of packets (with UDP), latency of "frames"
> > with simulated video traffic (TCP and UDP), xfer times of bursts with
> > low duty cycle traffic, and TCP RTT (sampling based). It also has
> > support for sampling (per interval reports) down to 100 usecs if
> > configured with --enable-fastsampling, otherwise the fastest sampling
> > is 5 ms. We've released all this as open source.
> >
> > OWD only works if the end realtime clocks are synchronized using a
> > "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> > centers don't provide a sufficient level of clock accuracy and the GPS
> > pulse per second* to colo and vm customers.
> >
> > https://iperf2.sourceforge.io/iperf-manpage.html
> >
> > Bob
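A bare-bones illustration of the clock-sync requirement (not iperf 2's wire
format; the struct and field names are made up): the receiver's computed OWD
is only as good as the agreement between the two hosts' real-time clocks,
which is why PTP/IEEE 1588 discipline or a GPS pulse per second matters.

/* Sketch: the sender stamps each probe with its wall-clock send time; the
 * receiver subtracts that stamp from its own wall clock.  Any offset
 * between the two CLOCK_REALTIMEs (e.g. undisciplined NTP drift) shows up
 * directly as a bogus one-way delay. */
#include <stdint.h>
#include <time.h>

struct probe {
    uint64_t tx_ns;   /* sender's CLOCK_REALTIME at send, in nanoseconds */
    uint32_t seq;     /* sequence number for loss/reorder accounting */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Receiver side: one-way delay in ns, meaningful only with synced clocks. */
static int64_t owd_ns(const struct probe *p)
{
    return (int64_t)(now_ns() - p->tx_ns);
}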
> >
> > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com> wrote:
> >
> > On Monday, July 12, 2021 9:46am, "Livingood, Jason"
> > <Jason_Livingood@comcast.com> said:
> >
> > I think latency/delay is becoming seen to be as important, certainly,
> > if not a more direct proxy for end user QoE. This is all still evolving
> > and I have to say is a super interesting & fun thing to work on. :-)
> >
> > If I could manage to sell one idea to the management hierarchy of
> > communications industry CEOs (operators, vendors, ...) it is this one:
> >
> > "It's the end-to-end latency, stupid!"
> >
> > And I mean, by end-to-end, latency to complete a task at a relevant
> > layer of abstraction.
> >
> > At the link level, it's packet send to packet receive completion.
> >
> > But at the transport level, including retransmission buffers, it's
> > datagram (or message) origination until the acknowledgement arrives for
> > that message being delivered after whatever number of retransmissions,
> > freeing the retransmission buffer.
> >
> > At the WWW level, it's mouse click to display update corresponding to
> > completion of the request.
> >
> > What should be noted is that lower level latencies don't directly
> > predict the magnitude of higher-level latencies. But longer lower level
> > latencies almost always amplify higher level latencies. Often
> > non-linearly.
> >
> > Throughput is very, very weakly related to these latencies, in contrast.
> >
> > The amplification process has to do with the presence of queueing.
> > Queueing is ALWAYS bad for latency, and throughput only helps if it is
> > in exactly the right place (the so-called input queue of the bottleneck
> > process, which is often a link, but not always).
> >
> > Can we get that slogan into Harvard Business Review? Can we get it
> > taught in Managerial Accounting at HBS? (which does address
> > logistics/supply chain queueing).
> >
> > --
> > Ben Greear <greearb@candelatech.com>
> > Candela Technologies Inc  http://www.candelatech.com
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

--
This electronic communication and the information and any files transmitted
with it, or attached to it, are confidential and are intended solely for the
use of the individual or entity to whom it is addressed and may contain
information that is confidential, legally privileged, protected by privacy
laws, or otherwise restricted from disclosure to anyone else. If you are not
the intended recipient or the person responsible for delivering the e-mail
to the intended recipient, you are hereby notified that any use, copying,
distributing, dissemination, forwarding, printing, or copying of this e-mail
is strictly prohibited. If you received this e-mail in error, please return
the e-mail to the sender, delete it from your computer, and destroy any
printed copy of it.