From: "David P. Reed"
Date: Tue, 13 Jul 2021 13:49:03 -0400 (EDT)
To: "Bob McMahon"
Cc: "Amr Rizk", "Ben Greear", starlink@lists.bufferbloat.net, "Make-Wifi-fast", "Leonard Kleinrock", "Cake List", codel@lists.bufferbloat.net, "cerowrt-devel", "bloat"
Subject: Re: [Make-wifi-fast] [Bloat] Little's Law mea culpa, but not invalidating my main point

Bob -

On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" said:

> "Control at endpoints benefits greatly from even small amounts of
> information supplied by the network about the degree of congestion
> present on the path."
>
> Agreed. The ECN mechanism seems like a shared thermostat in a building.
> It's basically an on/off control where everyone is trying to set the
> temperature. It does have an effect, albeit a non-linear one. Better
> than a thermostat set at infinity or 0 Kelvin, for sure.
>
> I find the assumption that congestion occurs "in network" is not always
> true. Taking OWD measurements with read-side rate limiting suggests that
> making sure apps read "fast enough" (whatever that means) is just as
> important to mitigating bufferbloat-driven latency as congestion signals
> are. I rarely hear about how important it is for apps to prioritize
> reads over open sockets. Not sure why that's overlooked while
> bufferbloat gets all the attention. I'm probably missing something.
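Bob's read-side point is easy to demonstrate without any network congestion at all. A minimal sketch, assuming nothing beyond the Python standard library and a loopback socket (the port, message count, and sleep interval are illustrative, not taken from any real tool): the sender stamps each message at write time, the receiver deliberately dawdles between reads, and the write-to-read latency balloons even though no router queue is involved.

    import socket, struct, threading, time

    HOST, PORT = "127.0.0.1", 50007      # hypothetical loopback endpoint
    MSG = struct.Struct("!d")            # one send-side timestamp per message

    def sender():
        s = socket.create_connection((HOST, PORT))
        for _ in range(5000):
            s.sendall(MSG.pack(time.time()))   # stamp at write time
        s.close()

    srv = socket.socket()
    srv.bind((HOST, PORT)); srv.listen(1)
    threading.Thread(target=sender, daemon=True).start()
    conn, _ = srv.accept()

    worst, buf = 0.0, b""
    while True:
        time.sleep(0.001)                # read-side rate limiting: the app dawdles
        chunk = conn.recv(64)
        if not chunk:
            break
        buf += chunk
        while len(buf) >= MSG.size:
            (t_sent,) = MSG.unpack(buf[:MSG.size])
            buf = buf[MSG.size:]
            worst = max(worst, time.time() - t_sent)
    print(f"worst write-to-read latency: {worst * 1000:.1f} ms")

The messages queue in the kernel socket buffers rather than in the network; TCP flow control pushes back on the sender, but every byte already buffered pays the slow reader's delay.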
In the early days of the Internet protocol, and even of the ARPANET Host-Host protocol, there were those who conflated host-level "flow control" (matching the production rate of data into the network to the destination *process's* consumption rate, on a virtual circuit with a source capable of variable and unbounded bit rate) with "congestion control" in the network. The term "congestion control" wasn't even used in the internetworking project when it was discussing design in the late 1970s. I tried to use it in our working group meetings, and every time I said "congestion" the response would be phrased as "flow".

The classic example was printing a file's contents from disk to an ASR33 terminal on a TIP (Terminal IMP). There was flow control in the end-to-end protocol to avoid overflowing the TTY's limited buffer. But those who grew up with the ARPANET knew that there was no way to accumulate queueing in the IMP network, because of RFNMs, which required permission for each new packet to be sent. RFNMs implicitly prevented congestion from being caused by a virtual circuit. But a flow control problem remained, because at the higher-level protocol, buffering would overflow at the TIP.

TCP adopted a different end-to-end *flow* control: it solved the flow control problem by creating a windowing mechanism. But it did not by itself solve the *congestion* control problem, even congestion built up inside the network by a wide-open window and a lazy operating system at the receiving end that just said, "I've got a lot of virtual memory, so I'll open the window to maximum size."

There was a lot of confusion, because the people who came from the ARPANET environment, with all links the same speed and RFNM limits on rate, couldn't see why the Internet stack was so collapse-prone. I think Multics, for example, as a giant virtual memory system, caused congestion by opening up its window too much.

This is where Van Jacobson discovered that dropped packets were a "good enough" congestion signal, because of "fate sharing" among the packets that flowed on a bottleneck path, and that windowing (invented for flow control, so a receiver could protect itself from overflow if it couldn't receive fast enough) could be used to slow the sender down, matching the senders' rate to the capacity of the internal bottleneck link. An elegant "hack" that actually worked really well in practice.

Now we view it as a bug if the receiver opens its window too much, or otherwise fails to translate dropped packets (or other incipient-congestion signals) into shutting down the source transmission rate as quickly as possible. Fortunately, the proper state of the Internet - the one it should seek as its ideal state - is that there is at most one packet waiting for each egress link in the bottleneck path. This stable state ensures that the window-reduction or slow-down signal encounters no congestion, with high probability. [Excursions from a one-packet queue occur, but since one waiting packet is sufficient to fill the bottleneck link to capacity, longer queues can't achieve higher throughput in steady state. In practice, noisy arrival distributions can reduce throughput, so allowing a small number of packets to wait in a bottleneck link's queue can slightly increase throughput. That's not asymptotically relevant, but as mentioned, the Internet is never near asymptotic behavior.]
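Jacobson's move is easy to state in code. A toy sketch of the window dynamic his insight leads to (additive increase, multiplicative decrease, as later formalized - this is illustrative logic, not his actual implementation or any real TCP stack's; all parameters are made up):

    def aimd_window(signals, cwnd=1.0, ssthresh=64.0):
        """signals: one 'ack' or 'drop' per round trip; yields cwnd after each."""
        for sig in signals:
            if sig == 'drop':
                ssthresh = max(cwnd / 2.0, 1.0)   # multiplicative decrease
                cwnd = ssthresh
            elif cwnd < ssthresh:
                cwnd *= 2.0                       # slow start: double per RTT
            else:
                cwnd += 1.0                       # congestion avoidance: +1 per RTT
            yield cwnd

    # Grow until a drop, back off by half, then probe upward again:
    print(list(aimd_window(['ack'] * 8 + ['drop'] + ['ack'] * 4)))

The halving is exactly the fate-sharing argument above: one drop on the bottleneck is taken as evidence that the shared queue is full, so the sender's share of it must shrink.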
>
> Bob
>
> On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk wrote:
>
>> Ben,
>>
>> it depends on what one tries to measure. Doing a rate scan using UDP (to
>> measure latency distributions under load) is the best thing that we have,
>> but without actually knowing how resources are shared (fair share as in
>> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
>> interpret the results or provide a proper argument on latency. You are
>> right - TCP stats are a proxy for user experience, but I believe they are
>> difficult to reproduce (we are always talking about very short TCP flows -
>> the infinite TCP flow that converges to steady behavior is purely
>> academic).
>>
>> By the way, Little's law is a strong tool when it comes to averages. To
>> be able to say more (e.g. that 1% of the delays are larger than x) one
>> requires more information (e.g. the traffic's ON-OFF pattern) - see [1].
>> I am not sure such information readily exists.
>>
>> Best
>> Amr
>>
>> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or, if behind a
>> paywall, https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>>
>> --------------------------------
>> Amr Rizk (amr.rizk@uni-due.de)
>> University of Duisburg-Essen
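Amr's averages-versus-tails distinction is worth making concrete. A small sketch, deliberately using the M/M/1 "spherical cow" this thread has been teasing (rates, seed, and sample size are arbitrary choices): Little's law, L = lambda * W, pins down the averages exactly, yet the 99th-percentile delay comes out several times the mean - which is what the extra traffic-pattern information would be needed to bound.

    import random

    random.seed(1)
    lam, mu, n = 0.8, 1.0, 200_000          # arrival rate, service rate, samples
    t = depart = 0.0
    sojourns = []
    for _ in range(n):
        t += random.expovariate(lam)        # Poisson arrivals
        start = max(t, depart)              # FIFO, single server
        depart = start + random.expovariate(mu)
        sojourns.append(depart - t)         # W: time in system

    sojourns.sort()
    W = sum(sojourns) / n
    print(f"mean W = {W:.2f}   (M/M/1 theory: 1/(mu-lam) = {1 / (mu - lam):.2f})")
    print(f"L = lam*W = {lam * W:.2f} customers in system on average")
    p99 = sojourns[int(0.99 * n)]
    print(f"99th-percentile delay = {p99:.2f}  ({p99 / W:.1f}x the mean)")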
>>
>> -----Original Message-----
>> From: Bloat On Behalf Of Ben Greear
>> Sent: Monday, 12 July 2021 22:32
>> To: Bob McMahon
>> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Leonard Kleinrock;
>> David P. Reed; Cake List; codel@lists.bufferbloat.net; cerowrt-devel; bloat
>> Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
>>
>> UDP is better for getting actual packet latency, for sure. TCP is
>> typical-user-experience latency, though, so it is also useful.
>>
>> I'm interested in the test and visualization side of this. If there were
>> a way to give engineers a good real-time look at a complex real-world
>> network, then they would have something to go on while trying to tune
>> various knobs in their network to improve it.
>>
>> I'll let others try to figure out how to build and tune the knobs, but
>> the data acquisition and visualization is something we might try to
>> accomplish. I have a feeling I'm not the first person to think of this,
>> however... probably someone has already done such a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>>> I believe end hosts' TCP stats are insufficient, as seen in the "failed"
>>> congestion control mechanisms over the last decades. I think Jaffe
>>> pointed this out in 1979, though he was using what's been deemed on this
>>> thread "spherical cow queueing theory":
>>>
>>> "Flow control in store-and-forward computer networks is appropriate for
>>> decentralized execution. A formal description of a class of
>>> "decentralized flow control algorithms" is given. The feasibility of
>>> maximizing power with such algorithms is investigated. On the assumption
>>> that communication links behave like M/M/1 servers it is shown that no
>>> "decentralized flow control algorithm" can maximize network power. Power
>>> has been suggested in the literature as a network performance objective.
>>> It is also shown that no objective based only on the users' throughputs
>>> and average delay is decentralizable. Finally, a restricted class of
>>> algorithms cannot even approximate power."
>>>
>>> https://ieeexplore.ieee.org/document/1095152
>>>
>>> Did Jaffe make a mistake?
>>>
>>> Also, it's been observed that latency is non-parametric in its
>>> distributions, and computing Gaussians per the central limit theorem for
>>> OWD feedback loops isn't effective. How does one design a control loop
>>> around things that are non-parametric? It also begs the question: what
>>> are the feed-forward knobs that can actually help?
>>>
>>> Bob
>>>
>>> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear wrote:
>>>
>>>> Measuring one or a few links provides a bit of data, but it seems like
>>>> if someone is trying to understand a large and real network, then the
>>>> OWD between point A and B needs to just be input into something much
>>>> more grand. Assuming real-time OWD data exists between 100 to 1000
>>>> endpoint pairs, has anyone found a way to visualize this in a useful
>>>> manner?
>>>>
>>>> Also, considering something better than NTP may not really scale to
>>>> 1000+ endpoints, maybe round-trip time is the only viable way to get
>>>> this type of data. In that case, maybe clever logic could use things
>>>> like traceroute to get some idea of how long it takes to get 'onto' the
>>>> internet proper, and so estimate the last-mile latency. My assumption is
>>>> that the last-mile latency is where most of the pervasive asymmetric
>>>> network latencies would exist (or just ping 8.8.8.8, which is 20 ms from
>>>> everywhere due to $magic).
>>>>
>>>> Endpoints could also triangulate a bit if needed, using some anchor
>>>> points in the network under test.
>>>>
>>>> Thanks,
>>>> Ben
>>>>
>>>> On 7/12/21 11:21 AM, Bob McMahon wrote:
>>>>> iperf 2 supports OWD and gives full histograms for TCP write to read,
>>>>> TCP connect times, latency of packets (with UDP), latency of "frames"
>>>>> with simulated video traffic (TCP and UDP), xfer times of bursts with
>>>>> low-duty-cycle traffic, and TCP RTT (sampling based). It also supports
>>>>> sampling (per-interval reports) down to 100 usec if configured with
>>>>> --enable-fastsampling; otherwise the fastest sampling is 5 ms. We've
>>>>> released all this as open source.
>>>>>
>>>>> OWD only works if the end realtime clocks are synchronized using a
>>>>> "machine level" protocol such as IEEE 1588 or PTP. Sadly, most data
>>>>> centers don't provide a sufficient level of clock accuracy, nor the
>>>>> GPS pulse per second, to colo and VM customers.
>>>>>
>>>>> https://iperf2.sourceforge.io/iperf-manpage.html
>>>>>
>>>>> Bob
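The OWD mechanics Bob describes reduce to a simple invariant: the sender stamps each datagram with its own clock, and the receiver subtracts that stamp from its clock on arrival. A minimal sketch in Python (not iperf 2's implementation; the port and probe count are arbitrary), shown on loopback where the two "clocks" are trivially the same. On real hosts the numbers are only meaningful under Bob's stated assumption of PTP/IEEE 1588-grade synchronization; otherwise you measure clock offset, not delay.

    import socket, struct, time

    STAMP = struct.Struct("!Id")             # sequence number + send timestamp

    def send_probe(sock, addr, seq):
        sock.sendto(STAMP.pack(seq, time.time()), addr)

    def recv_probe(sock):
        data, _ = sock.recvfrom(STAMP.size)
        seq, t_sent = STAMP.unpack(data)
        return seq, time.time() - t_sent     # OWD, if and only if clocks agree

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                # ephemeral port on loopback
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(3):
        send_probe(tx, rx.getsockname(), seq)
        print("probe %d: OWD %.6f s" % recv_probe(rx))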
>>>>>
>>>>> On Mon, Jul 12, 2021 at 10:40 AM David P. Reed wrote:
>>>>>
>>>>>> On Monday, July 12, 2021 9:46am, "Livingood, Jason" said:
>>>>>>
>>>>>>> I think latency/delay is becoming seen to be as important, certainly,
>>>>>>> if not a more direct proxy for end-user QoE. This is all still
>>>>>>> evolving, and I have to say it is a super interesting & fun thing to
>>>>>>> work on. :-)
>>>>>>
>>>>>> If I could manage to sell one idea to the management hierarchy of
>>>>>> communications industry CEOs (operators, vendors, ...) it is this one:
>>>>>>
>>>>>> "It's the end-to-end latency, stupid!"
>>>>>>
>>>>>> And I mean, by end-to-end, the latency to complete a task at a
>>>>>> relevant layer of abstraction.
>>>>>>
>>>>>> At the link level, it's packet send to packet receive completion.
>>>>>>
>>>>>> But at the transport level, including retransmission buffers, it's
>>>>>> datagram (or message) origination until the acknowledgement arrives
>>>>>> for that message being delivered, after whatever number of
>>>>>> retransmissions, freeing the retransmission buffer.
>>>>>>
>>>>>> At the WWW level, it's mouse click to the display update corresponding
>>>>>> to completion of the request.
>>>>>>
>>>>>> What should be noted is that lower-level latencies don't directly
>>>>>> predict the magnitude of higher-level latencies. But longer
>>>>>> lower-level latencies almost always amplify higher-level latencies.
>>>>>> Often non-linearly.
>>>>>>
>>>>>> Throughput, in contrast, is very, very weakly related to these
>>>>>> latencies.
>>>>>>
>>>>>> The amplification process has to do with the presence of queueing.
>>>>>> Queueing is ALWAYS bad for latency, and throughput only helps if it is
>>>>>> in exactly the right place (the so-called input queue of the
>>>>>> bottleneck process, which is often a link, but not always).
>>>>>>
>>>>>> Can we get that slogan into Harvard Business Review? Can we get it
>>>>>> taught in Managerial Accounting at HBS? (which does address
>>>>>> logistics/supply-chain queueing).
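Reed's amplification point can be put in back-of-envelope form. A sketch with made-up but plausible numbers (the request count, loss rate, and crude timeout model are all illustrative assumptions, not measurements): a task built from serial request/response exchanges multiplies the link RTT by the dependency depth, and loss multiplies it again through retransmission timeouts, so task latency ends up an order of magnitude above any single link-level latency.

    def task_latency_ms(rtt_ms, serial_requests=20, loss=0.01, rto_factor=4):
        """Expected completion time of a task built from serial exchanges."""
        rto = rto_factor * rtt_ms                 # crude retransmit-timeout model
        per_request = rtt_ms + loss * rto         # expected cost incl. retries
        return 2 * rtt_ms + serial_requests * per_request   # handshakes + work

    for rtt in (10, 50, 200):
        print(f"link RTT {rtt:>3} ms -> task latency {task_latency_ms(rtt):.0f} ms")

Real stacks behave worse than this linear model: slow start, exponential RTO backoff, and queue buildup supply the non-linear terms Reed alludes to.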
>>>>
>>>> --
>>>> Ben Greear
>>>> Candela Technologies Inc  http://www.candelatech.com
>>
>> --
>> Ben Greear
>> Candela Technologies Inc  http://www.candelatech.com