From: Bob McMahon
Date: Sat, 10 Jul 2021 12:51:00 -0700
Subject: Re: Little's Law mea culpa, but not invalidating my main point
To: Leonard Kleinrock
Cc: "David P. Reed", Luca Muscariello, starlink@lists.bufferbloat.net, Make-Wifi-fast, Cake List, codel@lists.bufferbloat.net, cerowrt-devel, bloat, Ben Greear

"Analyzing that is really difficult, and if we don't measure and sense, we have no hope of understanding, controlling, or ameliorating such situations."

It is truly a high honor to observe the queueing theory and control theory discussions among the world-class experts here. We simple test guys must measure things, and we'd like those measurements to be generally useful to everyone who can help drive improvements. Hence, back to my original question: what network (or other) telemetry do the experts here see as useful for measuring active traffic to help with this?
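To make the question concrete, here is a minimal sketch of the kind of per-packet telemetry I have in mind: one-way delay samples plus a Little's-law backlog estimate derived from them. It is illustrative only, not our production tooling; the record layout and names are hypothetical, and it assumes the sender stamps each packet with its transmit time and that both hosts share a synchronized clock (GPS/PTP), per the background below.

from dataclasses import dataclass
from statistics import mean

@dataclass
class PacketSample:
    send_ts: float   # transmit timestamp carried in the packet (seconds, synced clock)
    recv_ts: float   # arrival timestamp at the receiver (seconds, same clock domain)

def one_way_delays(samples):
    """Per-packet one-way delay; only meaningful when both clocks are synchronized."""
    return [s.recv_ts - s.send_ts for s in samples]

def littles_law_backlog(samples):
    """Rough average number of packets 'in' the path: N = lambda * T,
    with lambda the observed arrival rate and T the mean one-way delay."""
    if len(samples) < 2:
        return 0.0
    duration = samples[-1].recv_ts - samples[0].recv_ts
    arrival_rate = (len(samples) - 1) / duration if duration > 0 else 0.0
    return arrival_rate * mean(one_way_delays(samples))

# Toy numbers: 1000 packets/second arriving with ~40 ms one-way delay
samples = [PacketSample(send_ts=i / 1000.0, recv_ts=i / 1000.0 + 0.040) for i in range(1000)]
print(f"mean OWD: {mean(one_way_delays(samples)) * 1e3:.1f} ms")
print(f"Little's-law backlog estimate: {littles_law_backlog(samples):.0f} packets")

The backlog number is only as trustworthy as the clock sync and, as the rest of this thread argues, it is only an average - but even an average, sampled continuously, is more than most production telemetry exposes today.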
Just some background, and my apologies for the indulgence, but we'd like our automation rigs to be able to better emulate "real world scenarios" and to use stochastic, regression-type signals when something goes wrong - which, for us, is typically a side effect of a driver or firmware code change and commit. (Humans need machine-level support for this.) It's also very frustrating that modern data centers aren't generally providing GPS atomic time to servers. (I think part of the idea behind IP packets, etc., was to mitigate fault domains, and the PSTN stratum clocks were a huge weak point.) I find, today, that not having a common clock reference "accurate and precise enough" is hindering progress toward understanding the complexity and toward ameliorating it - at least in our attempts to map "phenomena that are bothersome to machines and/or humans and relevant to the real world" into our automation environments so we can catch things early in the engineering life cycle.

A few of us have pushed over the last five or more years to add one-way delay (OWD) of the test traffic (which is not the same as 1/2 RTT, nor an ICMP ping delay) into iperf 2. That code is available to anyone. The lack of adoption of OWD has been disheartening. One common response has been, "We don't need that because users can't get their devices sync'd to the atomic clock anyway." (Also, 3 is a larger number than 2, so iperf3 must be better than iperf2, so let us keep using that as our measurement tool - though I digress.) ;) ;)

Bob

PS. One can get a stratum 1 clock with a Raspberry Pi working in a home for about $200. I've got one in my home (along with a $2500 OCXO from Spectracom), and the Pi is reasonable. https://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html

On Fri, Jul 9, 2021 at 4:01 PM Leonard Kleinrock wrote:

> David,
>
> No question that non-stationarity and instability are what we often see in networks. And, non-stationarity and instability are both topics that lead to very complex analytical problems in queueing theory. You can find some results on transient analysis in the queueing theory literature (including the second volume of my Queueing Systems book), but they are limited and hard. Nevertheless, the literature does contain some works on transient analysis of queueing systems as applied to network congestion control - again limited. On the other hand, as you said, control theory addresses stability head on and does offer some tools as well, but again, it is hairy.
>
> Averages are only averages, but they can provide valuable information. For sure, latency can and does confound behavior. But, as you point out, it is the proliferation of control protocols that are, in some cases, deployed willy-nilly in networks without proper evaluation of their behavior that can lead to the nasty cycle of large transient latency, frantic repeating of web requests, protocols sending multiple copies, lack of awareness of true capacity or queue size or throughput, etc. - all of which you articulate so well - creating the chaos and frustration in the network. Analyzing that is really difficult, and if we don't measure and sense, we have no hope of understanding, controlling, or ameliorating such situations.
>
> Len
>
> On Jul 9, 2021, at 12:31 PM, David P. Reed wrote:
>
> Len - I admit I made a mistake in challenging Little's Law as being based on Poisson processes. It is more general.
> But it tells you an "average" in its base form, and latency averages are not useful for end user applications.
>
> However, Little's Law does assume something that is not actually valid about the kind of distributions seen in the network, and in fact, it is NOT true that networks converge on Poisson arrival times.
>
> The key issue is well described in the standard analysis of the M/M/1 queue (e.g. https://en.wikipedia.org/wiki/M/M/1_queue), which is done only for Poisson processes, and is also limited to "stable" systems. But networks are never stable when fully loaded. They get unstable, and those instabilities persist for a long time in the network. Instability is at core the underlying *requirement* of the Internet's usage.
>
> So specifically: real networks, even large ones, and certainly the Internet today, are not asymptotic limits of sums of stationary stochastic arrival processes. Each external terminal of any real network has a real user there, running a real application, and the network is a complex graph. This makes it completely unlike a single queue. Even the links within a network carry a relatively small number of application flows. There's no ability to apply the Law of Large Numbers to the distributions, because any particular path contains only a small number of serialized flows with highly variable rates.
>
> Here's an example of what really happens in a real network (I've observed this in 5 different cities on AT&T's cellular network, back when it was running Alcatel-Lucent HSPA+ gear in those cities). But you can see this on any network where transient overload occurs, creating instability.
>
> At 7 AM, the data transmission of the network is roughly stable. That's because no links are overloaded within the network. Little's Law can tell you, by observing the delay and throughput on any path, that the average delay in the network is X.
>
> Continue sampling delay in the network as the day wears on. At about 10 AM, ping delay starts to soar into the multiple-second range. No packets are lost. The peak ping time is about 4000 milliseconds - 4 seconds - in most of the networks. This is in downtown; no radio errors are reported, no link errors. So it is all queueing delay.
>
> Now, Little's law doesn't tell you much about average delay, because clearly *some* subpiece of the network is fully saturated. But what is interesting here is what is happening and where. You can't tell what is saturated, and in fact the entire network is quite unstable, because the peak is constantly varying and you don't know where the throughput is. All the packets are now arriving 4 seconds or so later.
>
> Why is the situation not worse than 4 seconds? Well, there are multiple things going on:
>
> 1) TCP may be doing a lot of retransmissions (not Poisson at all, not random either; the arrival process is entirely deterministic in each source, based on the retransmission timeout), or it may not be.
>
> 2) Users are pissed off, because they clicked on a web page and got nothing back. They retry on their screen, or they try another site. Meanwhile, the underlying TCP connection remains there, pumping the network full of more packets on that old path, which is still backed up with packets that haven't been delivered that are sitting in queues.
> The real arrival process is not Poisson at all; it's a deterministic, repeated retransmission plus a new attempt to connect to a new site.
>
> 3) When the users get a web page back eventually, it is filled with names of other pieces needed to display that web page, which causes some number (often as many as 100) of new pages to be fetched, ALL at the same time. Certainly not a stochastic process that will just obey the law of large numbers.
>
> All of these things are the result of initial instability, causing queues to build up.
>
> So what is the state of the system? Is it stable? Is it stochastic? Is it the sum of enough stochastic stable flows to average out to Poisson?
>
> The answer is clearly NO. Control theory (not queueing theory) suggests that this system is completely uncontrolled and unstable.
>
> So if the system is in this state, what does Little's Lemma tell us? What is the meaning of that highly variable 4-second delay on ping packets, in terms of average utilization of the network?
>
> We don't even know what all the users really might need, if the system hadn't become unstable, because some users have given up, others are trying even harder, and new users are arriving.
>
> What we do know, because AT&T (at my suggestion) reconfigured their system after publicly blaming Apple Computer for "bugs" in the original iPhone, is that simply *dropping* packets sitting in queues for more than a couple of milliseconds MADE THE USERS HAPPY. Apparently the required capacity was there all along!
>
> So I conclude that the 4-second delay was the largest delay users could barely tolerate before deciding the network was DOWN and going away. And that the backup was the accumulation of useless packets sitting in queues because none of the end systems were receiving congestion signals (which for the Internet stack begins with packet dropping).
>
> I should say that most operators, and especially AT&T in this case, do not measure end-to-end latency. Instead they use Little's Lemma to query routers for their current throughput in bits per second, and calculate latency as if Little's Lemma applied. This results in reports to management that literally say:
>
> The network is not dropping packets; utilization is near 100% on many of our switches and routers.
>
> And management responds, Hooray! Because 100% utilization of their hardware is their investors' metric of maximizing profits. The hardware they are operating is fully utilized. No waste! And users are happy because no packets have been dropped!
>
> Hmm... what's wrong with this picture? I can see why Donovan, CTO, would accuse Apple of lousy software that was ruining iPhone user experience! His network was operating without ANY problems. So it must be Apple!
>
> Well, no. The entire problem, as we saw when AT&T just changed to shorten egress queues and drop packets when the egress queues overflowed, was that AT&T's network was amplifying instability - not at the link level, but at the network level.
>
> And queueing theory can help with that, but *intro queueing theory* cannot.
>
> And a big part of that problem is the pervasive belief that, at the network boundary, *Poisson arrival* is a reasonable model for use in all cases.
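To see how little headroom the "stable" textbook regime leaves, here is a small, purely illustrative sketch of the M/M/1 formulas referenced above (mean time in system W = 1/(mu - lambda), and Little's law N = lambda * W). The function and variable names are just for this sketch and the numbers are made up; the model assumes exactly the Poisson, stationary conditions the message above says real networks violate. The only point is that even in this friendliest of models, delay and backlog blow up as utilization approaches 100%.

# Textbook M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
# Mean time in system W = 1/(mu - lam); valid only while rho = lam/mu < 1.

def mm1_delay(lam, mu):
    if lam >= mu:
        return float("inf")   # no steady state: the queue grows without bound
    return 1.0 / (mu - lam)

mu = 1000.0                   # service rate: 1000 packets/second
for utilization in (0.5, 0.9, 0.99, 0.999, 1.0):
    lam = utilization * mu
    w = mm1_delay(lam, mu)
    n = lam * w               # Little's law: average number in the system
    print(f"rho={utilization:5.3f}  mean delay={w * 1e3:9.1f} ms  mean in system={n:9.1f} pkts")

So "near 100% utilization" and "low delay" cannot coexist even in the idealized model; with the bursty, correlated, feedback-driven arrivals described above, the behavior is worse and far less predictable.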
>
> On Friday, July 9, 2021 6:05am, "Luca Muscariello" said:
>
> For those who might be interested in Little's law, there is a nice paper by John Little on the occasion of the 50th anniversary of the result.
>
> https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
>
> https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
>
> Nice read.
> Luca
>
> P.S. Who does not have a copy of L. Kleinrock's books? I do, and I am not ready to lend them!
>
> On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock wrote:
>
>> David,
>> I totally appreciate your attention to when analytical modeling works and when it does not. Let me clarify a few things from your note.
>>
>> First, Little's law (also known as Little's lemma or, as I use in my book, Little's result) does not assume Poisson arrivals - it is good for *any* arrival process and any service process and is an equality between time averages. It states that the time average of the number in a system (for a sample path *w*) is equal to the average arrival rate to the system multiplied by the time-averaged time in the system for that sample path. This is often written as N_TimeAvg = λ·T_TimeAvg. Moreover, if the system is also ergodic, then the time average equals the ensemble average and we often write it as N̄ = λ·T̄. In any case, this requires neither Poisson arrivals nor exponential service times.
>>
>> Queueing theorists often do study the case of Poisson arrivals. True, it makes the analysis easier, yet there is a better reason it is often used, and that is because the sum of a large number of independent stationary renewal processes approaches a Poisson process. So nature often gives us Poisson arrivals.
>> Best,
>> Len
>>
>> On Jul 8, 2021, at 12:38 PM, David P. Reed wrote:
>>
>> I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.
>>
>> The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.
>>
>> So, as I've tried to tell many, many net-heads (people who ignore applications-layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.
>>
>> So the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.
>>
>> Where does the Poisson assumption come from? Well, like many theorems, it is the simplest tractable closed-form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution). And the analysis of a simple queue with Poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks.
>> It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.
>>
>> It's the queueing theory equivalent of "First, we assume a spherical cow..." in doing an example in a freshman physics class.
>>
>> Unfortunately, most networking engineers understand neither queueing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are Poisson!
>>
>> On Tuesday, July 6, 2021 9:46am, "Ben Greear" said:
>>
>> > Hello,
>> >
>> > I am interested to hear wish lists for network testing features. We make test equipment, supporting lots of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6, http, ... protocols, and are open to creating/improving some of our automated tests.
>> >
>> > I know Dave has some test scripts already, so I'm not necessarily looking to reimplement that, but more fishing for other/new ideas.
>> >
>> > Thanks,
>> > Ben
>> >
>> > On 7/2/21 4:28 PM, Bob McMahon wrote:
>> > > I think we need the language of math here. It seems like the network power metric, introduced by Kleinrock and Jaffe in the late 70s, is something useful. Effective end/end queue depths per Little's law also seem useful. Both are available in iperf 2 from a test perspective. Repurposing test techniques to actual traffic could be useful. Hence the question around what exact telemetry is useful to apps making socket write() and read() calls.
>> > >
>> > > Bob
>> > >
>> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht wrote:
>> > >
>> > > In terms of trying to find "Quality" I have tried to encourage folk to both read "Zen and the Art of Motorcycle Maintenance" [0] and Deming's work on "total quality management".
>> > >
>> > > My own slice at this network, computer and lifestyle "issue" is aiming for "imperceptible latency" in all things [1]. There's a lot of fallout from that in terms of not just addressing queuing delay, but caching, prefetching, and learning more about what a user really needs (as opposed to wants) to know via intelligent agents.
>> > >
>> > > [0] If you want to get depressed, read Pirsig's successor to "Zen...", Lila, which is in part about what happens when an engineer hits an insoluble problem.
>> > > [1] https://www.internetsociety.org/events/latency2013/
>> > >
>> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed wrote:
>> > > >
>> > > > Well, nice that the folks doing the conference are willing to consider that quality of user experience has little to do with signalling rate at the physical layer or throughput of FTP transfers.
>> > > >
>> > > > But honestly, the fact that they call the problem "network quality" suggests that they REALLY, REALLY don't understand that the Internet isn't the hardware or the routers or even the routing algorithms *to its users*.
>> > > >
>> > > > By ignoring the diversity of applications now and in the future, and the fact that we DON'T KNOW what will be coming up, this conference will likely fall into the usual trap that net-heads fall into - optimizing for some imaginary reality that doesn't exist, and in fact will probably never be what users actually will do given the chance.
>> > > >
>> > > > I saw this issue in 1976 in the group developing the original Internet protocols - a desire to put *into the network* special tricks to optimize ASR33 logins to remote computers from terminal concentrators (aka remote login), bulk file transfers between file systems on different time-sharing systems, and "sessions" (virtual circuits) that required logins. And then trying to exploit underlying "multicast" by building it into the IP layer, because someone thought that TV broadcast would be the dominant application.
>> > > >
>> > > > Frankly, to think of "quality" as something that can be "provided" by "the network" misses the entire point of the "end-to-end argument in system design". Quality is not a property defined or created by The Network. If you want to talk about Quality, you need to talk about users - all the users at all times, now and into the future - and that's something you can't do if you don't bother to include current and future users talking about what they might expect to experience that they don't experience.
>> > > >
>> > > > There was much fighting back in 1976 that basically involved "network experts" saying that the network was the place to "solve" such issues as quality, so applications could avoid having to solve such issues.
>> > > >
>> > > > What some of us managed to do was to argue that you can't "solve" such issues. All you can do is provide a framework that enables different uses to *cooperate* in some way.
>> > > >
>> > > > Which is why the Internet drops packets rather than queueing them, and why diffserv cannot work.
>> > > > (I know the latter is controversial, but at the moment, ALL of diffserv attempts to talk about end-to-end application-specific metrics, but never, ever explains what the diffserv control points actually do w.r.t. what the IP layer can actually control. So it is meaningless - another violation of the so-called end-to-end principle.)
>> > > >
>> > > > Networks are about getting packets from here to there, multiplexing the underlying resources. That's it. Quality is a whole different thing. Quality can be improved by end-to-end approaches, if the underlying network provides some kind of thing that actually creates a way for end-to-end applications to affect queueing and routing decisions, and more importantly getting "telemetry" from the network regarding what is actually going on with the other end-to-end users sharing the infrastructure.
>> > > >
>> > > > This conference won't talk about it this way. So don't waste your time.
>> > > >
>> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" said:
>> > > >
>> > > > > The program committee members are *amazing*. Perhaps, finally, we can move the bar for the internet's quality metrics past endless, blind repetitions of speedtest.
>> > > > >
>> > > > > For complete details, please see:
>> > > > > https://www.iab.org/activities/workshops/network-quality/
>> > > > >
>> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
>> > > > > Invitations Issued by: Monday 16th August 2021
>> > > > >
>> > > > > Workshop Date: This will be a virtual workshop, spread over three days:
>> > > > >
>> > > > > 1400-1800 UTC Tue 14th September 2021
>> > > > > 1400-1800 UTC Wed 15th September 2021
>> > > > > 1400-1800 UTC Thu 16th September 2021
>> > > > >
>> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
>> > > > >
>> > > > > The Program Committee members:
>> > > > >
>> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind, Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols, Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>> > > > >
>> > > > > Send Submissions to: network-quality-workshop-pc@iab.org.
>> > > > >
>> > > > > Position papers from academia, industry, the open source community and others that focus on measurements, experiences, observations and advice for the future are welcome. Papers that reflect experience based on deployed services are especially welcome. The organizers understand that specific actions taken by operators are unlikely to be discussed in detail, so papers discussing general categories of actions and issues without naming specific technologies, products, or other players in the ecosystem are expected. Papers should not focus on specific protocol solutions.
>> > > > >
>> > > > > The workshop will be by invitation only. Those wishing to attend should submit a position paper to the address above; it may take the form of an Internet-Draft.
>> > > > >
>> > > > > All inputs submitted and considered relevant will be published on the workshop website. The organisers will decide whom to invite based on the submissions received. Sessions will be organized according to content, and not every accepted submission or invited attendee will have an opportunity to present, as the intent is to foster discussion and not simply to have a sequence of presentations.
>> > > > >
>> > > > > Position papers from those not planning to attend the virtual sessions themselves are also encouraged. A workshop report will be published afterwards.
>> > > > >
>> > > > > Overview:
>> > > > >
>> > > > > "We believe that one of the major factors behind this lack of progress is the popular perception that throughput is often the sole measure of the quality of Internet connectivity. With such narrow focus, people don't consider questions such as:
>> > > > >
>> > > > > What is the latency under typical working conditions?
>> > > > > How reliable is the connectivity across longer time periods?
>> > > > > Does the network allow the use of a broad range of protocols?
>> > > > > What services can be run by clients of the network?
>> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
>> > > > > What security mechanisms are available for local services, such as DNS?
>> > > > > To what degree are the privacy, confidentiality, integrity and authenticity of user communications guarded?
>> > > > >
>> > > > > Improving these aspects of network quality will likely depend on measurement and exposing metrics to all involved parties, including to end users in a meaningful way. Such measurements and exposure of the right metrics will allow service providers and network operators to focus on the aspects that impact the users' experience most and at the same time empower users to choose the Internet service that will give them the best experience."
>> > > > >
>> > > > > --
>> > > > > Latest Podcast:
>> > > > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > > > >
>> > > > > Dave Täht CTO, TekLibre, LLC
>> > > > > _______________________________________________
>> > > > > Cerowrt-devel mailing list
>> > > > > Cerowrt-devel@lists.bufferbloat.net
>> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> > >
>> > > --
>> > > Latest Podcast:
>> > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > >
>> > > Dave Täht CTO, TekLibre, LLC
>> > > _______________________________________________
>> > > Make-wifi-fast mailing list
>> > > Make-wifi-fast@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> > > _______________________________________________
>> > > Starlink mailing list
>> > > Starlink@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/starlink
>> >
>> > --
>> > Ben Greear
>> > Candela Technologies Inc  http://www.candelatech.com
>> >
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
It is truly a high honor to ob= serve the queueing theory and control=C2=A0theory discussions to the world = class experts here. We simple test guys must measure things and we'd li= ke those things to be generally useful to all who can help towards improvem= ents. Hence back to my original question, what network, or other, telemetry= do experts here see as useful=C2=A0towards measuring active traffic to hel= p with this?

Just some background, and my apologies for the indulge= nce, but we'd like our automation rigs to be able to better emulate &qu= ot;real world scenarios" and use stochastic based regression type sign= als when something goes wrong which, for us, is typically a side effect to = a driver or firmware code change and commit. (Humans need machine level sup= port for this.) It's also very frustrating that modern data centers are= n't generally providing GPS atomic time to servers. (I think part of th= e idea behind IP packets, etc. was to mitigate fault domains and the PSTN s= tratum clocks were a huge weak point.) I find, today, not having a common c= lock reference "accurate and precise enough" is hindering=C2=A0pr= ogress towards understanding=C2=A0the complexity and towards the ameliorati= ng, at least from our attempts to map "bothersome to machine and/or hu= mans and relevant real world=C2=A0phenomenon" into our automation=C2= =A0environments allowing us to catch things early in the eng life cycle.
A few of us have pushed over the last five or more years to add one wa= y delay (OWD) of the test traffic (which is not the same as 1/2 RTT nor an = ICMP ping delay) into iperf 2. That code is available to anyone. The lack o= f adoption applied to OWD has been disheartening. One common response has b= een, "We don't need that because users can't get their devices= sync'd to=C2=A0the atomic clock anyway." (Also 3 is a larger numb= er than 2 so iperf3 must be better than iperf2 so let us keep using that as= our measurement tool - though I digress=C2=A0 ;) ;)

Bob

PS.= One can get a stratum=C2=A01 clock with a raspberry pi working in a home f= or about $200. I've got one in my home (along with a $2500 OCXO from sp= ectracom) and the=C2=A0Pi is=C2=A0reasonable.=C2=A0https://www.satsignal.eu/ntp/Raspber= ry-Pi-NTP.html=C2=A0

On Fri, Jul 9, 2021 at 4:01 PM Leonard Kleinr= ock <lk@cs.ucla.edu> wrote:
=
David,

No question that non-stati= onarity and instability are what we often see in networks.=C2=A0 And, non-s= tationarity and instability are both topics that lead to very complex analy= tical problems in queueing theory.=C2=A0 You can find some results on the t= ransient analysis in the queueing theory literature (including the second v= olume of my Queueing Systems book), but they are limited and hard. Neverthe= less, the literature does contain some works on transient analysis of queue= ing systems as applied to network congestion control - again limited.=C2=A0= On the other hand, as you said, control theory addresses stability head on = and does offer some tools as well, but again, it is hairy.=C2=A0
=
Averages are only averages, but they can provide valuable in= formation. For sure, latency can and does confound behavior.=C2=A0 But, as = you point out, it is the proliferation of control protocols that are, in so= me cases, deployed willy-nilly in networks without proper evaluation of the= ir behavior that can lead to the nasty cycle of large transient latency, fr= antic repeating of web requests, protocols sending multiple copies, lack of= awareness of true capacity or queue size or throughput, etc, all of which = you articulate so well, create the chaos and frustration in the network.=C2= =A0 Analyzing that is really difficult, and if we don=E2=80=99t measure and= sense, we have no hope of understanding, controlling, or ameliorating such= situations. =C2=A0

Len

On Jul 9, 2021, at 12:31 PM, David P. Reed <dpreed@deepplum.com> wrote:

Len - I admit I made= a mistake in challenging Little's Law as being based on Poisson proces= ses. It is more general. But it tells you an "average" in its bas= e form, and latency averages are not useful for end user applications.

=C2= =A0

However, Little's Law does assume something that is not actually val= id about the kind of distributions seen in the network, and in fact, it is = NOT true that networks converge on Poisson arrival times.

=C2=A0

The key issu= e is well-described in the sandard analysis of the M/M/1 queue (e.g. https://e= n.wikipedia.org/wiki/M/M/1_queue) , which is done only for Poisson proc= esses, and is also limited to "stable" systems. But networks are = never stable when fully loaded. They get unstable and those instabilities p= ersist for a long time in the network. Instability is at core the underlyin= g *requirement* of the Internet's usage.

=C2=A0

So specifically: real net= works, even large ones, and certainly the Internet today, are not asymptoti= c limits of sums of stationary stochastic arrival processes. Each esternal = terminal of any real network has a real user there, running a real applicat= ion, and the network is a complex graph. This makes it completely unlike a = single queue. Even the links within a network carry a relatively small numb= er of application flows. There's no ability to apply the Law of Large N= umbers to the distributions, because any particular path contains only a sm= all number of serialized flows with hightly variable rates.

=C2=A0

Here'= ;s an example of what really happens in a real network (I've observed t= his in 5 different cities on ATT's cellular network, back when it was r= unning Alcatel Lucent HSPA+ gear in those cities).
But you can see this on= any network where transient overload occurs, creating instability.
=C2=A0

=C2= =A0

At 7 AM, the data transmission of the network is roughty stable. That= 9;s because no links are overloaded within the network. Little's Law ca= n tell you by observing the delay and throughput on any path that the avera= ge delay in the network is X.

=C2=A0

Continue sampling delay in the network a= s the day wears on. At about 10 AM, ping delay starts to soar into the mult= iple second range. No packers are lost. The peak ping time is about 4000 mi= lliseconds - 4 seconds in most of the networks. This is in downtown, no rad= io errors are reported, no link errors.
So it is all queueing delay.=C2=A0=

= =C2=A0

Now what Little's law doesn't tell you much about average del= ay, because clearly *some* subpiece of the network is fully saturated. But = what is interesting here is what is happening and where. You can't tell= what is saturated, and in fact the entire network is quite unstable, becau= se the peak is constantly varying and you don't know where the throughp= ut is. All the packets are now arriving 4 seconds or so later.

=C2=A0

Why is = the situaton not worse than 4 seconds? Well, there are multiple things goin= g on:

=C2=A0

1) TCP may be doing a lot of retransmissions (non-Poisson at all= , not random either. The arrival process is entirely deterministic in each = source, based on the retransmission timeout) or it may not be.

=C2=A0

2) User= s are pissed off, because they clicked on a web page, and got nothing back.= They retry on their screen, or they try another site. Meanwhile, the under= lying TCP connection remains there, pumping the network full of more packet= s on that old path, which is still backed up with packets that haven't = been delivered that are sitting in queues. The real arrival process is not = Poisson at all, its a deterministic, repeated retrsnsmission plus a new att= empt to connect to a new site.

=C2=A0

3) When the users get a web page back e= ventually, it is filled with names of other pieces needed to display that w= eb page, which causes some number (often as many as 100) new pages to be fe= tched, ALL at the same time. Certainly not a stochastic process that will j= ust obey the law of large numbers.

=C2=A0

All of these things are the result = of initial instability, causing queues to build up.

=C2=A0

So what is the sta= te of the system? is it stable? is it stochastic? Is it the sum of enough s= tochastic stable flows to average out to Poisson?

=C2=A0

The answer is clearl= y NO. Control theory (not queuing theory) suggests that this system is comp= letely uncontrolled and unstable.

=C2=A0

So if the system is in this state, w= hat does Little's Lemma tell us? What is the meaning of that hightly va= riable 4 second delay on ping packets, in terms of average utilizaton of th= e network?

=C2=A0

We don't even know what all the users really might need= , if the system hadn't become unstable, because some users have given u= p, and others are trying even harder, and new users are arriving.

=C2=A0

=
What= we do know, because ATT (at my suggestion) reconfigured their system after= blaming Apple Computer company for "bugs" in the original iPhone= in public, is that simply *dropping* packets sitting in queues more than a= couple milliseconds MADE THE USERS HAPPY. Apparently the required capacity= was there all along!=C2=A0

=C2=A0

So I conclude that the 4 second delay was = the largest delay users could barely tolerate before deciding the network w= as DOWN and going away. And that the backup was the accumulation of useless= packets sitting in queues because none of the end systems were receiving c= ongestion signals (which for the Internet stack begins with packet dropping= ).

=C2=A0

I should say that most operators, and especially ATT in this case, = do not measure end-to-end latency. Instead they use Little's Lemma to q= uery routers for their current throughput in bits per second, and calculate= latency as if Little's Lemma applied. This results in reports to manag= ement that literally say:

=C2=A0

=C2=A0 The network is not dropping packets, = utilization is near 100% on many of our switches and routers.

=C2=A0

And mana= gement responds, Hooray! Because utilization of 100% of their hardware is t= heir investors' metric of maximizing profits. The hardware they are ope= rating is fully utilized. No waste! And users are happy because no packets = have been dropped!

=C2=A0

Hmm... what's wrong with this picture? I can se= e why Donovan, CTO, would accuse Apple of lousy software that was ruining i= Phone user experience!=C2=A0 His network was operating without ANY problems= .
So it must be Apple!

=C2=A0

Well, no. The entire problem, as we saw when A= TT just changed to shorten egress queues and drop packets when the egress q= ueues overflowed, was that ATT's network was amplifying instability, no= t at the link level, but at the network level.

=C2=A0

And queueing theory can= help with that, but *intro queueing theory* cannot.

=C2=A0

And a big part of= that problem is the pervasive belief that, at the network boundary, *Poiss= on arrival* is a reasonable model for use in all cases.

=C2=A0

=C2=A0

=C2=A0

=

=C2=A0=

= =C2=A0

=C2=A0

=C2=A0

=C2=A0

=C2=A0

=C2=A0

On Friday, July 9, 2021 6:05am, "Luca Muscar= iello" <m= uscariello@ieee.org> said:

For those who = might be interested in Little's law
there is a nic= e paper by John Little on the occasion=C2=A0
of the 50th an= niversary=C2=A0 of the result.
=C2=A0
Nice rea= d.=C2=A0
Luca=C2= =A0
=C2=A0
P.S.=C2= =A0
Who has = not a copy of L. Kleinrock's books? I do have and am not ready to lend = them!
On Fri, Jul 9, 2021 at 11:01 AM Leona= rd Kleinrock <lk@cs.= ucla.edu> wrote:
David,
I totally appreciate =C2=A0your attention to when and when not analyti= cal modeling works. Let me clarify a few things from your note.
First, Little's law (also known as Little=E2=80=99s lemma or, as I= use in my book, Little=E2=80=99s result) does not assume Poisson arrivals = - =C2=A0it is good for any arrival process and any service= process and is an equality between time averages.=C2=A0 It states that the= time average of the number in a system (for a sample path w)=C2=A0is equal to the average arrival = rate to the system multiplied by the time-averaged time in the system for t= hat sample path.=C2=A0 This is often written as =C2=A0=C2=A0NTimeAvg =3D=CE=BB=C2=B7T= TimeAvg . =C2=A0More= over, if the system is also ergodic, then the time average equals the ensem= ble average and we often write it as=C2=A0N =CC=84 =3D= =CE=BB T =CC=84 .= =C2=A0In any case, this requires neithe= r Poisson arrivals nor exponential service times. =C2=A0
=C2=A0
Queueing theorists often do study the case of Poisson arrivals.=C2=A0 = True, it makes the analysis easier, yet there is a better reason it is ofte= n used, and that is because the sum of a large number of independent statio= nary renewal processes approaches a Poisson process.=C2=A0 So nature often = gives us Poisson arrivals. =C2=A0
Best,
Len
On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:

I wi= ll tell you flat out that the arrival time distribution assumption made by = Little's Lemma that allows "estimation of queue depth" is tot= ally unreasonable on ANY Internet in practice.

=C2=A0

The = assumption is a Poisson Arrival Process. In reality, traffic arrivals in re= al internet applications are extremely far from Poisson, and, of course, us= ing TCP windowing, become highly intercorrelated with crossing traffic that= shares the same queue.

=C2=A0

So, = as I've tried to tell many, many net-heads (people who ignore applicati= ons layer behavior, like the people that think latency doesn't matter t= o end users, only throughput), end-to-end packet arrival times on a practic= al network are incredibly far from Poisson - and they are more like fractal= probability distributions, very irregular at all scales of time.

=C2=A0

So, = the idea that iperf can estimate queue depth by Little's Lemma by just = measuring saturation of capacity of a path is bogus.The less Poisson, the w= orse the estimate gets, by a huge factor.

=C2=A0

=C2=A0

Wher= e does the Poisson assumption come from?=C2=A0 Well, like many theorems, it= is the simplest tractable closed form solution - it creates a simplified v= iew, by being a "single-parameter" distribution (the parameter is= called lambda for a Poisson distribution).=C2=A0 And the analysis of a sim= ple queue with poisson arrival distribution and a static, fixed service tim= e is the first interesting Queueing Theory example in most textbooks. It is= suggestive of an interesting phenomenon, but it does NOT characterize any = real system.

=C2=A0

It&#= 39;s the queueing theory equivalent of "First, we assume a spherical c= ow...". in doing an example in a freshman physics class.

=C2=A0

Unfo= rtunately, most networking engineers understand neither queuing theory nor = application networking usage in interactive applications. Which makes them = arrogant. They assume all distributions are poisson!

=C2=A0

=C2=A0

On T= uesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com> s= aid:

>= Hello,
>
> I am interested to hear wish lists for network tes= ting features. We make test
> equipment, supporting lots
> of w= ifi stations and a distributed architecture, with built-in udp, tcp, ipv6,<= br>> http, ... protocols,
> and open to creating/improving some of= our automated tests.
>
> I know Dave has some test scripts al= ready, so I'm not necessarily looking to
> reimplement that,
&= gt; but more fishing for other/new ideas.
>
> Thanks,
> = Ben
>
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I t= hink we need the language=C2=A0of math here. It seems like the network
&= gt; power metric, introduced by Kleinrock and=C2=A0Jaffe in the late 70s, i= s something
> useful.
> > Effective end/end queue depths per= Little's law also seems useful. Both are
> available in iperf 2 = from a test perspective. Repurposing test techniques to
> actual
&= gt; > traffic could be useful. Hence=C2=A0the question around what exact= telemetry
> is useful to apps making socket write() and read() calls= .
> >
> > Bob
> >
> > On Fri, Jul 2, 20= 21 at 10:07 AM Dave Taht <dave.taht@gmail.com
> <mailto:dave.taht@gmail.com>> wrote:=
> >
> > In terms of trying to find "Quality" I= have tried to encourage folk to
> > both read "zen and the a= rt of motorcycle maintenance"[0], and Deming's
> > work o= n "total quality management".
> >
> > My own sl= ice at this network, computer and lifestyle "issue" is aiming
= > > for "imperceptible latency" in all things. [1]. There&#= 39;s a lot of
> > fallout from that in terms of not just addressin= g queuing delay, but
> > caching, prefetching, and learning more a= bout what a user really needs
> > (as opposed to wants) to know vi= a intelligent agents.
> >
> > [0] If you want to get depr= essed, read Pirsig's successor to "zen...",
> > lila= , which is in part about what happens when an engineer hits an
> >= insoluble problem.
> > [1] https://www.internetsociety.org= /events/latency2013/
> <https://www.internetsociety.org= /events/latency2013/>
> >
> >
> >
>= > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> <mailto:dpreed@deeppl= um.com>> wrote:
> > >
> > > Well, nice th= at the folks doing the conference=C2=A0 are willing to
> consider tha= t quality of user experience has little to do with signalling rate at
&g= t; the
> > physical layer or throughput of FTP transfers.
> = > >
> > >
> > >
> > > But honestl= y, the fact that they call the problem "network quality"
> = suggests that they REALLY, REALLY don't understand the Internet isn'= ;t the hardware
> or
> > the routers or even the routing alg= orithms *to its users*.
> > >
> > >
> > &g= t;
> > > By ignoring the diversity of applications now and in t= he future,
> and the fact that we DON'T KNOW what will be coming = up, this conference will
> likely fall
> > into the usual tr= ap that net-heads fall into - optimizing for some
> imaginary reality= that doesn't exist, and in fact will probably never be what
> us= ers
> > actually will do given the chance.
> > >
&g= t; > >
> > >
> > > I saw this issue in 1976 i= n the group developing the original
> Internet protocols - a desire t= o put *into the network* special tricks to optimize
> ASR33
> &= gt; logins to remote computers from terminal concentrators (aka remote
&= gt; login), bulk file transfers between file systems on different time-shar= ing
> systems, and
> > "sessions" (virtual circuit= s) that required logins. And then trying to
> exploit underlying &quo= t;multicast" by building it into the IP layer, because someone
>= > thought that TV broadcast would be the dominant application.
> = > >
> > >
> > >
> > > Frankly, to= think of "quality" as something that can be "provided"=
> by "the network" misses the entire point of "end-to= -end argument in system
> design".
> > Quality is not a= property defined or created by The Network. If you want
> to talk ab= out Quality, you need to talk about users - all the users at all times,
= > > now and into the future, and that's something you can't d= o if you don't
> bother to include current and future users talki= ng about what they might expect
> to
> > experience that the= y don't experience.
> > >
> > >
> > &g= t;
> > > There was much fighting back in 1976 that basically in= volved
> "network experts" saying that the network was the = place to "solve" such issues as
> quality,
> > so = applications could avoid having to solve such issues.
> > >
= > > >
> > >
> > > What some of us managed = to do was to argue that you can't "solve"
> such issues= . All you can do is provide a framework that enables different uses to
&= gt; > *cooperate* in some way.
> > >
> > >
&g= t; > >
> > > Which is why the Internet drops packets rath= er than queueing them,
> and why diffserv cannot work.
> > &= gt;
> > > (I know the latter is conftroversial, but at the mome= nt, ALL of
> diffserv attempts to talk about end-to-end applicaiton s= pecific metrics, but
> never, ever
> > explains what the dif= fserv control points actually do w.r.t. what the IP
> layer can actua= lly control. So it is meaningless - another violation of the
> > s= o-called end-to-end principle).
> > >
> > >
>= > >
> > > Networks are about getting packets from here t= o there, multiplexing
> the underlying resources. That's it. Qual= ity is a whole different thing. Quality
> can
> > be improve= d by end-to-end approaches, if the underlying network provides
> some= kind of thing that actually creates a way for end-to-end applications to> > affect queueing and routing decisions, and more importantly get= ting
> "telemetry" from the network regarding what is actua= lly going on with the other
> > end-to-end users sharing the infra= structure.
> > >
> > >
> > >
> &g= t; > This conference won't talk about it this way. So don't wast= e your
> time.
> > >
> > >
> > ><= br>> > >
> > >
> > >
> > >
= > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"> <dave.tah= t@gmail.com <mailto:dave.taht@gmail.com>> said:
> > >
> &g= t; > > The program committee members are *amazing*. Perhaps, finally,=
> we can
> > > > move the bar for the internet's = quality metrics past endless,
> blind
> > > > repetiti= ons of speedtest.
> > > >
> > > > For complet= e details, please see:
> > > > https://www.iab.= org/activities/workshops/network-quality/
> <http= s://www.iab.org/activities/workshops/network-quality/>
> > = > >
> > > > Submissions Due: Monday 2nd August 2021, m= idnight AOE
> (Anywhere On Earth)
> > > > Invitations = Issued by: Monday 16th August 2021
> > > >
> > >= > Workshop Date: This will be a virtual workshop, spread over
> t= hree days:
> > > >
> > > > 1400-1800 UTC Tue = 14th September 2021
> > > > 1400-1800 UTC Wed 15th September= 2021
> > > > 1400-1800 UTC Thu 16th September 2021
> = > > >
> > > > Workshop co-chairs: Wes Hardaker, Evg= eny Khorov, Omer Shapira
> > > >
> > > > The = Program Committee members:
> > > >
> > > > Ja= ri Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> Sam
&= gt; > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgen= sen,
> Geoff
> > > > Huston, Cullen Jennings, Katarzyn= a Kosek-Szott, Mirja
> Kuehlewind,
> > > > Jason Livin= good, Matt Mathias, Randall Meyer, Kathleen
> Nichols,
> > &= gt; > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>= > > >
> > > > Send Submissions to: network-quality-w= orkshop-pc@iab.org
> <mailto:network-quality-workshop-pc@iab.org= >.
> > > >
> > > > Position papers fro= m academia, industry, the open source
> community and
> > &g= t; > others that focus on measurements, experiences, observations
>= ; and
> > > > advice for the future are welcome. Papers that= reflect
> experience
> > > > based on deployed servic= es are especially welcome. The
> organizers
> > > > un= derstand that specific actions taken by operators are
> unlikely to b= e
> > > > discussed in detail, so papers discussing general = categories
> of
> > > > actions and issues without nam= ing specific technologies,
> products, or
> > > > othe= r players in the ecosystem are expected. Papers should not
> focus> > > > on specific protocol solutions.
> > > >=
> > > > The workshop will be by invitation only. Those wish= ing to
> attend
> > > > should submit a position paper= to the address above; it may
> take the
> > > > form = of an Internet-Draft.
> > > >
> > > > All inp= uts submitted and considered relevant will be published
> on the
&= gt; > > > workshop website. The organisers will decide whom to inv= ite
> based on
> > > > the submissions received. Sessi= ons will be organized according
> to
> > > > content, = and not every accepted submission or invited attendee
> will
> = > > > have an opportunity to present as the intent is to foster> discussion
> > > > and not simply to have a sequence o= f presentations.
> > > >
> > > > Position pap= ers from those not planning to attend the virtual
> sessions
> = > > > themselves are also encouraged. A workshop report will be> published
> > > > afterwards.
> > > >> > > > Overview:
> > > >
> > > &g= t; "We believe that one of the major factors behind this lack of
&g= t; progress
> > > > is the popular perception that throughpu= t is the often sole
> measure of
> > > > the quality o= f Internet connectivity. With such narrow focus,
> people
> >= ; > > don=E2=80=99t consider questions such as:
> > > >= ;
> > > > What is the latency under typical working conditio= ns?
> > > > How reliable is the connectivity across longer t= ime periods?
> > > > Does the network allow the use of a bro= ad range of protocols?
> > > > What services can be run by c= lients of the network?
> > > > What kind of IPv4, NAT or IPv= 6 connectivity is offered, and
> are there firewalls?
> > &g= t; > What security mechanisms are available for local services,
> = such as DNS?
> > > > To what degree are the privacy, confide= ntiality, integrity
> and
> > > > authenticity of user= communications guarded?
> > > >
> > > > Improving these aspects of network quality will likely depend on
> > > > measurement and exposing metrics to all involved parties, including
> > > > to end users in a meaningful way. Such measurements and exposure of
> > > > the right metrics will allow service providers and network operators
> > > > to focus on the aspects that impact the users’ experience most, and
> > > > at the same time empower users to choose the Internet service that
> > > > will give them the best experience."
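[Editorial aside, not part of the quoted CFP text above: the first of those
questions, latency under typical working conditions, can be made concrete by
comparing idle latency against latency measured while the path is kept busy.
The sketch below is only an illustration under stated assumptions: the
endpoint is a placeholder, TCP connection-setup time stands in for RTT, and a
single bulk sender stands in for "typical working conditions"; real
measurements would use purpose-built tooling.

#!/usr/bin/env python3
# Illustrative sketch only (see caveats above): compare idle RTT with RTT
# while a concurrent bulk transfer keeps the path busy. TCP connect time is
# used as a crude RTT proxy; TARGET is a placeholder, not from the CFP.
import socket
import statistics
import threading
import time

TARGET = ("example.net", 443)   # placeholder endpoint

def connect_rtt_ms(addr, samples=10):
    """Median TCP connection-establishment time, in milliseconds."""
    rtts = []
    for _ in range(samples):
        t0 = time.monotonic()
        with socket.create_connection(addr, timeout=5):
            rtts.append((time.monotonic() - t0) * 1000.0)
        time.sleep(0.2)
    return statistics.median(rtts)

def bulk_sender(addr, seconds=15):
    """Rough stand-in for 'working conditions': keep one socket busy with
    writes so upstream queues have something to fill."""
    payload = b"x" * 65536
    deadline = time.monotonic() + seconds
    try:
        with socket.create_connection(addr, timeout=5) as s:
            while time.monotonic() < deadline:
                s.sendall(payload)
    except OSError:
        pass  # the far end may reset us; this thread only generates load

if __name__ == "__main__":
    idle = connect_rtt_ms(TARGET)
    load = threading.Thread(target=bulk_sender, args=(TARGET,), daemon=True)
    load.start()
    time.sleep(2)                      # let the upload queue build
    working = connect_rtt_ms(TARGET)
    print(f"idle ~{idle:.1f} ms, under load ~{working:.1f} ms, "
          f"added latency ~{working - idle:.1f} ms")

On a congested or over-buffered link the under-load figure can be far larger
than the idle one, which is exactly the gap these questions are probing.]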
> > > >
> > > >
> > > > --
> > > > Latest Podcast:
> > > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > > >
> > > > Dave Täht CTO, TekLibre, LLC
> > > > _______________________________________________
> > > > Cerowrt-devel mailing list
> > > > Cerowrt-devel@lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > > >
> >
> >
> >
> > --
> > Latest Podcast:
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Make-wifi-fast mailing list
> > Make-wifi-fast@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc http://www.candelatech.com
>
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
_______________________________________________
Make-wifi-fast mailing list
Make-wifi-fast@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast

