From: Bob McMahon
Date: Sun, 18 Jul 2021 12:06:24 -0700
Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
To: Aaron Wood
Cc: Ben Greear, starlink@lists.bufferbloat.net, Make-Wifi-fast,
 Leonard Kleinrock, David P. Reed, Cake List, codel@lists.bufferbloat.net,
 cerowrt-devel, bloat

Just an FYI,

iperf 2 uses a 4 usec delay for TCP and a 100 usec delay for UDP to fill the
token bucket. We thought about providing a knob for this but decided not to.
We figured a busy-wait CPU thread wasn't a big deal because of the trend
toward many CPU cores, and the threaded design works well for this. We also
support fq-pacing and isochronous traffic, using clock_nanosleep() to
schedule the writes. We'll probably add Markov chain support, but that's not
critical and may not affect actionable engineering. We found isoch to be a
useful traffic profile, at least for our WiFi testing.

I'm going to add support for TCP_NOTSENT_LOWAT for select()/write() based
transmissions. I'm doubtful this is very useful, as event-based scheduling
driven by timers seems better. We'll probably use it for unit testing WiFi
aggregation and see whether it helps there or not. I'll also see if it aligns
with the OWD measurements.
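For anyone who wants to experiment with the same mechanics outside of iperf,
here is a rough sketch of how clock_nanosleep() write scheduling and the
planned TCP_NOTSENT_LOWAT + select() combination could fit together. This is
illustrative only, not iperf 2 source; the 1 ms interval, the 16 KB
low-water mark, and the minimal error handling are assumptions.

/*
 * Hypothetical sketch (not iperf 2 code): pace writes on an absolute
 * monotonic clock with clock_nanosleep(), and cap the unsent backlog with
 * TCP_NOTSENT_LOWAT so select() only reports writability once the socket
 * has drained below the threshold.
 */
#include <stdio.h>
#include <stddef.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#define WRITE_INTERVAL_NS 1000000L  /* assumed 1 ms pacing interval */
#define NOTSENT_LOWAT     16384     /* assumed 16 KB unsent threshold */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int paced_writer(int sock, const char *buf, size_t len, int count)
{
    int lowat = NOTSENT_LOWAT;
    struct timespec next;

    /* Linux >= 3.12: limit unsent bytes queued in the socket buffer. */
    if (setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                   &lowat, sizeof(lowat)) < 0)
        perror("TCP_NOTSENT_LOWAT");

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < count; i++) {
        /* Sleep to an absolute deadline; TIMER_ABSTIME avoids drift. */
        timespec_add_ns(&next, WRITE_INTERVAL_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* With NOTSENT_LOWAT set, writability also means the unsent
         * backlog has dropped below the threshold. */
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0)
            return -1;

        /* A real implementation would handle partial writes. */
        if (write(sock, buf, len) < 0)
            return -1;
    }
    return 0;
}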
On queue depth, we use two techniques. The most obvious is to measure the
end-to-end delay and use rx histograms, keeping all the samples without
averaging. The second, internal to us only, is network telemetry that maps
all the clock domains to the GPS domain, so at any moment in time the
end/end path can be inspected to see where every packet is.
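As a concrete (hypothetical) sketch of the histogram approach: the receive
side just bins every one-way-delay sample and derives percentiles from the
bins afterward, so nothing is lost to averaging. The bin width, bin count,
and struct layout below are illustrative assumptions, not iperf 2 internals.

/*
 * Hypothetical sketch: bin every receive-side one-way-delay sample into a
 * fixed-width histogram instead of averaging, then pull percentiles out
 * of the bins on demand.
 */
#include <stdint.h>
#include <stddef.h>

#define BIN_WIDTH_US 100      /* 100 usec per bin (assumed) */
#define NUM_BINS     10000    /* covers 0..1 second of delay */

struct owd_hist {
    uint64_t bins[NUM_BINS];
    uint64_t total;
};

static void hist_add(struct owd_hist *h, uint64_t owd_us)
{
    size_t i = owd_us / BIN_WIDTH_US;
    if (i >= NUM_BINS)
        i = NUM_BINS - 1;     /* clamp outliers into the last bin */
    h->bins[i]++;
    h->total++;
}

/* Return an upper bound (in usec) for the given percentile, e.g. 99.9. */
static uint64_t hist_percentile(const struct owd_hist *h, double pct)
{
    uint64_t target = (uint64_t)(h->total * (pct / 100.0));
    uint64_t seen = 0;
    for (size_t i = 0; i < NUM_BINS; i++) {
        seen += h->bins[i];
        if (seen >= target)
            return (i + 1) * BIN_WIDTH_US;
    }
    return (uint64_t)NUM_BINS * BIN_WIDTH_US;
}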
Our automated testing is focused around unit tests and is used to
statistically monitor code changes (which come at a high rate and apply to a
broad range of chips), so the requirements can be very different from those
of a network or service provider.

Agreed that the number of knobs and reactive components is a challenge. And
one must assume non-linearity, which becomes obvious after a few direct
measurements (i.e. no averaging). The challenge of statistical
reproducibility is always there. We find Monte Carlo techniques can be
useful only when they are proven to be statistically reproducible.

Bob

On Sat, Jul 17, 2021 at 4:29 PM Aaron Wood wrote:
> On Mon, Jul 12, 2021 at 1:32 PM Ben Greear wrote:
>> UDP is better for getting actual packet latency, for sure. TCP is
>> typical-user-experience-latency though, so it is also useful.
>>
>> I'm interested in the test and visualization side of this. If there were
>> a way to give engineers a good real-time look at a complex real-world
>> network, then they would have something to go on while trying to tune
>> various knobs in their network to improve it.
>
> I've always liked the smoke-ping visualization, although a single graph is
> only really useful for a single pair of endpoints (or a single segment,
> maybe). But I can see using a repeated set of graphs (Tufte has some
> examples) that can represent an overview of pairwise collections of
> latency+loss:
> https://www.edwardtufte.com/bboard/images/0003Cs-8047.GIF
> https://www.edwardtufte.com/tufte/psysvcs_p2
>
> These work for understanding because the tiled graphs are all identically
> constructed: the reader first learns how to read a single tile, and then
> learns the pattern of which tiles represent which measurements.
>
> Further, they are opinionated. In the second link above, the y axis is not
> based on the measured data but on standardized expected values, which (I
> think) is key to quick readability. You never need to read the axes. Much
> like setting up gauges such that "nominal" is always at the same indicator
> position for all graphs (e.g. straight up). At a glance, you can see
> whether things are "correct" or not.
>
> That tiling arrangement wouldn't be great for showing interrelationships
> (although it may give you a good historical view of correlated behavior).
> One thought is to overlay a network graph diagram (a graph of all network
> links) with small "sparkline" type graphs.
>
> For a more physical network graph, I could see visualizing the queue depth
> for each egress port (max value over a time of X, or percentage of time at
> max depth).
>
> Taken together, the timewise correlation could be useful (which peers are
> having problems communicating, and which ports between them are impacted?).
>
> I think getting good data about queue depth may be the hard part,
> especially catching transients and the duty cycle / pulse-width of the
> load (and then converting that to a number). Back when I uncovered that
> the iperf application-level pacing granularity was too high, five years
> ago, I called them "millibursts", and maybe dtaht pointed out that link
> utilization is always 0% or 100%, and it's just a matter of the PWM of the
> packet rate that makes it look like something in between.
> https://burntchrome.blogspot.com/2016/09/iperf3-and-microbursts.html
>
>> I'll let others try to figure out how to build and tune the knobs, but
>> the data acquisition and visualization is something we might try to
>> accomplish. I have a feeling I'm not the first person to think of this,
>> however... probably someone has already done such a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>>> I believe end hosts' TCP stats are insufficient, as seen per the
>>> "failed" congestion control mechanisms over the last decades. I think
>>> Jaffe pointed this out in 1979, though he was using what's been deemed
>>> on this thread as "spherical cow queueing theory."
>>>
>>> "Flow control in store-and-forward computer networks is appropriate for
>>> decentralized execution. A formal description of a class of
>>> "decentralized flow control algorithms" is given. The feasibility of
>>> maximizing power with such algorithms is investigated. On the assumption
>>> that communication links behave like M/M/1 servers it is shown that no
>>> "decentralized flow control algorithm" can maximize network power. Power
>>> has been suggested in the literature as a network performance objective.
>>> It is also shown that no objective based only on the users' throughputs
>>> and average delay is decentralizable. Finally, a restricted class of
>>> algorithms cannot even approximate power."
>>>
>>> https://ieeexplore.ieee.org/document/1095152
>>>
>>> Did Jaffe make a mistake?
>>>
>>> Also, it's been observed that latency is non-parametric in its
>>> distributions, and computing gaussians per the central limit theorem for
>>> OWD feedback loops isn't effective. How does one design a control loop
>>> around things that are non-parametric? It also begs the question: what
>>> are the feed-forward knobs that can actually help?
>>>
>>> Bob
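For anyone who wants the algebra behind the "power" objective in that
abstract: on a single M/M/1 link with service rate \mu and offered load
\lambda < \mu, power is throughput divided by mean delay. This is the
textbook illustration of the objective, not a restatement of Jaffe's proof:

\[
  T(\lambda) = \frac{1}{\mu - \lambda}, \qquad
  P(\lambda) = \frac{\lambda}{T(\lambda)} = \lambda(\mu - \lambda),
\]
\[
  \frac{dP}{d\lambda} = \mu - 2\lambda = 0
  \;\Rightarrow\; \lambda^{*} = \frac{\mu}{2}.
\]

So power peaks at 50% utilization, where the mean number in the system is
\rho/(1-\rho) = 1. The quoted result is that decentralized algorithms
cannot hold a network of such links at that operating point.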
>>> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear wrote:
>>>> Measuring one or a few links provides a bit of data, but it seems like
>>>> if someone is trying to understand a large and real network, then the
>>>> OWD between point A and B needs to just be input into something much
>>>> more grand. Assuming real-time OWD data exists between 100 to 1000
>>>> endpoint pairs, has anyone found a way to visualize this in a useful
>>>> manner?
>>>>
>>>> Also, considering something better than ntp may not really scale to
>>>> 1000+ endpoints, maybe round-trip time is the only viable way to get
>>>> this type of data. In that case, maybe clever logic could use things
>>>> like trace-route to get some idea of how long it takes to get 'onto'
>>>> the internet proper, and so estimate the last-mile latency. My
>>>> assumption is that the last-mile latency is where most of the pervasive
>>>> asymmetric network latencies would exist (or just ping 8.8.8.8, which
>>>> is 20ms from everywhere due to $magic).
>>>>
>>>> Endpoints could also triangulate a bit if needed, using some anchor
>>>> points in the network under test.
>>>>
>>>> Thanks,
>>>> Ben
>>>>
>>>> On 7/12/21 11:21 AM, Bob McMahon wrote:
>>>>> iperf 2 supports OWD and gives full histograms for TCP write to read,
>>>>> TCP connect times, latency of packets (with UDP), latency of "frames"
>>>>> with simulated video traffic (TCP and UDP), xfer times of bursts with
>>>>> low duty cycle traffic, and TCP RTT (sampling based). It also has
>>>>> support for sampling (per interval reports) down to 100 usecs if
>>>>> configured with --enable-fastsampling; otherwise the fastest sampling
>>>>> is 5 ms. We've released all this as open source.
>>>>>
>>>>> OWD only works if the end realtime clocks are synchronized using a
>>>>> "machine level" protocol such as IEEE 1588 or PTP. Sadly, most data
>>>>> centers don't provide a sufficient level of clock accuracy, or the GPS
>>>>> pulse per second, to colo and VM customers.
>>>>>
>>>>> https://iperf2.sourceforge.io/iperf-manpage.html
>>>>>
>>>>> Bob
>>>>>
>>>>> On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com>
>>>>> wrote:
>>>>>> On Monday, July 12, 2021 9:46am, "Livingood, Jason"
>>>>>> <Jason_Livingood@comcast.com> said:
>>>>>>
>>>>>>> I think latency/delay is becoming seen to be as important, certainly,
>>>>>>> if not a more direct proxy for end user QoE. This is all still
>>>>>>> evolving and I have to say is a super interesting & fun thing to
>>>>>>> work on. :-)
>>>>>>
>>>>>> If I could manage to sell one idea to the management hierarchy of
>>>>>> communications industry CEOs (operators, vendors, ...) it is this one:
>>>>>>
>>>>>> "It's the end-to-end latency, stupid!"
>>>>>>
>>>>>> And I mean, by end-to-end, latency to complete a task at a relevant
>>>>>> layer of abstraction.
>>>>>>
>>>>>> At the link level, it's packet send to packet receive completion.
>>>>>>
>>>>>> But at the transport level, including retransmission buffers, it's
>>>>>> datagram (or message) origination until the acknowledgement arrives
>>>>>> for that message being delivered after whatever number of
>>>>>> retransmissions, freeing the retransmission buffer.
>>>>>>
>>>>>> At the WWW level, it's mouse click to display update corresponding to
>>>>>> completion of the request.
>>>>>>
>>>>>> What should be noted is that lower level latencies don't directly
>>>>>> predict the magnitude of higher-level latencies. But longer lower
>>>>>> level latencies almost always amplify higher level latencies. Often
>>>>>> non-linearly.
>>>>>>
>>>>>> Throughput is very, very weakly related to these latencies, in
>>>>>> contrast.
>>>>>>
>>>>>> The amplification process has to do with the presence of queueing.
>>>>>> Queueing is ALWAYS bad for latency, and throughput only helps if it
>>>>>> is in exactly the right place (the so-called input queue of the
>>>>>> bottleneck process, which is often a link, but not always).
>>>>>>
>>>>>> Can we get that slogan into Harvard Business Review? Can we get it
>>>>>> taught in Managerial Accounting at HBS? (which does address
>>>>>> logistics/supply chain queueing).
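A toy model of that amplification (my sketch, not something from David's
note): suppose a task needs N dependent exchanges, each paying a fixed
lower-level delay d plus an M/M/1-style queueing wait at a bottleneck with
service rate \mu and utilization \rho:

\[
  T_{\mathrm{task}} \approx N \left( d + \frac{\rho}{\mu\,(1 - \rho)} \right).
\]

The queueing term grows without bound as \rho \to 1 and is multiplied
through every one of the N exchanges, so a modest increase in lower-level
delay or load shows up non-linearly at the task level, while throughput is
largely unchanged.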
>>
>> --
>> Ben Greear
>> Candela Technologies Inc  http://www.candelatech.com
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
Just an FYI,

iperf 2 uses a 4 usec delay for TCP an= d 100 usec delay for UDP=C2=A0to fill the token bucket. We thought about pr= oviding a knob for this but decided not to. We figured a busy wait CPU thre= ad wasn't a big deal because of the trend=C2=A0of many CPU cores. The t= hreaded=C2=A0design works well for this. We also support fq-pacing and isoc= hronous traffic using clock_nanosleep() to schedule the writes. We'll p= robably=C2=A0add Markov chain support but that's not critical and may n= ot affect actionable engineering. We found isoch as a useful traffic profil= e, at least for our WiFi testing. I'm going to add support for TCP_NOTS= ENT_LOWAT for select()/write() based transmissions. I'm doubtful this i= s very useful as event based scheduling based on times seems better. We'= ;ll probably use it for unit testing WiFi aggregation and see if it helps t= here or not. I'll see if it aligns with the OWD measurements.=C2=A0
=
On queue depth, we use two techniques. The most obvious is to measure t= he end to end delay and use rx histograms, getting all the samples without = averaging. The second, internal for us only, is using network telemetry and= mapping all the clock domains to the GPS domain. Any moment in time the en= d/end path can be inspected to where every packet is.=C2=A0

Our auto= mated testing is focused around unit tests and used to statistically monito= r code changes (which come at a high rate and apply to a broad range of chi= ps) - so the requirements can be very different from a network or service p= rovider.

Agreed that the amount of knobs and reactive components are= a challenge. And one must assume non-linearity which becomes obvious after= a few direct measurements (i.e. no averaging.) The challenge of statistica= l;y reproducible is always there. We find Montec Carlo techniques can be us= eful only when they are proven to be statistically reproducible.

Bob=


On Sat, Jul 17, 2021 at 4:29 PM Aaron Wood <woody77@gmail.com> wrote:
On Mon,= Jul 12, 2021 at 1:32 PM Ben Greear <greearb@candelatech.com> wrote:
<= div class=3D"gmail_quote">
UDP is better for getting actual packet latency, for sure.=C2=A0 TCP is ty= pical-user-experience-latency though,
so it is also useful.

I'm interested in the test and visualization side of this.=C2=A0 If the= re were a way to give engineers
a good real-time look at a complex real-world network, then they have somet= hing to go on while trying
to tune various knobs in their network to improve it.
=
I've always liked the smoke-ping visualization, although= a single graph is only really useful for a single pair of endpoints (or a = single segment, maybe).=C2=A0 But I can see using a repeated set of graphs = (Tufte has some examples), that can represent an overview of pairwise colle= ctions of latency+loss:

These work for understan= ding because the tiled graphs are all identically constructed, and the read= er first learns how to read a single tile, and then learns the pattern of w= hich tiles represent which measurements.

Further, = they are opinionated.=C2=A0 In the second link above, the y axis is not bas= ed on the measured data, but standardized expected values, which (I think) = is key to quick readability.=C2=A0 You never need to read the axes.=C2=A0 M= uch like setting up gauges such that "nominal" is always at the s= ame indicator position for all graphs (e.g. straight up).=C2=A0 At a glance= , you can see if things are "correct" or not.

That tiling arrangement wouldn't be great for showing interrelati= onships (although it may give you a good historical view of correlated beha= vior).=C2=A0 One thought is to overlay a network graph diagram (graph of al= l network links) with small "sparkline" type graphs.
For a more physical-based network graph, I could see visualizi= ng the queue depth for each egress port (max value over a time of X, or per= centage of time at max depth).

Taken together, the= timewise correlation could be useful (which peers are having problems comm= unicating, and which ports between them are impacted?).

I think getting good data about queue depth may be the hard part, esp= ecially catching transients and the duty cycle / pulse-width of the load (a= nd then converting that to a number).=C2=A0 Back when I uncovered the iperf= application-level pacing granularity was too high 5 years ago, I called it= them "millibursts", and maybe dtaht pointed out that link utiliz= ation is always 0% or 100%, and it's just a matter of the PWM of the pa= cket rate that makes it look like something in between.



I'll let others try to figure out how build and tune the knobs, but the= data acquisition and
visualization is something we might try to accomplish.=C2=A0 I have a feeli= ng I'm not the
first person to think of this, however....probably someone already has done= such
a thing.

Thanks,
Ben

On 7/12/21 1:04 PM, Bob McMahon wrote:
> I believe end host's TCP stats are insufficient as seen per the &q= uot;failed" congested control mechanisms over the last decades. I thin= k Jaffe pointed this out in
> 1979 though he was using what's been deemed on this thread as &quo= t;spherical cow queueing theory."
>
> "Flow control in store-and-forward computer networks is appropria= te for decentralized execution. A formal description of a class of "de= centralized flow control
> algorithms" is given. The feasibility of maximizing power with su= ch algorithms is investigated. On the assumption that communication links b= ehave like M/M/1
> servers it is shown that no "decentralized flow control algorithm= " can maximize network power. Power has been suggested in the literatu= re as a network
> performance objective. It is also shown that no objective based only o= n the users' throughputs and average delay is decentralizable. Finally,= a restricted class
> of algorithms cannot even approximate power."
>
> https://ieeexplore.ieee.org/document/1095152 >
> Did Jaffe make a mistake?
>
> Also, it's been observed that latency=C2=A0is non-parametric in it= 's distributions and computing gaussians=C2=A0per the central limit the= orem for OWD feedback loops
> aren't effective. How does one design a control loop around things= that are non-parametric? It also begs the question, what are the feed forw= ard knobs that can
> actually help?
>
> Bob
>
> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com <mail= to:greearb@can= delatech.com>> wrote:
>
>=C2=A0 =C2=A0 =C2=A0Measuring one or a few links provides a bit of data= , but seems like if someone is trying to understand
>=C2=A0 =C2=A0 =C2=A0a large and real network, then the OWD between poin= t A and B needs to just be input into something much
>=C2=A0 =C2=A0 =C2=A0more grand.=C2=A0 Assuming real-time OWD data exist= s between 100 to 1000 endpoint pairs, has anyone found a way
>=C2=A0 =C2=A0 =C2=A0to visualize this in a useful manner?
>
>=C2=A0 =C2=A0 =C2=A0Also, considering something better than ntp may not= really scale to 1000+ endpoints, maybe round-trip
>=C2=A0 =C2=A0 =C2=A0time is only viable way to get this type of data.= =C2=A0 In that case, maybe clever logic could use things
>=C2=A0 =C2=A0 =C2=A0like trace-route to get some idea of how long it ta= kes to get 'onto' the internet proper, and so estimate
>=C2=A0 =C2=A0 =C2=A0the last-mile latency.=C2=A0 My assumption is that = the last-mile latency is where most of the pervasive
>=C2=A0 =C2=A0 =C2=A0assymetric network latencies would exist (or just p= ing 8.8.8.8 which is 20ms from everywhere due to
>=C2=A0 =C2=A0 =C2=A0$magic).
>
>=C2=A0 =C2=A0 =C2=A0Endpoints could also triangulate a bit if needed, u= sing some anchor points in the network
>=C2=A0 =C2=A0 =C2=A0under test.
>
>=C2=A0 =C2=A0 =C2=A0Thanks,
>=C2=A0 =C2=A0 =C2=A0Ben
>
>=C2=A0 =C2=A0 =C2=A0On 7/12/21 11:21 AM, Bob McMahon wrote:
>=C2=A0 =C2=A0 =C2=A0 > iperf 2 supports OWD and gives full histogram= s for TCP write to read, TCP connect times, latency of packets (with UDP), = latency of "frames" with
>=C2=A0 =C2=A0 =C2=A0 > simulated=C2=A0video=C2=A0traffic (TCP and UD= P), xfer times of bursts with low duty cycle traffic, and TCP RTT (sampling= based.) It also has support for sampling (per
>=C2=A0 =C2=A0 =C2=A0 > interval reports) down to 100 usecs if config= ured with --enable-fastsampling, otherwise the fastest sampling is 5 ms. We= 've released all this as open source.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 > OWD only works if the end realtime clocks are= synchronized using a "machine level" protocol such as IEEE 1588 = or PTP. Sadly, *most data centers don't
>=C2=A0 =C2=A0 =C2=A0provide
>=C2=A0 =C2=A0 =C2=A0 > sufficient level of clock accuracy and the GP= S pulse per second * to colo and vm customers.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 > https://iperf2.sourcef= orge.io/iperf-manpage.html
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 > Bob
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 > On Mon, Jul 12, 2021 at 10:40 AM David P. Ree= d <dpreed@deepp= lum.com <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
>=C2=A0 =C2=A0 =C2=A0<mailto:dpreed@deepplum.com>>> wrote:
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0On Monday, July 12, 2021 9= :46am, "Livingood, Jason" <Jason_Livingood@comcast.com <mailto:Jason_Living= ood@comcast.com>
>=C2=A0 =C2=A0 =C2=A0<mailto:Jason_Livingood@comcast.com <mailto:Jason_Livingood@c= omcast.com>>> said:
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0 > I think latency/dela= y is becoming seen to be as important certainly, if not a more direct proxy= for end user QoE. This is all still evolving and I
>=C2=A0 =C2=A0 =C2=A0have
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0to say is a super interest= ing & fun thing to work on. :-)
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0If I could manage to sell = one idea to the management hierarchy of communications industry CEOs (opera= tors, vendors, ...) it is this one:
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0"It's the end-to-= end latency, stupid!"
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0And I mean, by end-to-end,= latency to complete a task at a relevant layer of abstraction.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0At the link level, it'= s packet send to packet receive completion.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0But at the transport level= including retransmission buffers, it's datagram (or message) originati= on until the acknowledgement arrives for that
>=C2=A0 =C2=A0 =C2=A0message being
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0delivered after whatever n= umber of retransmissions, freeing the retransmission buffer.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0At the WWW level, it's= mouse click to display update corresponding to completion of the request.<= br> >=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0What should be noted is th= at lower level latencies don't directly predict the magnitude of higher= -level latencies. But longer lower level latencies
>=C2=A0 =C2=A0 =C2=A0almost
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0always amplfify higher lev= el latencies. Often non-linearly.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0Throughput is very, very w= eakly related to these latencies, in contrast.
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0The amplification process = has to do with the presence of queueing. Queueing is ALWAYS bad for latency= , and throughput only helps if it is in exactly the
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0right place (the so-called= input queue of the bottleneck process, which is often a link, but not alwa= ys).
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >=C2=A0 =C2=A0 =C2=A0Can we get that slogan int= o Harvard Business Review? Can we get it taught in Managerial Accounting at= HBS? (which does address logistics/supply chain
>=C2=A0 =C2=A0 =C2=A0queueing).
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 >
>=C2=A0 =C2=A0 =C2=A0 > This electronic communication and the informa= tion and any files transmitted with it, or attached to it, are confidential= and are intended solely for the
>=C2=A0 =C2=A0 =C2=A0use of
>=C2=A0 =C2=A0 =C2=A0 > the individual or entity to whom it is addres= sed and may contain information that is confidential, legally privileged, p= rotected by privacy laws, or
>=C2=A0 =C2=A0 =C2=A0otherwise
>=C2=A0 =C2=A0 =C2=A0 > restricted from disclosure to anyone else. If= you are not the intended recipient or the person responsible for deliverin= g the e-mail to the intended
>=C2=A0 =C2=A0 =C2=A0recipient,
>=C2=A0 =C2=A0 =C2=A0 > you are hereby notified that any use, copying= , distributing, dissemination, forwarding, printing, or copying of this e-m= ail is strictly prohibited. If you
>=C2=A0 =C2=A0 =C2=A0 > received this e-mail in error, please return = the e-mail to the sender, delete it from your computer, and destroy any pri= nted copy of it.
>
>
>=C2=A0 =C2=A0 =C2=A0--
>=C2=A0 =C2=A0 =C2=A0Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>>
>=C2=A0 =C2=A0 =C2=A0Candela Technologies Inc
http://www.candelatech.co= m
>
>
> This electronic communication and the information and any files transm= itted with it, or attached to it, are confidential and are intended solely = for the use of
> the individual or entity to whom it is addressed and may contain infor= mation that is confidential, legally privileged, protected by privacy laws,= or otherwise
> restricted from disclosure to anyone else. If you are not the intended= recipient or the person responsible for delivering the e-mail to the inten= ded recipient,
> you are hereby notified that any use, copying, distributing, dissemina= tion, forwarding, printing, or copying of this e-mail is strictly prohibite= d. If you
> received this e-mail in error, please return the e-mail to the sender,= delete it from your computer, and destroy any printed copy of it.


--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc=C2=A0 http://www.candelatech.com

_______________________________________________
Bloat mailing list
Bloat@list= s.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

This ele= ctronic communication and the information and any files transmitted with it= , or attached to it, are confidential and are intended solely for the use o= f the individual or entity to whom it is addressed and may contain informat= ion that is confidential, legally privileged, protected by privacy laws, or= otherwise restricted from disclosure to anyone else. If you are not the in= tended recipient or the person responsible for delivering the e-mail to the= intended recipient, you are hereby notified that any use, copying, distrib= uting, dissemination, forwarding, printing, or copying of this e-mail is st= rictly prohibited. If you received this e-mail in error, please return the = e-mail to the sender, delete it from your computer, and destroy any printed= copy of it. --0000000000000826e105c76a85eb-- --0000000000000da38b05c76a85a8 Content-Type: application/pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic Signature MIIQagYJKoZIhvcNAQcCoIIQWzCCEFcCAQExDzANBglghkgBZQMEAgEFADALBgkqhkiG9w0BBwGg gg3BMIIFDTCCA/WgAwIBAgIQeEqpED+lv77edQixNJMdADANBgkqhkiG9w0BAQsFADBMMSAwHgYD VQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEGA1UEChMKR2xvYmFsU2lnbjETMBEGA1UE AxMKR2xvYmFsU2lnbjAeFw0yMDA5MTYwMDAwMDBaFw0yODA5MTYwMDAwMDBaMFsxCzAJBgNVBAYT AkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMTEwLwYDVQQDEyhHbG9iYWxTaWduIEdDQyBS MyBQZXJzb25hbFNpZ24gMiBDQSAyMDIwMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA vbCmXCcsbZ/a0fRIQMBxp4gJnnyeneFYpEtNydrZZ+GeKSMdHiDgXD1UnRSIudKo+moQ6YlCOu4t rVWO/EiXfYnK7zeop26ry1RpKtogB7/O115zultAz64ydQYLe+a1e/czkALg3sgTcOOcFZTXk38e aqsXsipoX1vsNurqPtnC27TWsA7pk4uKXscFjkeUE8JZu9BDKaswZygxBOPBQBwrA5+20Wxlk6k1 e6EKaaNaNZUy30q3ArEf30ZDpXyfCtiXnupjSK8WU2cK4qsEtj09JS4+mhi0CTCrCnXAzum3tgcH cHRg0prcSzzEUDQWoFxyuqwiwhHu3sPQNmFOMwIDAQABo4IB2jCCAdYwDgYDVR0PAQH/BAQDAgGG MGAGA1UdJQRZMFcGCCsGAQUFBwMCBggrBgEFBQcDBAYKKwYBBAGCNxQCAgYKKwYBBAGCNwoDBAYJ KwYBBAGCNxUGBgorBgEEAYI3CgMMBggrBgEFBQcDBwYIKwYBBQUHAxEwEgYDVR0TAQH/BAgwBgEB /wIBADAdBgNVHQ4EFgQUljPR5lgXWzR1ioFWZNW+SN6hj88wHwYDVR0jBBgwFoAUj/BLf6guRSSu TVD6Y5qL3uLdG7wwegYIKwYBBQUHAQEEbjBsMC0GCCsGAQUFBzABhiFodHRwOi8vb2NzcC5nbG9i YWxzaWduLmNvbS9yb290cjMwOwYIKwYBBQUHMAKGL2h0dHA6Ly9zZWN1cmUuZ2xvYmFsc2lnbi5j b20vY2FjZXJ0L3Jvb3QtcjMuY3J0MDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwuZ2xvYmFs c2lnbi5jb20vcm9vdC1yMy5jcmwwWgYDVR0gBFMwUTALBgkrBgEEAaAyASgwQgYKKwYBBAGgMgEo CjA0MDIGCCsGAQUFBwIBFiZodHRwczovL3d3dy5nbG9iYWxzaWduLmNvbS9yZXBvc2l0b3J5LzAN BgkqhkiG9w0BAQsFAAOCAQEAdAXk/XCnDeAOd9nNEUvWPxblOQ/5o/q6OIeTYvoEvUUi2qHUOtbf jBGdTptFsXXe4RgjVF9b6DuizgYfy+cILmvi5hfk3Iq8MAZsgtW+A/otQsJvK2wRatLE61RbzkX8 9/OXEZ1zT7t/q2RiJqzpvV8NChxIj+P7WTtepPm9AIj0Keue+gS2qvzAZAY34ZZeRHgA7g5O4TPJ /oTd+4rgiU++wLDlcZYd/slFkaT3xg4qWDepEMjT4T1qFOQIL+ijUArYS4owpPg9NISTKa1qqKWJ jFoyms0d0GwOniIIbBvhI2MJ7BSY9MYtWVT5jJO3tsVHwj4cp92CSFuGwunFMzCCA18wggJHoAMC AQICCwQAAAAAASFYUwiiMA0GCSqGSIb3DQEBCwUAMEwxIDAeBgNVBAsTF0dsb2JhbFNpZ24gUm9v dCBDQSAtIFIzMRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTA5 MDMxODEwMDAwMFoXDTI5MDMxODEwMDAwMFowTDEgMB4GA1UECxMXR2xvYmFsU2lnbiBSb290IENB IC0gUjMxEzARBgNVBAoTCkdsb2JhbFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDMJXaQeQZ4Ihb1wIO2hMoonv0FdhHFrYhy/EYCQ8eyip0E XyTLLkvhYIJG4VKrDIFHcGzdZNHr9SyjD4I9DCuul9e2FIYQebs7E4B3jAjhSdJqYi8fXvqWaN+J J5U4nwbXPsnLJlkNc96wyOkmDoMVxu9bi9IEYMpJpij2aTv2y8gokeWdimFXN6x0FNx04Druci8u nPvQu7/1PQDhBjPogiuuU6Y6FnOM3UEOIDrAtKeh6bJPkC4yYOlXy7kEkmho5TgmYHWyn3f/kRTv 
riBJ/K1AFUjRAjFhGV64l++td7dkmnq/X8ET75ti+w1s4FRpFqkD2m7pg5NxdsZphYIXAgMBAAGj QjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSP8Et/qC5FJK5N UPpjmove4t0bvDANBgkqhkiG9w0BAQsFAAOCAQEAS0DbwFCq/sgM7/eWVEVJu5YACUGssxOGhigH M8pr5nS5ugAtrqQK0/Xx8Q+Kv3NnSoPHRHt44K9ubG8DKY4zOUXDjuS5V2yq/BKW7FPGLeQkbLmU Y/vcU2hnVj6DuM81IcPJaP7O2sJTqsyQiunwXUaMld16WCgaLx3ezQA3QY/tRG3XUyiXfvNnBB4V 14qWtNPeTCekTBtzc3b0F5nCH3oO4y0IrQocLP88q1UOD5F+NuvDV0m+4S4tfGCLw0FREyOdzvcy a5QBqJnnLDMfOjsl0oZAzjsshnjJYS8Uuu7bVW/fhO4FCU29KNhyztNiUGUe65KXgzHZs7XKR1g/ XzCCBUkwggQxoAMCAQICDBhL7k9eiTHfluW70TANBgkqhkiG9w0BAQsFADBbMQswCQYDVQQGEwJC RTEZMBcGA1UEChMQR2xvYmFsU2lnbiBudi1zYTExMC8GA1UEAxMoR2xvYmFsU2lnbiBHQ0MgUjMg UGVyc29uYWxTaWduIDIgQ0EgMjAyMDAeFw0yMTAyMjIwNDQyMDRaFw0yMjA5MDEwODA5NDlaMIGM MQswCQYDVQQGEwJJTjESMBAGA1UECBMJS2FybmF0YWthMRIwEAYDVQQHEwlCYW5nYWxvcmUxFjAU BgNVBAoTDUJyb2FkY29tIEluYy4xFDASBgNVBAMTC0JvYiBNY01haG9uMScwJQYJKoZIhvcNAQkB Fhhib2IubWNtYWhvbkBicm9hZGNvbS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB AQDyY95HWFm48WhKUyFbAS9JxiDqBHBdAbgjx4iF46lkqZdVkIJ8pGfcXoGd10Vp9yL5VQevDAt/ A/Jh22uhSgKR9Almeux9xWGhG8cyZwcCwYrsMt84FqCgEQidT+7YGNdd9oKrjU7mFC7pAnnw+cGI d3NFryurgnNPwfEK0X7HwRsga5pM+Zelr/ZM8MkphE1hCvTuPGakNylOFhP+wKL8Bmhsq5tNIInw DrPV5EPUikwiGMDmkX8o6roGiUwyqAp8dMZKJZ/vS/aWEELV+gm21Btr7eqdAWyqm09McVpkM4th v/FOYcj8DeJr8MXmHW53gN2fv0BzQjqAdrdCBPNRAgMBAAGjggHZMIIB1TAOBgNVHQ8BAf8EBAMC BaAwgaMGCCsGAQUFBwEBBIGWMIGTME4GCCsGAQUFBzAChkJodHRwOi8vc2VjdXJlLmdsb2JhbHNp Z24uY29tL2NhY2VydC9nc2djY3IzcGVyc29uYWxzaWduMmNhMjAyMC5jcnQwQQYIKwYBBQUHMAGG NWh0dHA6Ly9vY3NwLmdsb2JhbHNpZ24uY29tL2dzZ2NjcjNwZXJzb25hbHNpZ24yY2EyMDIwME0G A1UdIARGMEQwQgYKKwYBBAGgMgEoCjA0MDIGCCsGAQUFBwIBFiZodHRwczovL3d3dy5nbG9iYWxz aWduLmNvbS9yZXBvc2l0b3J5LzAJBgNVHRMEAjAAMEkGA1UdHwRCMEAwPqA8oDqGOGh0dHA6Ly9j cmwuZ2xvYmFsc2lnbi5jb20vZ3NnY2NyM3BlcnNvbmFsc2lnbjJjYTIwMjAuY3JsMCMGA1UdEQQc MBqBGGJvYi5tY21haG9uQGJyb2FkY29tLmNvbTATBgNVHSUEDDAKBggrBgEFBQcDBDAfBgNVHSME GDAWgBSWM9HmWBdbNHWKgVZk1b5I3qGPzzAdBgNVHQ4EFgQUpyXYr5rh8cZzkns+zXmMG1YkBk4w DQYJKoZIhvcNAQELBQADggEBACfauRPak93nzbpn8UXqRZqg6iUZch/UfGj9flerMl4TlK5jWulz Y+rRg+iWkjiLk3O+kKu6GI8TLXB2rsoTnrHYij96Uad5/Ut3Q5F4S0ILgOWVU38l0VZIGGG0CzG1 eLUgN2zjLg++xJuzqijuKQCJb/3+il2MTJ8dcDaXuYcjg7Vt6+EtCBS1SGMVhOTH4Fp50yGWj8ZA bPF1uuJM+dGLJLheUizCr5J/OBEdENg+DSmrqoZ+kZd76iRaF2CkhboR2394Ft8lFlKQiU0q8lnR 9/kdZ0F0iCcUfhaLaGYWujW7N0LZ+rQuTfuPGLx9zZNeNMWSZi/Pc8vdCO7EnlIxggJtMIICaQIB ATBrMFsxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMTEwLwYDVQQDEyhH bG9iYWxTaWduIEdDQyBSMyBQZXJzb25hbFNpZ24gMiBDQSAyMDIwAgwYS+5PXokx35blu9EwDQYJ YIZIAWUDBAIBBQCggdQwLwYJKoZIhvcNAQkEMSIEILGHZtu8dfz9np8m4TO8OnX12n7vfBxMHG3g bMAR20jHMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTIxMDcxODE5 MDYzNlowaQYJKoZIhvcNAQkPMVwwWjALBglghkgBZQMEASowCwYJYIZIAWUDBAEWMAsGCWCGSAFl AwQBAjAKBggqhkiG9w0DBzALBgkqhkiG9w0BAQowCwYJKoZIhvcNAQEHMAsGCWCGSAFlAwQCATAN BgkqhkiG9w0BAQEFAASCAQAkPlcDulhwGXRfF4kBl91chXReRgW5QB41FqGdO/okVi8pqJF6gTvU MBa3qbHWkB4S4L9MnJGRIBEkTeGLKjj5voL8EI4dEbnTYiVNkut+jC33qKaW6E3enZya4OnD0JqQ wjAMV6xSc6RGKab6BYdeyM12v62NFFWaiGCkto2Dka13DZk286TGHfnNdNvKKPN9k859isC0PPBZ X3idaPzNUO8nJbaBbQGe6k6uYwuWx34jZ4gX7EnOd2kGKZ1Pc6rDUBKlpvBYl76RW8dZ7SpD0WeC V0hJT98m3a574aIQU96TJvM5NK9/tLl+sPn+M8NAQVCrWYCZW4Jv+vo/zotp --0000000000000da38b05c76a85a8--