<div dir="ltr">Thanks for this. I plan to purchase the second volume to go with my copy of volume 1. There is (always) more to learn, and your expertise is very helpful.<br><br>Bob<br><br>PS. As a side note, I've added support for <a href="https://iperf2.sourceforge.io/iperf-manpage.html" target="_blank">TCP_NOTSENT_LOWAT in iperf 2.1.4</a>, and it's proving interesting for WiFi/BT latency testing, including helping to mitigate sender-side bloat.<br><dl><dt style="color:rgb(0,0,0);font-family:Times;font-size:medium"><b>--tcp-write-prefetch </b><i>n</i>[kmKM]</dt><dd style="color:rgb(0,0,0);font-family:Times;font-size:medium">Set TCP_NOTSENT_LOWAT on the socket and use event-based writes driven by select() on the socket.</dd></dl>I'll probably add measurement of the select() delays to see whether they correlate with things like RF arbitration, etc.<br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jul 21, 2021 at 4:20 PM Leonard Kleinrock <<a href="mailto:lk@cs.ucla.edu" target="_blank">lk@cs.ucla.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div dir="auto"><div dir="auto"><div dir="auto">Just a few comments following David Reed's insightful comments re the history of the ARPANET and its approach to flow control. I have attached some pages from my Volume II which provide an understanding of how we addressed flow control and its implementation in the ARPANET.</div><div dir="auto"><br></div><div dir="auto">The early days of the ARPANET design and evaluation involved detailed design of what we did call “Flow Control”. In my “Queueing Systems, Volume II: Computer Applications”, John Wiley, 1976, I documented much of what we designed and evaluated for the ARPANET, focusing on performance, deadlocks, lockups and degradations due to flow control design.
Aspects of congestion control were considered, but this 2-volume book was mostly about understanding congestion. Of interest are the many deadlocks that we discovered in those early days as we evaluated and measured the network behavior. Flow control was designed into that early network, but it had a certain ad-hoc flavor, and I point out the danger of requiring flows to depend upon the acquisition of multiple tokens allocated from different portions of the network at the same time in a distributed fashion. The attached relevant sections of the book address these issues; I thought it would be of value to see what we were looking at back then.</div><div dir="auto"><br></div><div dir="auto">On a related topic regarding flow and congestion control (as triggered by David’s comment <i>“at most one packet waiting for each egress link in the bottleneck path”</i>), in 1978, I published a <a href="https://www.lk.cs.ucla.edu/data/files/Kleinrock/On%20Flow%20Control%20in%20Computer%20Networks.pdf" target="_blank">paper</a> in which I extended the notion of Power (the ratio of throughput to response time) that had been introduced by <a href="https://www.sciencedirect.com/science/article/abs/pii/0376507578900284" target="_blank">Giessler et al.</a>, and I pointed out the amazing properties that emerged when Power is optimized, e.g., that one should keep each hop in the pipe “just full”, i.e., one message per hop.
As it turns out, and as has been discussed in this email chain, <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1095152" target="_blank">Jaffe</a> showed in 1981 that this optimization was not decentralizable, and so no one pursued this optimal operating point (notwithstanding the fact that I published other papers on this issue, for example in <a href="https://www.lk.cs.ucla.edu/data/files/Kleinrock/Power%20and%20Deterministic%20Rules%20of%20Thumb%20for%20Probabilistic.pdf" target="_blank">1979</a> and in <a href="https://www.lk.cs.ucla.edu/data/files/Gail/power.pdf" target="_blank">1981</a>). So this issue of Power lay dormant for decades until Van Jacobson, et al, resurrected the idea with their BBR congestion control design in <a href="https://queue.acm.org/detail.cfm?id=3022184" target="_blank">2016</a>, when they showed that indeed one could decentralize power. Considerable research has since followed their paper, including another by me in <a href="https://www.lk.cs.ucla.edu/data/files/Kleinrock/Internet%20congestion%20control%20using%20the%20power%20metric%20LK%20Mod%20aug%202%202018.pdf" target="_blank">2018</a>. (This was not the first time that a publication challenging the merits of a new idea negatively impacted that idea for decades - for example, the 1969 book <a href="https://www.amazon.com/Perceptrons-Introduction-Computational-Geometry-Expanded/dp/0262631113/ref=sr_1_2?dchild=1&keywords=perceptrons&qid=1626846378&sr=8-2" target="_blank">“Perceptrons”</a> by Minsky and Papert discouraged research into neural networks for many years until that idea was proven to have merit.) But the story is not over, as much work has yet to be done to develop the algorithms that can properly deal with congestion in the sense that this email chain continues to discuss it.
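A quick way to see where “just full”, one message per hop, comes from is standard M/M/1 algebra (a sketch using conventional queueing symbols, not taken from the attached book pages):

```latex
% Power for a single M/M/1 hop: throughput \gamma = \lambda,
% mean response time T = 1/(\mu - \lambda)
P(\lambda) = \frac{\gamma}{T} = \lambda(\mu - \lambda)
% Setting dP/d\lambda = \mu - 2\lambda = 0 gives the optimum:
\lambda^{*} = \frac{\mu}{2}, \qquad \rho^{*} = \frac{\lambda^{*}}{\mu} = \frac{1}{2}
% Mean number in the hop at the optimum:
\bar{N} = \frac{\rho^{*}}{1 - \rho^{*}} = 1 \quad \text{(one message per hop)}
```

So maximizing power at a single hop puts the utilization at one half and leaves, on average, exactly one message in the hop.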
</div><div dir="auto"><br></div><div dir="auto">Best,</div><div dir="auto">Len</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div><div><br><blockquote type="cite"><div>On Jul 13, 2021, at 10:49 AM, David P. Reed <<a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a>> wrote:</div><br><div><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">Bob -</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <</span><a href="mailto:bob.mcmahon@broadcom.com" style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px" target="_blank">bob.mcmahon@broadcom.com</a><span 
style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">> said:</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><blockquote type="cite" style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none">"Control at endpoints benefits greatly from even small amounts of<br>information supplied by the network about the degree of congestion present<br>on the path."<br><br>Agreed. The ECN mechanism seems like a shared thermostat in a building.<br>It's basically an on/off control where everyone is trying to set the temperature.<br>It does have an effect, albeit a non-linear one. Better than a<br>thermostat set at infinity or 0 Kelvin, for sure.<br><br>I find the assumption that congestion occurs "in network" is not always<br>true. Taking OWD measurements with read-side rate limiting suggests that<br>making sure apps read "fast enough", whatever that means, is just as<br>important to mitigating bufferbloat-driven latency as congestion<br>signals are. I rarely hear about how important it is for apps to<br>prioritize reads on open sockets. Not sure why that's overlooked while<br>bufferbloat gets all the attention.
I'm probably missing something.<br></blockquote><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">In the early days of the Internet protocol and also even ARPANET Host-Host protocol there were those who conflated host-level "flow control" (matching production rate of data into the network to the destination *process* consumption rate of data on a virtual circuit with a source capable of variable and unbounded bit rate) with "congestion control" in the network. The term "congestion control" wasn't even used in the Internetworking project when it was discussing design in the late 1970's. 
I tried to use it in our working group meetings, and every time I said "congestion" the response would be phrased as "flow".</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">The classic example was printing a file's contents from disk to an ASR33 terminal on a TIP (Terminal IMP). There was flow control in the end-to-end protocol to avoid overflowing the TTY's limited buffer. But those who grew up with ARPANET knew that there was no way to accumulate queueing in the IMP network, because of RFNM's that required permission for each new packet to be sent. RFNM's implicitly prevented congestion from being caused by a virtual circuit.
But a flow control problem remained, because at the higher level protocol, buffering would overflow at the TIP.</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">TCP adopted a different end-to-end *flow* control, so it solved the flow control problem by creating a windowing mechanism. But it did not by itself solve the *congestion* control problem, not even for congestion built up inside the network by a wide-open window and a lazy operating system at the receiving end that just said, "I've got a lot of virtual memory, so I'll open the window to maximum size."</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span 
style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">There was a lot of confusion, because the guys who came from the ARPANET environment, with all links being the same speed and RFNM limits on rate, couldn't see why the Internet stack was so collapse-prone. I think Multics, for example, as a giant virtual memory system caused congestion by opening up its window too much.</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">This is where Van Jacobson discovered that dropped packets were a "good enough" congestion signal because of "fate sharing" among the packets that flowed on a bottleneck path, and that windowing (invented for flow control by the receiver to protect itself from overflow if the receiver couldn't receive fast enough) could be used to slow down the sender to match the rate of senders to the capacity of the internal bottleneck link. 
An elegant "hack" that actually worked really well in practice.</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><span style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline">Now we view it as a bug if the receiver opens its window too much, or otherwise doesn't translate dropped packets (or other incipient-congestion signals) to shut down the source transmission rate as quickly as possible. Fortunately, the proper state of the internet - the one it should seek as its ideal state - is that there is at most one packet waiting for each egress link in the bottleneck path. This stable state ensures that the window-reduction or slow-down signal encounters no congestion, with high probability. [Excursions from one-packet queue occur, but since only one-packet waiting is sufficient to fill the bottleneck link to capacity, they can't achieve higher throughput in steady state. In practice, noisy arrival distributions can reduce throughput, so allowing a small number of packets to be waiting on a bottleneck link's queue can slightly increase throughput. 
That's not asymptotically relevant, but as mentioned, the Internet is never near asymptotic behavior.]</span><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><blockquote type="cite" style="font-family:ArialMT;font-size:16px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><br>Bob<br><br>On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <<a href="mailto:amr@rizk.com.de" target="_blank">amr@rizk.com.de</a>> wrote:<br><br><blockquote type="cite">Ben,<br><br>it depends on what one tries to measure. Doing a rate scan using UDP (to<br>measure latency distributions under load) is the best thing that we have<br>but without actually knowing how resources are shared (fair share as in<br>WiFi, FIFO as nearly everywhere else) it becomes very difficult to<br>interpret the results or provide a proper argument on latency. You are<br>right - TCP stats are a proxy for user experience but I believe they are<br>difficult to reproduce (we are always talking about very short TCP flows -<br>the infinite TCP flow that converges to a steady behavior is purely<br>academic).<br><br>By the way, Little's law is a strong tool when it comes to averages. To be<br>able to say more (e.g. 
1% of the delays are larger than x) one requires more<br>information (e.g. the traffic's ON-OFF pattern); see [1]. I am not sure<br>when such information readily exists.<br><br>Best<br>Amr<br><br>[1] <a href="https://dl.acm.org/doi/10.1145/3341617.3326146" target="_blank">https://dl.acm.org/doi/10.1145/3341617.3326146</a> or, if behind a paywall,<br><a href="https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf" target="_blank">https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf</a><br><br>--------------------------------<br>Amr Rizk (<a href="mailto:amr.rizk@uni-due.de" target="_blank">amr.rizk@uni-due.de</a>)<br>University of Duisburg-Essen<br><br>-----Original Message-----<br>From: Bloat <<a href="mailto:bloat-bounces@lists.bufferbloat.net" target="_blank">bloat-bounces@lists.bufferbloat.net</a>> On behalf of Ben Greear<br>Sent: Monday, 12 July 2021 22:32<br>To: Bob McMahon <<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>><br>Cc: <a href="mailto:starlink@lists.bufferbloat.net" target="_blank">starlink@lists.bufferbloat.net</a>; Make-Wifi-fast <<br><a href="mailto:make-wifi-fast@lists.bufferbloat.net" target="_blank">make-wifi-fast@lists.bufferbloat.net</a>>; Leonard Kleinrock <<a href="mailto:lk@cs.ucla.edu" target="_blank">lk@cs.ucla.edu</a>>;<br>David P. 
Reed <<a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a>>; Cake List <<a href="mailto:cake@lists.bufferbloat.net" target="_blank">cake@lists.bufferbloat.net</a>>;<br><a href="mailto:codel@lists.bufferbloat.net" target="_blank">codel@lists.bufferbloat.net</a>; cerowrt-devel <<br><a href="mailto:cerowrt-devel@lists.bufferbloat.net" target="_blank">cerowrt-devel@lists.bufferbloat.net</a>>; bloat <<a href="mailto:bloat@lists.bufferbloat.net" target="_blank">bloat@lists.bufferbloat.net</a>><br>Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main<br>point<br><br>UDP is better for getting actual packet latency, for sure. TCP is<br>closer to typical user-experience latency, though, so it is also useful.<br><br>I'm interested in the test and visualization side of this. If there were<br>a way to give engineers a good real-time look at a complex real-world<br>network, then they would have something to go on while trying to tune various<br>knobs in their network to improve it.<br><br>I'll let others try to figure out how to build and tune the knobs, but the<br>data acquisition and visualization is something we might try to<br>accomplish. I have a feeling I'm not the first person to think of this,<br>however... probably someone already has done such a thing.<br><br>Thanks,<br>Ben<br><br>On 7/12/21 1:04 PM, Bob McMahon wrote:<br><blockquote type="cite">I believe end hosts' TCP stats are insufficient, as seen per the<br>"failed" congestion control mechanisms over the last decades. I think<br>Jaffe pointed this out in<br>1981, though he was using what's been deemed on this thread as "spherical<br></blockquote>cow queueing theory."<br><blockquote type="cite"><br>"Flow control in store-and-forward computer networks is appropriate<br>for decentralized execution. A formal description of a class of<br>"decentralized flow control algorithms" is given. The feasibility of<br>maximizing power with such algorithms is investigated. 
On the<br>assumption that communication links behave like M/M/1 servers it is<br></blockquote>shown that no "decentralized flow control algorithm" can maximize network<br>power. Power has been suggested in the literature as a network performance<br>objective. It is also shown that no objective based only on the users'<br>throughputs and average delay is decentralizable. Finally, a restricted<br>class of algorithms cannot even approximate power."<br><blockquote type="cite"><br><a href="https://ieeexplore.ieee.org/document/1095152" target="_blank">https://ieeexplore.ieee.org/document/1095152</a><br><br>Did Jaffe make a mistake?<br><br>Also, it's been observed that latency is non-parametric in its<br>distributions, and fitting Gaussians per the central limit theorem<br>to OWD feedback loops isn't effective. How does one design a control<br></blockquote>loop around things that are non-parametric? It also raises the question, what<br>are the feed-forward knobs that can actually help?<br><blockquote type="cite"><br>Bob<br><br>On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <<a href="mailto:greearb@candelatech.com" target="_blank">greearb@candelatech.com</a><br></blockquote><mailto:<a href="mailto:greearb@candelatech.com" target="_blank">greearb@candelatech.com</a>>> wrote:<br><blockquote type="cite"><br> Measuring one or a few links provides a bit of data, but seems like<br></blockquote>if someone is trying to understand<br><blockquote type="cite"> a large and real network, then the OWD between point A and B needs<br></blockquote>to just be input into something much<br><blockquote type="cite"> more grand. 
Assuming real-time OWD data exists between 100 and 1000<br></blockquote>endpoint pairs, has anyone found a way<br><blockquote type="cite"> to visualize this in a useful manner?<br><br> Also, considering something better than NTP may not really scale to<br></blockquote>1000+ endpoints, maybe round-trip<br><blockquote type="cite"> time is the only viable way to get this type of data. In that case,<br></blockquote>maybe clever logic could use things<br><blockquote type="cite"> like trace-route to get some idea of how long it takes to get 'onto'<br></blockquote>the internet proper, and so estimate<br><blockquote type="cite"> the last-mile latency. My assumption is that the last-mile latency<br></blockquote>is where most of the pervasive<br><blockquote type="cite"> asymmetric network latencies would exist (or just ping 8.8.8.8, which<br></blockquote>is 20ms from everywhere due to<br><blockquote type="cite"> $magic).<br><br> Endpoints could also triangulate a bit if needed, using some anchor<br></blockquote>points in the network<br><blockquote type="cite"> under test.<br><br> Thanks,<br> Ben<br><br> On 7/12/21 11:21 AM, Bob McMahon wrote:<br><blockquote type="cite">iperf 2 supports OWD and gives full histograms for TCP write to<br></blockquote></blockquote>read, TCP connect times, latency of packets (with UDP), latency of "frames"<br>with<br><blockquote type="cite"><blockquote type="cite">simulated video traffic (TCP and UDP), xfer times of bursts with<br></blockquote></blockquote>low duty cycle traffic, and TCP RTT (sampling based). It also has support<br>for sampling (per<br><blockquote type="cite"><blockquote type="cite">interval reports) down to 100 usecs if configured with<br></blockquote></blockquote>--enable-fastsampling; otherwise the fastest sampling is 5 ms. 
We've<br>released all this as open source.<br><blockquote type="cite"><blockquote type="cite"><br>OWD only works if the end realtime clocks are synchronized using<br></blockquote></blockquote>a "machine level" protocol such as IEEE 1588 (PTP). Sadly, *most data<br>centers don't<br><blockquote type="cite"> provide<br><blockquote type="cite">a sufficient level of clock accuracy and the GPS pulse per second*<br></blockquote></blockquote>to colo and VM customers.<br><blockquote type="cite"><blockquote type="cite"><br><a href="https://iperf2.sourceforge.io/iperf-manpage.html" target="_blank">https://iperf2.sourceforge.io/iperf-manpage.html</a><br><br>Bob<br><br>On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <<br></blockquote></blockquote><a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a> <mailto:<a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a>> <mailto:<br><a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a><br><blockquote type="cite"> <mailto:<a href="mailto:dpreed@deepplum.com" target="_blank">dpreed@deepplum.com</a>>>> wrote:<br><blockquote type="cite"><br><br> On Monday, July 12, 2021 9:46am, "Livingood, Jason" <<br></blockquote></blockquote><a href="mailto:Jason_Livingood@comcast.com" target="_blank">Jason_Livingood@comcast.com</a> <mailto:<a href="mailto:Jason_Livingood@comcast.com" target="_blank">Jason_Livingood@comcast.com</a>><br><blockquote type="cite"> <mailto:<a href="mailto:Jason_Livingood@comcast.com" target="_blank">Jason_Livingood@comcast.com</a> <mailto:<br></blockquote><a href="mailto:Jason_Livingood@comcast.com" target="_blank">Jason_Livingood@comcast.com</a>>>> said:<br><blockquote type="cite"><blockquote type="cite"><br><blockquote type="cite">I think latency/delay is becoming seen to be as important,<br></blockquote></blockquote></blockquote>certainly, if not a more direct proxy for end-user QoE. 
This is all still<br>evolving and I<br><blockquote type="cite"> have<br><blockquote type="cite"> to say it is a super interesting & fun thing to work on. :-)<br><br> If I could manage to sell one idea to the management<br></blockquote></blockquote>hierarchy of communications industry CEOs (operators, vendors, ...) it is<br>this one:<br><blockquote type="cite"><blockquote type="cite"><br> "It's the end-to-end latency, stupid!"<br><br> And I mean, by end-to-end, latency to complete a task at a<br></blockquote></blockquote>relevant layer of abstraction.<br><blockquote type="cite"><blockquote type="cite"><br> At the link level, it's packet send to packet receive<br></blockquote></blockquote>completion.<br><blockquote type="cite"><blockquote type="cite"><br> But at the transport level including retransmission buffers,<br></blockquote></blockquote>it's datagram (or message) origination until the acknowledgement arrives<br>for that<br><blockquote type="cite"> message being<br><blockquote type="cite"> delivered after whatever number of retransmissions, freeing<br></blockquote></blockquote>the retransmission buffer.<br><blockquote type="cite"><blockquote type="cite"><br> At the WWW level, it's mouse click to display update<br></blockquote></blockquote>corresponding to completion of the request.<br><blockquote type="cite"><blockquote type="cite"><br> What should be noted is that lower-level latencies don't<br></blockquote></blockquote>directly predict the magnitude of higher-level latencies. But longer lower-level latencies<br><blockquote type="cite"> almost<br><blockquote type="cite"> always amplify higher-level latencies. Often non-linearly.<br><br> Throughput is very, very weakly related to these latencies,<br></blockquote></blockquote>in contrast.<br><blockquote type="cite"><blockquote type="cite"><br> The amplification process has to do with the presence of<br></blockquote></blockquote>queueing. 
Queueing is ALWAYS bad for latency, and throughput only helps if<br>it is in exactly the<br><blockquote type="cite"><blockquote type="cite"> right place (the so-called input queue of the bottleneck<br></blockquote></blockquote>process, which is often a link, but not always).<br><blockquote type="cite"><blockquote type="cite"><br> Can we get that slogan into Harvard Business Review? Can we<br></blockquote></blockquote>get it taught in Managerial Accounting at HBS? (which does address<br>logistics/supply chain<br><blockquote type="cite"> queueing).<br><blockquote type="cite"><br><br><br><br><br><br><br>This electronic communication and the information and any files<br></blockquote></blockquote>transmitted with it, or attached to it, are confidential and are intended<br>solely for the<br><blockquote type="cite"> use of<br><blockquote type="cite">the individual or entity to whom it is addressed and may contain<br></blockquote></blockquote>information that is confidential, legally privileged, protected by privacy<br>laws, or<br><blockquote type="cite"> otherwise<br><blockquote type="cite">restricted from disclosure to anyone else. If you are not the<br></blockquote></blockquote>intended recipient or the person responsible for delivering the e-mail to<br>the intended<br><blockquote type="cite"> recipient,<br><blockquote type="cite">you are hereby notified that any use, copying, distributing,<br></blockquote></blockquote>dissemination, forwarding, printing, or copying of this e-mail is strictly<br>prohibited. 
If you<br><blockquote type="cite"><blockquote type="cite">received this e-mail in error, please return the e-mail to the<br></blockquote></blockquote>sender, delete it from your computer, and destroy any printed copy of it.<br><blockquote type="cite"><br><br> --<br> Ben Greear <<a href="mailto:greearb@candelatech.com" target="_blank">greearb@candelatech.com</a> <mailto:<a href="mailto:greearb@candelatech.com" target="_blank">greearb@candelatech.com</a><br><blockquote type="cite"><br></blockquote> Candela Technologies Inc <a href="http://www.candelatech.com" target="_blank">http://www.candelatech.com</a><br><br><br>This electronic communication and the information and any files<br>transmitted with it, or attached to it, are confidential and are<br>intended solely for the use of the individual or entity to whom it is<br>addressed and may contain information that is confidential, legally<br>privileged, protected by privacy laws, or otherwise restricted from<br></blockquote>disclosure to anyone else. If you are not the intended recipient or the<br>person responsible for delivering the e-mail to the intended recipient, you<br>are hereby notified that any use, copying, distributing, dissemination,<br>forwarding, printing, or copying of this e-mail is strictly prohibited. 
If<br>you received this e-mail in error, please return the e-mail to the sender,<br>delete it from your computer, and destroy any printed copy of it.<br><br><br>--<br>Ben Greear <<a href="mailto:greearb@candelatech.com" target="_blank">greearb@candelatech.com</a>><br>Candela Technologies Inc <a href="http://www.candelatech.com" target="_blank">http://www.candelatech.com</a><br><br>_______________________________________________<br>Bloat mailing list<br><a href="mailto:Bloat@lists.bufferbloat.net" target="_blank">Bloat@lists.bufferbloat.net</a><br><a href="https://lists.bufferbloat.net/listinfo/bloat" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br><br><br></blockquote><br>--<br>This electronic communication and the information and any files transmitted<br>with it, or attached to it, are confidential and are intended solely for<br>the use of the individual or entity to whom it is addressed and may contain<br>information that is confidential, legally privileged, protected by privacy<br>laws, or otherwise restricted from disclosure to anyone else. If you are<br>not the intended recipient or the person responsible for delivering the<br>e-mail to the intended recipient, you are hereby notified that any use,<br>copying, distributing, dissemination, forwarding, printing, or copying of<br>this e-mail is strictly prohibited. If you received this e-mail in error,<br>please return the e-mail to the sender, delete it from your computer, and<br>destroy any printed copy of it.</blockquote></div></blockquote></div></div></div></div></div></div><div><div dir="auto"><div dir="auto"><div dir="auto"><div></div></div></div></div></div></blockquote></div>
<br>
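The --tcp-write-prefetch / TCP_NOTSENT_LOWAT mechanism described at the top can be sketched as follows (a minimal illustration, not iperf 2's actual source; assumes a Linux 3.12+ socket API where TCP_NOTSENT_LOWAT is available):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Cap the amount of unsent data the kernel will buffer: select()
 * then reports the socket writable only once the not-sent queue
 * drains below 'lowat' bytes, bounding sender-side bloat. */
static int set_write_prefetch(int fd, unsigned int lowat)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &lowat, sizeof(lowat));
}

/* Event-based write: block in select() until the kernel asks for
 * more data, then write one chunk. The measured select() wait is
 * the delay proposed above for correlating with RF arbitration. */
static ssize_t event_write(int fd, const void *buf, size_t len,
                           long *select_wait_us)
{
    fd_set wfds;
    struct timeval t0, t1;

    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    gettimeofday(&t0, NULL);
    if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
        return -1;
    gettimeofday(&t1, NULL);
    if (select_wait_us)
        *select_wait_us = (t1.tv_sec - t0.tv_sec) * 1000000L +
                          (t1.tv_usec - t0.tv_usec);
    return write(fd, buf, len);
}
```

Writes driven this way park in select() rather than filling deep socket buffers, so histograms of the select() waits become a window into lower-layer scheduling delays.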