* [Bloat] Apple WWDC Talks on Latency/Bufferbloat
@ 2021-06-11 19:14 Nathan Owens
2021-06-11 21:58 ` Jonathan Morton
0 siblings, 1 reply; 12+ messages in thread
From: Nathan Owens @ 2021-06-11 19:14 UTC (permalink / raw)
To: bloat
Some relevant talks / publicity at WWDC -- the first of which mentions CoDel,
queueing, etc. and features Stuart Cheshire. iOS 15 adds a developer test for
loaded latency, reported in "RPM" or round-trips per minute.
I ran it on my machine:
nowens@mac1015 ~ % /usr/bin/networkQuality
==== SUMMARY ====
Upload capacity: 90.867 Mbps
Download capacity: 93.616 Mbps
Upload flows: 16
Download flows: 20
Responsiveness: Medium (840 RPM)
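For reference, RPM appears to be just the number of round trips that fit into
one minute at the measured working latency, so 840 RPM corresponds to roughly
71 ms under load. A minimal sketch of that conversion (my reading of the
output, not an official definition):

def rpm_to_latency_ms(rpm):
    # Working latency in milliseconds implied by an RPM value.
    return 60_000.0 / rpm

def latency_ms_to_rpm(latency_ms):
    # RPM implied by a working latency in milliseconds.
    return 60_000.0 / latency_ms

print(rpm_to_latency_ms(840))   # ~71 ms of latency under load
print(latency_ms_to_rpm(20))    # a 20 ms working latency would be 3000 RPM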
Reduce network delays for your app
https://developer.apple.com/videos/play/wwdc2021/10239/
CPU performance and network throughput rates keep improving, but the speed
of light is one limit that isn't going any higher. Learn the APIs and best
practices to maximize your app's responsiveness and efficiency by keeping
network round-trip times low and minimizing the number of round trips when
performing network operations.
Optimize for 5G networks
https://developer.apple.com/videos/play/wwdc2021/10103/
5G enables new opportunities for your app or game through better
performance for data transfer, higher bandwidth, lower latency, and much
more. Discover how you can take advantage of the latest networking
technology and Apple hardware to create adaptive experiences for your
content that best suit someone's data connection and optimize network
traffic.
Accelerate networking with HTTP/3 and QUIC
https://developer.apple.com/videos/play/wwdc2021/10094/
The web is changing, and the next major version of HTTP is here. Learn how
HTTP/3 reduces latency and improves reliability for your app and discover
how its underlying transport, QUIC, unlocks new innovations in your own
custom protocols using new transport functionality and multi-streaming
connection groups.
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-11 19:14 [Bloat] Apple WWDC Talks on Latency/Bufferbloat Nathan Owens
@ 2021-06-11 21:58 ` Jonathan Morton
0 siblings, 0 replies; 12+ messages in thread
From: Jonathan Morton @ 2021-06-11 21:58 UTC (permalink / raw)
To: Nathan Owens; +Cc: bloat
> On 11 Jun, 2021, at 10:14 pm, Nathan Owens <nathan@nathan.io> wrote:
>
> round-trips per minute
Wow, one of my suggestions finally got some traction.
- Jonathan Morton
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-07-06 18:54 ` Christoph Paasch
@ 2021-07-06 19:08 ` Sebastian Moeller
0 siblings, 0 replies; 12+ messages in thread
From: Sebastian Moeller @ 2021-07-06 19:08 UTC (permalink / raw)
To: Christoph Paasch; +Cc: Matt Mathis, bloat
Hello Christoph,
thanks for your detailed response!
> On Jul 6, 2021, at 20:54, Christoph Paasch <cpaasch@apple.com> wrote:
>
> Hello Sebastian,
>
> On 06/29/21 - 09:58, Sebastian Moeller wrote:
>> Hi Christoph,
>>
>> one question below:
>>
>>> On Jun 18, 2021, at 01:43, Christoph Paasch via Bloat
>>> <bloat@lists.bufferbloat.net> wrote:
>>>
>>> Hello,
>>>
>>> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
>>>> Is there a paper or spec for RPM?
>>>
>>> we try to publish an IETF-draft on the methodology before the upcoming
>>> IETF in July.
>>>
>>> But, in the mean-time please see inline:
>>>
>>>> There are at least two different ways to define RPM, both of which
>>>> might be relevant.
>>>>
>>>> At the TCP layer: it can be directly computed from a packet capture.
>>>> The trick is to time reverse a trace and compute the critical path
>>>> backwards through the trace: what event triggered each segment or ACK,
>>>> and count round trips. This would be super robust but does not include
>>>> the queueing required in the kernel socket buffers. I need to think
>>>> some more about computing TCP RPM from tcp_info or other kernel
>>>> instrumentation - it might be possible.
>>>
>>> We explicitly opted against measuring purely TCP-level round-trip times.
>>> Because there are countless transparent TCP-proxies out there that would
>>> skew these numbers. Our goal with RPM/Responsiveness is to measure how
>>> an end-user would experience the network. Which means, DNS-resolution,
>>> TCP handshake-time, TLS-handshake, HTTP/2 Request/response. Because, at
>>> the end, that's what actually matters to the users.
>>>
>>>> A different RPM can be done in the application, above TCP, for example
>>>> by ping-ponging messages. This would include the delays traversing the
>>>> kernel socket buffers which have to be at least as large as a full
>>>> network RTT.
>>>>
>>>> This is perhaps an important point: due to the retransmit and
>>>> reassembly queues (which are required to implement robust data
>>>> delivery) TCP must be able to hold at least a full RTT of data in its
>>>> own buffers, which means that under some conditions the RTT as seen by
>>>> the application has to be at least twice the network's RTT, including
>>>> any bloat in the network.
>>>
>>> Currently, we measure RPM on separate connections (not the load-bearing
>>> ones). We are also measuring on the load-bearing connections themselves
>>> through H2 Ping frames. But for the reasons you described we haven't yet
>>> factored it into the RPM-number.
>>>
>>> One way may be to inspect with TCP_INFO whether or not the connections
>>> had retransmissions and then throw away the number. On the other hand,
>>> if the network becomes extremely lossy under working conditions, it does
>>> impact the user-experience and so it could make sense to take this into
>>> account.
>>>
>>>
>>> In the end, we realized how hard it is to accurately measure bufferbloat
>>> within a reasonable time-frame (our goal is to finish the test within
>>> ~15 seconds).
>>
>> [SM] I understand that 10-15 seconds is the amount of time users
>> have been trained to expect an on-line speedtest to take, but
>> experiments with flent/RRUL showed that there are latency-affecting
>> processes on slower timescales that are better visible if one can
>> also run a test for 60 - 300 seconds (e.g. cyclic WiFi channel
>> probing). Does your tool optionally allow specifying a longer
>> run-time?
>
> Currently the tool does not have a "deep-dive"-mode. There are a few things
> (besides running longer) that a "deep-dive"-mode could provide. For example,
> traceroute-style probes during the test to identify the location of the
> bufferbloat.
[SM] Oh, shiny ;) To be useful/interpretable such a traceroute-style path traversal should be performed from both sides of a link (I am sure you know, but my go-to slide deck is https://archive.nanog.org/sites/default/files/10_Roisman_Traceroute.pdf). But it would be sweet if there was a reliable way to get bi-directional traceroutes over the path one actually uses.
> Use H3 for testing and/or run TCP on a different port to
> identify traffic-classifiers/transparent TCP-proxies that treat things
> differently. Study the impact of TCP bulk transfer on UDP latency. And so
> on...
> Such a deep-dive mode would be possible in the command-line tool but very
> unlikely in the UI-mode.
[SM] Fair enough, thanks.
>
> Our primary goal in this first iteration is to provide a tool that gives a
> quick insight into how bad/good the bufferbloat is on the network in such a
> way that a non-expert user can run it and understand the result.
[SM] Worthy goal.
> We also want it to use standard protocols so that any basic web server can
> be configured to serve as an endpoint, and because those are the protocols
> that users are actually using in the end.
[SM] +1; Yes, tests with the production protocols, ideally to the "production" servers, seem like a great way forward.
Regards
Sebastian
>
>
> Cheers,
> Christoph
>
>
>> Thinking of it, to keep everybody on their toes, how
>> about occasionally running a test with a longer run-time (maybe after
>> asking the user's consent) and storing the test duration as part of the
>> results?
>>
>>
>> Best Regards Sebastian
>>
>>
>>>
>>> We hope that with the IETF-draft we can get the right people together to
>>> iterate over it and squash out a very accurate measurement that
>>> represents what users would experience.
>>>
>>>
>>> Cheers, Christoph
>>>
>>>
>>>>
>>>> Thanks, --MM-- The best way to predict the future is to create it. -
>>>> Alan Kay
>>>>
>>>> We must not tolerate intolerance; however our response must be
>>>> carefully measured: too strong would be hypocritical and risks
>>>> spiraling out of control; too weak risks being mistaken for tacit
>>>> approval.
>>>>
>>>>
>>>> On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com>
>>>> wrote:
>>>>
>>>>>> On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
>>>>>> wrote:
>>>>>>
>>>>>> Some relevant talks / publicity at WWDC -- the first mentioning
>>>>>> CoDel, queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a
>>>>>> developer test
>>>>> for
>>>>>> loaded latency, reported in "RPM" or round-trips per minute.
>>>>>>
>>>>>> I ran it on my machine: nowens@mac1015 ~ % /usr/bin/networkQuality
>>>>>> ==== SUMMARY ==== Upload capacity: 90.867 Mbps Download capacity:
>>>>>> 93.616 Mbps Upload flows: 16 Download flows: 20 Responsiveness:
>>>>>> Medium (840 RPM)
>>>>>
>>>>> Does anyone know how to get the command-line version for current (not
>>>>> upcoming) macOS? Thanks.
>>>>>
>>>>> Rich _______________________________________________ Bloat mailing
>>>>> list Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>
>>>> _______________________________________________ Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>> _______________________________________________ Bloat mailing list
>>> Bloat@lists.bufferbloat.net https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-29 7:58 ` Sebastian Moeller
@ 2021-07-06 18:54 ` Christoph Paasch
2021-07-06 19:08 ` Sebastian Moeller
0 siblings, 1 reply; 12+ messages in thread
From: Christoph Paasch @ 2021-07-06 18:54 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Matt Mathis, bloat
Hello Sebastian,
On 06/29/21 - 09:58, Sebastian Moeller wrote:
> Hi Christoph,
>
> one question below:
>
> > On Jun 18, 2021, at 01:43, Christoph Paasch via Bloat
> > <bloat@lists.bufferbloat.net> wrote:
> >
> > Hello,
> >
> > On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
> >> Is there a paper or spec for RPM?
> >
> > we try to publish an IETF-draft on the methodology before the upcoming
> > IETF in July.
> >
> > But, in the mean-time please see inline:
> >
> >> There are at least two different ways to define RPM, both of which
> >> might be relevant.
> >>
> >> At the TCP layer: it can be directly computed from a packet capture.
> >> The trick is to time reverse a trace and compute the critical path
> >> backwards through the trace: what event triggered each segment or ACK,
> >> and count round trips. This would be super robust but does not include
> >> the queueing required in the kernel socket buffers. I need to think
> >> some more about computing TCP RPM from tcp_info or other kernel
> >> instrumentation - it might be possible.
> >
> > We explicitly opted against measuring purely TCP-level round-trip times.
> > Because there are countless transparent TCP-proxies out there that would
> > skew these numbers. Our goal with RPM/Responsiveness is to measure how
> > an end-user would experience the network. Which means, DNS-resolution,
> > TCP handshake-time, TLS-handshake, HTTP/2 Request/response. Because, at
> > the end, that's what actually matters to the users.
> >
> >> A different RPM can be done in the application, above TCP, for example
> >> by ping-ponging messages. This would include the delays traversing the
> >> kernel socket buffers which have to be at least as large as a full
> >> network RTT.
> >>
> >> This is perhaps an important point: due to the retransmit and
> >> reassembly queues (which are required to implement robust data
> >> delivery) TCP must be able to hold at least a full RTT of data in its
> >> own buffers, which means that under some conditions the RTT as seen by
> >> the application has to be at least twice the network's RTT, including
> >> any bloat in the network.
> >
> > Currently, we measure RPM on separate connections (not the load-bearing
> > ones). We are also measuring on the load-bearing connections themselves
> > through H2 Ping frames. But for the reasons you described we haven't yet
> > factored it into the RPM-number.
> >
> > One way may be to inspect with TCP_INFO whether or not the connections
> > had retransmissions and then throw away the number. On the other hand,
> > if the network becomes extremely lossy under working conditions, it does
> > impact the user-experience and so it could make sense to take this into
> > account.
> >
> >
> > In the end, we realized how hard it is to accurately measure bufferbloat
> > within a reasonable time-frame (our goal is to finish the test within
> > ~15 seconds).
>
> [SM] I understand that 10-15 seconds is the amount of time users
> have been trained to expect an on-line speedtest to take, but
> experiments with flent/RRUL showed that there are latency-affecting
> processes on slower timescales that are better visible if one can
> also run a test for 60 - 300 seconds (e.g. cyclic WiFi channel
> probing). Does your tool optionally allow specifying a longer
> run-time?
Currently the tool does not have a "deep-dive"-mode. There are a few things
(besides running longer) that a "deep-dive"-mode could provide. For example,
traceroute-style probes during the test to identify the location of the
bufferbloat. Use H3 for testing and/or run TCP on a different port to
identify traffic-classifiers/transparent TCP-proxies that treat things
differently. Study the impact of TCP bulk transfer on UDP latency. And so
on...
Such a deep-dive mode would be possible in the command-line tool but very
unlikely in the UI-mode.
Our primary goal in this first iteration is to provide a tool that gives a
quick insight into how bad/good the bufferbloat is on the network in such a
way that a non-expert user can run it and understand the result.
We also want it to use standard protocols so that any basic web server can be
configured to serve as an endpoint, and because those are the protocols that
users are actually using in the end.
Cheers,
Christoph
> Thinking of it, to keep everybody on their toes, how
> about occasionally running a test with a longer run-time (maybe after
> asking the user's consent) and storing the test duration as part of the
> results?
>
>
> Best Regards Sebastian
>
>
> >
> > We hope that with the IETF-draft we can get the right people together to
> > iterate over it and squash out a very accurate measurement that
> > represents what users would experience.
> >
> >
> > Cheers, Christoph
> >
> >
> >>
> >> Thanks, --MM-- The best way to predict the future is to create it. -
> >> Alan Kay
> >>
> >> We must not tolerate intolerance; however our response must be
> >> carefully measured: too strong would be hypocritical and risks
> >> spiraling out of control; too weak risks being mistaken for tacit
> >> approval.
> >>
> >>
> >> On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com>
> >> wrote:
> >>
> >>>> On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
> >>>> wrote:
> >>>>
> >>>> Some relevant talks / publicity at WWDC -- the first mentioning
> >>>> CoDel, queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a
> >>>> developer test
> >>> for
> >>>> loaded latency, reported in "RPM" or round-trips per minute.
> >>>>
> >>>> I ran it on my machine: nowens@mac1015 ~ % /usr/bin/networkQuality
> >>>> ==== SUMMARY ==== Upload capacity: 90.867 Mbps Download capacity:
> >>>> 93.616 Mbps Upload flows: 16 Download flows: 20 Responsiveness:
> >>>> Medium (840 RPM)
> >>>
> >>> Does anyone know how to get the command-line version for current (not
> >>> upcoming) macOS? Thanks.
> >>>
> >>> Rich _______________________________________________ Bloat mailing
> >>> list Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >
> >> _______________________________________________ Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> > _______________________________________________ Bloat mailing list
> > Bloat@lists.bufferbloat.net https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-17 23:43 ` Christoph Paasch
2021-06-18 0:17 ` Matt Mathis
@ 2021-06-29 7:58 ` Sebastian Moeller
2021-07-06 18:54 ` Christoph Paasch
1 sibling, 1 reply; 12+ messages in thread
From: Sebastian Moeller @ 2021-06-29 7:58 UTC (permalink / raw)
To: Christoph Paasch; +Cc: Matt Mathis, bloat
Hi Christoph,
one question below:
> On Jun 18, 2021, at 01:43, Christoph Paasch via Bloat <bloat@lists.bufferbloat.net> wrote:
>
> Hello,
>
> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
>> Is there a paper or spec for RPM?
>
> we try to publish an IETF-draft on the methodology before the upcoming IETF
> in July.
>
> But, in the mean-time please see inline:
>
>> There are at least two different ways to define RPM, both of which might be
>> relevant.
>>
>> At the TCP layer: it can be directly computed from a packet capture. The
>> trick is to time reverse a trace and compute the critical path backwards
>> through the trace: what event triggered each segment or ACK, and count
>> round trips. This would be super robust but does not include the queueing
>> required in the kernel socket buffers. I need to think some more about
>> computing TCP RPM from tcp_info or other kernel instrumentation - it might
>> be possible.
>
> We explicitly opted against measuring purely TCP-level round-trip times. Because
> there are countless transparent TCP-proxies out there that would skew these
> numbers. Our goal with RPM/Responsiveness is to measure how an end-user would
> experience the network. Which means, DNS-resolution, TCP handshake-time,
> TLS-handshake, HTTP/2 Request/response. Because, at the end, that's what
> actually matters to the users.
>
>> A different RPM can be done in the application, above TCP, for example by
>> ping-ponging messages. This would include the delays traversing the kernel
>> socket buffers which have to be at least as large as a full network RTT.
>>
>> This is perhaps an important point: due to the retransmit and
>> reassembly queues (which are required to implement robust data delivery)
>> TCP must be able to hold at least a full RTT of data in its own buffers,
>> which means that under some conditions the RTT as seen by the application
>> has to be at least twice the network's RTT, including any bloat in the
>> network.
>
> Currently, we measure RPM on separate connections (not the load-bearing
> ones). We are also measuring on the load-bearing connections themselves
> through H2 Ping frames. But for the reasons you described we haven't yet
> factored it into the RPM-number.
>
> One way may be to inspect with TCP_INFO whether or not the connections had
> retransmissions and then throw away the number. On the other hand, if the
> network becomes extremely lossy under working conditions, it does impact the
> user-experience and so it could make sense to take this into account.
>
>
> In the end, we realized how hard it is to accurately measure bufferbloat
> within a reasonable time-frame (our goal is to finish the test within ~15
> seconds).
[SM] I understand that 10-15 seconds is the amount of time users have been trained to expect an on-line speedtest to take, but experiments with flent/RRUL showed that there are latency-affecting processes on slower timescales that are better visible if one can also run a test for 60 - 300 seconds (e.g. cyclic WiFi channel probing). Does your tool optionally allow specifying a longer run-time?
Thinking of it, to keep everybody on their toes, how about occasionally running a test with a longer run-time (maybe after asking the user's consent) and storing the test duration as part of the results?
Best Regards
Sebastian
>
> We hope that with the IETF-draft we can get the right people together to
> iterate over it and squash out a very accurate measurement that represents
> what users would experience.
>
>
> Cheers,
> Christoph
>
>
>>
>> Thanks,
>> --MM--
>> The best way to predict the future is to create it. - Alan Kay
>>
>> We must not tolerate intolerance;
>> however our response must be carefully measured:
>> too strong would be hypocritical and risks spiraling out of
>> control;
>> too weak risks being mistaken for tacit approval.
>>
>>
>> On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com> wrote:
>>
>>>> On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
>>>>
>>>> Some relevant talks / publicity at WWDC -- the first mentioning CoDel,
>>>> queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer test
>>> for
>>>> loaded latency, reported in "RPM" or round-trips per minute.
>>>>
>>>> I ran it on my machine:
>>>> nowens@mac1015 ~ % /usr/bin/networkQuality
>>>> ==== SUMMARY ====
>>>> Upload capacity: 90.867 Mbps
>>>> Download capacity: 93.616 Mbps
>>>> Upload flows: 16
>>>> Download flows: 20
>>>> Responsiveness: Medium (840 RPM)
>>>
>>> Does anyone know how to get the command-line version for current (not
>>> upcoming) macOS? Thanks.
>>>
>>> Rich
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-18 3:33 ` Matt Mathis
@ 2021-06-28 22:54 ` Christoph Paasch
0 siblings, 0 replies; 12+ messages in thread
From: Christoph Paasch @ 2021-06-28 22:54 UTC (permalink / raw)
To: Matt Mathis; +Cc: bloat, Randall Meyer
+Randall
On 06/17/21 - 20:33, Matt Mathis wrote:
> Also consider ippm. intarea might be a good choice for joint sponsorship,
> but they probably won't want to be the lead.
Indeed, ippm might be a good candidate. Thanks!
>
> BTW by using two TCP connections you potentially give a free pass to many
> types of networks (e.g. ECMP, SFQ, etc.) and certain OS misfeatures.
Yes, we are aware of that. Which is why we are looking into how to minimize
the effects you mentioned in your previous email so that we can accurately
measure the latency under load on the load-bearing connections.
For the curious ones here: if you run the networkQuality command-line tool on
macOS with the option "-c", you get more verbose output. In particular, you
will see the latency for H2 pings on the load-bearing connections (labeled
lud_self_dl_h2 and lud_self_ul_h2). You can see below how the download
load-bearing flows are suffering from too much data in the TCP socket buffer
and data queued in the server's process behind the bulk-data transfer (in this
case it is Apache Traffic Server - we are looking into how other server
implementations behave).
MacBook-Pro:~ cpaasch$ networkQuality -c
{
"lud_self_ul_h2" : [
71.202995300292969,
89.105010986328125,
51.216960906982422,
581.09698486328125,
155.85601806640625,
304.031982421875,
271.76202392578125,
202.48997497558594,
139.15800476074219,
160.45701599121094,
247.11001586914062,
626.049072265625,
399.29306030273438,
335.45803833007812,
164.31092834472656
],
"responsiveness" : 1075,
"ul_throughput" : 28645646,
"lud_foreign_tcp_handshake_443" : [
34,
42,
34,
37,
39,
36,
36,
31
],
"lud_self_dl_h2" : [
313.34603881835938,
359.79306030273438,
699.39007568359375,
929.51605224609375,
1653.333984375,
2466.970947265625,
2981.800048828125,
2969.277099609375,
3595.947021484375,
3785.244873046875,
3572.677001953125,
2802.677978515625
],
"dl_flows" : 20,
"dl_throughput" : 396551712,
"ul_flows" : 12,
"lud_foreign_h2_req_resp" : [
63,
75,
80,
73,
70,
69,
78,
96
]
}
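To make the arrays above easier to read, here is a small illustrative sketch
(Python) that loads such a JSON report and turns each set of H2-ping latency
samples into a median and the RPM that median alone would imply. The field
names come from the output above; the aggregation is my own simplification,
not what the tool does internally:

import json
import statistics

def summarize(path):
    with open(path) as f:
        report = json.load(f)
    for key in ("lud_self_dl_h2", "lud_self_ul_h2", "lud_foreign_h2_req_resp"):
        samples_ms = report.get(key, [])
        if not samples_ms:
            continue
        median_ms = statistics.median(samples_ms)
        print(f"{key}: median {median_ms:.0f} ms "
              f"(~{60_000 / median_ms:.0f} RPM if taken alone)")
    print("reported responsiveness:", report.get("responsiveness"), "RPM")

# Usage, assuming the verbose JSON was saved to a file first:
#   networkQuality -c > nq.json    and then    summarize("nq.json")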
Christoph
> Thanks,
> --MM--
> The best way to predict the future is to create it. - Alan Kay
>
> We must not tolerate intolerance;
> however our response must be carefully measured:
> too strong would be hypocritical and risks spiraling out of
> control;
> too weak risks being mistaken for tacit approval.
>
>
> On Thu, Jun 17, 2021 at 6:04 PM Christoph Paasch <cpaasch@apple.com> wrote:
>
> > Not sure yet - there isn’t a good one that would really fit. Maybe tsvwg
> > or intarea.
> >
> > Suggestions?
> >
> > Cheers,
> > Christoph
> >
> > On Jun 17, 2021, at 5:17 PM, Matt Mathis <mattmathis@google.com> wrote:
> >
> >
> > Which WG are you targeting?
> >
> > Thanks,
> > --MM--
> > The best way to predict the future is to create it. - Alan Kay
> >
> > We must not tolerate intolerance;
> > however our response must be carefully measured:
> > too strong would be hypocritical and risks spiraling out of
> > control;
> > too weak risks being mistaken for tacit approval.
> >
> >
> > On Thu, Jun 17, 2021 at 4:43 PM Christoph Paasch <cpaasch@apple.com>
> > wrote:
> >
> >> Hello,
> >>
> >> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
> >> > Is there a paper or spec for RPM?
> >>
> >> we try to publish an IETF-draft on the methodology before the upcoming
> >> IETF
> >> in July.
> >>
> >> But, in the mean-time please see inline:
> >>
> >> > There are at least two different ways to define RPM, both of which
> >> might be
> >> > relevant.
> >> >
> >> > At the TCP layer: it can be directly computed from a packet capture.
> >> The
> >> > trick is to time reverse a trace and compute the critical path backwards
> >> > through the trace: what event triggered each segment or ACK, and count
> >> > round trips. This would be super robust but does not include the
> >> queueing
> >> > required in the kernel socket buffers. I need to think some more about
> >> > computing TCP RPM from tcp_info or other kernel instrumentation - it
> >> might
> >> > be possible.
> >>
> >> We explicitly opted against measuring purely TCP-level round-trip times.
> >> Because
> >> there are countless transparent TCP-proxies out there that would skew
> >> these
> >> numbers. Our goal with RPM/Responsiveness is to measure how an end-user
> >> would
> >> experience the network. Which means, DNS-resolution, TCP handshake-time,
> >> TLS-handshake, HTTP/2 Request/response. Because, at the end, that's what
> >> actually matters to the users.
> >>
> >> > A different RPM can be done in the application, above TCP, for example
> >> by
> >> > ping-ponging messages. This would include the delays traversing the
> >> kernel
> >> > socket buffers which have to be at least as large as a full network RTT.
> >> >
> >> > This is perhaps an important point: due to the retransmit and
> >> > reassembly queues (which are required to implement robust data delivery)
> >> > TCP must be able to hold at least a full RTT of data in its own buffers,
> >> > which means that under some conditions the RTT as seen by the
> >> > application has to be at least twice the network's RTT, including any
> >> > bloat in the network.
> >>
> >> Currently, we measure RPM on separate connections (not the load-bearing
> >> ones). We are also measuring on the load-bearing connections themselves
> >> through H2 Ping frames. But for the reasons you described we haven't yet
> >> factored it into the RPM-number.
> >>
> >> One way may be to inspect with TCP_INFO whether or not the connections had
> >> retransmissions and then throw away the number. On the other hand, if the
> >> network becomes extremely lossy under working conditions, it does impact
> >> the
> >> user-experience and so it could make sense to take this into account.
> >>
> >>
> >> In the end, we realized how hard it is to accurately measure bufferbloat
> >> within a reasonable time-frame (our goal is to finish the test within ~15
> >> seconds).
> >>
> >> We hope that with the IETF-draft we can get the right people together to
> >> iterate over it and squash out a very accurate measurement that represents
> >> what users would experience.
> >>
> >>
> >> Cheers,
> >> Christoph
> >>
> >>
> >> >
> >> > Thanks,
> >> > --MM--
> >> > The best way to predict the future is to create it. - Alan Kay
> >> >
> >> > We must not tolerate intolerance;
> >> > however our response must be carefully measured:
> >> > too strong would be hypocritical and risks spiraling out of
> >> > control;
> >> > too weak risks being mistaken for tacit approval.
> >> >
> >> >
> >> > On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com>
> >> wrote:
> >> >
> >> > > > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
> >> wrote:
> >> > > >
> >> > > > Some relevant talks / publicity at WWDC -- the first mentioning
> >> CoDel,
> >> > > > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer
> >> test
> >> > > for
> >> > > > loaded latency, reported in "RPM" or round-trips per minute.
> >> > > >
> >> > > > I ran it on my machine:
> >> > > > nowens@mac1015 ~ % /usr/bin/networkQuality
> >> > > > ==== SUMMARY ====
> >> > > > Upload capacity: 90.867 Mbps
> >> > > > Download capacity: 93.616 Mbps
> >> > > > Upload flows: 16
> >> > > > Download flows: 20
> >> > > > Responsiveness: Medium (840 RPM)
> >> > >
> >> > > Does anyone know how to get the command-line version for current (not
> >> > > upcoming) macOS? Thanks.
> >> > >
> >> > > Rich
> >> > > _______________________________________________
> >> > > Bloat mailing list
> >> > > Bloat@lists.bufferbloat.net
> >> > > https://lists.bufferbloat.net/listinfo/bloat
> >> > >
> >>
> >> > _______________________________________________
> >> > Bloat mailing list
> >> > Bloat@lists.bufferbloat.net
> >> > https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-18 1:03 ` Christoph Paasch
@ 2021-06-18 3:33 ` Matt Mathis
2021-06-28 22:54 ` Christoph Paasch
0 siblings, 1 reply; 12+ messages in thread
From: Matt Mathis @ 2021-06-18 3:33 UTC (permalink / raw)
To: Christoph Paasch; +Cc: bloat
Also consider ippm. intarea might be a good choice for joint sponsorship,
but they probably won't want to be the lead.
BTW by using two TCP connections you potentially give a free pass to many
types of networks (e.g. ECMP, SFQ, etc.) and certain OS misfeatures.
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
We must not tolerate intolerance;
however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.
On Thu, Jun 17, 2021 at 6:04 PM Christoph Paasch <cpaasch@apple.com> wrote:
> Not sure yet - there isn’t a good one that would really fit. Maybe tsvwg
> or intarea.
>
> Suggestions?
>
> Cheers,
> Christoph
>
> On Jun 17, 2021, at 5:17 PM, Matt Mathis <mattmathis@google.com> wrote:
>
>
> Which WG are you targeting?
>
> Thanks,
> --MM--
> The best way to predict the future is to create it. - Alan Kay
>
> We must not tolerate intolerance;
> however our response must be carefully measured:
> too strong would be hypocritical and risks spiraling out of
> control;
> too weak risks being mistaken for tacit approval.
>
>
> On Thu, Jun 17, 2021 at 4:43 PM Christoph Paasch <cpaasch@apple.com>
> wrote:
>
>> Hello,
>>
>> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
>> > Is there a paper or spec for RPM?
>>
>> we try to publish an IETF-draft on the methodology before the upcoming
>> IETF
>> in July.
>>
>> But, in the mean-time please see inline:
>>
>> > There are at least two different ways to define RPM, both of which
>> might be
>> > relevant.
>> >
>> > At the TCP layer: it can be directly computed from a packet capture.
>> The
>> > trick is to time reverse a trace and compute the critical path backwards
>> > through the trace: what event triggered each segment or ACK, and count
>> > round trips. This would be super robust but does not include the
>> queueing
>> > required in the kernel socket buffers. I need to think some more about
>> > computing TCP RPM from tcp_info or other kernel instrumentation - it
>> might
>> > be possible.
>>
>> We explicitly opted against measuring purely TCP-level round-trip times.
>> Because
>> there are countless transparent TCP-proxies out there that would skew
>> these
>> numbers. Our goal with RPM/Responsiveness is to measure how an end-user
>> would
>> experience the network. Which means, DNS-resolution, TCP handshake-time,
>> TLS-handshake, HTTP/2 Request/response. Because, at the end, that's what
>> actually matters to the users.
>>
>> > A different RPM can be done in the application, above TCP, for example
>> by
>> > ping-ponging messages. This would include the delays traversing the
>> kernel
>> > socket buffers which have to be at least as large as a full network RTT.
>> >
>> > This is perhaps an important point: due to the retransmit and
>> > reassembly queues (which are required to implement robust data delivery)
>> > TCP must be able to hold at least a full RTT of data in its own buffers,
>> > which means that under some conditions the RTT as seen by the
>> > application has to be at least twice the network's RTT, including any
>> > bloat in the network.
>>
>> Currently, we measure RPM on separate connections (not the load-bearing
>> ones). We are also measuring on the load-bearing connections themselves
>> through H2 Ping frames. But for the reasons you described we haven't yet
>> factored it into the RPM-number.
>>
>> One way may be to inspect with TCP_INFO whether or not the connections had
>> retransmissions and then throw away the number. On the other hand, if the
>> network becomes extremely lossy under working conditions, it does impact
>> the
>> user-experience and so it could make sense to take this into account.
>>
>>
>> In the end, we realized how hard it is to accurately measure bufferbloat
>> within a reasonable time-frame (our goal is to finish the test within ~15
>> seconds).
>>
>> We hope that with the IETF-draft we can get the right people together to
>> iterate over it and squash out a very accurate measurement that represents
>> what users would experience.
>>
>>
>> Cheers,
>> Christoph
>>
>>
>> >
>> > Thanks,
>> > --MM--
>> > The best way to predict the future is to create it. - Alan Kay
>> >
>> > We must not tolerate intolerance;
>> > however our response must be carefully measured:
>> > too strong would be hypocritical and risks spiraling out of
>> > control;
>> > too weak risks being mistaken for tacit approval.
>> >
>> >
>> > On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com>
>> wrote:
>> >
>> > > > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
>> wrote:
>> > > >
>> > > > Some relevant talks / publicity at WWDC -- the first mentioning
>> CoDel,
>> > > > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer
>> test
>> > > for
>> > > > loaded latency, reported in "RPM" or round-trips per minute.
>> > > >
>> > > > I ran it on my machine:
>> > > > nowens@mac1015 ~ % /usr/bin/networkQuality
>> > > > ==== SUMMARY ====
>> > > > Upload capacity: 90.867 Mbps
>> > > > Download capacity: 93.616 Mbps
>> > > > Upload flows: 16
>> > > > Download flows: 20
>> > > > Responsiveness: Medium (840 RPM)
>> > >
>> > > Does anyone know how to get the command-line version for current (not
>> > > upcoming) macOS? Thanks.
>> > >
>> > > Rich
>> > > _______________________________________________
>> > > Bloat mailing list
>> > > Bloat@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/bloat
>> > >
>>
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-18 0:17 ` Matt Mathis
@ 2021-06-18 1:03 ` Christoph Paasch
2021-06-18 3:33 ` Matt Mathis
0 siblings, 1 reply; 12+ messages in thread
From: Christoph Paasch @ 2021-06-18 1:03 UTC (permalink / raw)
To: Matt Mathis; +Cc: bloat
Not sure yet - there isn’t a good one that would really fit. Maybe tsvwg or intarea.
Suggestions?
Cheers,
Christoph
> On Jun 17, 2021, at 5:17 PM, Matt Mathis <mattmathis@google.com> wrote:
>
>
> Which WG are you targeting?
>
> Thanks,
> --MM--
> The best way to predict the future is to create it. - Alan Kay
>
> We must not tolerate intolerance;
> however our response must be carefully measured:
> too strong would be hypocritical and risks spiraling out of control;
> too weak risks being mistaken for tacit approval.
>
>
>> On Thu, Jun 17, 2021 at 4:43 PM Christoph Paasch <cpaasch@apple.com> wrote:
>> Hello,
>>
>> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
>> > Is there a paper or spec for RPM?
>>
>> we try to publish an IETF-draft on the methodology before the upcoming IETF
>> in July.
>>
>> But, in the mean-time please see inline:
>>
>> > There are at least two different ways to define RPM, both of which might be
>> > relevant.
>> >
>> > At the TCP layer: it can be directly computed from a packet capture. The
>> > trick is to time reverse a trace and compute the critical path backwards
>> > through the trace: what event triggered each segment or ACK, and count
>> > round trips. This would be super robust but does not include the queueing
>> > required in the kernel socket buffers. I need to think some more about
>> > computing TCP RPM from tcp_info or other kernel instrumentation - it might
>> > be possible.
>>
>> We explicitly opted against measuring purely TCP-level round-trip times. Because
>> there are countless transparent TCP-proxies out there that would skew these
>> numbers. Our goal with RPM/Responsiveness is to measure how an end-user would
>> experience the network. Which means, DNS-resolution, TCP handshake-time,
>> TLS-handshake, HTTP/2 Request/response. Because, at the end, that's what
>> actually matters to the users.
>>
>> > A different RPM can be done in the application, above TCP, for example by
>> > ping-ponging messages. This would include the delays traversing the kernel
>> > socket buffers which have to be at least as large as a full network RTT.
>> >
>> > This is perhaps an important point: due to the retransmit and
>> > reassembly queues (which are required to implement robust data delivery)
>> > TCP must be able to hold at least a full RTT of data in its own buffers,
>> > which means that under some conditions the RTT as seen by the application
>> > has to be at least twice the network's RTT, including any bloat in the
>> > network.
>>
>> Currently, we measure RPM on separate connections (not the load-bearing
>> ones). We are also measuring on the load-bearing connections themselves
>> through H2 Ping frames. But for the reasons you described we haven't yet
>> factored it into the RPM-number.
>>
>> One way may be to inspect with TCP_INFO whether or not the connections had
>> retransmissions and then throw away the number. On the other hand, if the
>> network becomes extremely lossy under working conditions, it does impact the
>> user-experience and so it could make sense to take this into account.
>>
>>
>> In the end, we realized how hard it is to accurately measure bufferbloat
>> within a reasonable time-frame (our goal is to finish the test within ~15
>> seconds).
>>
>> We hope that with the IETF-draft we can get the right people together to
>> iterate over it and squash out a very accurate measurement that represents
>> what users would experience.
>>
>>
>> Cheers,
>> Christoph
>>
>>
>> >
>> > Thanks,
>> > --MM--
>> > The best way to predict the future is to create it. - Alan Kay
>> >
>> > We must not tolerate intolerance;
>> > however our response must be carefully measured:
>> > too strong would be hypocritical and risks spiraling out of
>> > control;
>> > too weak risks being mistaken for tacit approval.
>> >
>> >
>> > On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com> wrote:
>> >
>> > > > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
>> > > >
>> > > > Some relevant talks / publicity at WWDC -- the first mentioning CoDel,
>> > > > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer test
>> > > for
>> > > > loaded latency, reported in "RPM" or round-trips per minute.
>> > > >
>> > > > I ran it on my machine:
>> > > > nowens@mac1015 ~ % /usr/bin/networkQuality
>> > > > ==== SUMMARY ====
>> > > > Upload capacity: 90.867 Mbps
>> > > > Download capacity: 93.616 Mbps
>> > > > Upload flows: 16
>> > > > Download flows: 20
>> > > > Responsiveness: Medium (840 RPM)
>> > >
>> > > Does anyone know how to get the command-line version for current (not
>> > > upcoming) macOS? Thanks.
>> > >
>> > > Rich
>> > > _______________________________________________
>> > > Bloat mailing list
>> > > Bloat@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/bloat
>> > >
>>
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-17 23:43 ` Christoph Paasch
@ 2021-06-18 0:17 ` Matt Mathis
2021-06-18 1:03 ` Christoph Paasch
2021-06-29 7:58 ` Sebastian Moeller
1 sibling, 1 reply; 12+ messages in thread
From: Matt Mathis @ 2021-06-18 0:17 UTC (permalink / raw)
To: Christoph Paasch; +Cc: bloat
Which WG are you targeting?
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
We must not tolerate intolerance;
however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.
On Thu, Jun 17, 2021 at 4:43 PM Christoph Paasch <cpaasch@apple.com> wrote:
> Hello,
>
> On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
> > Is there a paper or spec for RPM?
>
> we try to publish an IETF-draft on the methodology before the upcoming IETF
> in July.
>
> But, in the mean-time please see inline:
>
> > There are at least two different ways to define RPM, both of which might
> be
> > relevant.
> >
> > At the TCP layer: it can be directly computed from a packet capture. The
> > trick is to time reverse a trace and compute the critical path backwards
> > through the trace: what event triggered each segment or ACK, and count
> > round trips. This would be super robust but does not include the
> queueing
> > required in the kernel socket buffers. I need to think some more about
> > computing TCP RPM from tcp_info or other kernel instrumentation - it
> might
> > be possible.
>
> We explicitly opted against measuring purely TCP-level round-trip times.
> Because
> there are countless transparent TCP-proxies out there that would skew these
> numbers. Our goal with RPM/Responsiveness is to measure how an end-user
> would
> experience the network. Which means, DNS-resolution, TCP handshake-time,
> TLS-handshake, HTTP/2 Request/response. Because, at the end, that's what
> actually matters to the users.
>
> > A different RPM can be done in the application, above TCP, for example by
> > ping-ponging messages. This would include the delays traversing the
> kernel
> > socket buffers which have to be at least as large as a full network RTT.
> >
> > This is perhaps an important point: due to the retransmit and
> > reassembly queues (which are required to implement robust data delivery)
> > TCP must be able to hold at least a full RTT of data in its own buffers,
> > which means that under some conditions the RTT as seen by the application
> > has to be at least twice the network's RTT, including any bloat in the
> > network.
>
> Currently, we measure RPM on separate connections (not the load-bearing
> ones). We are also measuring on the load-bearing connections themselves
> through H2 Ping frames. But for the reasons you described we haven't yet
> factored it into the RPM-number.
>
> One way may be to inspect with TCP_INFO whether or not the connections had
> retransmissions and then throw away the number. On the other hand, if the
> network becomes extremely lossy under working conditions, it does impact
> the
> user-experience and so it could make sense to take this into account.
>
>
> In the end, we realized how hard it is to accurately measure bufferbloat
> within a reasonable time-frame (our goal is to finish the test within ~15
> seconds).
>
> We hope that with the IETF-draft we can get the right people together to
> iterate over it and squash out a very accurate measurement that represents
> what users would experience.
>
>
> Cheers,
> Christoph
>
>
> >
> > Thanks,
> > --MM--
> > The best way to predict the future is to create it. - Alan Kay
> >
> > We must not tolerate intolerance;
> > however our response must be carefully measured:
> > too strong would be hypocritical and risks spiraling out of
> > control;
> > too weak risks being mistaken for tacit approval.
> >
> >
> > On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com>
> wrote:
> >
> > > > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net
> wrote:
> > > >
> > > > Some relevant talks / publicity at WWDC -- the first mentioning
> CoDel,
> > > > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer
> test
> > > for
> > > > loaded latency, reported in "RPM" or round-trips per minute.
> > > >
> > > > I ran it on my machine:
> > > > nowens@mac1015 ~ % /usr/bin/networkQuality
> > > > ==== SUMMARY ====
> > > > Upload capacity: 90.867 Mbps
> > > > Download capacity: 93.616 Mbps
> > > > Upload flows: 16
> > > > Download flows: 20
> > > > Responsiveness: Medium (840 RPM)
> > >
> > > Does anyone know how to get the command-line version for current (not
> > > upcoming) macOS? Thanks.
> > >
> > > Rich
> > > _______________________________________________
> > > Bloat mailing list
> > > Bloat@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/bloat
> > >
>
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-17 18:16 ` Matt Mathis
@ 2021-06-17 23:43 ` Christoph Paasch
2021-06-18 0:17 ` Matt Mathis
2021-06-29 7:58 ` Sebastian Moeller
0 siblings, 2 replies; 12+ messages in thread
From: Christoph Paasch @ 2021-06-17 23:43 UTC (permalink / raw)
To: Matt Mathis; +Cc: bloat
Hello,
On 06/17/21 - 11:16, Matt Mathis via Bloat wrote:
> Is there a paper or spec for RPM?
we try to publish an IETF-draft on the methodology before the upcoming IETF
in July.
But, in the meantime, please see inline:
> There are at least two different ways to define RPM, both of which might be
> relevant.
>
> At the TCP layer: it can be directly computed from a packet capture. The
> trick is to time reverse a trace and compute the critical path backwards
> through the trace: what event triggered each segment or ACK, and count
> round trips. This would be super robust but does not include the queueing
> required in the kernel socket buffers. I need to think some more about
> computing TCP RPM from tcp_info or other kernel instrumentation - it might
> be possible.
We explicitly opted against measuring purely TCP-level round-trip times,
because there are countless transparent TCP proxies out there that would skew
these numbers. Our goal with RPM/Responsiveness is to measure how an end-user
would experience the network, which means DNS resolution, TCP handshake time,
TLS handshake, and HTTP/2 request/response. Because, in the end, that's what
actually matters to the users.
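As a rough illustration of such a full-stack probe, the sketch below (Python)
times one cold HTTPS request -- DNS resolution, TCP handshake, TLS handshake
and the HTTP exchange -- and converts the average into an RPM figure. It is
unloaded (the real test measures this while the link is saturated, which this
snippet does not do), urllib speaks HTTP/1.1 rather than HTTP/2, and the URL
is just a placeholder:

import time
import urllib.request

def probe_once(url="https://example.com/"):
    # One cold request: DNS + TCP + TLS + HTTP request/response, in ms.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)          # wait for the first byte of the response body
    return (time.monotonic() - start) * 1000.0

samples = [probe_once() for _ in range(5)]
avg_ms = sum(samples) / len(samples)
print(f"avg full-stack latency: {avg_ms:.1f} ms (~{60_000 / avg_ms:.0f} RPM)")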
> A different RPM can be done in the application, above TCP, for example by
> ping-ponging messages. This would include the delays traversing the kernel
> socket buffers which have to be at least as large as a full network RTT.
>
> This is perhaps an important point: due to the retransmit and
> reassembly queues (which are required to implement robust data delivery)
> TCP must be able to hold at least a full RTT of data in its own buffers,
> which means that under some conditions the RTT as seen by the application
> has to be at least twice the network's RTT, including any bloat in the
> network.
Currently, we measure RPM on separate connections (not the load-bearing
ones). We are also measuring on the load-bearing connections themselves
through H2 Ping frames. But for the reasons you described we haven't yet
factored it into the RPM-number.
One way may be to inspect with TCP_INFO whether or not the connections had
retransmissions and then throw away the number. On the other hand, if the
network becomes extremely lossy under working conditions, it does impact the
user-experience and so it could make sense to take this into account.
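A sketch of that TCP_INFO check (Python, Linux-only; macOS exposes a different
TCP_CONNECTION_INFO option instead). Only the leading fields of struct
tcp_info are unpacked here; the cumulative tcpi_total_retrans counter, which
is what one would actually test before discarding a sample, sits further into
the struct at the offset given in <linux/tcp.h>:

import socket
import struct

TCP_INFO = getattr(socket, "TCP_INFO", 11)   # option number on Linux

def tcp_info_head(sock):
    # Leading u8 fields of struct tcp_info: tcpi_state, tcpi_ca_state,
    # tcpi_retransmits, tcpi_probes, tcpi_backoff, tcpi_options.
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 512)
    return struct.unpack_from("BBBBBB", raw)

# Usage sketch: when a measurement flow finishes, read its tcp_info and drop
# that flow's latency samples if the retransmission counters are non-zero.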
In the end, we realized how hard it is to accurately measure bufferbloat
within a reasonable time-frame (our goal is to finish the test within ~15
seconds).
We hope that with the IETF-draft we can get the right people together to
iterate over it and squash out a very accurate measurement that represents
what users would experience.
Cheers,
Christoph
>
> Thanks,
> --MM--
> The best way to predict the future is to create it. - Alan Kay
>
> We must not tolerate intolerance;
> however our response must be carefully measured:
> too strong would be hypocritical and risks spiraling out of
> control;
> too weak risks being mistaken for tacit approval.
>
>
> On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com> wrote:
>
> > > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
> > >
> > > Some relevant talks / publicity at WWDC -- the first mentioning CoDel,
> > > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer test
> > for
> > > loaded latency, reported in "RPM" or round-trips per minute.
> > >
> > > I ran it on my machine:
> > > nowens@mac1015 ~ % /usr/bin/networkQuality
> > > ==== SUMMARY ====
> > > Upload capacity: 90.867 Mbps
> > > Download capacity: 93.616 Mbps
> > > Upload flows: 16
> > > Download flows: 20
> > > Responsiveness: Medium (840 RPM)
> >
> > Does anyone know how to get the command-line version for current (not
> > upcoming) macOS? Thanks.
> >
> > Rich
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
2021-06-12 16:11 Rich Brown
@ 2021-06-17 18:16 ` Matt Mathis
2021-06-17 23:43 ` Christoph Paasch
0 siblings, 1 reply; 12+ messages in thread
From: Matt Mathis @ 2021-06-17 18:16 UTC (permalink / raw)
To: bloat
Is there a paper or spec for RPM?
There are at least two different ways to define RPM, both of which might be
relevant.
At the TCP layer: it can be directly computed from a packet capture. The
trick is to time reverse a trace and compute the critical path backwards
through the trace: what event triggered each segment or ACK, and count
round trips. This would be super robust but does not include the queueing
required in the kernel socket buffers. I need to think some more about
computing TCP RPM from tcp_info or other kernel instrumentation - it might
be possible.
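As a very rough illustration -- not the time-reversed critical-path analysis
described above, just pairing each data segment with the first ACK that covers
it -- a sketch like the following could pull RTT samples out of a capture. The
file name and client address are placeholders, and retransmissions and
sequence-number wrap-around are ignored:

from scapy.all import rdpcap, IP, TCP

def rtt_samples(pcap_path, client):
    pending = []     # (seq_end, send_time) of not-yet-acked client data
    samples = []     # observed RTTs in seconds
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            continue
        tcp, t = pkt[TCP], float(pkt.time)
        if pkt[IP].src == client and len(tcp.payload) > 0:
            pending.append((tcp.seq + len(tcp.payload), t))
        elif pkt[IP].dst == client and tcp.flags & 0x10:      # ACK toward client
            covered = [p for p in pending if p[0] <= tcp.ack]
            if covered:
                samples.append(t - covered[0][1])             # oldest covered segment
                pending = [p for p in pending if p[0] > tcp.ack]
    return samples

rtts = rtt_samples("trace.pcap", client="192.0.2.10")
if rtts:
    avg = sum(rtts) / len(rtts)
    print(f"avg RTT {avg * 1000:.1f} ms -> ~{60 / avg:.0f} RPM")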
A different RPM can be done in the application, above TCP, for example by
ping-ponging messages. This would include the delays traversing the kernel
socket buffers which have to be at least as large as a full network RTT.
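A minimal sketch of that application-level ping-pong (Python): send a tiny
message over an established TCP connection, wait for it to come back, and
count completed round trips per minute. Host and port are placeholders, the
peer is assumed to simply echo what it receives, and it should be run
alongside bulk transfers to see working latency rather than idle latency:

import socket
import time

def measure_rpm(host, port, duration_s=5.0):
    with socket.create_connection((host, port), timeout=5) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # avoid Nagle delay
        deadline = time.monotonic() + duration_s
        round_trips = 0
        while time.monotonic() < deadline:
            s.sendall(b"ping")
            buf = b""
            while len(buf) < 4:                  # read the 4-byte echo back
                chunk = s.recv(4 - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                buf += chunk
            round_trips += 1
    return round_trips * (60.0 / duration_s)

print(f"{measure_rpm('192.0.2.1', 7):.0f} RPM")   # port 7 = classic echo service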
This is perhaps an important point: due to the retransmit and
reassembly queues (which are required to implement robust data delivery)
TCP must be able to hold at least a full RTT of data in its own buffers,
which means that under some conditions the RTT as seen by the application
has to be at least twice the network's RTT, including any bloat in the
network.
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
We must not tolerate intolerance;
however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.
On Sat, Jun 12, 2021 at 9:11 AM Rich Brown <richb.hanover@gmail.com> wrote:
> > On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
> >
> > Some relevant talks / publicity at WWDC -- the first mentioning CoDel,
> > queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer test
> for
> > loaded latency, reported in "RPM" or round-trips per minute.
> >
> > I ran it on my machine:
> > nowens@mac1015 ~ % /usr/bin/networkQuality
> > ==== SUMMARY ====
> > Upload capacity: 90.867 Mbps
> > Download capacity: 93.616 Mbps
> > Upload flows: 16
> > Download flows: 20
> > Responsiveness: Medium (840 RPM)
>
> Does anyone know how to get the command-line version for current (not
> upcoming) macOS? Thanks.
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Apple WWDC Talks on Latency/Bufferbloat
@ 2021-06-12 16:11 Rich Brown
2021-06-17 18:16 ` Matt Mathis
0 siblings, 1 reply; 12+ messages in thread
From: Rich Brown @ 2021-06-12 16:11 UTC (permalink / raw)
To: bloat
> On Jun 12, 2021, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
>
> Some relevant talks / publicity at WWDC -- the first mentioning CoDel,
> queueing, etc. Featuring Stuart Cheshire. iOS 15 adds a developer test for
> loaded latency, reported in "RPM" or round-trips per minute.
>
> I ran it on my machine:
> nowens@mac1015 ~ % /usr/bin/networkQuality
> ==== SUMMARY ====
> Upload capacity: 90.867 Mbps
> Download capacity: 93.616 Mbps
> Upload flows: 16
> Download flows: 20
> Responsiveness: Medium (840 RPM)
Does anyone know how to get the command-line version for current (not upcoming) macOS? Thanks.
Rich
End of thread (newest message: 2021-07-06 19:09 UTC)
Thread overview: 12+ messages
2021-06-11 19:14 [Bloat] Apple WWDC Talks on Latency/Bufferbloat Nathan Owens
2021-06-11 21:58 ` Jonathan Morton
2021-06-12 16:11 Rich Brown
2021-06-17 18:16 ` Matt Mathis
2021-06-17 23:43 ` Christoph Paasch
2021-06-18 0:17 ` Matt Mathis
2021-06-18 1:03 ` Christoph Paasch
2021-06-18 3:33 ` Matt Mathis
2021-06-28 22:54 ` Christoph Paasch
2021-06-29 7:58 ` Sebastian Moeller
2021-07-06 18:54 ` Christoph Paasch
2021-07-06 19:08 ` Sebastian Moeller