From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: Ruediger.Geib@telekom.de, rpm@lists.bufferbloat.net, ippm@ietf.org
Subject: Re: [Rpm] [ippm] lightweight active sensing of bandwidth and buffering
Date: Wed, 02 Nov 2022 13:37:57 -0700
Message-ID: <72c1083dfe84fdd2e0f9af2170f08369@rjmcmahon.com>
In-Reply-To: <CAA93jw7ZFpk+g5=9uNHc1TF6a3iBc7nn8UFsX8JwSgsgsukecg@mail.gmail.com>
I used iperf 2's Little's law calculation to find the buffer sizes
designed in by our hardware team(s). They were surprised that the
numbers exactly matched their designs - Little's law applied - even
though I had never seen either the hardware or its design spec.
It seems reasonable to use something if and when it works and is
useful. The challenge is knowing the limits of any claims (or
simulations). Engineers do much the same when we assume linearity
over some small interval, as in finite element analysis of
structures:
https://control.com/technical-articles/the-difference-between-linear-and-nonlinear-finite-element-analysis-fea/
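
For concreteness, here's a rough sketch of the arithmetic in Python
(not iperf 2's actual implementation; the names and numbers are
illustrative). Little's law, L = lambda * W, says average occupancy
equals arrival rate times time in system, so the standing queue in
bytes is the measured throughput times the induced queueing delay:

def bufferbloat_bytes(throughput_bps: float,
                      rtt_loaded_s: float,
                      rtt_min_s: float) -> float:
    """Estimate of bytes standing in the bottleneck queue (Little's law)."""
    queue_delay_s = max(rtt_loaded_s - rtt_min_s, 0.0)
    return (throughput_bps / 8.0) * queue_delay_s

# Example: a 1 Gbit/s bottleneck whose RTT inflates from 10 ms to
# 90 ms under load implies roughly 10 MB of standing buffer.
print(bufferbloat_bytes(1e9, 0.090, 0.010))  # 10000000.0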
Bob
> On Wed, Nov 2, 2022 at 12:29 PM rjmcmahon via Rpm
> <rpm@lists.bufferbloat.net> wrote:
>>
>> Most of those measuring bloat ignore the queue build-up phase and
>> instead start taking measurements after the bottleneck queue is in
>> a standing state.
>
> +10. It's the slow-start transient that is holding things back. If we
> could, for example, open up the 110+ objects and flows that web pages
> require all at once, and let 'em rip, instead of 15 at a time, without
> destroying the network, web page load time (PLT) would get much better.
>
>> My opinion: the best units for bloat are packets for UDP or bytes for
>> TCP. Min delay is a proxy measurement.
>
> Bytes, period. Bytes = time. Sure, most UDP today is small packets,
> but QUIC and videoconferencing change that.
>
>>
>> Little's law allows one to compute this, though it does assume the
>> network is in a stable state over the measurement interval. In the
>> real world, this is probably rarely true. So we, in test &
>> measurement engineering, force the standing state with some sort of
>> measurement co-traffic and call it "working conditions" or
>> equivalent. ;)
>
> There was an extremely long, nuanced debate about Little's law and
> where it applies, last year, here:
>
> https://lists.bufferbloat.net/pipermail/cake/2021-July/005540.html
>
> I don't want to go into it, again.
>
>>
>> Bob
>> > Bob, Sebastian,
>> >
>> > I'm not active on your topic; just adding what I've observed about
>> > congestion:
>> > - It starts with an increase in jitter, but measured minimum delays
>> > still remain constant. Technically, a queue builds up some of the
>> > time, but it isn't present permanently.
>> > - Buffer fill reaches a "steady state", called bufferbloat on access
>> > links I think; technically, OWD now increases also for the minimum
>> > delays, while jitter decreases (what you've described as the "delay
>> > magnitude" decreasing or a "minimum CDF shift", respectively, if I'm
>> > correct). I'd expect packet loss to occur once the buffer fill is at
>> > steady state, but the loss might be randomly distributed and could
>> > be of a low percentage.
>> > - A sudden, rather long load burst may cause a jump straight to
>> > "steady-state" buffer fill. The above holds for a slow but steady
>> > load increase (where the measurement frequency determines the
>> > timescale that qualifies as "slow").
>> > - In the end, max-min delay or delay distribution/jitter likely
>> > isn't an easy-to-handle single metric for identifying congestion.
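
The two phases described above suggest a simple two-signal classifier;
here's a toy sketch (the 5 ms thresholds are illustrative placeholders,
not measured values), assuming a sliding window of OWD samples plus an
idle baseline:

import statistics

def congestion_phase(owd_ms, baseline_min_ms,
                     min_shift_ms=5.0, jitter_thresh_ms=5.0):
    """Classify one window of one-way-delay samples (ms)."""
    min_shifted = (min(owd_ms) - baseline_min_ms) > min_shift_ms
    jittery = statistics.pstdev(owd_ms) > jitter_thresh_ms
    if min_shifted:
        return "steady-state buffer fill"  # min OWD shifted up, jitter down
    if jittery:
        return "transient queue build-up"  # jitter up, min OWD unchanged
    return "no congestion observed"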
>> >
>> > Regards,
>> >
>> > Ruediger
>> >
>> >
>> >> On Nov 2, 2022, at 00:39, rjmcmahon via Rpm
>> >> <rpm@lists.bufferbloat.net> wrote:
>> >>
>> >> Bufferbloat shifts the minimum of the latency or OWD CDF.
>> >
>> > [SM] Thank you for spelling this out explicitly; I had only been
>> > working from a vague implicit assumption along those lines. However,
>> > what I want to avoid is using delay magnitude itself as the
>> > classifier between high- and low-load conditions, as it seems
>> > statistically uncouth to then show that the delay differs between
>> > the two classes ;).
>> > Yet your comment convinced me that my current load threshold (at
>> > least for the high-load condition) probably is too small, exactly
>> > because the "base" of the high-load CDFs coincides with the base of
>> > the low-load CDFs, implying that the high-load class contains too
>> > many samples with decent delay (which, after all, is one of the
>> > goals of the whole autorate endeavor).
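
A minimal sketch of that two-class split and the CDF comparison (my
reading of it; the names and the 0.8/0.2 load thresholds are
assumptions, not the autorate code itself):

import numpy as np

def split_by_load(delay_ms, load_frac, hi=0.8, lo=0.2):
    """Partition delay samples into high-load and low-load classes."""
    d, f = np.asarray(delay_ms), np.asarray(load_frac)
    return d[f >= hi], d[f <= lo]

def ecdf(samples):
    """Empirical CDF: sorted values and cumulative probabilities."""
    x = np.sort(samples)
    return x, np.arange(1, len(x) + 1) / len(x)

# If the two classes' CDF "bases" (low quantiles) coincide, the
# high-load class likely contains too many low-delay samples, i.e.
# the load threshold is set too low.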
>> >
>> >
>> >> A suggestion is to disable x-axis auto-scaling and start from zero.
>> >
>> > [SM] Will reconsider. I started with the x-axis at zero, and then
>> > switched to an x-range that starts at the delay corresponding to
>> > 0.01% for the reflector/condition with the lowest such value and
>> > stops at 97.5% for the reflector/condition with the highest delay
>> > value. My rationale: the base delay/path delay of each reflector is
>> > not all that informative* (and it can still be learned from reading
>> > the x-axis); the long tail > 50%, however, is where I expect most of
>> > the differences, so I want to emphasize it; and finally, I wanted to
>> > avoid the actual "curvy" part getting compressed so much that all
>> > lines more or less coincide. As I said, I will reconsider this.
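
That x-range choice might be sketched like so (assuming per-reflector
arrays of delay samples; note np.percentile takes percent, so 0.01 is
the 0.01% quantile):

import numpy as np

def cdf_xlim(per_reflector_delays_ms):
    """X-range: lowest 0.01% quantile to highest 97.5% quantile."""
    lo = min(np.percentile(d, 0.01) for d in per_reflector_delays_ms)
    hi = max(np.percentile(d, 97.5) for d in per_reflector_delays_ms)
    return lo, hi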
>> >
>> >
>> > *) We also maintain individual baselines per reflector, so I could
>> > just plot the differences from baseline, but that would essentially
>> > equalize all reflectors, and I think having a plot that easily shows
>> > reflectors with outlying base delay can be informative when
>> > selecting reflector candidates. However, once we actually switch to
>> > OWDs, baseline correction might be required anyway, as due to clock
>> > differences ICMP type 13/14 data can have massive offsets that are
>> > mostly indicative of unsynchronized clocks**.
>> >
>> > **) This is why I would prefer to use NTP servers as reflectors,
>> > queried with NTP requests; my expectation is that all of these
>> > should be reasonably synced by default, so that the offsets stay in
>> > a sane range.
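
For reference, the OWD arithmetic for ICMP timestamp (type 13/14)
replies looks roughly like this (a sketch only; per RFC 792 the
timestamps are milliseconds since midnight UTC, and socket handling is
omitted). The raw values embed the reflector's clock offset, which the
per-reflector running-minimum baseline mentioned above would absorb:

DAY_MS = 86_400_000  # ICMP timestamps wrap at midnight UTC

def owd_estimates_ms(originate, receive, transmit, local_rx):
    """Raw OWD estimates from one type-13/14 exchange (all in ms)."""
    up = (receive - originate) % DAY_MS    # upstream OWD + clock offset
    down = (local_rx - transmit) % DAY_MS  # downstream OWD - clock offset
    return up, down

def baseline_correct(owd_ms, running_min):
    """Subtract per-reflector baseline; returns (corrected, new min)."""
    running_min = min(running_min, owd_ms)
    return owd_ms - running_min, running_min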
>> >
>> >
>> >>
>> >> Bob
>> >>> For about 2 years now the cake w/ adaptive bandwidth project has
>> >>> been exploring techniques to sense bandwidth and buffering
>> >>> problems in a lightweight way. One of my favorites was their
>> >>> discovery that ICMP type 13 got them working OWD from millions of
>> >>> IPv4 devices!
>> >>> They've also explored leveraging NTP and multiple other methods,
>> >>> and have scripts available that do a good job of compensating for
>> >>> 5G's and Starlink's misbehaviors.
>> >>> They've also pioneered a whole bunch of new graphing techniques,
>> >>> which I wish were used more often than single-number summaries,
>> >>> especially in analyzing the behaviors of new metrics like RPM,
>> >>> SamKnows, Ookla, and RFC 9097 - to see what is being missed.
>> >>> There are thousands of posts about this research topic; a new
>> >>> post on OWD just went by here:
>> >>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/793
>> >>> and of course, I love flent's enormous graphing toolset for
>> >>> simulating and analyzing complex network behaviors.