From: Dave Taht <dave.taht@gmail.com>
To: rjmcmahon <rjmcmahon@rjmcmahon.com>
Cc: "Livingood, Jason" <Jason_Livingood@comcast.com>,
Rpm <rpm@lists.bufferbloat.net>,
mike.reynolds@netforecast.com,
libreqos <libreqos@lists.bufferbloat.net>,
"David P. Reed" <dpreed@deepplum.com>,
starlink@lists.bufferbloat.net,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
Date: Mon, 9 Jan 2023 12:59:45 -0800
Message-ID: <CAA93jw5FhWH53m2y7=vtFXw_mhJKLs97Fg1LzKb2G-9vaufCyg@mail.gmail.com>
In-Reply-To: <067248a1bde7da5be839f9555cc2419b@rjmcmahon.com>
On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> The write to read latencies (OWD) are on the server side in CLT form.
> Use --histograms on the server side to enable them.
Thx. It is far more difficult to instrument things on the server side
of the testbed, but we will tackle it.
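(For the archives: I assume that means running the server as something
like "iperf -s -e --histograms", with --trip-times on the client as in
the run below. That's my reading of the docs, not something we've
tried yet.)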
> Your client-side sampled TCP RTT is 6ms with less than 1ms of standard
> deviation (the square root of variance, since variance is in squared
> units). No retries suggests the network isn't dropping packets.
Thank you for analyzing that result. The cake AQM, set for a 5ms
target, with RFC3168-style ECN, is currently enabled on this path in
this setup, so the result is correct.
A second test with ECN off showed the expected retries.
I also have emulations of FIFOs, PIE, FQ-PIE, FQ-CoDel, RED, BLUE, and
SFQ, with various real-world delays, and so on... but this is a bit of
a distraction at the moment from our focus, which is optimizing the
XDP + eBPF based bridge and the epping-based sampling tools to crack
25Gbit.
I think iperf2 will be great for us after that settles down.
> All the newer bounceback code is only in master and requires a compile
> from source. It will be released in 2.1.9 after testing cycles,
> hopefully in early March 2023.
I would like to somehow parse and present those histograms.
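In case someone beats me to it, here is the kind of parser I have in
mind, in python. The "bin(w=100us):cnt(262)=idx:count,..." pattern is
my assumption about what the histogram lines look like - correct me if
2.1.9 prints something different:

#!/usr/bin/env python3
# Hypothetical sketch: turn iperf2 --histograms output into a CDF.
# Assumes lines containing "bin(w=100us):cnt(262)=1:2,2:25,..." where
# each pair is bin_index:sample_count -- verify against real output.
import re
import sys

HIST = re.compile(r'bin\(w=(\d+)us\):cnt\((\d+)\)=([\d:,]+)')

for line in sys.stdin:
    m = HIST.search(line)
    if not m:
        continue
    width_us, total = int(m.group(1)), int(m.group(2))
    pairs = [tuple(map(int, p.split(':'))) for p in m.group(3).split(',')]
    seen = 0
    for idx, cnt in sorted(pairs):
        seen += cnt
        # upper edge of the bin, and fraction of samples at or below it
        print(f'{idx * width_us:>9} us  {100.0 * seen / total:6.2f}%')

Pipe a server-side log through that and the output is ready for gnuplot.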
>
> Bob
>
> https://sourceforge.net/projects/iperf2/
>
> > The DC that so graciously loaned us 3 machines for the testbed (thx
> > Equinix!) does support PTP, but we have not configured it yet. In NTP
> > tests between these hosts we seem to be within 500us; 50us would
> > certainly be great in the future.
> >
> > I note that in all my kvetching today about the new tests needing
> > validation... I kind of elided that I'm pretty happy with iperf2's new
> > tests, which landed last August and are now appearing in Linux package
> > managers around the world. I hope more folks use them. (Sorry Robert,
> > it's been a long time since last August!)
> >
> > Our new testbed has multiple setups. In one setup, each machine name
> > corresponds to a given ISP plan, and a key testing point is looking at
> > the differences between the FCC 25/3 and 100/20 plans in the real
> > world. However, at our scale (25Gbit) it turned out that emulating the
> > delay realistically is problematic.
> >
> > Anyway, here's a 25/3 result for iperf (other results and iperf test
> > type requests gladly accepted)
> >
> > root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
> > ------------------------------------------------------------
> > Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
> > Write buffer size: 131072 Byte
> > TOS set to 0x0 (Nagle on)
> > TCP window size: 85.3 KByte (default)
> > ------------------------------------------------------------
> > [ 1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms) on 2023-01-09 20:13:37 (UTC)
> > [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry  Cwnd/RTT(var)     NetPwr
> > [ 1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0  0  19K/6066(262) us  562
> > [ 1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  15K/4671(207) us  673
> > [ 1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  13K/5538(280) us  568
> > [ 1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0  0  16K/6244(355) us  525
> > [ 1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  19K/6152(216) us  511
> > [ 1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  22K/6764(529) us  465
> > [ 1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0  0  15K/5918(605) us  554
> > [ 1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  18K/5178(327) us  608
> > [ 1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  19K/5758(473) us  546
> > [ 1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0  0  16K/6141(280) us  512
> > [ 1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0  0  19K/5924(491) us  537
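An aside while I am staring at this output: NetPwr looks like
Kleinrock's network power, i.e. goodput over delay. Backing it out of
the rows above in python (the exact scaling is inferred from the
numbers themselves, not read from the iperf2 source, so treat it as my
guess):

# NetPwr appears to be (bytes/sec) / (RTT in sec), scaled by 1e-6.
MB = 1024 * 1024
for mbytes, rtt_us, shown in [(3.25, 6066, 562), (3.00, 4671, 673)]:
    netpwr = (mbytes * MB) / (rtt_us * 1e-6) / 1e6
    print(f'{netpwr:.1f} vs {shown}')   # 561.9 vs 562, 673.5 vs 673

Higher is better: more throughput for less delay.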
> >
> >
> > On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com>
> > wrote:
> >>
> >> My biggest barrier is the lack of clock sync by the devices, i.e. very
> >> limited support for PTP in data centers and in end devices. This limits
> >> the ability to measure one-way delays (OWD), and most assume that OWD
> >> is 1/2 of RTT, which typically is a mistake. We know this intuitively
> >> from airplane flight times or even car commute times, where the one-way
> >> time is not 1/2 of the round-trip time. Google Maps directions provide
> >> a time estimate for the one-way trip; they don't compute a round trip
> >> and divide by two.
> >>
> >> For those that can get clock sync working, the iperf 2 --trip-times
> >> option is useful.
> >>
> >> --trip-times
> >>     enable the measurement of end-to-end write-to-read latencies
> >>     (client and server clocks must be synchronized)
> >>
> >> Bob
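To belabor Bob's point for the list: a write-to-read latency is the
sender's clock subtracted at the receiver, so it can only ever be as
good as the sync between the two clocks. A toy sketch of the idea in
python (mine, not iperf2's actual wire format):

import socket
import struct
import time

def send_stamped(sock: socket.socket, payload: bytes) -> None:
    # the sender prepends its wall-clock send time to each write
    sock.sendall(struct.pack('!d', time.time()) + payload)

def recv_owd(sock: socket.socket, bufsize: int = 65536) -> float:
    # the receiver subtracts that stamp from its own clock at read time.
    # If the two clocks disagree by 3 ms, every "OWD" is off by 3 ms --
    # which is why the PTP/NTP sync above matters. RTT/2 avoids the sync
    # problem by using one clock, but assumes a symmetric path.
    data = sock.recv(bufsize)
    (t_sent,) = struct.unpack_from('!d', data)
    return time.time() - t_sent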
> >> > I have many kvetches about the new latency-under-load tests being
> >> > designed and distributed over the past year. I am delighted that they
> >> > are happening, but most really need third-party evaluation and
> >> > calibration, and a solid explanation of what network pathologies they
> >> > do and don't cover. Also needed: a red-team attitude towards them, and
> >> > hard thinking about what you are not measuring (operations research).
> >> >
> >> > I actually rather love the new Cloudflare speed test, because it tests
> >> > a single TCP connection rather than dozens; at the same time, folks
> >> > are complaining that it doesn't find the actual "speed!" Yet the
> >> > test itself more closely emulates a user experience than speedtest.net
> >> > does. I am personally pretty convinced that the fewer flows a web
> >> > page opens, the better the likelihood of a good user experience, but
> >> > I lack data on it.
> >> >
> >> > To try to tackle the evaluation and calibration part, I've reached out
> >> > to all the new test designers in the hope that we could get together
> >> > and produce a report of what each new test is actually doing. I've
> >> > tweeted, LinkedIn'd, emailed, and spammed every measurement list I know
> >> > of, with only some response. Please reach out to other test-designer
> >> > folks and have them join the RPM email list.
> >> >
> >> > My principal kvetches in the new tests so far are:
> >> >
> >> > 0) None of the tests last long enough.
> >> >
> >> > Ideally there should be a mode where they at least run to "time of
> >> > first loss", or periodically run longer than the
> >> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> >> > there! It's really bad science to optimize the internet for 20
> >> > seconds. It's like optimizing a car to handle well for just 20
> >> > seconds.
> >> >
> >> > 1) Not testing up + down + ping at the same time
> >> >
> >> > None of the new tests actually test the same thing that the infamous
> >> > rrul test does - all the others still test up, then down, and ping. It
> >> > was/remains my hope that the simpler parts of the flent test suite -
> >> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> >> > tests - would provide calibration for the test designers.
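(Concretely: something like "flent rrul -l 300 -H <testbed host>",
with a longer -l than the default so it runs past the 20-second
problem above.)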
> >> >
> >> > we've got zillions of flent results in the archive published here:
> >> > https://blog.cerowrt.org/post/found_in_flent/
> >> ps. Misinformation about iperf 2 impacts my ability to do this.
> >>
> >> > The new tests have all added up + ping and down + ping, but not up +
> >> > down + ping. Why??
> >> >
> >> > The behaviors of what happens in that case are really non-intuitive, I
> >> > know, but... it's just one more phase to add to any one of those new
> >> > tests. I'd be deliriously happy if someone new to the field started
> >> > doing that, even optionally, and boggled at how it defeats their
> >> > assumptions.
> >> >
> >> > Among other things that would show...
> >> >
> >> > It's the home router industry's dirty secret that darn few "gigabit"
> >> > home routers can actually forward in both directions at a gigabit. I'd
> >> > like to smash that perception thoroughly, but given that our starting
> >> > point was a "gigabit" router that was really a gigabit switch - and
> >> > historically something that couldn't even forward at 200Mbit - we
> >> > have a long way to go there.
> >> >
> >> > Only in the past year have non-x86 home routers appeared that could
> >> > actually do a gigabit in both directions.
> >> >
> >> > 2) Few are actually testing within-stream latency
> >> >
> >> > Apple's rpm project is making a stab in that direction. It looks
> >> > highly likely that, with a little more work, crusader and
> >> > go-responsiveness can finally start sampling the TCP RTT, loss, and
> >> > markings more directly. As for the rest... sampling TCP_INFO on
> >> > Windows and Linux, at least, always appeared simple to me, but I'm
> >> > discovering how hard it is by delving deep into the Rust behind
> >> > crusader.
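An aside, since this is exactly where I am stuck: on linux the
sampling itself really is small. A sketch in python - the 68-byte
offset of tcpi_rtt matches every struct tcp_info layout I have looked
at, but check your own linux/tcp.h before trusting it:

import socket
import struct

TCP_INFO = 11  # option number from linux/tcp.h

def sample_rtt(sock: socket.socket) -> tuple[int, int]:
    """Return (tcpi_rtt, tcpi_rttvar) in microseconds for a live socket."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    # tcpi_rtt sits after 8 bytes of u8 fields and 15 u32 fields
    rtt, rttvar = struct.unpack_from('II', buf, 68)
    return rtt, rttvar

Poll that once a second alongside a transfer and you have within-stream
RTT without adding a single packet to the wire.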
> >> >
> >> > The goresponsiveness test is also IMHO running WAY too many streams
> >> > at the same time, I guess motivated by an attempt to have the test
> >> > complete quickly?
> >> >
> >> > B) To try and tackle the validation problem:
> >>
> >> ps. Misinformation about iperf 2 impacts my ability to do this.
> >>
> >> >
> >> > In the libreqos.io project we've established a testbed where tests can
> >> > be plunked through various ISP plan network emulations. It's here:
> >> > https://payne.taht.net (run the bandwidth test to see what's currently
> >> > hooked up)
> >> >
> >> > We could rather use an AS number and at least an IPv4 /24 and an IPv6
> >> > /48 to leverage with that, so I don't have to NAT the various
> >> > emulations. (And funding - anyone got funding?) Or, as the code is
> >> > GPLv2 licensed, we'd love to see more test designers set up a testbed
> >> > like this to calibrate their own stuff.
> >> >
> >> > Presently we're able to test:
> >> > flent
> >> > netperf
> >> > iperf2
> >> > iperf3
> >> > speedtest-cli
> >> > crusader
> >> > the Broadband Forum's UDP-based test:
> >> > https://github.com/BroadbandForum/obudpst
> >> > trexx
> >> >
> >> > There's also a virtual machine setup that we can remotely drive a web
> >> > browser from (but I didn't want to NAT the results to the world) to
> >> > test other web services.
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC