From: Dave Taht <dave.taht@gmail.com>
To: rjmcmahon <rjmcmahon@rjmcmahon.com>
Cc: "Livingood, Jason" <Jason_Livingood@comcast.com>,
Rpm <rpm@lists.bufferbloat.net>,
mike.reynolds@netforecast.com,
libreqos <libreqos@lists.bufferbloat.net>,
"David P. Reed" <dpreed@deepplum.com>,
starlink@lists.bufferbloat.net,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
Date: Mon, 9 Jan 2023 12:20:58 -0800 [thread overview]
Message-ID: <CAA93jw5U7e29TGVK4BzOLVnUPkb3q4mF+SB7wAe36bAdkhYaaQ@mail.gmail.com> (raw)
In-Reply-To: <412c00f23a6cfef61ecbf0fd9b6f3069@rjmcmahon.com>
The DC that so graciously loaned us 3 machines for the testbed (thx
Equinix!) does support PTP, but we have not configured it yet. In NTP
tests between these hosts we seem to be within 500us; 50us would
certainly be great, in the future.
I note that in all my kvetching about the new tests needing
validation today... I kind of elided that I'm pretty happy with
iperf2's new tests that landed last August and are now appearing in
Linux package managers around the world. I hope more folk use them.
(sorry Robert, it's been a long time since last August!)
Our new testbed has multiple setups. In one setup, each machine name
corresponds to a given ISP plan, and a key testing point is
looking at the differences between the FCC 25/3 and 100/20 plans in
the real world. However, at our scale (25 Gbit) it turned out that
emulating the delay realistically is problematic.
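To give a feel for why, here's a back-of-the-envelope sketch (my own illustrative numbers, not the actual libreqos configuration): any delay emulator such as netem has to queue every in-flight packet for the configured delay, so its queue must hold at least a bandwidth-delay product. That stays tiny per emulated plan but explodes at the 25 Gbit aggregate.

```python
import math

def delay_queue_pkts(rate_bps: float, delay_s: float, mtu: int = 1500) -> int:
    # The emulator must buffer a full bandwidth-delay product, or it
    # drops packets and distorts the emulation.
    bdp_bytes = rate_bps / 8 * delay_s
    return math.ceil(bdp_bytes / mtu)

# One emulated 25/3 plan with 20 ms of added delay: a tiny queue.
print(delay_queue_pkts(25e6, 0.020))   # 42 packets
# The full 25 Gbit testbed link at the same delay: a very large one.
print(delay_queue_pkts(25e9, 0.020))   # 41667 packets
```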
Anyway, here's a 25/3 result for iperf (other results, and requests
for other iperf test types, gladly accepted):
root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
------------------------------------------------------------
Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
Write buffer size: 131072 Byte
TOS set to 0x0 (Nagle on)
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms) on 2023-01-09 20:13:37 (UTC)
[ ID] Interval            Transfer     Bandwidth       Write/Err  Rtry  Cwnd/RTT(var)     NetPwr
[  1] 0.0000-1.0000 sec   3.25 MBytes  27.3 Mbits/sec  26/0       0     19K/6066(262) us  562
[  1] 1.0000-2.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     15K/4671(207) us  673
[  1] 2.0000-3.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     13K/5538(280) us  568
[  1] 3.0000-4.0000 sec   3.12 MBytes  26.2 Mbits/sec  25/0       0     16K/6244(355) us  525
[  1] 4.0000-5.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     19K/6152(216) us  511
[  1] 5.0000-6.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     22K/6764(529) us  465
[  1] 6.0000-7.0000 sec   3.12 MBytes  26.2 Mbits/sec  25/0       0     15K/5918(605) us  554
[  1] 7.0000-8.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     18K/5178(327) us  608
[  1] 8.0000-9.0000 sec   3.00 MBytes  25.2 Mbits/sec  24/0       0     19K/5758(473) us  546
[  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0       0     16K/6141(280) us  512
[  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0      0     19K/5924(491) us  537
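For readers puzzling over the NetPwr column: iperf2 computes it as throughput divided by RTT. A quick sketch reproducing the printed values from the intervals above (the 1e6 divisor is an assumption about iperf2's output scaling that happens to match these numbers):

```python
def netpwr(transfer_mbytes: float, interval_s: float, rtt_us: float) -> float:
    # NetPwr = throughput / RTT, per iperf2's enhanced output.
    throughput = transfer_mbytes * 1024 * 1024 / interval_s  # bytes/sec
    return throughput / (rtt_us * 1e-6) / 1e6

print(round(netpwr(3.25, 1.0, 6066)))  # 562, matching the first interval
print(round(netpwr(3.00, 1.0, 4671)))  # 673, matching the second
```

Higher is better: more throughput delivered at lower delay.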
On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> My biggest barrier is the lack of clock sync by the devices, i.e. very
> limited support for PTP in data centers and in end devices. This limits
> the ability to measure one-way delays (OWD), and most assume that OWD
> is 1/2 of RTT, which is typically a mistake. We know this intuitively
> from airplane flight times or even car commute times, where the one-way
> time is not 1/2 the round-trip time. Google Maps directions provide a
> time estimate for the one-way trip; they don't compute a round trip and
> divide by two.
>
> For those that can get clock sync working, the iperf 2 --trip-times
> option is useful.
>
> --trip-times
> enable the measurement of end to end write to read latencies (client
> and server clocks must be synchronized)
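To make Bob's point concrete, a toy illustration (made-up numbers, assuming perfectly synchronized clocks, e.g. via PTP) of how one-way delays expose an asymmetry that RTT/2 hides:

```python
# Hypothetical timestamps in seconds. Uplink is congested; downlink is not.
t_client_send = 0.000000
t_server_recv = 0.040000   # packet spent 40 ms queued on the way up
t_server_send = 0.040100
t_client_recv = 0.052100   # reply came back in 12 ms

owd_up   = t_server_recv - t_client_send                              # 40 ms
owd_down = t_client_recv - t_server_send                              # 12 ms
rtt      = (t_client_recv - t_client_send) - (t_server_send - t_server_recv)

print(f"up {owd_up*1e3:.1f} ms, down {owd_down*1e3:.1f} ms, "
      f"RTT/2 {rtt/2*1e3:.1f} ms")
# RTT/2 is 26 ms -- a delay the packet never experienced in either direction.
```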
>
> Bob
> > I have many kvetches about the new latency-under-load tests being
> > designed and distributed over the past year. I am delighted that they
> > are happening, but most really need third-party evaluation,
> > calibration, and a solid explanation of what network pathologies they
> > do and don't cover. Also a RED team attitude towards them, as well as
> > thinking hard about what you are not measuring (operations research).
> >
> > I actually rather love the new Cloudflare speedtest, because it tests
> > a single TCP connection rather than dozens, even as folk
> > complain that it doesn't find the actual "speed!". Yet the
> > test itself more closely emulates a user experience than speedtest.net
> > does. I am personally pretty convinced that the fewer flows a web
> > page opens, the better the likelihood of a good user experience, but
> > I lack data on it.
> >
> > To try to tackle the evaluation and calibration part, I've reached out
> > to all the new test designers in the hope that we could get together
> > and produce a report of what each new test is actually doing. I've
> > tweeted, LinkedIn'd, emailed, and spammed every measurement list I know
> > of, with only some response. Please reach out to other test designer
> > folks and have them join the rpm email list.
> >
> > My principal kvetches in the new tests so far are:
> >
> > 0) None of the tests last long enough.
> >
> > Ideally there should be a mode where they at least run to "time of
> > first loss", or periodically, just run longer than the
> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> > there! It's really bad science to optimize the internet for 20
> > seconds. It's like optimizing a car to handle well for just 20
> > seconds.
> >
> > 1) Not testing up + down + ping at the same time
> >
> > None of the new tests actually test the same thing that the infamous
> > rrul test does - all the others still test up, then down, and ping. It
> > was/remains my hope that the simpler parts of the flent test suite -
> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> > tests would provide calibration to the test designers.
> >
> > We've got zillions of flent results in the archive, published here:
> > https://blog.cerowrt.org/post/found_in_flent/
> > ps. Misinformation about iperf 2 impacts my ability to do this.
>
> > The new tests have all added up + ping and down + ping, but not up +
> > down + ping. Why??
> >
> > What happens in that case is really non-intuitive, I know, but...
> > it's just one more phase to add to any one of those new tests. I'd be
> > deliriously happy if someone(s) new to the field started doing that,
> > even optionally, and boggled at how it defeated their assumptions.
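A sketch of one such defeated assumption: on an asymmetric plan, a saturated downlink generates ACK traffic that consumes uplink capacity before any upload even starts. The numbers below are illustrative, assuming delayed ACKs (one ACK per two 1448-byte segments) and roughly 66 bytes per ACK on the wire:

```python
def ack_load_bps(down_bps: float, mss: int = 1448,
                 segs_per_ack: int = 2, ack_wire_bytes: int = 66) -> float:
    # ACKs generated per second by a downlink running at down_bps,
    # converted back to bits/sec of uplink consumed.
    segments_per_sec = down_bps / 8 / mss
    return segments_per_sec / segs_per_ack * ack_wire_bytes * 8

down_bps, up_bps = 25e6, 3e6   # a 25/3 plan
load = ack_load_bps(down_bps)
print(f"ACK load: {load/1e3:.0f} kbit/s, "
      f"{100 * load / up_bps:.0f}% of the uplink")
```

Under these assumptions, nearly a fifth of the 3 Mbit uplink is already spoken for, which is why up + down + ping behaves so differently from testing each direction alone.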
> >
> > Among other things that would show...
> >
> > It's the home router industry's dirty secret that darn few "gigabit"
> > home routers can actually forward in both directions at a gigabit. I'd
> > like to smash that perception thoroughly, but given that our starting
> > point was a "gigabit router" that was really just a gigabit switch -
> > one that historically couldn't even forward at 200Mbit - we have a
> > long way to go there.
> >
> > Only in the past year have non-x86 home routers appeared that could
> > actually do a gbit in both directions.
> >
> > 2) Few are actually testing within-stream latency
> >
> > Apple's rpm project is making a stab in that direction. It looks
> > highly likely that, with a little more work, crusader and
> > go-responsiveness can finally start sampling the TCP RTT, loss, and
> > markings more directly. As for the rest... sampling TCP_INFO on
> > Windows and Linux, at least, always appeared simple to me, but I'm
> > discovering how hard it is by delving deep into the rust behind
> > crusader.
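For what it's worth, the Linux side of TCP_INFO sampling can be sketched in a few lines. The 68-byte offset of tcpi_rtt below matches Linux's struct tcp_info as currently laid out, but that layout is not a stable ABI across platforms, so treat it as an assumption rather than portable code:

```python
import socket
import struct

def sample_rtt_us(sock: socket.socket) -> int:
    """Return the kernel's smoothed RTT estimate (microseconds) for a
    connected TCP socket. Linux-only sketch: the offset assumes Linux's
    struct tcp_info, which starts with 8 one-byte fields followed by the
    u32s rto, ato, snd_mss, rcv_mss, unacked, sacked, lost, retrans,
    fackets, last_data_sent, last_ack_sent, last_data_recv,
    last_ack_recv, pmtu, rcv_ssthresh, rtt, ..."""
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    (tcpi_rtt,) = struct.unpack_from("I", info, 8 + 15 * 4)
    return tcpi_rtt
```

Sampling this periodically during a transfer gives the within-stream latency signal the new tests mostly aren't reporting yet.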
> >
> > The goresponsiveness test is also, IMHO, running WAY too many streams
> > at the same time, I guess motivated by an attempt to have the test
> > complete quickly?
> >
> > B) To try and tackle the validation problem:
>
> >
> > In the libreqos.io project we've established a testbed where tests can
> > be plunked through various ISP plan network emulations. It's here:
> > https://payne.taht.net (run bandwidth test for what's currently hooked
> > up)
> >
> > We could rather use an AS number and at least an IPv4 /24 and an IPv6
> > /48 to leverage with that, so I don't have to NAT the various
> > emulations. (And funding - anyone got funding?) Or, as the code is
> > GPLv2 licensed, we'd like to see more test designers set up a testbed
> > like this to calibrate their own stuff.
> >
> > Presently we're able to test:
> > flent
> > netperf
> > iperf2
> > iperf3
> > speedtest-cli
> > crusader
> > the Broadband Forum UDP-based test:
> > https://github.com/BroadbandForum/obudpst
> > trexx
> >
> > There's also a virtual machine setup from which we can remotely drive
> > a web browser (but I didn't want to NAT the results to the world) to
> > test other web services.
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC