From: Sebastian Moeller
Date: Thu, 12 Jan 2023 09:22:40 +0100
To: Dick Roy
Cc: "Rodney W. Grimes", mike.reynolds@netforecast.com, libreqos, "David P. Reed", Rpm, rjmcmahon, bloat
Subject: Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

Hi RR,

> On Jan 11, 2023, at 22:46, Dick Roy wrote:
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of Sebastian Moeller via Starlink
> Sent: Wednesday, January 11, 2023 12:01 PM
> To: Rodney W. Grimes
> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David P. Reed; Rpm; rjmcmahon; bloat
> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
>
> Hi Rodney,
>
> > On Jan 11, 2023, at 19:32, Rodney W. Grimes wrote:
> >
> > Hello,
> >
> > Y'all can call me crazy if you want.. but... see below [RWG]
> >> Hi Bob,
> >>
> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink wrote:
> >>>
> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one-way delays (OWD), and most assume that OWD is 1/2 of RTT, which is typically a mistake. We know this intuitively from airplane flight times or even car commute times, where the one-way time is not half of a round trip. Google Maps directions provide a time estimate for the one-way trip; they don't compute a round trip and divide by two.
> >>>
> >>> For those that can get clock sync working, the iperf 2 --trip-times option is useful.
> >>
> >> [SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load, and that can be done per direction. Sure, this is far inferior to real, reliably measured OWDs, but if life/the internet deals you lemons...
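[SM] A minimal sketch of what I mean (hypothetical helper names; this assumes clock drift is negligible over the measurement window, so the unknown clock offset is constant and cancels in the comparison):

    # Per-direction delta-latency under load with unsynchronized clocks.
    # pseudo_owd = rx_ts - tx_ts = true_owd + unknown_offset, and the
    # constant offset cancels when we subtract the idle baseline.
    def owd_delta_under_load(idle, loaded):
        """idle/loaded: lists of (tx_ts, rx_ts) pairs in seconds, one direction."""
        pseudo = lambda samples: [rx - tx for tx, rx in samples]
        baseline = min(pseudo(idle))            # best-case idle pseudo-OWD
        return [p - baseline for p in pseudo(loaded)]

Run once per direction, the deltas show how much each direction's queueing delay grows under load, without ever knowing the true offset.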
>
> > [RWG] iperf2/iperf3, etc. are already moving large amounts of data back and forth (for that matter, so does any rate test), so why not abuse some of that data and add the fundamental NTP clock-sync exchange, bidirectionally passing each other's concept of "current time"? IIRC (it's been 25 years since I worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between the two ends, and then use that info and time stamps in the data stream to compute OWDs. You need to put 4 time stamps in the packet, and with that you can compute the "offset".
>
> [RR] For this to work at a reasonable level of accuracy, the timestamping circuits on both ends need to be deterministic and repeatable, as I recall. Any uncertainty in that process adds to synchronization errors/uncertainties.
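[SM] For reference, a sketch of the textbook NTP four-timestamp arithmetic [RWG] is describing (illustrative only, not iperf code; t1..t4 are the usual client-send, server-receive, server-send, client-receive stamps):

    # Textbook NTP-style offset/delay from four timestamps:
    # t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
    def ntp_offset_delay(t1, t2, t3, t4):
        delay = (t4 - t1) - (t3 - t2)            # round trip minus server hold time
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
        return offset, delay

The derivation assumes the forward and return paths are symmetric; any asymmetry shows up as error in the offset, which is exactly where the deterministic timestamping [RR] mentions starts to matter.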
>
> [SM] Nice idea. I would guess that all timeslot-based access technologies (so Starlink, DOCSIS, GPON, LTE?) distribute "high-quality time" carefully to the "modems", so maybe all that would be needed is to expose that high-quality time to the LAN side of those modems, dressed up as an NTP server?
>
> [RR] It's not that simple! Distributing "high-quality time", i.e. "synchronizing all clocks", does not solve the communication problem in synchronous slotted MAC/PHYs!

[SM] I happily believe you, but the same idea of a "time slot" needs to be shared by all nodes, no? So the clocks need to run at reasonably similar rates, aka be synchronized (see below).

> All the technologies you mentioned above are essentially P2P, not intended for broadcast. Point is, there is a point controller (aka PoC), often called a base station (eNodeB, gNodeB, ...), that actually "controls everything that is necessary to control" at the UE, including time, frequency and sampling time offsets, and these are critical to get right if you want to communicate; they are ALL subject to the laws of physics (cf. the speed of light)! Turns out that what is necessary for the system to function anywhere near capacity is for all the clocks governing transmissions from the UEs to be "unsynchronized", such that all the UE transmissions arrive at the PoC at the same (prescribed) time!

[SM] Fair enough. I would call clocks that are "in sync", albeit with individual offsets, synchronized, but I am a layman and that might sound offensively wrong to experts in the field. Naming aside, my point is that all systems that depend on some notion of a shared time base are already halfway toward exposing that time to end users, by translating it into an NTP time source at the modem.

> For some technologies, in particular 5G, these considerations are ESSENTIAL. Feel free to scour the 3GPP LTE and 5G RLC and PHY specs if you don't believe me! :-)

[SM] Far be it from me not to believe you, so thanks for the pointers. Yet I still think that, unless different nodes of a shared segment move at significantly different speeds, there should be a common "tick duration" for all clocks, even if each clock runs at an offset... (I naively would try to implement something like that by fully synchronizing clocks and maintaining a local offset value to convert from "absolute" time to "network" time, as in the sketch below; but then, coming from the outside, I am likely blissfully unaware of the detail challenges that need to be solved.)
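In code, that naive bookkeeping might look like this (purely illustrative; the tick duration and how the offset gets learned are assumptions on my part):

    # Naive "network time" bookkeeping: each node keeps its own clock plus
    # a learned local-to-network offset, so slot boundaries agree network-wide.
    TICK = 0.001  # hypothetical slot duration in seconds

    def network_time(local_ts, offset):
        return local_ts + offset              # offset learned via some sync protocol

    def slot_index(local_ts, offset):
        return int(network_time(local_ts, offset) // TICK)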
Regards & Thanks
    Sebastian

>
> >>>
> >>> --trip-times
> >>>     enable the measurement of end-to-end write-to-read latencies (client and server clocks must be synchronized)
> >
> > [RWG] --clock-skew
> >     enable the measurement of the wall-clock difference between sender and receiver
> >
> >> [SM] Sweet!
> >>
> >> Regards
> >>     Sebastian
> >>
> >>> Bob
> >>>> I have many kvetches about the new latency under load tests being designed and distributed over the past year. I am delighted! that they are happening, but most really need third-party evaluation and calibration, a solid explanation of what network pathologies they do and don't cover, a RED-team attitude towards them, and hard thinking about what they are not measuring (operations research).
> >>>> I actually rather love the new Cloudflare speedtest, because it tests a single TCP connection, rather than dozens, and at the same time folk are complaining that it doesn't find the actual "speed!". Yet... the test itself more closely emulates a user experience than speedtest.net does. I am personally pretty convinced that the fewer flows a web page opens, the better the likelihood of a good user experience, but I lack data on it.
> >>>> To try to tackle the evaluation and calibration part, I've reached out to all the new test designers in the hope that we could get together and produce a report of what each new test is actually doing. I've tweeted, linked in, emailed, and spammed every measurement list I know of, and only to some response; please reach out to other test designer folks and have them join the rpm email list?
> >>>> My principal kvetches in the new tests so far are:
> >>>> 0) None of the tests last long enough. Ideally there should be a mode where they at least run to "time of first loss", or periodically, just run longer than the industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons there! It's really bad science to optimize the internet for 20 seconds. It's like optimizing a car, to handle well, for just 20 seconds.
> >>>> 1) Not testing up + down + ping at the same time. None of the new tests actually test the same thing that the infamous rrul test does; all the others still test up, then down, then ping. It was/remains my hope that the simpler parts of the flent test suite - such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair tests - would provide calibration to the test designers. We've got zillions of flent results in the archive published here: https://blog.cerowrt.org/post/found_in_flent/
> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
> >>>> The new tests have all added up + ping and down + ping, but not up + down + ping. Why?? The behaviors of what happens in that case are really non-intuitive, I know, but... it's just one more phase to add to any one of those new tests. I'd be deliriously happy if someone(s) new to the field started doing that, even optionally, and boggled at how it defeated their assumptions.
> >>>> Among other things that would show: it's the home router industry's dirty secret that darn few "gigabit" home routers can actually forward in both directions at a gigabit. I'd like to smash that perception thoroughly, but given that our starting point, the "gigabit router", was historically a "gigabit switch" that often couldn't even forward at 200 Mbit, we have a long way to go there. Only in the past year have non-x86 home routers appeared that could actually do a gbit in both directions.
> >>>> 2) Few are actually testing within-stream latency. Apple's rpm project is making a stab in that direction. It looks highly likely that, with a little more work, crusader and go-responsiveness can finally start sampling the TCP RTT, loss and markings more directly. As for the rest... sampling TCP_INFO on Windows and Linux, at least, always appeared simple to me, but I'm discovering how hard it is by delving deep into the Rust behind crusader.
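[SM] For what it's worth, on Linux the sampling side does look simple; a minimal sketch (the struct tcp_info field layout follows linux/tcp.h and is kernel-dependent, so treat the offsets here as an assumption):

    # Sample the smoothed RTT of a connected TCP socket via TCP_INFO (Linux only).
    import socket
    import struct

    def tcp_srtt_us(sock):
        # First 104 bytes of struct tcp_info: 8 x u8 followed by u32 fields;
        # tcpi_rtt (microseconds) is the 16th u32 per linux/tcp.h (kernel-dependent!).
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
        fields = struct.unpack("=8B24I", raw)
        return fields[8 + 15]                  # tcpi_rtt, microseconds

Polled, say, once per second during a transfer, this yields within-stream RTT samples without any cooperation from the remote end; Windows would need its own per-connection TCP stats machinery instead.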
> >>>> the goresponsiveness thing is also IMHO running WAY too many streams at the same time, I guess motivated by an attempt to have the test complete quickly?
> >>>> B) To try and tackle the validation problem: in the libreqos.io project we've established a testbed where tests can be plunked through various ISP-plan network emulations. It's here: https://payne.taht.net (run a bandwidth test for what's currently hooked up). We could rather use an AS number and at least an ipv4/24 and ipv6/48 to leverage with that, so I don't have to NAT the various emulations. (And funding, anyone got funding?) Or, as the code is GPLv2-licensed, we'd like to see more test designers set up a testbed like this to calibrate their own stuff.
> >>>> Presently we're able to test:
> >>>> flent
> >>>> netperf
> >>>> iperf2
> >>>> iperf3
> >>>> speedtest-cli
> >>>> crusader
> >>>> the Broadband Forum UDP-based test: https://github.com/BroadbandForum/obudpst
> >>>> trexx
> >>>> There's also a virtual-machine setup that we can remotely drive a web browser from (but I didn't want to NAT the results to the world) to test other web services.
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink