From: Dave Taht
Date: Fri, 13 Jan 2023 12:40:51 -0800
To: Pat Jensen
Cc: Jonathan Bennett, starlink@lists.bufferbloat.net
Subject: Re: [Starlink] insanely great waveform result for starlink

On Fri, Jan 13, 2023 at 12:26 PM Pat Jensen wrote:
>
> Dave,
>
> From Central California (using Starlink Los Angeles POP w/ live IPv6)
> via fremont.starlink.taht.net at 12:20PM PST non-peak
> Wired via 1000baseT with no other users on the network
>
>                     Min       Mean      Median    Max       Stddev
>                     ---       ----      ------    ---       ------
> RTT                 24.5ms    51.22ms   48.75ms   262.5ms   14.92ms
> send delay          76.44ms   94.79ms   92.69ms   288.7ms   12.06ms
> receive delay       -54.57ms  -43.58ms  -44.87ms  7.67ms    6.21ms
>
> IPDV (jitter)       92.5µs    4.39ms    2.98ms    80.71ms   4.26ms
> send IPDV           2.06µs    3.85ms    2.99ms    80.76ms   3.25ms
> receive IPDV        0s        1.5ms     36.5µs    49.72ms   3.18ms
>
> send call time      6.46µs    34.9µs              1.32ms    15.9µs
> timer error         0s        72.1µs              4.97ms    69.2µs
> server proc. time   620ns     4.19µs              647µs     6.34µs

Thank you, Pat. In general, I am almost never interested in summary
statistics like these, but in plotting the long-term behaviors, as
Nathan just did: starting with an idle baseline and then adding various
loads.
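[For readers puzzling over the IPDV rows: irtt's IPDV is the
instantaneous packet delay variation, i.e. the absolute difference
between consecutive packets' delays, in the spirit of RFC 3393. A toy
illustration with made-up delay values, not irtt's actual code:]

```python
# IPDV (instantaneous packet delay variation, RFC 3393): the absolute
# difference between the delays of consecutive packets. The delay
# samples below are invented, for illustration only.
def ipdv(delays_ms):
    """Return |delay[i+1] - delay[i]| for each consecutive pair, in ms."""
    return [round(abs(b - a), 2) for a, b in zip(delays_ms, delays_ms[1:])]

print(ipdv([48.7, 51.2, 49.0, 80.0]))  # → [2.5, 2.2, 31.0]
```

Note how one late packet (80.0 ms) produces one large IPDV sample, which
is why the Max column above is so far from the Median.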
Could you slide him the full output of this test, since he's got the
magic plotting script? (My json-fu is non-existent.)

And then, the same test against the waveform, cloudflare, and flent
loads?

> duration: 5m1s (wait 787.6ms)
> packets sent/received: 99879/83786 (16.11% loss)
> server packets received: 83897/99879 (16.00%/0.13% loss up/down)
> late (out-of-order) pkts: 1 (0.00%)
> bytes sent/received: 5992740/5027160
> send/receive rate: 159.8 Kbps / 134.1 Kbps
> packet length: 60 bytes
> timer stats: 121/100000 (0.12%) missed, 2.40% error
>
> patj@air-2 ~ % curl ipinfo.io
> {
>   "ip": "98.97.140.145",
>   "hostname": "customer.lsancax1.pop.starlinkisp.net",
>   "city": "Los Angeles",
>   "region": "California",
>   "country": "US",
>   "loc": "34.0522,-118.2437",
>   "org": "AS14593 Space Exploration Technologies Corporation",
>   "postal": "90009",
>   "timezone": "America/Los_Angeles",
>   "readme": "https://ipinfo.io/missingauth"
> }%
>
> Pat
>
> On 2023-01-13 09:26, Dave Taht via Starlink wrote:
> > packet caps would be nice... all this is very exciting news.
> >
> > I'd so love for one or more of y'all reporting such great uplink
> > results nowadays to duplicate and re-plot the original irtt tests we
> > did:
> >
> > irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o whatever.json
> >
> > They MUST have changed their scheduling to get such amazing uplink
> > results, in addition to better queue management.
> >
> > (for the record, my servers are de, london, fremont, sydney, dallas,
> > newark, atlanta, singapore, mumbai)
> >
> > There's an R and gnuplot script for plotting that output around here
> > somewhere (I have largely personally put down the starlink project,
> > loaning out mine) - that went by on this list... I should have written
> > a blog entry so I can find that stuff again.
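[A rough sketch of pulling a plottable RTT series out of that irtt JSON.
This is not the R/gnuplot script mentioned above; the key names
(`round_trips`, nanosecond `delay.rtt`, `timestamps.client.send.wall`)
are assumptions from memory and may need checking against an actual
`irtt client -o` file:]

```python
# Sketch: extract per-packet RTTs from the JSON that
# `irtt client -o whatever.json` writes, as a gnuplot/R-friendly
# (seconds-since-start, RTT-in-ms) series.
# Assumed layout (may need adjusting): a top-level "round_trips" list,
# each entry carrying "timestamps.client.send.wall" (ns since epoch)
# and "delay.rtt" (ns); lost packets have no "rtt" key.
import json

def rtt_series(path):
    with open(path) as f:
        round_trips = json.load(f).get("round_trips", [])
    samples, t0 = [], None
    for rt in round_trips:
        delay = rt.get("delay", {})
        if "rtt" not in delay:      # lost packet: no RTT was measured
            continue
        wall = rt["timestamps"]["client"]["send"]["wall"]
        t0 = wall if t0 is None else t0
        samples.append(((wall - t0) / 1e9, delay["rtt"] / 1e6))
    return samples

# Usage:
#   for t, rtt in rtt_series("whatever.json"):
#       print(f"{t:.3f} {rtt:.3f}")   # two columns: time_s  rtt_ms
```

Redirecting those two columns to a file gives something gnuplot can
plot directly with `plot 'rtt.dat' with lines`, which makes the
idle-baseline-then-load behavior visible in a way the summary table
above can't show.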
> >
> > On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink wrote:
> >>
> >> On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink wrote:
> >>>
> >>> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
> >>> >
> >>> > From Auckland, New Zealand, using a roaming subscription, it puts me
> >>> > in touch with a server 2000 km away. OK then:
> >>> >
> >>> > IP address: nix six.
> >>> >
> >>> > My thoughts shall follow later.
> >>>
> >>> OK, so here we go.
> >>>
> >>> I'm always a bit skeptical when it comes to speed tests - they're
> >>> really laden with so many caveats that it's not funny. I took our new
> >>> work Starlink kit home in December to give it a try, and the other day
> >>> finally got around to setting it up. It's on a roaming subscription
> >>> because our badly built-up campus really isn't ideal in terms of a
> >>> clear view of the sky. Oh - and did I mention that I used the Starlink
> >>> Ethernet adapter, not the WiFi?
> >>>
> >>> Caveat 1: Location, location. I live in a place where the best
> >>> Starlink promises is about 1/3 of the data rate you can actually get
> >>> from fibre to the home, at under half of Starlink's price. Read: there
> >>> are few Starlink users around. I might be the only one in my suburb.
> >>>
> >>> Caveat 2: Auckland has three Starlink gateways close by: Clevedon
> >>> (which is at a stretch daytrip cycling distance from here), Te Hana,
> >>> and Puwera, the most distant of the three and about 130 km away from
> >>> me as the crow flies. Read: my dishy can use any satellite that any of
> >>> these three can see, and then, depending on where I put it and how
> >>> much of the southern sky it can see, maybe also the one in Hinds,
> >>> 840 km away, although that is obviously stretching it a bit. Either
> >>> way, that's plenty of options for my bits to travel without needing a
> >>> lot of handovers. Why?
> >>> Easy: if your nearest teleport is close by, then the set of
> >>> satellites that the teleport can see and the set that you can see are
> >>> almost the same, so you can essentially stick with the same satellite
> >>> while it's in view for you, because it'll also be in view for the
> >>> teleport. Pretty much any bird above you will do.
> >>>
> >>> And because I don't get a lot of competition from other users in my
> >>> area vying for one of the few available satellites that can see both
> >>> us and the teleport, this is about as good as it gets at 37S latitude.
> >>> If I wanted it any better, I'd have to move a lot further south.
> >>>
> >>> It'd be interesting to hear from Jonathan what the availability of
> >>> home broadband is like in the Dallas area. I note that it's at a lower
> >>> latitude (33N) than Auckland, but the difference isn't huge. I notice
> >>> two teleports, each about 160 km away, which is also not too bad. I
> >>> also note Starlink availability in the area is restricted at the
> >>> moment - oversubscribed? But if Jonathan gets good data rates, then
> >>> that means that competition for bird capacity can't be too bad - for
> >>> whatever reason.
> >>
> >> I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway.
> >> In cities like Dallas, and Lawton where I live, there are good
> >> broadband options. But there are also many people who live outside
> >> cities, and the options are much worse. The low-density userbase in
> >> rural Oklahoma and Texas probably presents ideal conditions for
> >> Starlink.
> >>>
> >>> Caveat 3: Backhaul. There isn't just one queue between me and
> >>> whatever I talk to in terms of my communications. Traceroute shows
> >>> about 10 hops between me and the University of Auckland via Starlink.
> >>> That's 10 queues, not one. Many of them will have cross traffic.
> >>> So it's a bit hard to tell where our packets really get to wait or
> >>> where they get dropped. The insidious bit here is that a lot of them
> >>> will be on links between 1 Gb/s and 10 Gb/s, and with a bit of cross
> >>> traffic, they can all turn into bottlenecks. This isn't like a
> >>> narrowband GEO link of a few Mb/s, where it's obvious where the
> >>> dominant long-latency bottleneck in your TCP connection's path is.
> >>> Read: it's pretty hard to tell whether a drop in "speed" is due to a
> >>> performance issue in the Starlink system or somewhere between
> >>> Starlink's systems and the target system.
> >>>
> >>> I see RTTs here between 20 ms and 250 ms, where the physical latency
> >>> should be under 15 ms. So there's clearly a bit of buffer here along
> >>> the chain that occasionally fills up.
> >>>
> >>> Caveat 4: Handovers. Handover between birds and teleports is
> >>> inevitably associated with a change in RTT, and in most cases also in
> >>> available bandwidth. Plus your packets now arrive at a new queue on a
> >>> new satellite while your TCP is still trying to respond to whatever
> >>> it thought the queue on the previous bird was doing. Read: whatever
> >>> your cwnd is immediately after a handover, it's probably not what it
> >>> should be.
> >>>
> >>> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
> >>> speedtest.net tests each to five local servers. Average upload rate
> >>> was 13 Mb/s, average download 75.5 Mb/s. Upload to the server of the
> >>> ISP that Starlink seems to be buying its local connectivity from
> >>> (Vocus Group) varied between 3.04 and 14.38 Mb/s, download between
> >>> 23.33 and 52.22 Mb/s, with RTTs between 37 and 56 ms not correlating
> >>> well with the rates observed. In fact, they were the ISP with
> >>> consistently the worst rates.
> >>>
> >>> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
> >>> between 106.5 and 183.8 Mb/s down, again with RTTs correlating badly
> >>> with rates. Average RTT was the same as for Vocus.
> >>>
> >>> Note the variation, though: more or less a factor of two between the
> >>> highest and lowest rates for each ISP. Did MyRepublic just get lucky
> >>> in my tests? Or is there something systematic behind this? Way too
> >>> few tests to tell.
> >>>
> >>> What these tests do is establish a ballpark.
> >>>
> >>> I'm currently repeating the tests with the dish placed on a trestle
> >>> closer to the heavens. This seems to have translated into fewer
> >>> outages / ping losses (around 1/4 of what I had yesterday with dishy
> >>> on the ground on my deck). Still good enough for a lengthy video
> >>> Skype call with my folks in Germany, although they did comment about
> >>> reduced video quality. But maybe that was the lighting, or the
> >>> different background, as I wasn't in my usual spot with my laptop
> >>> when I called them.
> >>
> >> Clear view of the sky is king for Starlink reliability. I've got my
> >> dishy mounted on the back fence, looking up over an empty field, so
> >> it's pretty much best-case scenario here.
> >>>
> >>> --
> >>> ****************************************************************
> >>> Dr.
Ulrich Speidel
> >>>
> >>> School of Computer Science
> >>> Room 303S.594 (City Campus)
> >>> The University of Auckland
> >>> u.speidel@auckland.ac.nz
> >>> http://www.cs.auckland.ac.nz/~ulrich/
> >>> ****************************************************************
> >>>
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink

-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz

Dave Täht
CEO, TekLibre, LLC