From: Dave Taht <dave.taht@gmail.com>
To: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Starlink ISL data
Date: Sun, 26 Mar 2023 19:32:31 -0700
Message-ID: <CAA93jw4kcfXiwMw0zKAuQV3BjpWp6ZvQ2TQFbtEy9CFRN5aY2w@mail.gmail.com>
In-Reply-To: <CAA93jw6vMqcCK8TvhtvyAPs1B0-B180Xikm2pE4HN6_BQWkD8Q@mail.gmail.com>

Nate gave me the opportunity to test a bit of IPv6 access on one of
his dishys today. I am also going to give p2p a shot, and he
conveniently has BBR2 installed. It is of course difficult to discern
the difference between transport behaviors and the underlying
connectivity - for example, the attached BBR2 result has a baseline
(idle!) latency jump of over 40 ms that is hard to explain.
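
For anyone who wants to reproduce this kind of run, something like the
following is roughly it - a sketch only: the server hostname below is a
placeholder, and bbr2 assumes a kernel with that module available:

  # list the congestion control algorithms this kernel offers
  sysctl net.ipv4.tcp_available_congestion_control

  # select the one under test (cubic is the usual default)
  sudo sysctl -w net.ipv4.tcp_congestion_control=bbr2

  # 5-minute flent download test over IPv6; -t tags the data file and plots
  flent tcp_ndown -6 -H flent.example.net -l 300 -t starlink-ipv6-bbr2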

Despite these plots being labeled as downloads, they were essentially
uploads from his box over the Internet.

The summary of the data I have so far in this direction: CUBIC usually
looks like CUBIC, but not always. BBR rarely looks like BBR.
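
(One quick, rough way to eyeball that from a single-flow capture - the
pcap filename here is a placeholder - is the per-second rate series,
whose sawtooth vs. plateau shape gives a first read on the congestion
control behavior:)

  # one-second interval packet/byte counts across the whole capture
  tshark -r single-flow.pcap -q -z io,stat,1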

On Fri, Mar 24, 2023 at 2:46 PM Dave Taht <dave.taht@gmail.com> wrote:
>
> Vint just asked me a difficult question over here about the
> performance of the ISL links for my upcoming AMA next week
>
> https://twitter.com/mtaht/status/1639361656106156032
>
> And to date, we really don't know. We do know it is up, but...
>
> Has anyone managed to measure p2p IPv6 performance, Starlink to
> Starlink, over an ISL link yet? Do we know anyone at the poles? In
> general I always look for flent and irtt data, but I'd settle for,
> oh, call it a 5-10 minute packet capture of a single iperf flow
> running over TCP CUBIC (BBR would be great too)... one test in each
> direction.
>
> (In fact, that would be great from any Starlink terminal to any of my
> servers around the world.)
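
To make that concrete, a rough sketch of such a pair of tests from the
Starlink side - the server hostname and interface name are placeholders,
and the exact flags may need adjusting for your setup:

  # ~10 minutes of a single cubic upload flow, headers only
  sudo timeout 620 tcpdump -i eth0 -s 128 -w starlink-cubic-up.pcap host server.example.net &
  iperf3 -c server.example.net -6 -t 600 -C cubic
  wait

  # the same thing reversed, for the download direction
  sudo timeout 620 tcpdump -i eth0 -s 128 -w starlink-cubic-down.pcap host server.example.net &
  iperf3 -c server.example.net -6 -t 600 -C cubic -R
  wait
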
>
> I have some data from a couple of you (thanks Ulrich in particular!),
> but I have not sat down to take it apart, as I have been far, far too
> busy with LibreQoS and a bunch of nice, small, competent ISPs deploying
> that to worry about fixing a billionaire's network all that much...
> but I've set aside next week to answer AMAs about everything from all
> and sundry, so if you have data, please share.
>
> --
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC



-- 
https://www.youtube.com/watch?v=tAVwmUG21OY&t=6483s
Dave Täht CEO, TekLibre, LLC

[Attached plots: tcp_ndown_-_starlink-ipv6-cubic.png, tcp_ndown_-_starlink-ipv6-bbr2.png]
