[Starlink] Starlink and bufferbloat status?

Ben Greear greearb at candelatech.com
Fri Jul 9 15:08:17 EDT 2021


On 7/9/21 11:40 AM, David P. Reed wrote:
> Early measurements of performance of Starlink have shown significant bufferbloat, as Dave Taht has shown.
> 
> But...  Starlink is a moving target. The bufferbloat isn't a hardware issue; it should be completely manageable, starting with simple firmware changes inside the 
> Starlink system itself - for example, implementing fq_codel so that bottleneck links drop packets according to the Best Practices RFC.
> 
> So I'm hoping this has improved since Dave's measurements. How much has it improved? What's the current maximum packet latency under full load? I've heard 
> anecdotally that a friend of a friend gets 84 msec. *ping times under full load*, but he wasn't using flent or another measurement tool of good quality that 
> gives a true number.
> 
> 84 msec is not great - it's marginal for a Zoom-quality experience (you want latencies significantly less than 100 msec. as a rule of thumb for teleconferencing 
> quality). But it is better than Dave's measurements showed.
> 
> Now Musk bragged that his network was "low latency" unlike other high-speed services, meaning low end-to-end latency. That got him permission from the FCC 
> to operate Starlink at all. His number was, I think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he probably meant just the time from the 
> ground station to the terminal through the satellite. But I regularly get 17 msec. between California and Massachusetts over the public Internet.)
> 
> So 84 might be the current status. That would mean that someone at Starlink might be paying some attention, but it is a long way from what Musk implied.
> 
> PS: I forget the number of the RFC, but the number of packets queued on an egress link should be chosen by taking the hardware bottleneck throughput of the 
> path, combined with an end-to-end Internet underlying delay of about 10 msec. to account for hops between source and destination. Let's say Starlink allocates 50 
> Mb/sec to each customer, and packets are limited to 12,000 bits (1500 bytes * 8), so the outbound queues should be limited to about 0.01 * 50,000,000 / 12,000, 
> which comes out to about 42 packets of buffering from each terminal, total, in the path from terminal to public Internet, assuming the connection to the public 
> Internet is not a problem.
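
For what it's worth, that sizing rule is just a bandwidth-delay product. Here is a quick
Python sketch using David's assumed numbers (50 Mb/sec per customer, 10 msec. delay
budget - illustrative figures from his text, not measured Starlink ones):

# Bandwidth-delay product buffer sizing, per David's example above.
# All constants are assumptions from his text, not measured Starlink figures.

RATE_BPS = 50_000_000        # assumed per-customer rate, bits/sec
DELAY_S = 0.010              # assumed end-to-end delay budget, seconds
FRAME_BITS = 1500 * 8        # full-size Ethernet frame, 12,000 bits

bdp_bits = RATE_BPS * DELAY_S            # 500,000 bits in flight
queue_limit = bdp_bits / FRAME_BITS      # ~42 full-size frames

print(f"BDP: {bdp_bits:,.0f} bits -> queue limit ~{queue_limit:.0f} frames")

So roughly 42 full-size frames of static buffer; something like fq_codel would manage
the same delay budget dynamically rather than as a hard tail-drop limit.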

There is no need to queue more than a single frame IF you can efficiently transmit a single frame and you can be fed new frames
as quickly as you want them.  WiFi cannot do either of these things, of course, and probably neither can the dish,
so you will need to buffer some amount.  For WiFi, for best throughput, you want to send larger AMPDU chains, so you may want
to buffer up to 64 or so frames per TID and per user.  That is too much buffering if you have 100 stations each using 4 TIDs,
though, so then you start making tradeoffs of throughput vs latency: maybe force all frames destined for the same station onto
the same TID for better aggregation, etc.
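
To make that concrete, here is a rough Python sketch of the kind of per-station, per-TID
buffering I mean (the 64-frame cap, the global cap, and the TID-collapsing threshold are
all illustrative numbers, not from any real driver):

from collections import defaultdict, deque

PER_TID_LIMIT = 64      # frames per (station, TID) for decent AMPDU aggregation
GLOBAL_LIMIT = 2048     # total frames across all stations and TIDs

class APBuffer:
    def __init__(self):
        self.queues = defaultdict(deque)    # (station, tid) -> queued frames
        self.total = 0

    def enqueue(self, station, tid, frame):
        # Under heavy load, collapse all of a station's traffic onto one
        # TID so frames can still aggregate into large AMPDUs.
        if self.total >= GLOBAL_LIMIT // 2:
            tid = 0
        q = self.queues[(station, tid)]
        if len(q) >= PER_TID_LIMIT or self.total >= GLOBAL_LIMIT:
            return False    # drop: bounding latency at the cost of throughput
        q.append(frame)
        self.total += 1
        return True

With 100 stations each using 4 TIDs, the per-TID cap alone would allow 100 * 4 * 64 =
25,600 buffered frames, which is why the global cap and the TID-collapsing matter.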

There is no perfect answer to this in general.  If you are just trying to stream movies over WiFi to people on a plane, then
latency matters very little and you use all the buffers you can.  If you have a call center using VoIP over WiFi, then
throughput doesn't matter much and instead you optimize for latency.  And for everyone else, you pick something in
the middle.

Queueing in the AP and dish shouldn't care at all about total latency;
that is more of a TCP windowing issue.  TCP should definitely care about total latency.

And this is all my opinion of course...

Thanks,
Ben



-- 
Ben Greear <greearb at candelatech.com>
Candela Technologies Inc  http://www.candelatech.com



