From: Ben Greear
Organization: Candela Technologies
To: starlink@lists.bufferbloat.net
Date: Fri, 9 Jul 2021 12:08:17 -0700
Subject: Re: [Starlink] Starlink and bufferbloat status?

On 7/9/21 11:40 AM, David P. Reed wrote:
> Early measurements of Starlink's performance have shown significant
> bufferbloat, as Dave Taht has shown.
>
> But... Starlink is a moving target. The bufferbloat isn't a hardware
> issue; it should be completely manageable, starting with simple
> firmware changes inside the Starlink system itself. For example,
> implementing fq_codel so that bottleneck links just drop packets
> according to the Best Practices RFC.
>
> So I'm hoping this has improved since Dave's measurements. How much
> has it improved? What's the current maximum packet latency under full
> load? I've heard anecdotally that a friend of a friend gets 84 msec.
> *ping times under full load*, but he wasn't using flent or some other
> measurement tool of good quality that gives a true number.
>
> 84 msec. is not great - it's marginal for a Zoom-quality experience
> (you want latencies significantly less than 100 msec. as a rule of
> thumb for teleconferencing quality). But it is better than Dave's
> measurements showed.
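As a reference point, a run that measures latency under full load with
flent (the RRUL test David mentions) looks roughly like the following;
the interface name, server hostname, and options here are illustrative,
not a recommendation for any particular setup:

    # Put fq_codel on an egress interface (Linux)
    tc qdisc replace dev eth0 root fq_codel

    # RRUL: 60 seconds of bidirectional load while sampling latency
    flent rrul -l 60 -H netperf-eu.bufferbloat.net \
        -t "starlink-under-load" -o starlink-rrul.png
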
> Now Musk bragged that his network was "low latency," unlike other
> high-speed services, which means low end-to-end latency. That got him
> permission from the FCC to operate Starlink at all. His number was, I
> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
> he probably meant just the time from the ground station to the
> terminal through the satellite. But I regularly get 17 msec. between
> California and Massachusetts over the public Internet.)
>
> So 84 might be the current status. That would mean that someone at
> Starlink might be paying some attention, but it is a long way from
> what Musk implied.
>
> PS: I forget the number of the RFC, but the number of packets queued
> on an egress link should be chosen by taking the hardware bottleneck
> throughput of any path, combined with an underlying end-to-end
> Internet delay of about 10 msec. to account for hops between source
> and destination. Let's say Starlink allocates 50 Mb/sec to each
> customer, and packets are limited to 12,000 bits (1500 bytes * 8);
> then the outbound queues should be limited to about
> 0.01 * 50,000,000 / 12,000, which comes out to about 40 packets of
> buffering from each terminal, total, in the path from terminal to
> public Internet, assuming the connection to the public Internet is
> not a problem.

There is no need to queue more than a single frame IF you can
efficiently transmit a single frame and if you can be fed new frames as
quickly as you want them. WiFi cannot do either of these things, of
course, and probably the dish cannot either, so you will need to buffer
some stuff.

For WiFi, for best throughput, you want to send larger AMPDU chains, so
you may want to buffer per TID and per user, up to 64 or so frames.
That is too much buffering if you have 100 stations each using 4 TIDs,
though, so then you start making tradeoffs of throughput vs. latency
(maybe force all frames to the same station onto the same TID for
better aggregation, etc.). There is no perfect answer to this in
general.

If you are just trying to stream movies over WiFi to people on a plane,
then latency matters very little and you use all the buffers you can.
If you have a call center using VoIP over WiFi, then throughput doesn't
matter much and instead you optimize for latency. And for everyone
else, you pick something in the middle.

Queueing in the AP and dish shouldn't care at all about total latency;
that is more of a TCP windowing issue. TCP should definitely care about
total latency.

And this is all my opinion, of course...

Thanks,
Ben

-- 
Ben Greear
Candela Technologies Inc  http://www.candelatech.com
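
For concreteness, the queue-sizing rule of thumb from David's PS can be
written out as a tiny script. The inputs are the assumptions from that
paragraph (a 50 Mb/sec per-customer allocation, 10 msec. of underlying
delay, 1500-byte packets), not measured Starlink figures:

    # Rule-of-thumb egress queue size: the delay-bandwidth product,
    # expressed in packets. All inputs are assumed values from the
    # PS above, not measurements.
    RATE_BPS = 50_000_000   # assumed per-customer allocation, bits/sec
    DELAY_S = 0.010         # assumed underlying path delay, 10 msec.
    PKT_BITS = 1500 * 8     # max packet size: 1500 bytes = 12,000 bits

    queue_limit = RATE_BPS * DELAY_S / PKT_BITS
    print(f"egress queue limit: about {queue_limit:.0f} packets")  # ~42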