From: "David P. Reed" <dpreed@deepplum.com>
To: "David Lang" <david@lang.hm>
Cc: "Jonathan Morton", bloat <bloat@lists.bufferbloat.net>
Date: Thu, 11 Jun 2020 15:16:17 -0400 (EDT)
Subject: Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims

On Thursday, June 11, 2020 2:56pm, "David Lang" <david@lang.hm> said:

> We will see, but since the answer to satellite-satellite communication being the
> bottleneck is to launch more satellites, this boils down to investment vs
> service quality. Since they are making a big deal about the latency, I expect
> them to work to keep it acceptable.

We'll see. I should have mentioned that the AT&T network actually had adequate capacity, as did Comcast's network when it was bloated like crazy (as Jim Gettys will verify).

As I have said way too often: the problem isn't throughput-related, can't be measured by achieved throughput, and can't be controlled by adding capacity alone.

The problem is the lack of congestion signalling that can stop the *source* from sending more than its share.
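
To make that concrete, here is a rough Python sketch of what such signalling does. The link rate, the 5 ms marking threshold, and the toy back-off rule are all assumptions chosen for illustration, not the behaviour of any real stack: the bottleneck marks traffic once its queue implies too much delay, and a sender that heeds the marks keeps the delay tiny, while one that ignores them just piles it up.

# Toy model, illustrative numbers only: a bottleneck that marks traffic when
# its queue implies more than 5 ms of delay, and a sender that either heeds
# or ignores those marks.  Not a model of any real stack or product.

LINK_RATE = 125_000_000      # 1 Gbit/s bottleneck, expressed in bytes/second
MARK_DELAY = 0.005           # mark once queued bytes imply > 5 ms of delay
TICK = 0.001                 # 1 ms simulation step

def standing_delay(seconds=2.0, heed_marks=True):
    queue = 0.0              # bytes sitting in the bottleneck queue
    rate = 2 * LINK_RATE     # sender starts out offering twice the link rate
    for _ in range(int(seconds / TICK)):
        queue += rate * TICK                         # arrivals this tick
        queue = max(0.0, queue - LINK_RATE * TICK)   # departures this tick
        marked = (queue / LINK_RATE) > MARK_DELAY
        if marked and heed_marks:
            rate /= 2                                # back off on a congestion mark
        else:
            rate += 125_000                          # otherwise keep ramping up
    return queue / LINK_RATE                         # delay a new packet would see

print("queueing delay, marks heeded:  %8.1f ms" % (1e3 * standing_delay(heed_marks=True)))
print("queueing delay, marks ignored: %8.1f ms" % (1e3 * standing_delay(heed_marks=False)))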

That's all that matters. I see bufferbloat in 10-100 GigE datacenter networks quite frequently (esp. Arista switches!). Some would think that "fat pipes" solve this problem. They don't. Some think priority eliminates the problem. It doesn't, unless there is congestion signalling in operation.
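
The arithmetic behind that is short. The buffer sizes and link rates below are made-up illustrative figures, not the specs of any particular switch or modem: once a FIFO has filled, every packet crossing it waits buffer size divided by drain rate, however fast the link is.

# Standing delay added by a FIFO that has filled up and stays full.
# Buffer sizes and link rates are assumptions for illustration.

def standing_delay_ms(buffer_bytes, link_bits_per_second):
    return 1e3 * buffer_bytes * 8 / link_bits_per_second

examples = [
    ("32 MB of buffer on a 10 GigE port",         32e6, 10e9),
    ("32 MB of buffer on a 100 GigE port",        32e6, 100e9),
    ("8 MB of modem buffer on a 25 Mbit/s link",   8e6, 25e6),
]
for label, buf, rate in examples:
    print("%-45s %8.1f ms of added delay" % (label, standing_delay_ms(buf, rate)))

A fatter pipe shrinks the number, but it doesn't remove it: with nothing telling the sources to slow down, the buffer just fills again, and even a couple of milliseconds of standing queue is an eternity inside a datacenter.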

Yes, using a single TCP end-to-end connection over an unloaded network you can get 100% throughput. The problem isn't the hardware at all. It's the switching logic that just builds up queues till they are intolerably long, at which point the queues cannot drain, so they stay full as long as the load remains.

In the iPhone case, when a page didn't download in a flash, what did users do? Well, they clicked on the link again. Underneath it all, all the packets that were stuffed into the pipe toward that user remained queued, and a whole lot more got shoved in. And the user kept hitting the button. If the queue holds 2 seconds of data at the bottleneck rate, it stays full for as long as users keep clicking on the link.
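
Rough numbers for that scenario, where the page size, bottleneck rate, and click interval are all assumptions picked for illustration:

# Back-of-the-envelope: an impatient user re-clicking a link that is stuck
# behind a full bottleneck queue.  All numbers are illustrative assumptions.

page_bytes   = 3_000_000    # each click queues another ~3 MB copy of the page
rate_bytes_s = 1_000_000    # ~8 Mbit/s bottleneck toward the user
click_every  = 2.0          # seconds of patience before the next click

backlog = 0.0
for click in range(1, 6):
    backlog += page_bytes                                     # another copy queued up
    backlog = max(0.0, backlog - rate_bytes_s * click_every)  # drained before next click
    print("after click %d: %.1f MB still queued, %.1f s of queueing delay"
          % (click, backlog / 1e6, backlog / rate_bytes_s))

The backlog grows with every click; nothing the user does from the receiving end can drain it faster.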

You REALLY must think about this scenario, and get it in your mind that throughput doesn't eliminate congestion, especially when computers can do a lot of work on your behalf every time you ask them.

One request packet - thousands of response packets, and no one telling the sources that they should slow down.
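
The amplification is easy to put numbers on; the request and page sizes here are just example figures:

# One small request fans out into thousands of full-size response packets.
# Request and page sizes are example figures, not measurements.

segment_bytes = 1460                       # data per full-size TCP segment
request_bytes = 500                        # one small HTTP GET
page_bytes    = 3_000_000                  # a ~3 MB page coming back

response_packets = -(-page_bytes // segment_bytes)   # ceiling division
print("1 request packet of %d bytes -> about %d response packets (%dx the bytes)"
      % (request_bytes, response_packets, page_bytes // request_bytes))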

For all of this, there is a known fix: don't queue more than 2 x RTT x bottleneck rate worth of packets in any switch anywhere. That's been in a best-practice RFC forever, and it is ignored almost always. Cake and other algorithms do even better, by queuing less than that in any bottleneck-adjacent queue.
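
Worked out, that rule of thumb looks like this; the RTTs and rates are example values only:

# The rule of thumb from the text: never buffer more than 2 x RTT x bottleneck rate.
# RTT and rate figures below are example values, not recommendations.

def max_queue_bytes(rtt_seconds, bottleneck_bits_per_second):
    return 2 * rtt_seconds * bottleneck_bits_per_second / 8

for label, rtt, rate in [
    ("50 Mbit/s access link, 40 ms RTT",   0.040,  50e6),
    ("25 GigE datacenter hop, 0.2 ms RTT", 0.0002, 25e9),
]:
    cap = max_queue_bytes(rtt, rate)
    print("%-35s cap the queue near %5.2f MB (worst-case added delay ~%.1f ms)"
          % (label, cap / 1e6, 2 * rtt * 1e3))

Note that the cap scales with both RTT and rate, which is why a one-size-fits-all deep buffer is the wrong answer at almost every link speed.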

But instead, the known fix (known ever since the first screwed-up Frame Relay hops were set to never lose a packet) is deliberately ignored by hardware know-it-alls.