From: "Richard Scheffenegger"
To: bloat@lists.bufferbloat.net
Date: Sat, 30 Apr 2011 21:18:51 +0200
Subject: [Bloat] Goodput fraction w/ AQM vs bufferbloat

I'm curious, has anyone done some simulations to check if the following qualitative statement holds true, and if so, what the quantitative effect is:

With bufferbloat, the TCP congestion control reaction is unduly delayed. When it finally happens, the TCP stream is likely facing a "burst loss" event - multiple consecutive packets get dropped. Worse yet, the sender with the lowest RTT across the bottleneck will likely start to retransmit while the (tail-drop) queue is still overflowing. And a lost retransmission means a major setback in bandwidth (except for Linux with bulk transfers and SACK enabled), as the standard (RFC-documented) behaviour asks for an RTO (1 sec nominally, 200-500 ms typically) to recover such a lost retransmission...

The second part (more important as an incentive to the ISPs, actually): how does the fraction of goodput vs. throughput change when AQM schemes are deployed and TCP congestion control reacts in a timely manner? Small ISPs have to pay for their upstream volume, regardless of whether it is "real" work (goodput) or unnecessary retransmissions.

When I was at a small cable ISP in Switzerland last week, sure enough, bufferbloat was readily observable (17 ms -> 220 ms after 30 sec of a bulk transfer), but at first they had the "not our problem" view, until I started discussing burst loss / retransmissions / goodput vs. throughput - with the latter point being a real commercial incentive to them. (They promised to check whether AQM would be available in the CPE / CMTS, and to put latency bounds in their tenders going forward.)

Best regards,
   Richard
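
P.S.: To make the question concrete, this is the kind of back-of-the-envelope accounting I have in mind - a minimal Python sketch, not a packet-level simulation (no cwnd dynamics are modelled), and all the event counts, burst sizes and RTO probabilities below are made-up placeholders rather than measurements:

    #!/usr/bin/env python3
    # Back-of-the-envelope accounting of goodput vs. throughput for a bulk
    # TCP transfer. NOT a simulation: it only counts retransmitted volume
    # and RTO stalls per loss event, with placeholder parameters.

    def tally(duration_s, rate_pps, events, drops_per_event, p_rto, rto_s=1.0):
        """Return (goodput/throughput ratio, average goodput in pkt/s).

        events          -- congestion/loss events during the transfer
        drops_per_event -- packets dropped per event (burst size under tail drop)
        p_rto           -- per-event probability that a retransmission is lost
                           too, forcing an RTO of rto_s idle seconds
        """
        stall_s = events * p_rto * rto_s                  # time spent waiting for RTO
        sent = rate_pps * max(duration_s - stall_s, 0.0)  # packets the ISP pays for
        rexmit = events * (drops_per_event + p_rto)       # packets sent a 2nd (or 3rd) time
        return (sent - rexmit) / sent, (sent - rexmit) / duration_s

    # Hypothetical ~12 Mbit/s upstream (1000 pkt/s), 60 s bulk transfer.
    # Deep FIFO: rare but bursty losses, retransmissions often meet a still-full queue.
    bloat = tally(60, 1000, events=4, drops_per_event=30, p_rto=0.5)
    # AQM: more frequent but single early drops, retransmissions get through.
    aqm = tally(60, 1000, events=12, drops_per_event=1, p_rto=0.01)

    print("FIFO/bloat: goodput/throughput = %.4f, goodput = %.0f pkt/s" % bloat)
    print("AQM:        goodput/throughput = %.4f, goodput = %.0f pkt/s" % aqm)

Plugging in per-event drop counts and RTO frequencies from an actual simulation or packet trace, instead of my placeholders, would give the goodput/throughput fraction I'm asking about.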