From: "Bill Ver Steeg (versteb)"
To: Toke Høiland-Jørgensen, Paolo Valente
Cc: bloat
Date: Mon, 27 Apr 2015 14:19:25 +0000
Subject: Re: [Bloat] bufferbloat effects on throughput

The other area in which throughput suffers is when one tries to do a bunch of small transactions on a congested link. Think of a web page that does a series of HTTP GETs of small pieces of data (let's say each object is about 10 packets in size). Let's say the GETs are from different HTTP servers. The client has to do a bunch of DNS resolutions (3+ RTTs each), open a bunch of TCP sessions (3+ RTTs each), send a bunch of HTTP GETs (1 RTT each) and get the data (~2 RTTs for the 10 packets), then close each session (4+ RTTs). So that is about 15 RTTs per object. For discussion, let's say the client fetches them sequentially rather than in parallel. I know, SPDY does this better - but let's say this is a legacy client, or that there are interdependencies and you have to fetch them sequentially.
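To make the arithmetic concrete, here is a quick back-of-envelope sketch in Python. The per-step RTT costs and the 20-object page are illustrative assumptions taken from the estimates above, not measurements:

# Rough page-load model: sequential fetches of small objects
# over a congested link, using the per-step RTT estimates above.
dns, syn, get, data, fin = 3, 3, 1, 2, 4
rtts_per_object = dns + syn + get + data + fin   # ~13; call it ~15 with slop
objects = 20                                     # assumed count -> ~300 RTTs total
total_rtts = 15 * objects

for rtt_ms in (50, 200, 2000):
    print("RTT %4d ms -> ~%.0f seconds to show the page"
          % (rtt_ms, total_rtts * rtt_ms / 1000.0))

Running that prints ~15, ~60 and ~600 seconds, which is where the numbers below come from.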
Let's compare the time it takes to display the web page on a link with 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms of buffering). Fetching, say, 20 such objects at ~15 RTTs each gives about 300 RTTs before we display the completed web page. 300 * 50 ms = 15 seconds. 300 * 200 ms = 60 seconds. If we were to use a "big buffer tail drop" example with 2-second RTTs, we would get 10 minutes to show the page.

As we all know, there is a lot of work on the client and server side to make web surfing better. IW10, SPDY, pacing and the like all aim to reduce the number of RTTs. The buffer management algorithms aim to reduce the length of each RTT. They work together to provide better throughput when mice traverse a congested link.

Bill VerSteeg

-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
Sent: Monday, April 27, 2015 9:01 AM
To: Paolo Valente
Cc: bloat
Subject: Re: [Bloat] bufferbloat effects on throughput

Paolo Valente writes:

> One question: how can one be sure (if it is possible) that the
> fluctuation of the throughput of a TCP flow on a given node is caused
> by bufferbloat issues in the node, and not by other factors (such as,
> e.g., systematic drops in some other nodes along the path followed by
> the flow, with the drops possibly even caused by different reasons
> than bufferbloat)?

You can't, and it might. However, if you measure a performance degradation that goes away when the link is idle, consider that a hint... :)

-Toke