From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 27 Apr 2015 10:28:22 -0700
From: Dave Taht
To: "Bill Ver Steeg (versteb)"
Cc: bloat
Subject: Re: [Bloat] bufferbloat effects on throughput
List-Id: General list for discussing Bufferbloat
Too many people are also discounting the extra RTTs SSL negotiation takes, and you got a couple of other things wrong here.

On Mon, Apr 27, 2015 at 7:19 AM, Bill Ver Steeg (versteb) wrote:
> The other area in which throughput suffers is when one tries to do a bunch
> of small transactions on a congested link. Think of a web page that does a
> series of HTTP gets of small pieces of data (let's say each object is about
> 10 packets in size). Let's say the gets are from different HTTP servers.
> The client has to do a bunch of DNS resolutions (3+ RTT each),

DNS is usually a 10-20 ms or shorter RTT to the ISP, and on a cache hit,
under 16 ms on cheap hardware, locally. namebench is a pretty good tool
for looking at what it takes to resolve DNS, and of late I have also been
trying to get good measurements of DNSSEC w/edns0 (which is looking very
poor). I would like it if WAY more people took a hard look at DNS traffic
characteristics.

> open a bunch of TCP sessions (3+ RTT each),

+ SSL negotiation.

> send a bunch of HTTP gets (1 RTT each) and get the data (~2 RTT for the 10
> packets), then close each session (4+ RTT). So that is about 15 RTTs per JPEG.

Historically, connection close is transparent to the application. I recall
at least one ad service provider that actually ignored the complex close
state entirely and just blasted the data out, attempted a close, and moved
on. Also, the first real data packet contains the header info for the jpeg,
which helps the web reflow engine. So I would not count close as part of
your calculations.

> For discussion, let's say the client fetches them sequentially rather than
> in parallel. I know, SPDY does this better - but let's say this is a legacy
> client, or let's say that there are interdependencies and you have to fetch
> them sequentially.
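To make the accounting concrete, here is a quick python sketch. The per-leg counts are just the rough figures tossed around in this thread (and the function names are mine), not measurements:

```python
# Toy model of the per-object RTT accounting discussed in this thread.
# Defaults mirror Bill's rough figures; they are illustrative, not measured.

def rtts_per_object(dns=3, tcp_open=3, tls=0, http_get=1, data=2, close=4):
    """Round trips to fetch one small (~10 packet) object sequentially."""
    return dns + tcp_open + tls + http_get + data + close

print(rtts_per_object())                # 13 -> roughly "15 RTTs per JPEG"
# Dropping close (transparent to the app), but adding ~2 RTTs of SSL:
print(rtts_per_object(tls=2, close=0))  # 11

def page_seconds(rtt_ms, objects=20, rtts_each=15):
    """Sequential fetch time for the 300-RTT page example (20 * 15)."""
    return objects * rtts_each * rtt_ms / 1000.0

for rtt_ms in (50, 200, 2000):
    print(f"{rtt_ms} ms RTT -> {page_seconds(rtt_ms):.0f} s")
# 50 ms -> 15 s, 200 ms -> 60 s, 2000 ms -> 600 s (10 minutes)
```

Note how the page time scales purely with the per-RTT delay once the RTT count is fixed - which is exactly why both fewer RTTs (IW10, SPDY) and shorter RTTs (better queue management) help.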
> Let's compare the time it takes to display the web page on a link with
> 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time
> it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms
> of buffering). So we have 300 RTTs before we display the completed web
> page. 300 * 50 ms = 15 seconds. 300 * 200 ms = 60 seconds. If we were to
> use a "big buffer tail drop" example with 2 second RTTs, we would get 10
> minutes to show the page.
>
> As we all know, there is a lot of work on the client/server side to make
> web surfing better. IW10, SPDY, pacing and the like all aim to reduce the
> number of RTTs. The buffer management algorithms aim to reduce the RTT
> itself. They work together to provide better throughput when mice
> traverse a congested link.
>
> Bill VerSteeg
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
> Sent: Monday, April 27, 2015 9:01 AM
> To: Paolo Valente
> Cc: bloat
> Subject: Re: [Bloat] bufferbloat effects on throughput
>
> Paolo Valente writes:
>
>> One question: how can one be sure (if it is possible) that the
>> fluctuation of the throughput of a TCP flow on a given node is caused
>> by bufferbloat issues in the node, and not by other factors (such as,
>> e.g., systematic drops in some other nodes along the path followed by
>> the flow, with the drops possibly even caused by different reasons
>> than bufferbloat)?
>
> You can't, and it might. However, if you measure a performance
> degradation that goes away when the link is idle, consider that a hint...
:)

> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

-- 
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67