[Bloat] bufferbloat effects on throughput
Bill Ver Steeg (versteb)
versteb at cisco.com
Mon Apr 27 15:51:07 EDT 2015
Dave-
Yup - depending on the network/endpoint configuration, SSL will take extra RTTs, and DNS/close may not add as many RTTs. If the name is locally cached, it will not take the bloated hop. If the name is not cached (or the cache is on the other side of the bloat), it will take the hit. You are probably right about the close being non-blocking, at least on modern systems. I do recall some older embedded code that actually had to re-use socket descriptors (and thus occasionally had to block waiting for the close to complete), but that is ancient history.
So, your mileage may vary from the example. In any event, bloat is bad for mice flows because there are lots of RTTs.
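For anyone who wants to see the SSL handshake cost on their own path, a rough sketch like this works (Python; example.com is just a placeholder TLS host, not anything specific) - it times a bare TCP connect against TCP plus TLS, and the gap is roughly the handshake's extra round trips (one or two, depending on the TLS version and whether the session is resumed):

import socket
import ssl
import time

HOST, PORT = "example.com", 443   # placeholder target; any TLS host will do

def tcp_connect_ms(host, port):
    # Time just the TCP three-way handshake (roughly one RTT).
    t0 = time.monotonic()
    s = socket.create_connection((host, port), timeout=5)
    dt = (time.monotonic() - t0) * 1e3
    s.close()
    return dt

def tls_connect_ms(host, port):
    # Time TCP connect plus the TLS handshake layered on top of it.
    ctx = ssl.create_default_context()
    t0 = time.monotonic()
    s = socket.create_connection((host, port), timeout=5)
    tls = ctx.wrap_socket(s, server_hostname=host)  # blocks until handshake completes
    dt = (time.monotonic() - t0) * 1e3
    tls.close()
    return dt

tcp = tcp_connect_ms(HOST, PORT)
tls = tls_connect_ms(HOST, PORT)
print("TCP connect only : %6.1f ms" % tcp)
print("TCP + TLS setup  : %6.1f ms  (~%.1f extra RTTs)" % (tls, (tls - tcp) / tcp))

On a bloated link those extra round trips get the same inflated RTT as everything else, which is the whole point.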
Bvs
-----Original Message-----
From: Dave Taht [mailto:dave.taht at gmail.com]
Sent: Monday, April 27, 2015 1:28 PM
To: Bill Ver Steeg (versteb)
Cc: Toke Høiland-Jørgensen; Paolo Valente; bloat
Subject: Re: [Bloat] bufferbloat effects on throughput
Too many people are also discounting the extra RTTs SSL negotiation takes, and you got a couple of other things wrong here.
On Mon, Apr 27, 2015 at 7:19 AM, Bill Ver Steeg (versteb) <versteb at cisco.com> wrote:
> The other area in which throughput suffers is when one tries to do
> bunch of small transactions on a congested link. Think of a web page
> that does a series of HTTP gets of small pieces of data (let's say
> each object is about 10 packets in size). Let's say the gets are from
> different HTTP servers. The client has to do a bunch of DNS resolutions
> (3+ RTT each),
DNS is usually a 10-20 ms (or shorter) RTT to the ISP's resolver, and on a local cache hit it is under 16 ms even on cheap hardware.
namebench is a pretty good tool for looking at what it takes to resolve DNS, and of late I have also been trying to get good measurements of DNSSEC w/edns0 (which is looking very poor).
I would like it if WAY more people took a hard look at DNS traffic characteristics, so that I wasn't the only one doing it.
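If anyone wants a quick way to eyeball this locally, a crude sketch along these lines (Python, system resolver; the names are placeholders) shows the cached-vs-uncached gap we are talking about - it is no substitute for namebench, just a sanity check:

import socket
import time

# Arbitrary sample names; substitute whatever you actually care about.
NAMES = ["example.com", "example.org", "example.net"]

def resolve_ms(name):
    # Time one lookup through the system resolver (getaddrinfo).
    t0 = time.monotonic()
    socket.getaddrinfo(name, 80)
    return (time.monotonic() - t0) * 1e3

for name in NAMES:
    cold = resolve_ms(name)   # first lookup - may have to cross the bloated hop
    warm = resolve_ms(name)   # repeat - usually answered by a nearby cache
    print("%-15s cold %7.1f ms   warm %7.1f ms" % (name, cold, warm))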
>open a bunch of TCP sessions (3+ RTT each),
+ SSL neg
>send a bunch of HTTP gets (1 RTT each) and get the data (~2 RTT for the 10 packets), then close each session (4+ RTT). So that is about 15 RTTs per JPEG.
Historically connection close is transparent to the application. I recall at least one ad service provider that actually ignored the complex close state entirely and just blasted the data out, attempted a close, and moved on.
Also, the first real data packet contains the header info for the JPEG, which helps the web reflow engine.
So I would not count close as part of your calculations.
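To make that concrete, a small sketch (Python; example.com is a stand-in peer) of why close does not normally cost the application anything: the default close() hands the teardown to the kernel and returns at once, and about the only common way to make it block at all is SO_LINGER. On an unloaded link both numbers are tiny; the point is which call can ever stall the application.

import socket
import struct
import time

HOST, PORT = "example.com", 80   # placeholder peer

# Default case: close() hands the FIN/teardown to the kernel and returns at once.
s = socket.create_connection((HOST, PORT), timeout=5)
t0 = time.monotonic()
s.close()
print("default close()  : %.3f ms" % ((time.monotonic() - t0) * 1e3))

# SO_LINGER with a positive timeout is the knob that makes close() block until
# unsent data drains (or the timeout fires).
s = socket.create_connection((HOST, PORT), timeout=5)
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 5))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
t0 = time.monotonic()
s.close()
print("lingering close(): %.3f ms" % ((time.monotonic() - t0) * 1e3))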
>For discussion, let's say the client fetches them sequentially rather than in parallel.
>I know, SPDY does this better - but let's say this is a legacy client, or let's say that there are interdependencies and you have to fetch them sequentially.
>
> Let's compare the time it takes to display the web page on a link with 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms of buffering). So we have 300 RTTs before we display the completed web page. 300 * 50 ms == 15 seconds. 300 * 200 ms == 60 seconds. If we were to use a "big buffer tail drop" example with 2-second RTTs, we would get 10 minutes to show the page.
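Running those numbers as a quick sketch (taking the 300-RTT figure above at face value; the 20-object split is just the implied 300/15):

# Back-of-the-envelope page paint time = total RTTs * per-hop RTT.
# 300 RTTs is the figure above (roughly 20 objects at ~15 RTTs each).
total_rtts = 300

for label, rtt_seconds in [("50 ms RTT (20 ms light + 30 ms buffer)",   0.050),
                           ("200 ms RTT (20 ms light + 180 ms buffer)", 0.200),
                           ("2 s RTT (big-buffer tail drop)",           2.0)]:
    print("%-42s -> %6.0f s" % (label, total_rtts * rtt_seconds))
# Prints 15 s, 60 s and 600 s (ten minutes) respectively.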
>
> As we all know, there is a lot of work on the client/server side to make web surfing better. IW10, SPDY, pacing and the like all aim to reduce the number of RTTs. The buffer management algorithms aim to reduce the RTT itself. They work together to provide better throughput when mice traverse a congested link.
>
>
> Bill VerSteeg
>
> -----Original Message-----
> From: bloat-bounces at lists.bufferbloat.net
> [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Toke
> Høiland-Jørgensen
> Sent: Monday, April 27, 2015 9:01 AM
> To: Paolo Valente
> Cc: bloat
> Subject: Re: [Bloat] bufferbloat effects on throughput
>
> Paolo Valente <paolo.valente at unimore.it> writes:
>
>> One question: how can one be sure (if it is possible) that the
>> fluctuation of the throughput of a TCP flow on a given node is caused
>> by bufferbloat issues in the node, and not by other factors (such as,
>> e.g., systematic drops in some other nodes along the path followed by
>> the flow, with the drops possibly even caused by different reasons
>> than bufferbloat)?
>
> You can't, and it might. However, if you measure a performance
> degradation that goes away when the link is idle, consider that a
> hint... :)
>
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67