* [Bloat] bufferbloat effects on throughput
@ 2015-04-27  8:59 Paolo Valente
  2015-04-27  9:20 ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 9+ messages in thread

From: Paolo Valente @ 2015-04-27 8:59 UTC (permalink / raw)
To: bloat

Hi,
if I am not missing anything, the information provided on bufferbloat.net, as well as in the documents mentioned on the site, seems to focus mainly on latency issues. Is this because high latency is the only serious consequence of bufferbloat? Or are there important consequences in terms of throughput, or of throughput fluctuations, too? If there are, could anyone please point me to further reading on these aspects?

Thanks,
Paolo

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27  8:59 [Bloat] bufferbloat effects on throughput Paolo Valente
@ 2015-04-27  9:20 ` Toke Høiland-Jørgensen
  2015-04-27 12:01   ` Paolo Valente
  0 siblings, 1 reply; 9+ messages in thread

From: Toke Høiland-Jørgensen @ 2015-04-27 9:20 UTC (permalink / raw)
To: Paolo Valente; +Cc: bloat

Paolo Valente <paolo.valente@unimore.it> writes:

> If there are, could anyone please point me to further reading on these
> aspects?

Bufferbloat can definitely adversely affect throughput in some cases, mainly because it causes throughput to oscillate: when the queue fills, a lot of data can be dropped at once, causing throughput to drop, and it then takes a while to recover. This can degrade aggregate throughput. Smart queueing smooths out the traffic, so the oscillations are smaller and average throughput is thus better.

The effect is most visible when several flows share a link: when the (FIFO) queue fills, they all tend to experience drops at once, and so all slow down together.

-Toke

^ permalink raw reply	[flat|nested] 9+ messages in thread
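[Editor's illustration] The synchronized-drop effect Toke describes is easy to see in a toy model. The sketch below is a crude AIMD fluid model, not a packet-level simulation; every parameter (capacity, buffer size, flow count) is an arbitrary assumption, and the "per-flow drop" branch is only a stand-in for what fair queueing achieves, not an implementation of fq_codel:

```python
# Toy fluid model of AIMD flows sharing one bottleneck, in units of
# packets per RTT. All numbers are made up for illustration.
def run(rounds=300, flows=4, capacity=100, buffer_pkts=20, synchronized=True):
    w = [5] * flows          # per-flow congestion windows
    delivered = []           # aggregate throughput per round
    for _ in range(rounds):
        total = sum(w)
        delivered.append(min(total, capacity))
        if total > capacity + buffer_pkts:       # queue overflow -> loss
            if synchronized:                     # FIFO tail drop: every flow loses
                w = [max(1, x // 2) for x in w]  # ...and all back off at once
            else:                                # per-flow drop: only the biggest loses
                i = w.index(max(w))
                w[i] = max(1, w[i] // 2)
        else:
            w = [x + 1 for x in w]               # additive increase

    return delivered

fifo = run(synchronized=True)[50:]    # skip the initial ramp-up
fq = run(synchronized=False)[50:]
print("FIFO mean:", sum(fifo) / len(fifo), " per-flow mean:", sum(fq) / len(fq))
```

With synchronized backoff, every overflow empties the pipe well below capacity and the aggregate oscillates; with per-flow backoff the remaining flows keep the link full, so the average is higher and the oscillation much smaller.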
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27  9:20 ` Toke Høiland-Jørgensen
@ 2015-04-27 12:01 ` Paolo Valente
  2015-04-27 12:13   ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 9+ messages in thread

From: Paolo Valente @ 2015-04-27 12:01 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat

On 27 Apr 2015, at 11:20, Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> Paolo Valente <paolo.valente@unimore.it> writes:
>
>> If there are, could anyone please point me to further reading on these
>> aspects?
>
> Bufferbloat can definitely adversely affect throughput in some cases.
> Mainly because it causes throughput to oscillate: when the queue fills,
> a lot of data can be dropped at once, causing throughput to drop, which
> takes a while to recover. This can degrade aggregate throughput. Having
> smart queueing smooths out the traffic, so the oscillations are smaller
> and average throughput thus better.
>
> The effect is most visible when you have several flows sharing a link:
> when the (FIFO) queue fills, they will tend to all experience drops at
> once, and so all slow down.

Thanks. So, if I understood correctly, average throughput may or may not be affected, but large throughput fluctuations will always occur in the presence of bufferbloat.

Sorry for my usual refrain, but ... any pointers to tests, results, papers and the like?

Thanks,
Paolo

> -Toke

--
Paolo Valente
Algogroup
Dipartimento di Fisica, Informatica e Matematica
Via Campi, 213/B
41125 Modena - Italy
homepage: http://algogroup.unimore.it/people/paolo/

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 12:01 ` Paolo Valente
@ 2015-04-27 12:13 ` Toke Høiland-Jørgensen
  2015-04-27 12:45   ` Paolo Valente
  0 siblings, 1 reply; 9+ messages in thread

From: Toke Høiland-Jørgensen @ 2015-04-27 12:13 UTC (permalink / raw)
To: Paolo Valente; +Cc: bloat

Paolo Valente <paolo.valente@unimore.it> writes:

> Thanks. So, if I understood correctly, average throughput may or may
> not be affected, but large throughput fluctuations will always occur
> in the presence of bufferbloat.

I'm always wary of saying 'always', but I'd hazard an 'often' ;)

> Sorry for my usual refrain, but ... any pointers to tests, results,
> papers and the like?

Hmm, I'm not sure there are any papers dealing specifically with this. However, it's quite easy to provoke this behaviour. Compare, for instance,

http://files.toke.dk/bufferbloat/rrul-pfifo_fast-all_scaled.pdf

with

http://files.toke.dk/bufferbloat/rrul-fq_codel-all_scaled.pdf

The two top graphs on each are throughput (download and upload respectively).

For the aggregate behaviour, I had some data on that in my presentation at the IETF in Hawaii:

http://www.ietf.org/proceedings/91/slides/slides-91-iccrg-4.pdf

-Toke

^ permalink raw reply	[flat|nested] 9+ messages in thread
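[Editor's illustration] One rough way to quantify the fluctuation visible in plots like those is the coefficient of variation of the per-interval throughput samples. The numbers below are invented for illustration, not taken from Toke's data:

```python
from statistics import mean, stdev

def cov(samples):
    """Coefficient of variation: sample stdev relative to the mean."""
    return stdev(samples) / mean(samples)

# Hypothetical per-second throughput samples (Mbit/s) on the same link:
pfifo_fast = [42, 11, 38, 9, 45, 14, 40, 12]   # oscillating under FIFO
fq_codel = [24, 26, 25, 23, 25, 24, 26, 25]    # steady with smart queueing

print(cov(pfifo_fast), cov(fq_codel))
```

A high coefficient of variation on an otherwise loaded link is a sign of the drop-and-recover oscillation described earlier in the thread.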
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 12:13 ` Toke Høiland-Jørgensen
@ 2015-04-27 12:45 ` Paolo Valente
  2015-04-27 13:01   ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 9+ messages in thread

From: Paolo Valente @ 2015-04-27 12:45 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat

On 27 Apr 2015, at 14:13, Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> Paolo Valente <paolo.valente@unimore.it> writes:
>
>> Thanks. So, if I understood correctly, average throughput may or may
>> not be affected, but large throughput fluctuations will always occur
>> in the presence of bufferbloat.
>
> I'm always wary of saying 'always', but I'd hazard an 'often' ;)
>
>> Sorry for my usual refrain, but ... any pointers to tests, results,
>> papers and the like?
>
> Hmm, I'm not sure there are any papers dealing specifically with this.
> However, it's quite easy to provoke this behaviour. Compare, for
> instance,
>
> http://files.toke.dk/bufferbloat/rrul-pfifo_fast-all_scaled.pdf
>
> with
>
> http://files.toke.dk/bufferbloat/rrul-fq_codel-all_scaled.pdf
>
> The two top graphs on each are throughput (download and upload
> respectively).

Thanks. The results shown in your graphs seem unmistakable.

One question: how can one be sure (to the extent that it is possible at all) that the throughput fluctuations of a TCP flow through a given node are caused by bufferbloat in that node, and not by other factors (such as, e.g., systematic drops in some other nodes along the path followed by the flow, with the drops possibly even caused by something other than bufferbloat)?

Thanks,
Paolo

> For the aggregate behaviour, I had some data on that in my presentation
> at the IETF in Hawaii:
>
> http://www.ietf.org/proceedings/91/slides/slides-91-iccrg-4.pdf
>
> -Toke

--
Paolo Valente
Algogroup
Dipartimento di Fisica, Informatica e Matematica
Via Campi, 213/B
41125 Modena - Italy
homepage: http://algogroup.unimore.it/people/paolo/

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 12:45 ` Paolo Valente
@ 2015-04-27 13:01 ` Toke Høiland-Jørgensen
  2015-04-27 14:19   ` Bill Ver Steeg (versteb)
  0 siblings, 1 reply; 9+ messages in thread

From: Toke Høiland-Jørgensen @ 2015-04-27 13:01 UTC (permalink / raw)
To: Paolo Valente; +Cc: bloat

Paolo Valente <paolo.valente@unimore.it> writes:

> One question: how can one be sure (if it is possible) that the
> fluctuation of the throughput of a TCP flow on a given node is caused
> by bufferbloat issues in the node, and not by other factors (such as,
> e.g., systematic drops in some other nodes along the path followed by
> the flow, with the drops possibly even caused by different reasons
> than bufferbloat)?

You can't, and it might. However, if you measure a performance degradation that goes away when the link is idle, consider that a hint... :)

-Toke

^ permalink raw reply	[flat|nested] 9+ messages in thread
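[Editor's illustration] Toke's "goes away when the link is idle" hint can be turned into a crude measurement: ping the same host with the link idle and then saturated, and compare the medians. The RTT samples below are invented, and `induced_delay_ms` is a hypothetical helper, not an existing tool:

```python
from statistics import median

def induced_delay_ms(idle_rtts, loaded_rtts):
    """Median extra round-trip delay observed while the link is saturated.

    A large positive value is the classic bufferbloat signature: latency
    that appears only under load and vanishes when the link goes idle.
    """
    return median(loaded_rtts) - median(idle_rtts)

# Hypothetical ping samples (ms) to the same host:
idle = [18, 19, 17, 20, 18, 19]
loaded = [250, 310, 280, 265, 290, 300]   # during a bulk upload

print(induced_delay_ms(idle, loaded))
```

This does not identify *which* node along the path holds the queue; for that, one would have to probe intermediate hops separately.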
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 13:01 ` Toke Høiland-Jørgensen
@ 2015-04-27 14:19 ` Bill Ver Steeg (versteb)
  2015-04-27 17:28   ` Dave Taht
  0 siblings, 1 reply; 9+ messages in thread

From: Bill Ver Steeg (versteb) @ 2015-04-27 14:19 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, Paolo Valente; +Cc: bloat

The other area in which throughput suffers is when one tries to do a bunch of small transactions on a congested link. Think of a web page that does a series of HTTP GETs of small pieces of data (let's say each object is about 10 packets in size). Let's say the GETs are from different HTTP servers. The client has to do a bunch of DNS resolutions (3+ RTT each), open a bunch of TCP sessions (3+ RTT each), send a bunch of HTTP GETs (1 RTT each) and get the data (~2 RTT for the 10 packets), then close each session (4+ RTT). So that is about 15 RTTs per JPEG. For discussion, let's say the client fetches them sequentially rather than in parallel. I know, SPDY does this better - but let's say this is a legacy client, or let's say that there are interdependencies and you have to fetch them sequentially.

Let's compare the time it takes to display the web page on a link with 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms of buffering). With 20 such objects, we have 300 RTTs before we display the completed web page. 300 * 50 ms = 15 seconds; 300 * 200 ms = 60 seconds. If we were to use a "big buffer tail drop" example with 2-second RTTs, we would get 10 minutes to show the page.

As we all know, there is a lot of work on the client/server side to make web surfing better. IW10, SPDY, pacing and the like all aim to reduce the number of RTTs. The buffer management algorithms aim to reduce the duration of each RTT. They work together to provide better throughput when mice traverse a congested link.

Bill VerSteeg

-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
Sent: Monday, April 27, 2015 9:01 AM
To: Paolo Valente
Cc: bloat
Subject: Re: [Bloat] bufferbloat effects on throughput

Paolo Valente <paolo.valente@unimore.it> writes:

> One question: how can one be sure (if it is possible) that the
> fluctuation of the throughput of a TCP flow on a given node is caused
> by bufferbloat issues in the node, and not by other factors (such as,
> e.g., systematic drops in some other nodes along the path followed by
> the flow, with the drops possibly even caused by different reasons
> than bufferbloat)?

You can't, and it might. However, if you measure a performance degradation that goes away when the link is idle, consider that a hint... :)

-Toke
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 9+ messages in thread
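[Editor's illustration] Bill's arithmetic is worth checking with a few lines: with 300 RTTs, 300 x 50 ms works out to 15 seconds (not 1.5), 300 x 200 ms to 60 seconds, and 300 x 2 s to the 10 minutes he mentions. The 20-object, 15-RTTs-per-object breakdown is taken from his example:

```python
def page_load_time_s(objects=20, rtts_per_object=15, rtt_ms=50):
    """Sequential-fetch model: total load time is just RTT count x RTT."""
    return objects * rtts_per_object * rtt_ms / 1000.0

# Mildly, moderately, and badly bloated links:
for rtt in (50, 200, 2000):
    print(f"{rtt} ms RTT -> {page_load_time_s(rtt_ms=rtt)} s")   # 15.0, 60.0, 600.0 s
```

The model ignores parallel fetches and pipelining, which is exactly the "legacy client" assumption made above; the point is that total time scales linearly with the bloated RTT.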
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 14:19 ` Bill Ver Steeg (versteb)
@ 2015-04-27 17:28 ` Dave Taht
  2015-04-27 19:51   ` Bill Ver Steeg (versteb)
  0 siblings, 1 reply; 9+ messages in thread

From: Dave Taht @ 2015-04-27 17:28 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: bloat

Too many people are also discounting the extra RTTs SSL negotiation takes, and you got a couple of other things wrong here.

On Mon, Apr 27, 2015 at 7:19 AM, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote:
> The other area in which throughput suffers is when one tries to do a
> bunch of small transactions on a congested link. Think of a web page
> that does a series of HTTP GETs of small pieces of data (let's say
> each object is about 10 packets in size). Let's say the GETs are from
> different HTTP servers. The client has to do a bunch of DNS resolutions
> (3+ RTT each),

DNS is usually a 10-20 ms or shorter round trip to the ISP's resolver, and on a local cache hit, under 16 ms even on cheap hardware. namebench is a pretty good tool for looking at what it takes to resolve DNS, and also of late I have been trying to get good measurements of DNSSEC w/edns0 (which is looking very poor). I would like it if WAY more people took a hard look at DNS traffic characteristics, and I wasn't.

> open a bunch of TCP sessions (3+ RTT each),

+ SSL neg

> send a bunch of HTTP GETs (1 RTT each) and get the data (~2 RTT for the 10 packets), then close each session (4+ RTT). So that is about 15 RTTs per JPEG.

Historically connection close is transparent to the application. I recall at least one ad service provider that actually ignored the complex close state entirely and just blasted the data out, attempted a close, and moved on. Also the first real data packet contains the header info for the JPEG, which helps the web reflow engine. So I would not count close as part of your calculations.

> For discussion, let's say the client fetches them sequentially rather than in parallel.
> I know, SPDY does this better - but let's say this is a legacy client, or let's say that there are interdependencies and you have to fetch them sequentially.
>
> Let's compare the time it takes to display the web page on a link with 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms of buffering). With 20 such objects, we have 300 RTTs before we display the completed web page. 300 * 50 ms = 15 seconds; 300 * 200 ms = 60 seconds. If we were to use a "big buffer tail drop" example with 2-second RTTs, we would get 10 minutes to show the page.
>
> As we all know, there is a lot of work on the client/server side to make web surfing better. IW10, SPDY, pacing and the like all aim to reduce the number of RTTs. The buffer management algorithms aim to reduce the duration of each RTT. They work together to provide better throughput when mice traverse a congested link.
>
>
> Bill VerSteeg
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
> Sent: Monday, April 27, 2015 9:01 AM
> To: Paolo Valente
> Cc: bloat
> Subject: Re: [Bloat] bufferbloat effects on throughput
>
> Paolo Valente <paolo.valente@unimore.it> writes:
>
>> One question: how can one be sure (if it is possible) that the
>> fluctuation of the throughput of a TCP flow on a given node is caused
>> by bufferbloat issues in the node, and not by other factors (such as,
>> e.g., systematic drops in some other nodes along the path followed by
>> the flow, with the drops possibly even caused by different reasons
>> than bufferbloat)?
>
> You can't, and it might. However, if you measure a performance degradation that goes away when the link is idle, consider that a hint... :)
>
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 9+ messages in thread
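[Editor's illustration] Dave's point about SSL adding round trips can be put in the same back-of-the-envelope terms. The handshake costs below are the textbook values (one RTT for the TCP handshake before the request can be sent, two more for a full TLS 1.2 handshake, one for an uncached DNS lookup); real stacks vary, so treat this as a sketch, not a measurement:

```python
# Nominal per-step costs in round trips; assumed textbook values.
COSTS = {"dns": 1, "tcp": 1, "tls": 2, "request": 1}

def fetch_rtts(tls=True, dns_cached=False):
    """RTTs to fetch one small object over a cold connection."""
    rtts = COSTS["tcp"] + COSTS["request"]
    if not dns_cached:
        rtts += COSTS["dns"]
    if tls:
        rtts += COSTS["tls"]
    return rtts

print(fetch_rtts())                             # cold HTTPS fetch
print(fetch_rtts(tls=False, dns_cached=True))   # warm plain-HTTP fetch
```

Multiply any of these counts by a bloated RTT and the cost of each extra handshake round trip becomes obvious, which is Dave's point.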
* Re: [Bloat] bufferbloat effects on throughput
  2015-04-27 17:28 ` Dave Taht
@ 2015-04-27 19:51 ` Bill Ver Steeg (versteb)
  0 siblings, 0 replies; 9+ messages in thread

From: Bill Ver Steeg (versteb) @ 2015-04-27 19:51 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat

Dave-

Yup - depending on network/endpoint configuration, SSL will take extra RTTs, and DNS/close may not add as many RTTs. If the name is locally cached, it will not take the bloated hop. If the name is not cached (or the cache is on the other side of the bloat), it will take the hit.

You are probably right about the close being non-blocking, at least on modern systems. I do recall some older embedded code that actually had to re-use the socket descriptors (and thus occasionally had to block waiting for the close to complete), but that is ancient history.

So, your mileage may vary from the example. In any event, bloat is bad for mice flows because there are lots of RTTs.

Bvs

-----Original Message-----
From: Dave Taht [mailto:dave.taht@gmail.com]
Sent: Monday, April 27, 2015 1:28 PM
To: Bill Ver Steeg (versteb)
Cc: Toke Høiland-Jørgensen; Paolo Valente; bloat
Subject: Re: [Bloat] bufferbloat effects on throughput

Too many people are also discounting the extra RTTs SSL negotiation takes, and you got a couple of other things wrong here.

On Mon, Apr 27, 2015 at 7:19 AM, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote:
> The other area in which throughput suffers is when one tries to do a
> bunch of small transactions on a congested link. Think of a web page
> that does a series of HTTP GETs of small pieces of data (let's say
> each object is about 10 packets in size). Let's say the GETs are from
> different HTTP servers. The client has to do a bunch of DNS resolutions
> (3+ RTT each),

DNS is usually a 10-20 ms or shorter round trip to the ISP's resolver, and on a local cache hit, under 16 ms even on cheap hardware. namebench is a pretty good tool for looking at what it takes to resolve DNS, and also of late I have been trying to get good measurements of DNSSEC w/edns0 (which is looking very poor). I would like it if WAY more people took a hard look at DNS traffic characteristics, and I wasn't.

> open a bunch of TCP sessions (3+ RTT each),

+ SSL neg

> send a bunch of HTTP GETs (1 RTT each) and get the data (~2 RTT for the 10 packets), then close each session (4+ RTT). So that is about 15 RTTs per JPEG.

Historically connection close is transparent to the application. I recall at least one ad service provider that actually ignored the complex close state entirely and just blasted the data out, attempted a close, and moved on. Also the first real data packet contains the header info for the JPEG, which helps the web reflow engine. So I would not count close as part of your calculations.

> For discussion, let's say the client fetches them sequentially rather than in parallel.
> I know, SPDY does this better - but let's say this is a legacy client, or let's say that there are interdependencies and you have to fetch them sequentially.
>
> Let's compare the time it takes to display the web page on a link with 50 ms of delay (20 ms speed of light and 30 ms of buffering) to the time it takes on a link with 200 ms of delay (20 ms speed of light and 180 ms of buffering). With 20 such objects, we have 300 RTTs before we display the completed web page. 300 * 50 ms = 15 seconds; 300 * 200 ms = 60 seconds. If we were to use a "big buffer tail drop" example with 2-second RTTs, we would get 10 minutes to show the page.
>
> As we all know, there is a lot of work on the client/server side to make web surfing better. IW10, SPDY, pacing and the like all aim to reduce the number of RTTs. The buffer management algorithms aim to reduce the duration of each RTT. They work together to provide better throughput when mice traverse a congested link.
>
>
> Bill VerSteeg
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net
> [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke
> Høiland-Jørgensen
> Sent: Monday, April 27, 2015 9:01 AM
> To: Paolo Valente
> Cc: bloat
> Subject: Re: [Bloat] bufferbloat effects on throughput
>
> Paolo Valente <paolo.valente@unimore.it> writes:
>
>> One question: how can one be sure (if it is possible) that the
>> fluctuation of the throughput of a TCP flow on a given node is caused
>> by bufferbloat issues in the node, and not by other factors (such as,
>> e.g., systematic drops in some other nodes along the path followed by
>> the flow, with the drops possibly even caused by different reasons
>> than bufferbloat)?
>
> You can't, and it might. However, if you measure a performance
> degradation that goes away when the link is idle, consider that a
> hint... :)
>
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 9+ messages in thread
end of thread, other threads:[~2015-04-27 19:51 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-27  8:59 [Bloat] bufferbloat effects on throughput Paolo Valente
2015-04-27  9:20 ` Toke Høiland-Jørgensen
2015-04-27 12:01   ` Paolo Valente
2015-04-27 12:13     ` Toke Høiland-Jørgensen
2015-04-27 12:45       ` Paolo Valente
2015-04-27 13:01         ` Toke Høiland-Jørgensen
2015-04-27 14:19           ` Bill Ver Steeg (versteb)
2015-04-27 17:28             ` Dave Taht
2015-04-27 19:51               ` Bill Ver Steeg (versteb)