> graphs, ~5% of connections to "residential" hosts exhibit added delays
> of >=400 milliseconds, a delay that is certainly noticeable and would
> make interactive applications (gaming, voip etc) pretty much unusable.

Note the paper does not work in units of *connections* in section 2,
but rather in terms of *RTT samples*.  So, nearly 5% of the RTT samples
add >= 400msec to the base delay measured for the given remote (in the
"residential" case).

(I am not disagreeing that 400msec of added delay would be noticeable.
I am simply stating what the data actually shows.)

> Now, I may be jumping to conclusions here, but I couldn't find anything
> about how their samples were distributed.

(I don't follow this comment ... distributed in what fashion?)

> It would be interesting if a large-scale test like this could flush
> out how big a percentage of hosts do occasionally experience
> bufferbloat, and how many never do.

I agree and this could be done with our data.  (In general, we could go
much deeper into the data on hand ... the paper is an initial foray.)

allman
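
For concreteness, here is a rough sketch of the per-sample accounting
described above (base delay per remote, added delay per RTT sample, and
the fraction of samples and of hosts at or above 400msec).  It assumes
RTT samples are available as (host, rtt_ms) pairs; the names are
illustrative and this is not the paper's actual analysis code.

  from collections import defaultdict

  THRESHOLD_MS = 400.0

  def added_delay_stats(samples):
      """samples: iterable of (host, rtt_ms) tuples."""
      by_host = defaultdict(list)
      for host, rtt in samples:
          by_host[host].append(rtt)

      total_samples = 0
      bloated_samples = 0
      bloated_hosts = 0
      for host, rtts in by_host.items():
          base = min(rtts)                  # per-remote base delay
          added = [r - base for r in rtts]  # added delay per RTT sample
          total_samples += len(added)
          over = sum(1 for a in added if a >= THRESHOLD_MS)
          bloated_samples += over
          if over:                          # host saw >= one such sample
              bloated_hosts += 1

      # fraction of samples, and fraction of hosts, at/above the threshold
      return (bloated_samples / total_samples,
              bloated_hosts / len(by_host))

  # toy input: host-a has one sample 420msec above its base RTT
  sample_frac, host_frac = added_delay_stats(
      [("host-a", 30), ("host-a", 450), ("host-b", 20), ("host-b", 25)])
  print(sample_frac, host_frac)   # 0.25 0.5 for this toy input

The per-sample fraction and the per-host fraction answer different
questions, which is the distinction drawn above between what the paper
reports and what a host-level breakdown would show.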