On Mon, 7 Jul 2014, Aaron Wood wrote:

> In talking with a friend over the weekend who moves data around for the
> national labs (on links at rates like 10Gbps), we ended up having a rather
> interesting discussion about just how radically different the problem
> spaces are vs. what he's seen in the bufferbloat community.
>
> They have few flows, long lived, and are trying to push >1Gbps per flow,
> across the continent (or from Europe to the US), with inherent delays on
> the order of 100ms. TCP under these conditions is, from his reports,
> incredibly fragile, where even a tiny packet error rate stops TCP from
> saturating the link (since it can't tell the difference between congestion
> loss and a non-congestion-related dropped packet).
>
> And suddenly the "every packet is precious" mode of thought becomes
> crystal clear.

Part of the answer for them is "don't do that" :-)

There are many tools around that move massive amounts of data through
multiple parallel TCP connections, so that a lost packet on one connection
doesn't stall the entire transfer; only that one connection slows down (if
it's true congestion, all of the flows will slow). Sketches of both the
loss arithmetic and the parallel-connection trick follow at the end of
this mail.

David Lang

> Clearly they are trying to solve different problems, yet they do have
> congestion events, when new flows are added to the network.
>
> Has anyone used fq_codel (or its friends) in scenarios like this? fq is
> fairly new (2 years?) and I can't find much about it and high-bandwidth
> links in my searches.
>
> Given that their problems aren't those that fq is trying to solve, I
> wouldn't expect it, but curious to see if anyone has any research on it.
>
> Thanks,
> Aaron
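
To put numbers on the fragility Aaron describes: the classic Mathis et al.
(1997) ceiling for a single long-lived Reno-style flow is
rate <= (MSS/RTT) * (C/sqrt(p)), with C ~= 1.22. It's a rough model (modern
stacks with CUBIC, SACK, etc. do better), but it shows why a "tiny" error
rate matters so much at 100ms RTTs. A quick back-of-the-envelope in Python:

import math

def mathis_ceiling_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. upper bound on one Reno-style TCP flow:
    rate <= (MSS / RTT) * (C / sqrt(p)), converted to bits/s."""
    return 8 * mss_bytes / rtt_s * c / math.sqrt(loss_rate)

# Transcontinental path: 1460-byte MSS, 100 ms RTT.
for p in (1e-4, 1e-5, 1e-6, 1e-7):
    rate = mathis_ceiling_bps(1460, 0.100, p)
    print(f"loss {p:.0e}: ceiling ~{rate / 1e6:,.0f} Mbit/s")

Even at a one-in-ten-million loss rate the ceiling comes out under
500 Mbit/s, so holding >1Gbps on a single 100ms flow really does require a
near-zero error rate (or different congestion control entirely).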
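
To make the parallel-connection point concrete, here is a minimal sketch
(hypothetical URL, plain HTTP range requests; real tools such as GridFTP,
bbcp, or aria2 are far more careful about this) that pulls one large file
over several independent TCP connections, so a loss event halves the window
of a single stream rather than throttling the whole transfer:

import concurrent.futures
import urllib.request

URL = "http://example.org/big.dat"  # placeholder
STREAMS = 8                         # number of parallel TCP connections

def fetch_range(start, end):
    """Fetch bytes [start, end] of URL over its own TCP connection."""
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_fetch(total_size):
    """Split [0, total_size) into STREAMS ranges, fetch them concurrently."""
    chunk = total_size // STREAMS
    ranges = [(i * chunk,
               total_size - 1 if i == STREAMS - 1 else (i + 1) * chunk - 1)
              for i in range(STREAMS)]
    buf = bytearray(total_size)
    with concurrent.futures.ThreadPoolExecutor(STREAMS) as pool:
        for start, data in pool.map(lambda r: fetch_range(*r), ranges):
            buf[start:start + len(data)] = data
    return bytes(buf)

(The server has to honor Range requests, and total_size would normally come
from a HEAD request's Content-Length.)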
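
And for anyone wanting to try the experiment Aaron asks about: on a recent
Linux, fq_codel goes onto an interface with a single tc command. Wrapped in
Python here only to match the other sketches; "eth0" is a placeholder, and
you would normally just run the tc command directly:

import subprocess

# Equivalent to: tc qdisc replace dev eth0 root fq_codel
subprocess.run(
    ["tc", "qdisc", "replace", "dev", "eth0", "root", "fq_codel"],
    check=True,
)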