[Bloat] Best practices for paced TCP on Linux?

Neil Davies neil.davies at pnsol.com
Sat Apr 7 04:54:30 PDT 2012


Hi

Yep - you might well be right. I first fell across this sort of thing helping the guys
with the ATLAS experiment on the LHC several years ago.

The issue, as best as we could capture it (we hit "commercial confidence"
walls inside the network and equipment suppliers), was the following.

With each "window round trip cycle" the volume of data in flight
was doubling. They had opened the window size up to the level where, between
two critical cycles, the increase in the number of packets in flight was several
hundred - this caused massive burst loss at an intermediate point on the network.
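To make the doubling concrete, here is a small illustrative sketch (not from the original exchange; the initial window and segment size are hypothetical round numbers) showing how slow start's per-RTT doubling turns into a several-hundred-packet jump between consecutive cycles:

```python
# Sketch: TCP slow start doubles the congestion window each round trip,
# so the *increase* in packets in flight between two cycles equals the
# previous window.  MSS and the initial window are assumed values.
MSS = 1500   # bytes per segment (hypothetical)
cwnd = 10    # initial congestion window, in segments (hypothetical)

for rtt in range(1, 11):
    prev = cwnd
    cwnd *= 2                       # slow start: window doubles per RTT
    delta = cwnd - prev             # extra packets injected this cycle
    print(f"RTT {rtt:2d}: cwnd = {cwnd:5d} segments, "
          f"+{delta} packets vs. last cycle "
          f"({cwnd * MSS / 1e6:.1f} MB in flight)")
```

After only five or six round trips the per-cycle increase is already in the hundreds of packets, which is the kind of burst that overwhelms a shallow intermediate buffer.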

The answer was rather simple - calculate the amount of buffering needed to achieve,
say, 99% of the "theoretical" throughput (it took some measurement to establish
exactly what that was) and limit the sender to that.

This eliminated the massive burst (the window had closed) and the system would
approach the true maximum throughput and then stay there.
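The sizing rule above amounts to a bandwidth-delay-product calculation; a minimal sketch, with a hypothetical bottleneck rate and RTT standing in for the measured values:

```python
# Sketch of the sizing rule: cap the sender near the bandwidth-delay
# product (BDP) so it cannot overshoot the path.  The rate, RTT and
# target fraction are assumptions, not figures from the original post.
link_rate_bps = 1e9    # assumed bottleneck rate: 1 Gbit/s
rtt_s = 0.100          # assumed round-trip time: 100 ms
target = 0.99          # aim for 99% of "theoretical" throughput

bdp_bytes = link_rate_bps / 8 * rtt_s    # bytes in flight to fill the pipe
window_cap = int(bdp_bytes * target)     # sender limit

print(f"BDP = {bdp_bytes / 1e6:.1f} MB")
print(f"cap sender at ~{window_cap / 1e6:.2f} MB")
```

With these numbers the pipe holds 12.5 MB, so the sender would be capped at roughly 12.4 MB; in practice the rate and RTT have to be measured, as Neil notes.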

Given the nature of these transfers, this was a practical suggestion - they were
going to use these systems for years, analysing the LHC collisions at remote sites.

Sometimes the right thing to do is to *not* push the system into its unpredictable
region of operation.
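One common way to "limit the sender" along these lines is to cap the socket send buffer, which bounds how much data TCP can keep in flight. A minimal sketch (the cap value is hypothetical, and note that Linux doubles the value passed to SO_SNDBUF and clamps it to net.core.wmem_max):

```python
import socket

# Sketch: bound in-flight data by capping the send buffer.
# 12_375_000 bytes is a hypothetical cap (99% of a 12.5 MB BDP).
# Linux doubles the requested value for bookkeeping overhead and
# clamps it to the net.core.wmem_max sysctl.
cap = 12_375_000

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, cap)

actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"requested {cap}, kernel reports {actual}")
s.close()
```

Setting SO_SNDBUF before connecting also disables the kernel's send-buffer autotuning for that socket, which is exactly the point here: the application, not the autotuner, decides the ceiling.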

Neil


On 6 Apr 2012, at 22:37, Steinar H. Gunderson wrote:

> Hi,
> 
> This is only related to bloat, so bear with me if it's not 100% on-topic;
> I guess the list is about the best place on the Internet to get a reasonable
> answer for this anyway :-)
> 
> Long story short, I have a Linux box (running 3.2.0 or so) with a 10Gbit/sec
> interface, streaming a large amount of video streams to external users,
> at 1Mbit/sec, 3Mbit/sec or 5Mbit/sec (different values). Unfortunately, even
> though there is no congestion in _our_ network (we have 190 Gbit/sec free!),
> some users are complaining that they can't keep the stream up.
> 
> My guess is that this is because at 10Gbit/sec, we are crazy bursty, and
> somewhere along the line, there will be devices doing down conversion without
> enough buffers (for instance, I've seen drop behavior on Cisco 2960-S in a
> very real ISP network on 10->1 Gbit/sec down conversion, and I doubt it's the
> worst offender here).
> 
> Is there anything I can do about this on my end? I looked around for paced
> TCP implementations, but couldn't find anything current. Can I somehow shape
> each TCP stream to 10Mbit/sec or so each with a combination of SFQ and TBF?
> (SFQRED?)
> 
> I'm not very well versed in tc, so anything practical would be very much
> appreciated. Bonus points if we won't have to patch the kernel.
> 
> /* Steinar */
> -- 
> Homepage: http://www.sesse.net/
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
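For the tc question in the quoted message, a hedged sketch of the SFQ-plus-shaper combination asked about - the interface name, rates and class layout are assumptions, not a tested recipe:

```shell
# Sketch only (run as root): shape egress on eth0 (assumed interface)
# with HTB and give flows within the class per-flow fairness via SFQ.
# Rates are hypothetical.  Note this caps the class *aggregate* at
# 10 Mbit/s; a true per-stream cap needs one class (and filter) per
# flow, or a pacing-capable qdisc on newer kernels.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
```

SFQ smooths inter-flow scheduling but does not pace individual streams, so it reduces rather than eliminates burstiness from a 10 Gbit/sec sender.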


