> Erm, what exactly are you trying to show here?

I wanted to generically isolate the 'geometry' to expect when scaling
the TCP advertised-window memory limits. And I must say, it looks a
little askew; might be worth a look at the data structures there.
Parallelizing socket buffers would be rather application-specific, I
guess. Hm ... (A small sketch of the geometry I mean is appended at the
end of this mail.)

> As far as I can tell from
> the last (1-flow) plot, you are saturating the link in all the tests
> (and indeed the BDP for a 1Gbps with 2ms RTT is around 250kb), which
> means that the TCP flow is limited by cwin and not rwin; so I'm not sure
> you are really testing what you say you are.

That's correct; apologies for that one. I was misled by the latency
plots, where I thought I could make out the (reno ~ cubic) saw-tooth,
and the virtualisation was adding a tbf instance on one of the links.
All of that perfectly explains why I wasn't influencing bandwidth at
all.

> I'm not sure why the latency varies with the different tests, though;
> you sure there's not something else varying? Have you tried running
> Flent with the --socket-stats option and taking a look at the actual

Reran/updated everything (including --socket-stats), and the results
are now much more comforting. The performance rifts I perceived
beforehand are gone - 'no low hanging fruit'.

As for the per-link/peering tuning I mentioned, that appears to be
application-specific logic (syscall based) rather than something I'd
amend generically. (A sketch of what I mean is appended below as well.)

Thanks for having had a look.

--
Best regards

Matthias Tafelmeier
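
PS: two quick sketches for reference. First, the BDP arithmetic behind
the 250kb figure above, together with the kind of receive-side
'geometry' I was scaling. The link rate, RTT and triplet values are
assumed examples, not recommendations:

# Minimal sketch (assumed values: 1 Gbit/s link, 2 ms RTT).
LINK_RATE_BPS = 1_000_000_000      # 1 Gbit/s
RTT_S = 0.002                      # 2 ms

# Bandwidth-delay product: the in-flight data needed to keep the link full.
bdp_bytes = int(LINK_RATE_BPS * RTT_S / 8)
print(f"BDP ~ {bdp_bytes} bytes (~{bdp_bytes / 1024:.0f} KiB)")

# The receive-side 'geometry': the net.ipv4.tcp_rmem triplet
# (min, default, max) that caps the auto-tuned receive buffer and hence
# the window the receiver can advertise. Illustrative values only.
tcp_rmem = (4096, 131072, 2 * bdp_bytes)
print("net.ipv4.tcp_rmem = %d %d %d" % tcp_rmem)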
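
And second, what I mean by application-specific, syscall-based tuning:
sizing the buffers per socket via setsockopt() instead of touching the
global sysctls. A rough sketch only; the buffer size is a made-up
placeholder:

import socket

BUF_BYTES = 512 * 1024             # placeholder size, per link/peering

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Explicitly sizing the buffers disables the kernel's autotuning for
# this socket; it has to happen before connect() so the window scale
# can be negotiated accordingly.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)

# The kernel roughly doubles the requested value to account for
# bookkeeping overhead, so reading it back shows ~2 * BUF_BYTES.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

# sock.connect((host, port)) would follow here in a real application.
sock.close()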