Hello,
thanks for the quick feedback! It's not even TCP, which is what is confusing me.
I attached two images. In this run I am literally just pushing
UDP packets at 10 Mbit/s. The first run is on kernel 4.18:
you can see that I am pushing in 10 Mbit/s on two interfaces and
emitting 10 Mbit/s on eth3.
Kernel 4.19, on the other hand, looks like this:
an optimal distribution of 5/5 Mbit/s on each interface. I am quite confused.
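For reference, the load itself is nothing exotic and can be reproduced with
something along the lines of the commands below (iperf3 and the 10.0.0.3
address are just illustrations here, not necessarily what my scripts actually use):

  # Receiver behind eth3 (hypothetical address 10.0.0.3):
  iperf3 -s

  # Each of the two senders pushes a plain 10 Mbit/s UDP stream:
  iperf3 -c 10.0.0.3 -u -b 10M -t 60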
I was hoping you guys might know, but it seems to be unrelated to cake or the bufferbloat changes. I might just ask the netdev guys.
Fabian
> On Thu, 22 Nov 2018 12:28:36 -0800 Fabian Ruffy <fruffy@cs.ubc.ca> wrote:
>> Hello, this is a somewhat esoteric question. I am trying to actually force
>> bufferbloat in an emulation setup I am using. I set up a dumbbell topology
>> and push traffic through it, causing congestion at the central link. I use
>> this setup to compare congestion avoidance algorithms such as DCTCP to
>> other solutions. This has worked nicely with the 4.18 kernel. However,
>> after upgrading to 4.19 I cannot reproduce bufferbloat anymore. The traffic
>> (even UDP packets) is perfectly rate limited and I never see any congestion
>> happening. This is great, but in practice it prevents me from prototyping
>> algorithms.
>>
>> My interface configuration for bottlenecked links is:
>>
>> qdisc tbf 5: dev OBcbnsw1-eth2 root refcnt 2 rate 10Mbit burst 15000b lat 12.0ms
>>  Sent 6042 bytes 51 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>> qdisc netem 10: dev OBcbnsw1-eth2 parent 5:1 limit 500
>>  Sent 6042 bytes 51 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>
>> I have the suspicion that it is related to the CAKE changes in the 4.19
>> kernel, but I am not exactly sure. I am not using tc cake at all. Do you
>> maybe know what could cause this behavior? Apologies if this is the wrong
>> mailing list.
>
> More likely it is a combination of TCP small queues and pacing support. To
> emulate a network you need to have an intermediate box, otherwise the local
> feedback in TCP will defeat what you are trying to do.
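(For completeness, the bottleneck configuration in the qdisc dump above should
correspond roughly to tc invocations like the two below. They are reconstructed
from the dump rather than copied from my scripts, so treat the exact parameters
as illustrative.)

  # 10 Mbit/s token bucket shaper as the root qdisc of the bottleneck port:
  tc qdisc add dev OBcbnsw1-eth2 root handle 5: tbf rate 10mbit burst 15000 latency 12ms
  # netem below it, providing the 500-packet queue where the backlog should build up:
  tc qdisc add dev OBcbnsw1-eth2 parent 5:1 handle 10: netem limit 500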