Thanks Max and Stephen,
>> From my experience (experimented with it last year), it still behaves
>> weirdly. You can use htb for the shaping and you can create delay using
>> netem by using it on another (virtual) host on a link that does not have
>> any other qdiscs and where the link is not the bottleneck.
Yes, I did some quick tests and saw that this works:
> sudo tc qdisc add dev eth0 root handle 2: tbf rate 200Mbit burst 4542 limit 500000
> sudo tc qdisc add dev eth0 parent 2: handle 3: fq_codel
but this does not work:
> sudo tc qdisc add dev eth0 root handle 1: netem delay 20ms limit 10000000
> sudo tc qdisc add dev eth0 parent 1: handle 2: tbf rate 200Mbit burst 4542 limit 500000
> sudo tc qdisc add dev eth0 parent 2: handle 3: fq_codel
Adding netem delay to a separate host or adding it to the ingress qdisc
(as described by Dave) does the trick.
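For reference, the ingress-side variant is usually done by redirecting incoming traffic to an ifb device and putting netem on that device, so nothing else is stacked under it. Roughly like this (only a sketch; interface names and parameters are assumptions, not re-tested commands):
> sudo modprobe ifb numifbs=1
> sudo ip link set dev ifb0 up
> # redirect all traffic arriving on eth0 to ifb0
> sudo tc qdisc add dev eth0 handle ffff: ingress
> sudo tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0
> # delay lives alone on ifb0
> sudo tc qdisc add dev ifb0 root netem delay 20ms limit 10000000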
> Yes, netem has expectations about how the inner qdisc behaves,
> and other qdiscs used as inner have expectations about how/when packets
> are sent. There is a mismatch; not sure if it is fixable within the
> architecture of how Linux queue disciplines operate.
For the sake of completeness, here's the quote from the netem man page:
> Combining netem with other qdisc is possible but may not always work because netem use skb control block to set delays.
:-)
>
> The best way is to use netem on an intermediate hop and not expect
> it to work perfectly when used on an endpoint. The same is true of Dummynet
> and other network emulators; they are used as man-in-the-middle systems.
Still, I guess it can be tricky if netem delay + tbf shaping + fq_codel
are all to be used together.
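One split that should avoid the stacking problem is to keep netem alone on the ingress/ifb side and put tbf + fq_codel on the egress interface of the same intermediate box, e.g. (again only an untested sketch, with ifb0 set up as above and eth1 assumed to be the egress interface towards the receiver):
> sudo tc qdisc add dev ifb0 root netem delay 20ms limit 10000000
> sudo tc qdisc add dev eth1 root handle 2: tbf rate 200Mbit burst 4542 limit 500000
> sudo tc qdisc add dev eth1 parent 2: handle 3: fq_codel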
This paper mentions that the order tbf/netem or netem/tbf matters
(section 4.1.1), but does not mention fq_codel:
https://atlas.cs.uni-tuebingen.de/~menth/papers/Menth17-Sub-2.pdf