[Bloat] cake + ipv6
Daniel Sterling
sterling.daniel at gmail.com
Mon Sep 28 02:22:51 EDT 2020
I guess the reason I'm surprised is I'm confused about the following:
For UDP streams that use <1mbit down, should I expect cake in ingress
mode to keep them at low latency even in the face of a constantly
full queue / backlog, when using "besteffort" but also using host
isolation?
By backlog I mean what I see with tc -s qdisc. I assume that's the
total of all individual flow backlogs, right?
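(For reference, the backlog figure I'm watching comes from something
like this -- eth0 here is just a placeholder for the LAN-facing NIC:)

```shell
# Print cake's stats once a second; the "backlog" line is the total
# bytes/packets currently queued across all flows in this qdisc.
# eth0 is a placeholder for the LAN-facing NIC.
watch -n1 "tc -s qdisc show dev eth0"
```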
I'm guessing no, but I'm wondering why not. Let's say we have some
hosts doing downloads as fast as they can with as many connections as
they can, and another separate host doing "light" UDP (xbox FPS
traffic).
So that's, say, 4 hosts constantly keeping their queues' backlogs full
-- each backlog will always hold as many bytes as the rtt setting
allows -- so by default up to 100ms worth of bytes at my bandwidth
setting, per flow, right? Or is that per host?
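(Back-of-the-envelope, assuming the backlog ceiling is roughly rate x
rtt -- my assumption, not something I've confirmed against cake's
actual AQM behaviour:)

```shell
# Rough bandwidth-delay arithmetic (illustrative only): bytes that
# fit in one rtt at a given rate.
# 40mbit over 100ms -> 500000 bytes (~500KB) of queued data.
rate_mbit=40
rtt_ms=100
echo $(( rate_mbit * 1000000 / 8 * rtt_ms / 1000 ))
```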
And then another host (the xbox) that will have a constant flow that
doesn't really respond to shaping hints -- it's going to have a steady
state of packets that it wants to receive and send no matter what. It
might go from "high" to "low" update resolution (e.g. 256kbit to
128kbit), but that's about it. It will always want about 256kbit down
and 128kbit up with v4 UDP.
Normally that stream will have an rtt of < 50ms. Sometimes, e.g.
in-between rounds of the same game (thus the same UDP flow), the
server might let the rtt spike to 100+ ms since nothing much needs to
be sent between rounds.
But once the new round starts, we'll want low latency again.
Is it at all possible that cake, seeing the UDP stream is no longer
demanding low latency (in-between rounds), decides it can let the
flow's rtt stay at 100+ms per its rtt setting, even after the new
round starts and the xbox wants low latency again?
That is, since every host wants some traffic, and most if not all the
queues / backlogs will always be filled, is it possible that once a
flow allows its rtt to rise, cake won't let it back down again until
there's a lull?
As I said, I solved this by giving xbox traffic absolute first
priority with the "prio" qdisc. Obviously this means the xbox traffic
can starve everything else given a malicious flow, but that's not
likely to happen and if it does, I will notice.
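As a sketch, the prio arrangement looks roughly like this -- the
interface name, xbox address, and rate are placeholders, not my exact
values:

```shell
# Sketch only -- device, address, and rate are placeholders.
DEV=eth0            # LAN-facing NIC
XBOX=192.168.1.50   # xbox's LAN address

# Two-band prio root: band 1 (1:1) is dequeued strictly first;
# by default everything maps to band 2 (1:2), which feeds cake.
tc qdisc replace dev $DEV root handle 1: prio bands 2 \
    priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev $DEV parent 1:2 handle 20: cake bandwidth 40Mbit \
    besteffort dual-dsthost ingress ethernet
# Steer traffic headed to the xbox into the strict-priority band
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 \
    match ip dst $XBOX/32 flowid 1:1
```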
Does my theory about cake having pathological behaviour for gaming
flows while the backlog is full hold any water?
Thanks,
Dan
On Wed, Sep 23, 2020 at 9:07 PM Daniel Sterling
<sterling.daniel at gmail.com> wrote:
>
> On Wed, Sep 23, 2020 at 2:13 PM Jonathan Morton <chromatix99 at gmail.com> wrote:
> > It fits my mental model, yes, though obviously the ideal would be to recognise that the xbox is a singular machine. Are you seeing a larger disparity than that? If so, is it even larger than four connections would justify without host-fairness?
>
> Thanks Jonathan,
>
> Unfortunately I fear I've been running an improperly configured cake
> all this time -- leave it to me to take a piece of cake and mess it up
> :) I should say, even with it being possibly misconfigured, it has
> still worked amazingly well for me! I have no end of gratitude for
> y'alls work --
>
> So taking your advice, with my cake properly configured, the answer is
> no -- it looks like host-fairness is working correctly -- with a
> notable exception -- read on for details...
>
> I was using these probably wrong settings:
>
> WAN: besteffort bandwidth $UPBANDWIDTH internet nat egress ethernet
> LAN: besteffort bandwidth $BANDWIDTH internet nat ingress ethernet
>
> So, I was using triple-isolate instead of the correct src/dest
> isolation options per NIC, and I was using "nat" on the LAN side,
> which may have been further breaking host isolation as per the openwrt
> wiki:
>
> "don't add the nat option on LAN interfaces (this option should only
> be used in the WAN interface) or Per-Host Isolation stops working" -
> https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm-details
>
> I changed my configuration to:
>
> $WAN handle 1: root cake besteffort bandwidth $UPBANDWIDTH internet
> nat egress dual-srchost ethernet
> $LAN handle 1: root cake besteffort bandwidth $BANDWIDTH internet
> ingress dual-dsthost ethernet
>
> So:
>
> root at OpenWrt:~# tc -s qdisc | grep cake
> qdisc cake 1: dev eth1 root refcnt 2 bandwidth 20Mbit besteffort
> dual-srchost nat nowash no-ack-filter split-gso rtt 100.0ms noatm
> overhead 38 mpu 84
> qdisc cake 1: dev eth0 root refcnt 2 bandwidth 40Mbit besteffort
> dual-dsthost nonat nowash ingress no-ack-filter split-gso rtt 100.0ms
> noatm overhead 38 mpu 84
>
> WAN:
> * nat option enabled
> * egress option (default) enabled
> * dual-srchost
>
> LAN:
> * nat option disabled (default)
> * ingress option enabled
> * dual-dsthost
>
> This worked well, but I ran into a bizarre issue:
>
> My xbox FPS gaming UDP stream would start at a low latency (even while
> other systems were doing bulk downloads), but after some time, the
> latency went from a steady 30ms (+/- <10ms of jitter) to over 100ms!
> 100ms is, of course, notably the default cake rtt parameter. Resetting
> cake (tc del and then tc add again) fixed this; the same stream's
> latency went back down to about 30ms.
>
> This gaming stream uses well under 512kbps down and less than half
> that up, normally.
>
> This is worrisome! My use-case may be an edge case, as I have AT&T
> gigabit fiber but I limit it to 40mbit with cake, since empirically
> that's the highest throughput I've found that I can use while also
> maintaining the low jitter / latency over wifi. So certainly the WAN
> NIC is getting a lot of traffic that cake on the LAN NIC is throwing
> away. That is, there is basically always a large number of bytes in
> LAN cake's backlog.
>
> For now I've solved this by using the "prio" tc qdisc such that xbox
> traffic bypasses cake, and all other traffic goes thru cake.
>
> Is this a known issue at all? I was quite surprised to see this
> behaviour. Let me know if I can help debug this. It likely only
> happens when multiple other hosts are doing bulk downloads / video
> streaming.
>
> Thanks,
> Dan