[Bloat] Congestion control with FQ-Codel/Cake with Multicast?

Holland, Jake jholland at akamai.com
Thu May 23 17:43:42 EDT 2024


Hi Linus,

I did do some multicast tests with an OpenWRT build that was
recent at the time, around 2019-2020 IIRC.

I did not look at any FQ/CAKE while doing it.  I wouldn't call
FQ a viable, fair congestion control for this, though I do expect
it would confine the damage to the flows that share a queue with
the multicast stream, which at least helps keep congestion
from breaking the network.

However, I'd describe that behavior as more like FQ providing
damage control against uncontrolled flows.  While it would be a
helpful mitigation, it's not a solution.
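
For concreteness on the OpenWRT side: the usual way to get
fq_codel or CAKE in place there is the sqm-scripts package.  A
rough sketch from memory (the interface name and rates are
placeholders you'd set for your own link):

# Enable SQM with CAKE on the WAN device; values are illustrative.
opkg update && opkg install sqm-scripts
uci set sqm.@queue[0].interface='eth1'    # your WAN device
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].download='50000'    # kbit/s, a bit below link rate
uci set sqm.@queue[0].upload='10000'      # kbit/s
uci set sqm.@queue[0].enabled='1'
uci commit sqm && /etc/init.d/sqm restart

Again, that only buys you the damage control described above, not
congestion control for the multicast itself.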

The work I did on the problem was mostly captured in the CBACC
draft (expired since the project was killed and I haven't managed
to take it up in my free time, but I still think it's a promising
approach):
https://www.ietf.org/archive/id/draft-ietf-mboned-cbacc-04.html

You probably need to do the congestion control in the app (see
https://www.rfc-editor.org/rfc/rfc8085.html#section-4.1 for the
general approaches that make sense; CBACC lists several specific
ones in its references).  CBACC then tries to provide a way to
maintain network safety in the presence of misbehaving apps that
don't do it well (it would also disincentivize apps from trying
to cheat and harming the network, which maybe should have been
mentioned in the draft, but I don't think it was).

Also worth noting: on the default OpenWRT install I saw at the
time, it did a layer-2 conversion to unicast for multicast packets.
(Though I believe that can be turned off if you're talking about
something like a stadium, where you have a lot of users who really
should be getting the packets as layer-2 broadcast, as opposed to a
home or something with just a few users sharing the same wifi and
watching the same content.)
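
If you want to poke at that knob, I believe it's exposed per
wifi-iface in /etc/config/wireless, roughly like this (the option
name is from memory, so verify it against your build):

config wifi-iface 'default_radio0'
        option device 'radio0'
        option mode 'ap'
        option network 'lan'
        # '1' converts multicast to per-station unicast frames,
        # '0' keeps true layer-2 multicast/broadcast
        option multicast_to_unicast '0'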

There's a link in https://www.rfc-editor.org/rfc/rfc9119.html#section-4.6.2
about that, but basically, at the Ethernet and WiFi layer it uses IGMP/MLD
snooping to send the multicast IP packets to the unicast MAC addresses
of the receivers that have joined.  So it's actually unicast as far as
WiFi or Ethernet is concerned, while still being multicast at the IP
layer (everything upstream of the router can still share the bandwidth
utilization, and the app still joins the multicast channel).
(PS: The whole of RFC 9119 might be worth a read, for what you're doing.)

I agree with your conclusion that FQ systems would see the different
streams as separate queues, each of which could be independently
overloaded, which is among the reasons I don't think FQ can be
viewed as a solution here (though as a mitigation for the damage,
I'd expect it's a good thing to have in place).
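
If you did want a set of related streams treated as one aggregate,
I think the closest you can get today is to classify them into a
single capped class yourself.  An untested sketch with Linux tc on
the router, where the device, rates, and address match are
placeholders:

# Pin all the related multicast streams into one HTB class with its
# own fq_codel instance, so they share a single bandwidth cap, and
# let everything else default to a separate class.
tc qdisc add dev wlan0 root handle 1: htb default 99
tc class add dev wlan0 parent 1: classid 1:10 htb rate 3mbit ceil 3mbit
tc qdisc add dev wlan0 parent 1:10 fq_codel
tc class add dev wlan0 parent 1: classid 1:99 htb rate 50mbit
tc qdisc add dev wlan0 parent 1:99 fq_codel
tc filter add dev wlan0 parent 1: protocol ipv6 prio 1 u32 \
    match ip6 dst ff7e::/16 flowid 1:10

That caps the group, but of course it still doesn't adapt the
streams to the bandwidth that's actually available.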

To your 2nd question, I don't see snooping SDP as a viable path
for some kind of in-network congestion control either.  I think
it'll have to be explicitly exposed in general (hence the proposed
CBACC+DORMS approach).

Anything you want to deploy at any scale is going to need the
SDP encrypted, I expect, so I would consider SDP snooping a
non-starter for something like that.  In an enterprise or a closed
network, I guess it could maybe work, but there you can also just
control the streams by their IPs, so you still probably don't need
SDP snooping (and a lot of enterprises wouldn't want unencrypted
SDP either--at least 80% of the security people I've met will veto it,
and at least 60% of those will have good reasons top of mind).

I love RaptorQ, and it has performed amazingly well for me, so it
sounds like you're on the best known path for loss recovery.  I
don't know how best to put it into SDP, though; I never did much
work on that part.  My general advice is to decide what level of
loss recovery you want to provide by default and just send FEC for
that to everyone (at 1% or so), plus maybe a separate channel
that can support a higher threshold (maybe another 3-5%) for
networks that are persistently noisy, with anything higher provided
via an HTTP endpoint for individuals who need more than that once
in a while.  Make those tunable without restarting the stream.
And write up your findings :)
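
To make those tiers concrete (the property names are the ones from
your pipeline; the numbers are made up and would need tuning
against your packet rate and measured loss):

# Back-of-envelope: repair bytes per window ~= repair-packets * mtu,
# so size repair-packets a bit above loss-rate * source packets per
# repair window.  Both values below are hypothetical.
# Default tier, sent to everyone (~1% loss target):
FEC_BASE='fec,0="raptorqenc\ mtu=400\ repair-packets=2\ repair-window=500\ symbol-size=192";'
# Separate multicast group for persistently noisy networks (~3-5%):
FEC_NOISY='fec,0="raptorqenc\ mtu=400\ repair-packets=8\ repair-window=500\ symbol-size=192";'

Each tier would run as its own repair stream on its own multicast
address, like you're already doing with the second udpsink.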

Best of luck and I hope that's helpful.

-Jake

On 5/21/24, 7:56 AM, "Bloat on behalf of Linus Lüssing via Bloat" <bloat at lists.bufferbloat.net> wrote:


Hi,


In the past, flooding a network with multicast packets
was usually the easiest way to jam a (wifi) network,
as IPv6/UDP multicast, in contrast to TCP, has no native
congestion control.  And as far as I know, broadcast/multicast
packets on wifi use a linear instead of an exponential backoff
time for CSMA-CA, compared to unicast.


I was wondering, have there been any tests with multicast on a
recent OpenWrt with FQ-CoDel or Cake?  Do these queueing mechanisms
work as a viable, fair congestion control option for multicast,
too?  Has anyone looked at / tested this?


Second question: I'm sending an IPv6 multicast
UDP/RTP audio stream with RaptorQ [0] for forward error correction
with gstreamer [1].  I'm using separate IPv6 multicast addresses
for the original and RaptorQ streams, so that a user might join
them individually depending on their network quality.  I might also
add more streams for the same data at lower codec bitrates, as well
as additional RaptorQ streams with different settings, in the future.
I guess FQ-Codel/Cake in OpenWrt would see them as individual
sessions, due to the differing IPv6 multicast destination addresses?
Is there anything I could/should do so that they would be seen as
one session, to avoid them getting an unfairly higher share of the
available bandwidth?  What would be an ideal, automated approach --
snooping SDP [2] from SAP [3] messages maybe?  (Does anyone know how
RaptorQ should be encoded in SDP alongside the original RTP payload?)


Regards,
Linus




[0]: https://www.rfc-editor.org/rfc/rfc6330 
[1]: $ gst-launch-1.0 \
rtpbin name=rtp \
fec-encoders='fec,0="raptorqenc\ mtu=400\ repair-packets=15\ repair-window=500\ symbol-size=192";' \
pulsesrc device="radio-station-source-01" \
! audio/x-raw,channels=2 ! audioconvert ! audioresample \
! opusenc ! queue ! rtpopuspay \
! rtp.send_rtp_sink_0 rtp.send_rtp_src_0 \
! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5800 port=5000 ttl-mc=64 rtp.send_fec_src_0_0 \
! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5802 port=5002 ttl-mc=64 async=false
[2]: https://datatracker.ietf.org/doc/html/rfc8866
[3]: https://datatracker.ietf.org/doc/html/rfc2974




