General list for discussing Bufferbloat
* [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
@ 2024-05-21 14:56 Linus Lüssing
  2024-05-21 15:25 ` Linus Lüssing
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Linus Lüssing @ 2024-05-21 14:56 UTC (permalink / raw)
  To: bloat

Hi,

In the past, flooding a network with multicast packets was usually
the easiest way to jam a (WiFi) network, as IPv6/UDP multicast, in
contrast to TCP, has no native congestion control. And as far as I
know, broadcast/multicast packets on WiFi use a linear instead of an
exponential backoff time for CSMA/CA, compared to unicast.

I was wondering: have there been any tests with multicast on a
recent OpenWrt with FQ-CoDel or Cake? Do these queueing mechanisms
work as a viable, fair congestion control option for multicast,
too? Has anyone looked at / tested this?

Second question: I'm sending an IPv6 multicast UDP/RTP audio stream
with RaptorQ [0] for forward error correction with gstreamer [1].
I'm using separate IPv6 multicast addresses for the original and
RaptorQ streams, so that a user might join them individually
depending on their network quality. I might also add more streams
for the same data at lower codec bitrates, as well as additional
RaptorQ streams with different settings in the future.
I guess FQ-CoDel/Cake in OpenWrt would see them as individual
sessions, due to the differing IPv6 multicast destination addresses?
Is there anything I could or should do so that they would be seen as
one session, to avoid them getting an unfairly large share of the
available bandwidth? What would be an ideal, automated approach,
maybe snooping SDP [2] from SAP [3] messages? (Does anyone know how
RaptorQ should be encoded in SDP alongside the original RTP payload?)

Regards,
Linus


[0]: https://www.rfc-editor.org/rfc/rfc6330
[1]: $ gst-launch-1.0 \
    rtpbin name=rtp \
    fec-encoders='fec,0="raptorqenc\ mtu=400\ repair-packets=15\ repair-window=500\ symbol-size=192";' \
    pulsesrc device="radio-station-source-01" \
    ! audio/x-raw,channels=2 ! audioconvert ! audioresample \
    ! opusenc ! queue ! rtpopuspay \
    ! rtp.send_rtp_sink_0 rtp.send_rtp_src_0 \
    ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5800 port=5000 ttl-mc=64 rtp.send_fec_src_0_0 \
    ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5802 port=5002 ttl-mc=64 async=false
[2]: https://datatracker.ietf.org/doc/html/rfc8866
[3]: https://datatracker.ietf.org/doc/html/rfc2974

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
  2024-05-21 14:56 [Bloat] Congestion control with FQ-Codel/Cake with Multicast? Linus Lüssing
@ 2024-05-21 15:25 ` Linus Lüssing
  2024-05-23 21:43 ` Holland, Jake
  2024-05-24 13:58 ` Toke Høiland-Jørgensen
  2 siblings, 0 replies; 6+ messages in thread
From: Linus Lüssing @ 2024-05-21 15:25 UTC (permalink / raw)
  To: bloat

And two more things that would be nice, but which are probably not
implemented yet (I would love to hear other people's opinions on
this):

1) On congestion, prefer dropping RaptorQ RTP packets instead of the
original RTP packets? (The original RTP packets should have a
smaller delay and a higher probability of decoding the stream.)

2) Avoid sending a RaptorQ RTP packet from a WiFi AP to an STA in
3-address mode (i.e. where the STA/client is known not to be
bridged) if the corresponding packet(s) from the original RTP stream
were successfully transmitted / had a confirming WiFi ACK.
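A rough way to approximate 1) with existing machinery might be DSCP
marking: put the repair stream into a lower-priority Diffserv class so
a diffserv-aware AQM (e.g. Cake in diffserv3/diffserv4 mode, which maps
CS1/LE into its Bulk tin) sheds it before the original RTP packets. An
untested sketch of setting the traffic class on the repair socket:

```python
import socket

# Sketch (assumption, not from this thread's setup): mark the RaptorQ
# repair stream with DSCP LE (RFC 8622, codepoint 000001) so a
# diffserv-aware AQM drops it before the original RTP stream.
# DSCP occupies the upper six bits of the IPv6 Traffic Class byte.
DSCP_LE = 0x01
tclass = DSCP_LE << 2  # 0x04 on the wire

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, tclass)
```

With gst-launch, the same effect should be achievable by setting the
qos-dscp property on the FEC udpsink (e.g. qos-dscp=1), if I read the
udpsink documentation correctly.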

Regards, Linus



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
  2024-05-21 14:56 [Bloat] Congestion control with FQ-Codel/Cake with Multicast? Linus Lüssing
  2024-05-21 15:25 ` Linus Lüssing
@ 2024-05-23 21:43 ` Holland, Jake
  2024-05-23 23:15   ` Holland, Jake
  2024-05-25 11:15   ` Jonathan Morton
  2024-05-24 13:58 ` Toke Høiland-Jørgensen
  2 siblings, 2 replies; 6+ messages in thread
From: Holland, Jake @ 2024-05-23 21:43 UTC (permalink / raw)
  To: Linus Lüssing, bloat

Hi Linus,

I did do some multicast tests with OpenWRT builds that were recent
at the time, around 2019-2020 IIRC.

I did not look at any FQ/CAKE while doing it.  I wouldn't call
FQ a viable, fair congestion control for this, though I do expect
it would isolate the damage to flows that share the queue with
the multicast stream, which at least helps prevent congestion
from breaking the network.

However, I'd describe that behavior as more like FQ providing
damage control against uncontrolled flows.  While it would be a
helpful mitigation, it's not a solution.

The work I did on the problem was mostly captured in the CBACC
draft (expired since the project was killed and I haven't managed
to take it up in my free time, but I still think it's a promising
approach):
https://www.ietf.org/archive/id/draft-ietf-mboned-cbacc-04.html

You probably need to do the congestion control in the app (see
https://www.rfc-editor.org/rfc/rfc8085.html#section-4.1 for the
general approaches that make sense, and CBACC lists several
specific ones in the references), but CBACC tries to provide a way
you can maintain network safety in the presence of misbehaving
apps that don't do it well (it would also disincentivize apps from
trying to cheat and causing harm to the network, which maybe
also should have been mentioned but I don't think was).
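To make "congestion control in the app" concrete, here is a toy,
receiver-driven sketch in the spirit of RFC 8085 section 4.1 and the
layered schemes CBACC references (all names and thresholds here are
made up for illustration): the receiver measures loss over a window
and joins or leaves multicast layers accordingly.

```python
def layers_to_join(loss_rate: float, layers: list[str]) -> list[str]:
    """Pick which multicast layers to subscribe to, base layer first.

    layers is ordered from most to least essential, e.g.
    ["base-audio", "fec", "hq-audio"]; loss_rate is the fraction of
    packets lost over the last measurement window.
    """
    if loss_rate > 0.10:          # heavy loss: keep the base layer only
        return layers[:1]
    if loss_rate > 0.02:          # moderate loss: shed the top layer
        return layers[:max(1, len(layers) - 1)]
    return layers                 # clean network: join everything
```

A real implementation would translate the result into MLD joins/leaves
and re-evaluate periodically, of course.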

Also worth noting: on the default OpenWRT install I saw at the time,
multicast packets got a layer-2 conversion to unicast (though I believe
that can be turned off if you're dealing with something like a stadium,
where you have a lot of users who really should be getting the packets
as layer-2 broadcast, as opposed to a home or similar with just a few
users sharing the same WiFi and watching the same content).

There's a link in https://www.rfc-editor.org/rfc/rfc9119.html#section-4.6.2
about that, but basically at the Ethernet and WiFi layer, it uses IGMP/MLD
snooping to make the multicast IP packets go to one or more unicast MAC
addresses of the receivers that are joined, so it's actually using unicast as
far as WiFi or Ethernet is concerned, while still being multicast at the IP
layer (everything upstream of the router can still share the bandwidth
utilization, and the app still joins the multicast channel).
(PS: That whole RFC 9119 might be worth a read, for what you're doing.)

I agree with your conclusion that FQ systems would see different
streams as separate queues and each one could be independently
overloaded, which is among the reasons I don't think FQ can be
viewed as a solution here (though as a mitigation for the damage
I'd expect it's a good thing to have in place).

To your 2nd question, I don't see snooping SDP as a viable path
for some kind of in-network congestion control either, I think
it'll have to be explicitly exposed in general (hence the CBACC+
DORMS approach proposed).

Anything you want to deploy at any scale is going to have to have
the SDP encrypted, I expect, so I would consider SDP-snooping a
non-starter for something like that.  In an enterprise or a closed
network, I guess it could maybe work, but there you can also just
control the streams by their IPs so you still probably don't need
SDP snooping (and a lot of enterprises wouldn't want unencrypted
SDP either--at least 80% of the security people I've met will veto it,
and at least 60% of those will have good reasons at top of mind).

I love RaptorQ, and it has performed amazingly well for me, so it
sounds like you're on the best known path for loss recovery. But I
don't know how best to put it into SDP; I never did much work on that
part. My general advice is to decide what level of loss recovery you
want to provide by default and just send FEC for that to everyone
(at around 1% or so), plus maybe a separate channel supporting a
higher threshold (maybe another 3-5%) for networks that are
persistently noisy, with anything above that provided via an HTTP
endpoint for individuals who need more once in a while. And make
those tunable without restarting the stream. And write up your
findings :)
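For sizing those tiers, a back-of-the-envelope sketch (my own
arithmetic, not from any spec): if each repair window covers K source
packets and overhead is defined as repair packets per source packet,
then:

```python
import math

def repair_packets_for(k_source: int, target_overhead: float) -> int:
    """Repair packets per window for a given overhead fraction.

    Overhead here means the repair/source packet ratio; RaptorQ can
    then recover roughly that fraction of lost packets per window.
    The round() guards against float artifacts such as 0.05 * 100
    evaluating to slightly more than 5.0.
    """
    return math.ceil(round(target_overhead * k_source, 6))
```

So with 100 source packets per window, the ~1% default tier needs 1
repair packet per window and a 5% tier needs 5.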

Best of luck and I hope that's helpful.

-Jake

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
  2024-05-23 21:43 ` Holland, Jake
@ 2024-05-23 23:15   ` Holland, Jake
  2024-05-25 11:15   ` Jonathan Morton
  1 sibling, 0 replies; 6+ messages in thread
From: Holland, Jake @ 2024-05-23 23:15 UTC (permalink / raw)
  To: Linus Lüssing, bloat

Hi Linus,

One correction on that last message:

After sending and re-reading, I belatedly realized it's completely
viable to snoop SDP over SAP if you want to.  So I guess it's not a
non-starter, as long as you don't need to handle other kinds of
multicast traffic in addition to the ones that can announce SDP on the
SAP channels you can watch.

I'm not sure why it wouldn't be better to make a separate service that
coordinates the channels in use, since the SAP channels will probably
have to do that anyway, but on reflection I think snooping SDP is
probably more viable than I gave it credit for.  My caution would be
that in a network where multicast can get delivered and people are
using it, someone might start doing SDP outside the SAP channels at
the app level, or other kinds of multicast traffic.  So I'd imagine it
can only lead to a partial solution, but it could be useful.
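For what it's worth, the snooping itself is simple enough; a minimal,
untested sketch of extracting the SDP body from a SAP packet
(RFC 2974), ignoring the encrypted and compressed cases:

```python
def parse_sap(packet: bytes) -> str:
    """Extract the SDP body from a plain SAP announcement (RFC 2974).

    Handles only uncompressed, unencrypted announcements.  SAP
    listeners receive these on port 9875, e.g. on ff0x::2:7ffe for
    IPv6 or 224.2.127.254 for IPv4.
    """
    flags = packet[0]
    auth_len = packet[1]              # authentication data, 32-bit words
    ipv6_origin = bool(flags & 0x10)  # 'A' bit: originating address type
    offset = 4 + (16 if ipv6_origin else 4) + auth_len * 4
    body = packet[offset:]
    # Skip the optional NUL-terminated MIME payload type, if present
    # (e.g. b"application/sdp\x00"); SAPv0 payloads start with "v=0".
    if not body.startswith(b"v=0"):
        body = body[body.index(b"\x00") + 1:]
    return body.decode()
```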

Apologies for my confusion.

Best regards,
Jake



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
  2024-05-21 14:56 [Bloat] Congestion control with FQ-Codel/Cake with Multicast? Linus Lüssing
  2024-05-21 15:25 ` Linus Lüssing
  2024-05-23 21:43 ` Holland, Jake
@ 2024-05-24 13:58 ` Toke Høiland-Jørgensen
  2 siblings, 0 replies; 6+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-05-24 13:58 UTC (permalink / raw)
  To: Linus Lüssing, bloat

Linus Lüssing via Bloat <bloat@lists.bufferbloat.net> writes:

> I was wondering, have there been any tests with multicast on a
> recent OpenWrt with FQ-Codel or Cake, do these queueing machanisms
> work as a viable, fair congestion control option for multicast,
> too? Has anyone looked at / tested this?

FQ-CoDel or CAKE on the interface probably wouldn't help much (there's a
reason we put it into the WiFi stack instead of the qdisc layer).

However, AQL in the WiFi stack could control the multicast/broadcast
queue. It does not currently do so, but Felix sent an RFC patch to
enable this back in February, so it may turn up at some point in the
future in a WiFi stack near you:

https://lore.kernel.org/r/20240209184730.69589-1-nbd@nbd.name

-Toke

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?
  2024-05-23 21:43 ` Holland, Jake
  2024-05-23 23:15   ` Holland, Jake
@ 2024-05-25 11:15   ` Jonathan Morton
  1 sibling, 0 replies; 6+ messages in thread
From: Jonathan Morton @ 2024-05-25 11:15 UTC (permalink / raw)
  To: Holland, Jake; +Cc: Linus Lüssing, bloat

> On 24 May, 2024, at 12:43 am, Holland, Jake via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> I agree with your conclusion that FQ systems would see different
> streams as separate queues and each one could be independently
> overloaded, which is among the reasons I don't think FQ can be
> viewed as a solution here (though as a mitigation for the damage
> I'd expect it's a good thing to have in place).

Cake has the facility to override the built-in flow and tin classification using custom filter rules.  Look in the tc-cake manpage under "OVERRIDING CLASSIFICATION".  This could be applicable to multicast traffic in two ways:

1: Assign all traffic with a multicast-range IP address to the Video tin.  Since Cake schedules by tin first, and only then by host and/or flow, this should successfully keep multicast traffic from obliterating best-effort and Voice tin traffic.

2: Assign all multicast traffic to a single flow ID (e.g. zero), without reassigning the tin.  This will cause it all to be treated as a single flow, giving the FQ mechanisms something to bite on.
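For option 1, an untested sketch with tc (the tin numbering is my
assumption: diffserv4 tins appear in Cake's stats as Bulk, Best Effort,
Video, Voice, so Video should be minor number 3; verify against
tc-cake(8) before relying on it):

```shell
# Install Cake with four diffserv tins (bandwidth value is arbitrary here)
tc qdisc replace dev wlan0 root handle 1: cake diffserv4 bandwidth 40Mbit
# Steer all IPv6 multicast (ff00::/8) into the Video tin (minor 3)
tc filter add dev wlan0 parent 1: protocol ipv6 prio 1 \
    u32 match ip6 dst ff00::/8 flowid 1:3
```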

 - Jonathan Morton

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2024-05-25 11:15 UTC | newest]
