Date: Tue, 21 May 2024 17:25:06 +0200
From: Linus Lüssing
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Congestion control with FQ-Codel/Cake with Multicast?

And two more things that would be nice, but which are probably not
implemented yet (I would love to hear other people's opinions on this):

1) On congestion, prefer dropping the RaptorQ repair packets instead of
the original RTP packets? (The original RTP packets should have a
smaller delay and a higher probability of decoding the stream.) A
rough approximation with DSCP marking is sketched below.

2) Avoid sending a RaptorQ repair packet from a WiFi AP to an STA in
3-address mode (that is, where the STA/client is known not to be
bridged) if the corresponding packet(s) from the original RTP stream
were transmitted successfully / confirmed by a WiFi ACK.
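For 1), until something FEC-aware exists in the qdiscs, one
approximation might be to mark only the repair stream with a
lower-priority DSCP, so that a bottleneck running Cake in
diffserv3/diffserv4 mode sorts it into the Bulk tin and sheds it first
under load. A rough, untested sketch using udpsink's qos-dscp property
(CS1 = decimal 8), changing only the FEC udpsink of the pipeline
quoted below:

  $ gst-launch-1.0 \
      rtpbin name=rtp \
      fec-encoders='fec,0="raptorqenc\ mtu=400\ repair-packets=15\ repair-window=500\ symbol-size=192";' \
      pulsesrc device="radio-station-source-01" \
      ! audio/x-raw,channels=2 ! audioconvert ! audioresample \
      ! opusenc ! queue ! rtpopuspay \
      ! rtp.send_rtp_sink_0 rtp.send_rtp_src_0 \
      ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5800 port=5000 ttl-mc=64 \
      rtp.send_fec_src_0_0 \
      ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5802 port=5002 ttl-mc=64 \
          async=false qos-dscp=8

Cake does not strictly drop Bulk first, but under contention the Bulk
tin gets the smallest bandwidth share, which is roughly the bias
intended here.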
Regards, Linus

On Tue, May 21, 2024 at 04:56:18PM +0200, Linus Lüssing via Bloat wrote:
> Hi,
>
> In the past, flooding a network with multicast packets was usually
> the easiest way to jam a (WiFi) network, as IPv6/UDP multicast, in
> contrast to TCP, has no native congestion control. And
> broadcast/multicast packets on WiFi use a linear instead of an
> exponential backoff time for CSMA-CA, compared to unicast, as far as
> I know.
>
> I was wondering: have there been any tests with multicast on a recent
> OpenWrt with FQ-Codel or Cake? Do these queueing mechanisms work as a
> viable, fair congestion control option for multicast, too? Has anyone
> looked at / tested this?
>
> Second question: I'm sending an IPv6 multicast UDP/RTP audio stream
> with RaptorQ [0] for forward error correction with GStreamer [1]. I'm
> using separate IPv6 multicast addresses for the original and RaptorQ
> streams, so that a user might join them individually depending on
> their network quality. I might also add more streams for the same
> data at lower codec bitrates, as well as additional RaptorQ streams
> with different settings in the future. I guess FQ-Codel/Cake in
> OpenWrt would see them as individual sessions, due to the differing
> IPv6 multicast destination addresses? Is there anything I could/should
> do so that they would be seen as one session, to avoid them getting
> an unfairly higher share of the available bandwidth? What would be an
> ideal, automated approach - maybe snooping SDP [2] from SAP [3]
> messages? (Does anyone know how RaptorQ should be encoded in SDP
> alongside the original RTP payload?)
>
> Regards, Linus
>
>
> [0]: https://www.rfc-editor.org/rfc/rfc6330
> [1]: $ gst-launch-1.0 \
>        rtpbin name=rtp \
>        fec-encoders='fec,0="raptorqenc\ mtu=400\ repair-packets=15\ repair-window=500\ symbol-size=192";' \
>        pulsesrc device="radio-station-source-01" \
>        ! audio/x-raw,channels=2 ! audioconvert ! audioresample \
>        ! opusenc ! queue ! rtpopuspay \
>        ! rtp.send_rtp_sink_0 rtp.send_rtp_src_0 \
>        ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5800 port=5000 ttl-mc=64 \
>        rtp.send_fec_src_0_0 \
>        ! udpsink host=ff7e:240:2001:67c:2d50:0:545f:5802 port=5002 ttl-mc=64 async=false
> [2]: https://datatracker.ietf.org/doc/html/rfc8866
> [3]: https://datatracker.ietf.org/doc/html/rfc2974
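P.S.: On my quoted question about FQ-Codel seeing the original and
repair streams as separate flows: where fq_codel runs as a regular
qdisc (not the fq_codel built into mac80211, which takes no tc
filters), any attached tc filter is consulted before the internal
hash, and the minor number of the returned classid selects the flow
bucket directly. So an untested sketch, with eth0 only as an example
interface, could pin both multicast destinations to one bucket:

  # fq_codel as root qdisc; the filters below override its flow hash
  tc qdisc replace dev eth0 root handle 1: fq_codel
  # steer both the original and the RaptorQ destination address into
  # flow bucket 1 so they share a single fair-queueing slot
  tc filter add dev eth0 parent 1: protocol ipv6 prio 1 u32 \
      match ip6 dst ff7e:240:2001:67c:2d50:0:545f:5800 flowid 1:1
  tc filter add dev eth0 parent 1: protocol ipv6 prio 1 u32 \
      match ip6 dst ff7e:240:2001:67c:2d50:0:545f:5802 flowid 1:1

Whether Cake honors filter results in the same way I have not
verified.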
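And on how RaptorQ should be encoded in SDP: RFC 6682 defines an RTP
payload format for Raptor codes (media type application/raptorfec,
where raptor-scheme-id=6 should mean RaptorQ per the RFC 6330
registration), and RFC 5956 defines the FEC-FR grouping that ties the
repair stream to its source stream. So something along these lines,
where the payload types and Kmax are only illustrative, the o= address
is a placeholder, and repair-window is in microseconds:

  v=0
  o=- 0 0 IN IP6 2001:db8::1
  s=Radio stream with RaptorQ FEC
  t=0 0
  a=group:FEC-FR S1 R1
  m=audio 5000 RTP/AVP 96
  c=IN IP6 ff7e:240:2001:67c:2d50:0:545f:5800
  a=rtpmap:96 opus/48000/2
  a=mid:S1
  m=application 5002 RTP/AVP 97
  c=IN IP6 ff7e:240:2001:67c:2d50:0:545f:5802
  a=rtpmap:97 raptorfec/90000
  a=fmtp:97 raptor-scheme-id=6; Kmax=8192; T=192; repair-window=500000
  a=mid:R1

T=192 and repair-window=500000 would match the symbol-size=192 and
repair-window=500 (ms) settings of the raptorqenc pipeline above.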