[Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
Dave Taht
dave.taht at gmail.com
Fri Mar 27 15:36:57 EDT 2020
Of interest given some of what you say below, there is a huge
discussion on netdev about how to best implement
hardware offloads for network slicing:
https://www.spinics.net/lists/netdev/msg638836.html
Me, I always rolled my eyes at all the network virtualization stuff
and ran from the room, screaming, given how much I care about low
latency. The udp vs tcp offload split has been nightmare enough. That
said, to this day I lack a clear idea of how any multi-tenant dc
operation really works; I've generally assumed it was policers, and
have deployed sqm (now cake) instead on everything in the cloud that
seemed to need it.
On Fri, Mar 27, 2020 at 12:00 PM David P. Reed <dpreed at deepplum.com> wrote:
>
> Congestion control for real-time video is quite different from that for streaming. Streaming really is dealt with by big enough (multi-second) buffering, and can in principle work great over TCP (if debloated).
>
> UDP congestion control MUST be end-to-end and done in the application layer, which is usually outside the OS kernel. This makes it tricky, because you end up with latency variation due to the OS's process scheduler that is on the order of magnitude of the real-time requirements for air-to-air or light-to-light response (meaning the physical transition from sound or picture to and from the transducer).
>
> This creates a godawful mess when trying to do an app. Whether in WebRTC (peer to peer UDP) or in a Linux userspace app, the scheduler has huge variance in delay.
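>
> [For illustration, a minimal C sketch (not from the original post) of the scheduler variance being described, assuming a 20 ms media tick and an ordinary best-effort thread: it sleeps on an absolute timer and prints how late the scheduler actually wakes it. The period and sample count are arbitrary.]
>
> #define _POSIX_C_SOURCE 200112L
> #include <stdio.h>
> #include <stdint.h>
> #include <time.h>
>
> /* Wake every 20 ms (a typical audio/video frame interval) and report
>    how far past the intended deadline the thread actually ran; that
>    overshoot is the delay variance discussed above. */
> int main(void)
> {
>     struct timespec next;
>     clock_gettime(CLOCK_MONOTONIC, &next);
>
>     for (int i = 0; i < 500; i++) {
>         next.tv_nsec += 20 * 1000 * 1000;          /* 20 ms period */
>         if (next.tv_nsec >= 1000000000L) {
>             next.tv_nsec -= 1000000000L;
>             next.tv_sec++;
>         }
>         clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
>
>         struct timespec now;
>         clock_gettime(CLOCK_MONOTONIC, &now);
>         int64_t late_us = (now.tv_sec - next.tv_sec) * 1000000LL
>                         + (now.tv_nsec - next.tv_nsec) / 1000;
>         printf("tick %3d: woke %lld us late\n", i, (long long)late_us);
>     }
>     return 0;
> }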
>
> Now getting rid of bloat currently requires TCP to respond to congestion signalling. UDP in the kernel doesn't do that, and it doesn't tell userspace much either (you can try to detect packet drops in userspace, but coding that up is quite hard because the schedulers get in the way of measurement, and forget about ECN being seen in userspace).
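>
> [For what it's worth, a sketch (not from the original post) of the closest a Linux UDP application can get to the ECN bits today: ask for the received TOS byte as ancillary data with IP_RECVTOS and dig the codepoint out per packet. IPv4 only, error handling omitted, port number arbitrary; it mostly illustrates how much of this dance lands on the application.]
>
> #include <stdio.h>
> #include <string.h>
> #include <netinet/in.h>
> #include <sys/socket.h>
>
> /* Receive one UDP datagram and pull the TOS byte (DSCP + ECN bits)
>    out of the ancillary data.  The low two bits are the ECN codepoint;
>    3 is CE, the congestion signal discussed above.  The kernel's UDP
>    path will not react to it on the application's behalf. */
> int main(void)
> {
>     int fd = socket(AF_INET, SOCK_DGRAM, 0);
>     int one = 1;
>     setsockopt(fd, IPPROTO_IP, IP_RECVTOS, &one, sizeof(one));
>
>     struct sockaddr_in addr = { .sin_family = AF_INET,
>                                 .sin_port = htons(5004) };
>     bind(fd, (struct sockaddr *)&addr, sizeof(addr));
>
>     char payload[2048], cbuf[128];
>     struct iovec iov = { payload, sizeof(payload) };
>     struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
>                           .msg_control = cbuf,
>                           .msg_controllen = sizeof(cbuf) };
>     ssize_t n = recvmsg(fd, &msg, 0);
>
>     for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
>         if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_TOS) {
>             unsigned char tos;
>             memcpy(&tos, CMSG_DATA(c), 1);
>             printf("%zd bytes, ECN codepoint %u\n", n, (unsigned)(tos & 0x3));
>         }
>     }
>     return 0;
> }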
>
> This is OS architecture messiness, not a layer 2 or 3 issue.
>
> I've thought about this a lot. Here's my thoughts:
>
> I hate putting things in the kernel! It's insecure. But what this says is that for very historical and stupid reasons (related to the ideas of early timesharing systems like Unix and Multics) folks try to make real-time algorithms look like ordinary "processes" whose notion of controlling temporal behavior is abstracted away.
>
> So:
> 1. We really should rethink how timing-sensitive algorithms are expressed, and it isn't gonna be good to base them on semaphores and threads that run at random rates. That means a very different OS conceptual framework. Can this share with, say, the Linux we know and love? Yes, the hardware can be shared. One should be able to dedicate virtual processors that are not running Linux processes, but instead another computational model (dataflow?).
> An example of this (though clunky and unsupported by good tools) is in FreeBSD: it's called *netgraph*. It's a structured way to write reactive algorithms that are demand or arrival driven. It also has some security issues, and since it is heavily based on passing mbufs around it's really quirky. But I have found it useful for the kind of things that need to get done in teleconferencing voice and video.
>
> 2. eBPF is interesting, because it is more secure, and is again focused on running code at kernel level, event-driven. I think it would be a seriously difficult lift to get it to the point where one could program the networked media processing in BPF.
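>
> [To give a feel for the gap, a minimal XDP sketch (not from the original post) that merely inspects the DSCP of inbound IPv4/UDP packets; DSCP 46 (EF) is just the conventional voice/video marking. Everything that makes media processing hard, such as jitter buffers, decoding, and render deadlines, is still far out of reach of a program like this.]
>
> #include <linux/bpf.h>
> #include <linux/if_ether.h>
> #include <linux/ip.h>
> #include <linux/in.h>
> #include <bpf/bpf_helpers.h>
> #include <bpf/bpf_endian.h>
>
> /* About all a tiny XDP program can do for a media flow: look at the
>    DSCP of inbound IPv4/UDP packets, then let them through. */
> SEC("xdp")
> int look_at_dscp(struct xdp_md *ctx)
> {
>     void *data = (void *)(long)ctx->data;
>     void *data_end = (void *)(long)ctx->data_end;
>
>     struct ethhdr *eth = data;
>     if ((void *)(eth + 1) > data_end)
>         return XDP_PASS;
>     if (eth->h_proto != bpf_htons(ETH_P_IP))
>         return XDP_PASS;
>
>     struct iphdr *ip = (void *)(eth + 1);
>     if ((void *)(ip + 1) > data_end)
>         return XDP_PASS;
>
>     if (ip->protocol == IPPROTO_UDP && (ip->tos >> 2) == 46)
>         return XDP_PASS;   /* DSCP EF: could be counted or steered here */
>
>     return XDP_PASS;
> }
>
> char _license[] SEC("license") = "GPL";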
>
> 3. One of the nice things about KVM (hardware virtualization) is that potentially it lets different low level machine models share a common machine. It occurs to me that, using VIRTIO network devices and some kind of VIRTIO media processing devices, a KVM virtual machine could be hooked up to the packet-level networking drivers in the end device, isolating the teleconferencing from the rest of the endpoint OS, and creating the right kind of near-bare-metal environment for managing the timing of network packets and the paths to the screen and audio that would be simple and clean and tightly scheduled. KVM could "own" one or more of the physical cores during the teleconference.
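>
> [In practice, "owning" a core mostly comes down to pinning the vCPU or media thread to it and taking that thread out of the best-effort scheduling class. A minimal sketch (not from the original post) of that mechanism, roughly what libvirt's cputune or a taskset/chrt wrapper arranges for a QEMU vCPU thread; the core number and priority are arbitrary, and it needs root or CAP_SYS_NICE.]
>
> #define _GNU_SOURCE
> #include <sched.h>
> #include <stdio.h>
>
> /* Pin the calling thread (imagine a vCPU or media thread) to one
>    physical core and give it a real-time priority, so the general
>    purpose scheduler stops introducing variance for it. */
> int pin_to_core(int cpu, int rt_prio)
> {
>     cpu_set_t set;
>     CPU_ZERO(&set);
>     CPU_SET(cpu, &set);
>     if (sched_setaffinity(0, sizeof(set), &set) != 0) {
>         perror("sched_setaffinity");
>         return -1;
>     }
>
>     struct sched_param sp = { .sched_priority = rt_prio };
>     if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
>         perror("sched_setscheduler");
>         return -1;
>     }
>     return 0;
> }
>
> int main(void)
> {
>     return pin_to_core(3, 50) ? 1 : 0;   /* core 3, priority 50: arbitrary */
> }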
>
> You can see, though, that this isn't just a "network protocol design" problem. It is only partly a network protocol issue; it is coupled with the architecture of the end systems.
>
> I reminisce a little bit thinking back to the 1970's and 80's when TCP/IP and UDP/IP were being designed. Sadly, it was one of the big problems of communicating between the OS community and the protocol community that the OS community couldn't think outside the "timesharing" system box, and the protocol community thought of networking like phone calls (sessions). This is where the need for control of timing and buffering got lost. The timesharing folks largely thought of networks as for reliable timeless sequential "streams" of data that had no particular urgency. The network protocol folks were focused on ARQ.
> Only a few of us cared about end-to-end latency bounds (where ends meant keyboard click or audio sample to screen display change or speaker motion). The packet speech guys did, but most networking guys wanted to toss them under the bus as annoying. And those of us doing distributed multinode algorithms did, but the remote login and FTP guys were skeptical that would ever matter.
>
> It's the latency, stupid. Not the reliability, nor the consistency, nor throughput. Unless both the OS and the path are focused on minimizing latency, a vast set of applications will suck. Unfortunately, both the OS and network communities are *stuck* in a world where latency is uncontrollable, and there are no tools for getting it better.
>
>
>
> On Friday, March 27, 2020 1:27pm, "Dave Taht" <dave.taht at gmail.com> said:
>
> > sort of an outgrowth of this convo:
> >
> > https://lwn.net/SubscriberLink/815751/786d161d06a90f0e/
> >
> > I imagine worldwide videoconferencing quality could be much better if
> > we could convince more folk to
> > finally install sqm or upgrade to a working docsis 3.1 solution, etc.
> > Maybe some rag somewhere will finally pick up on bufferbloat solutions
> > and run with it? Or we can write some articles? Or reach out to school
> > systems? Or?
> >
> > I've been fiddling with jitsi, and am about to give freeswitch a try.
> > Last I looked, freeswitch's otherwise pretty nifty conference bridge
> > didn't dynamically adjust at all in response to e2e signalling, but
> > that was years ago. (?)
> >
> > I have to admit that p2p multiparty videoconferencing seems more
> > plausible in a de-bufferbloated age, but
> > haven't explored what tools are available. (?)
> >
> > There's also been this somewhat entertaining convo on the ietf mbone
> > list: https://mailarchive.ietf.org/arch/msg/mboned/2thFQk_IYn38XCZBQavhUmOd6tk/
> >
> > Around me there has been this huge interest in "streaming". The user
> > agreement for these (see restream.io's) is scary - and the copyright
> > police have control... but I am very happy to report that even a
> > couple really lousy long distance fq_codel'd ath9k links work *really*
> > well (with facebook's implementation), whereas a non fq_codeled link
> > (ath10k) failed miserably... and setting up a reflector in nginx also
> > failed miserably.
> >
> > Anyone working on the ath10k AQL backport for openwrt as yet?
> >
> > --
> > Make Music, Not War
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-435-0729
>
>
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729