[Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
David P. Reed
dpreed at deepplum.com
Sat Mar 28 19:15:40 EDT 2020
Regarding EDF.
I've been pushing folks to move latency-sensitive computing in ALL OS's to a version of EDF since about 1976. This was when I was in grad school working on distributed computing on LANs. In fact, it is where I got the idea for my Ph.D. thesis (completed in 1978), which pointed out a bigger idea: that getting ACID consistency [ACID hadn't been invented as a term then; we called them atomic actions] on data in a distributed system being processed by concurrent distributed transactions can be done by using timestamps that behave like the "deadlines" in EDF. In fact, the scheduling of code in my thesis was a generalized version of EDF, approximated because of the impossibility of perfect synchronization.
The Croquet system was a real-time, edge-based, decentralized system with no central server. We demonstrated it with a Second Life-style virtual world that worked entirely on a set of laptops that could be across the country from each other. It was based on an OS implemented in a variant of the Squeak programming language, where the scheduling and object model was not process based but message based, with replicated computation synchronized via a shared "timestamp" that was used for execution scheduling (essentially distributed EDF). The latency requirement for this distributed virtual world was on the order of 100 msec simultaneity for mouse clicks affecting all participating nodes across the country in a virtual 3D world, with sound, etc.
Croquet was built in 2 years by 3 people (starting from scratch). And scheduling was never a problem, nor was variable network delay (our protocol was based on UDP frames synchronized by the same timestamps used to synchronize each object method execution).
The operating system model is one I created within that modified Squeak environment as part of its base "interpreter", which wasn't a loop, but a scheduler using EDF.
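To make that concrete, here is a toy sketch of the idea in Python (the names, like EDFScheduler and submit, are invented for illustration; this is nothing like the actual Squeak code): every pending message carries a deadline, and the "interpreter" simply dispatches whichever message has the earliest deadline, sleeping only until it is due.

import heapq
import time

class EDFScheduler:
    """Minimal earliest-deadline-first message scheduler: each queued
    message carries a deadline, and the message with the earliest
    deadline always runs next."""

    def __init__(self):
        self._queue = []   # heap of (deadline, seq, handler, payload)
        self._seq = 0      # tie-breaker so equal deadlines stay FIFO

    def submit(self, deadline, handler, payload):
        """Queue a message to be handled by its absolute deadline (seconds)."""
        heapq.heappush(self._queue, (deadline, self._seq, handler, payload))
        self._seq += 1

    def run(self):
        """The 'interpreter' core: not a loop over processes, but a
        dispatch of the earliest-deadline message, waiting until due."""
        while self._queue:
            deadline, _, handler, payload = heapq.heappop(self._queue)
            delay = deadline - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            handler(payload)

if __name__ == "__main__":
    sched = EDFScheduler()
    now = time.monotonic()
    sched.submit(now + 0.2, print, "second (deadline +200 ms)")
    sched.submit(now + 0.1, print, "first  (deadline +100 ms)")
    sched.run()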
To make this work properly, the programming model has to be unified around this kind of scheduling.
And here's why I am mentioning this. To put EDF *only* into the networking stack, but leave the userspace application living with the stupid Linux timesharing system scheduler, optimized for people typing commands on terminals every few seconds and running batch compilation, is the *worst of all possible ways to use EDF*.
Because it creates a huge mess bridging those two ideas.
Croquet is a much more complicated thing than a teleconferencing system, because it actually lets end users write simple programs that control the user interactive experience, 30 frames per second across the entire US, replicated on each computer, in the Squeak variant of Smalltalk. And we did it with 3 coders in a couple of years. (Yes, they are skilled people: me, David A. Smith, and the late Andreas Raab, who died way too young.)
In contrast, trying to bridge between EDF and regular Linux processes running under the ordinary scheduler, even with "nice" and all kinds of hacks, just to do a video conferencing system with fixed, non-programmable behavior, would take far more design, far more lines of code, etc.
So this is why I think timesharing OS's are really obsolescent for modern distributed interactive systems. Yeah, "rsync" and "git" are nice for batch replication of files. And yeah, EDF can help make them perform faster in their file transferring.
But to make an immersive, real-time experience (which is what computing today is all about, on all time scales, even in the servers other than HPC) it is ALL wrong, and incrementally patching little pieces of Linux ain't gonna get there. Windows or BSD (macOS) ain't gonna do it either.
I'm old. Why is Linux living in the idea space of operating systems that preceded networking, distributed computing, media sharing?
My opinion, and it is only an opinion based on experience, is that it really is time for networking to stop focusing on file transfers, and OS's to stop focusing on timesharing behavior. The world is "live" and time-based. It may not be hard-real-time. But latency is what matters.
Since networking will remain separate from OS's, the interface concepts in both really need to be matched to get to that future.
It's why I pushed so hard for UDP, not reliable in-order streams alone. And in my view, though no one ever implemented it, those UDP packets would be carrying times, essential for synchronization of coordinated operations at all the endpoints of the computation.
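Just to illustrate what I mean (this is not a protocol anyone shipped; the header layout and names here are made up), the idea is simply that every datagram carries the time at which its operation should take effect, and each receiver feeds that time straight into its deadline scheduler rather than acting on arrival:

import socket
import struct
import time

# Toy illustration only: each UDP datagram carries a 64-bit timestamp in
# microseconds, so every endpoint can schedule the enclosed operation for
# the same agreed moment rather than "whenever it happens to arrive".
HEADER = struct.Struct("!Q")  # network byte order, unsigned 64-bit

def send_timed(sock, addr, payload, execute_at_us):
    """Prefix the payload with the time at which all receivers should act on it."""
    sock.sendto(HEADER.pack(execute_at_us) + payload, addr)

def recv_timed(sock):
    """Return (execute_at_us, payload); a receiver's EDF scheduler would
    queue the payload with execute_at_us as its deadline."""
    data, _ = sock.recvfrom(65535)
    (execute_at_us,) = HEADER.unpack_from(data)
    return execute_at_us, data[HEADER.size:]

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    deadline_us = int((time.time() + 0.1) * 1e6)   # act 100 ms from now
    send_timed(tx, rx.getsockname(), b"click", deadline_us)
    print(recv_timed(rx))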
I'd love to see that happen before this old guy dies. I think it will make it a whole lot easier to make networked programs work.
Decentralization isn't "blockchain". My thesis, in 1978, talked about one way to decentralize computation, not just data structures. And timing is critical.
Sorry for the rant. I'm tired of waiting for "backwards compatibility" with Unix version 1 to allow us to go forward. To me, Linux is a great version of a subset of the operating systems I worked on in the early 1970's. And little more.
On Saturday, March 28, 2020 3:58pm, "Toke Høiland-Jørgensen" <toke at redhat.com> said:
> Dave Taht <dave.taht at gmail.com> writes:
>
>>> So: 1. We really should rethink how timing-sensitive algorithms are
>>> expressed, and it isn't gonna be good to base them on semaphores and
>>> threads that run at random rates. That means a very different OS
>>> conceptual framework. Can this share with, say, the Linux we know and
>>> love - yes, the hardware can be shared. One should be able to
>>> dedicate virtual processors that are not running Linux processes, but
>>> instead another computational model (dataflow?).
>>
>> Linux switched to an EDF model for networking in 5.0
>
> Not entirely. There's EDT scheduling, and the TCP stack is mostly
> switched over, I think. But as always, Linux evolves piecemeal :)
>
>>> 2. EBPF is interesting, because it is more secure, and is again
>>> focused on running code at kernel level, event-driven. I think it
>>> would be a seriously difficult lift to get it to the point where one
>>> could program the networked media processing in BPF.
>>
>> But there is huge demand for it, so people are writing way more in it
>> than i ever ever thought possible... or desirable.
>
> Tell me about it.
>
> We have seen a bit of interest for combining eBPF with realtime, though.
> With the upstreaming of the realtime code, support has landed for
> running eBPF even on realtime kernels. And we're starting to see a bit
> of interest for looking specifically at latency bounds for network
> processing (for TSN), including XDP. Nothing concrete yet, though.
>
> -Toke
>
>