[Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims

David P. Reed dpreed at deepplum.com
Sun Jun 14 11:40:54 EDT 2020


On Saturday, June 13, 2020 9:17pm, "David Lang" <david at lang.hm> said:
> > The lockdown has shown that actual low-latency e2e communication matters.

> > The gaming community has known this for awhile.
> 
> how has the lockdown shown this? video conferencing is seldom e2e
 
Well, it's seldom peer-to-peer (and for good reason: in a peer-to-peer implementation of conferencing, the number of streams at each endpoint grows linearly with the number of participants, if not in bandwidth then in complexity, and quickly becomes unusable. In principle, one could have the video/audio sources transmit multiple resolution versions of their camera/mic capture to each destination, and each destination could composite screens and mix audio itself, with a tightly coupled "decentralized" control algorithm.)
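A quick back-of-the-envelope sketch of the scaling point above (the numbers and function names are illustrative assumptions, not anything from Zoom's or WebEx's actual internals): in a full mesh each endpoint handles a stream to and from every other participant, while with a conference server each endpoint handles a constant number of streams.

```python
# Stream-count comparison: full-mesh (peer-to-peer) conferencing
# vs. a central conference server. Purely illustrative.

def mesh_streams_per_endpoint(n):
    # Each endpoint sends to and receives from every other endpoint.
    return 2 * (n - 1)

def mesh_streams_total(n):
    # n * (n - 1) directed streams exist across the whole mesh.
    return n * (n - 1)

def server_streams_per_endpoint(n):
    # One uplink to the server, one composited downlink from it,
    # regardless of conference size.
    return 2

for n in (2, 5, 10, 50):
    print(n, mesh_streams_per_endpoint(n),
          mesh_streams_total(n), server_streams_per_endpoint(n))
```

So a 50-person mesh call means 98 streams at every endpoint, while the server architecture keeps each endpoint at 2 and concentrates the load at the server - which is the capacity asymmetry mentioned below.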
 
But, nonetheless, the application-server architecture of Zoom and WebEx is pretty distributed on the conference-server end, though the server definitely needs higher capacity than each endpoint. And it *is* end-to-end at the network level. It would be relatively simple to scatter this load out into many more conference-server endpoints, because the basic end-to-end argument separates the IP layer from the application layer. Blizzard Entertainment pioneered this kind of solution - scattering its gaming servers out close to the edge - and did so in an "end-to-end" way.
 
With a system like Starlink, it seems important to me to distinguish peer-to-peer from end-to-end, something I have had a hard time explaining to people since 1978, when the first end-to-end arguments had their impact on the Internet design. Yes, I'm a big fan of moving function to the human-located endpoints where possible. But I also fought against multicasting in the routers/switches, because very few applications benefit from multicasting of packets alone by the network. Instead, almost all multi-endpoint systems need to coordinate, and that coordination is often best done (most scalably) by a network of endpoints that perform the coordinated functions the system needs. However, deciding in advance what those functions must be, in order to build them into the basic routers, seems basically wrong - it blocks evolution of the application functionality, and puts lots of crap in the transport network that is at best suboptimal, and at worst gets actively in the way. (Billing by the packet on each link is the classic example of a "feature" that destroyed the Bell System architecture as a useful technology.)

> 
> and starlink will do very well with e2e communications, but the potential
> bottlenecks (and therefore potential buffering) aren't going to show up in e2e
> communications, they will show up where lots of endpoints are pulling data from
> servers not directly connected to starlink.
 
I hope neither Starlink nor the applications using it choose to "optimize" themselves for their first usage. That would be suicidal - it's what killed Iridium, which by design could ONLY carry 14.4 kb/sec per connection, optimized for compressed voice only. That's why Negroponte and Papert and I couldn't use it to build 2B1, and went with Tachyon instead, despite Iridium being available at firesale prices and despite Nicholas being on Motorola's board. Of course 2B1 was way too early in the satellite game, back in the 1990s. Interesting story there.

> 
> David Lang
> 

