[Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims

Michael Richardson mcr at sandelman.ca
Sun Jun 14 11:57:26 EDT 2020


David P. Reed <dpreed at deepplum.com> wrote:
    >> > The lockdown has shown that actual low-latency e2e communication matters.

    >> > The gaming community has known this for a while.
    >>
    >> how has the lockdown shown this? video conferencing is seldom e2e

    > Well, it's seldom peer-to-peer (and for good reason, the number of
    > streams to each endpoint would grow linearly in complexity, if not
    > bandwidth, in peer-to-peer implementations of conferencing, and quickly
    > become unusable. In principle, one could have the video/audio sources
    > transmit multiple resolution versions of their camera/mic capture to
    > each destination, and each destination could composite screens and mix
    > audio itself, with a tightly coupled "decentralized" control
    > algorithm.)

Jitsi, Whereby, and BlueJeans are all p2p, for example.  There are n+1 WebRTC
streams (+1 for the server).  It's a significantly better experience.
Yes, it doesn't scale to large groups.  So what?
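For what it's worth, the scaling trade-off can be put in numbers.  A toy
sketch (my own model, not anything from the WebRTC spec; function names are
mine):

```python
# Per-client stream counts for an n-party call: full-mesh p2p versus a
# forwarding server (SFU-style). A back-of-envelope model of the trade-off,
# not a description of any particular conferencing product.

def mesh_per_client(n: int) -> dict:
    """Full mesh: each client sends to and receives from all n-1 peers."""
    return {"up": n - 1, "down": n - 1}

def sfu_per_client(n: int) -> dict:
    """Server-forwarded: each client uploads once; the server sends back
    the other n-1 streams."""
    return {"up": 1, "down": n - 1}

# A 4-way family call is cheap as a mesh; a 100-person webinar is not.
for n in (4, 15, 100):
    print(n, "mesh:", mesh_per_client(n), "sfu:", sfu_per_client(n))
```

The mesh uplink is what kills large calls: total mesh streams grow as
n*(n-1), while the server-based total grows linearly per client.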

My Karate class of ~15 people uses Zoom ... it is TERRIBLE in so many ways.
All that command and control, yet my Sensei can't take a question while demonstrating.
With all the other services, at least I can lock my view on him.

My nieces, my mom, and I are not a 100-person conference.
A p2p call is more secure, lower latency, more resilient, and does not require
quite so much management BS to operate.

    > But, nonetheless, the application server architectures of Zoom and WebEx
    > are pretty distributed on the conference-server end, though the server
    > definitely needs higher capacity than each endpoint, and it *is*
    > end-to-end at the network level. It would be relatively simple to
    > scatter this load out into many more conference-server endpoints,
    > because of the basic e2e argument that separates the IP layer from the
    > application layer. Blizzard Entertainment pioneered this kind of
    > solution - scattering its gaming servers out close to the edge, and did
    > so in an "end-to-end" way.

Yup.

    > With a system like starlink it seems important to me to distinguish
    > peer-to-peer from end-to-end, something I have had a hard time
    > explaining to people since 1978 when the first end-to-end arguments had
    > their impact on the Internet design. Yes, I'm a big fan of moving
    > function to the human-located endpoints where possible. But I also
    > fought against multicasting in the routers/switches, because very few
    > applications benefit from multi-casting of packets alone by the
    > network. Instead, almost all multi-endpoint systems need to coordinate,
    > and that coordination is often best done (most scalably) by a network
    > of endpoints that do the coordinated functions needed for a
    > system.

I see your point.  You jump from e2e vs p2p to multicast, and I think that
there might be an intermediate part of the argument that I've missed.
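If I read you right, the "coordination by a network of endpoints" alternative
to router multicast is essentially application-layer fanout.  A minimal sketch
of what I take that to mean (my own illustration, not from your argument):

```python
# Application-level multicast: instead of asking routers to replicate
# packets, arrange the endpoints themselves into a k-ary relay tree.
# Each node forwards to at most `degree` children, so per-node load stays
# bounded and the network below IP needs no multicast support at all.

def fanout_tree(endpoints: list, degree: int = 3) -> dict:
    """Return {endpoint: [children]} for a simple k-ary distribution tree
    rooted at endpoints[0]."""
    tree = {e: [] for e in endpoints}
    for i, e in enumerate(endpoints[1:], start=1):
        parent = endpoints[(i - 1) // degree]
        tree[parent].append(e)
    return tree

# Seven endpoints, binary fanout: A relays to B,C; B to D,E; C to F,G.
print(fanout_tree(list("ABCDEFG"), degree=2))
```

The coordination (who relays to whom, recovery when a relay leaves) lives
entirely in the endpoints, which is the point: the IP layer just carries
unicast packets.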

    > However, deciding what those functions must be, to put them in
    > the basic routers seems basically wrong - it blocks evolution of the
    > application functionality, and puts lots of crap in the transport
    > network that is at best suboptimal, and at worst gets actively in the
    > way. (Billing by the packet in each link being the classic example of a
    > "feature" that destroyed the Bell System architecture as a useful
    > technology).

I'd like to go the other way: while I don't want to bring back the Bell
System architecture, where only the network could innovate, I do think that
being able to bill by the packet is an important feature, and that we now
have the crypto and CPU power to do it right.
Consider the effect on spam and DDoS that such a thing would have.
We don't even have to bill for good packets :-)
There could be a bounty attached to every packet, and if the packet is
rejected, then the bounty is collected.
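To make the idea concrete, here is a minimal sketch of a per-packet bounty,
using a shared-key HMAC as a stand-in for whatever real signature scheme one
would actually deploy.  The wire format and function names are hypothetical:

```python
# Toy per-packet bounty: the sender attaches a signed bounty claim; if the
# receiver rejects the packet, it can collect the bounty. A real scheme
# would use public-key signatures and an actual settlement system -- the
# HMAC here only illustrates the mechanics.
import hmac
import hashlib
import os

SENDER_KEY = os.urandom(32)  # shared key, standing in for real credentials

def attach_bounty(payload: bytes, bounty_cents: int) -> bytes:
    """Prepend a 4-byte bounty amount and a 32-byte HMAC tag (toy format)."""
    header = bounty_cents.to_bytes(4, "big") + payload
    tag = hmac.new(SENDER_KEY, header, hashlib.sha256).digest()
    return tag + header

def collect_if_rejected(packet: bytes, rejected: bool) -> int:
    """Verify the bounty claim; return the collectable amount in cents.
    An unverifiable claim, or an accepted packet, yields nothing."""
    tag, header = packet[:32], packet[32:]
    expected = hmac.new(SENDER_KEY, header, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return 0
    bounty = int.from_bytes(header[:4], "big")
    return bounty if rejected else 0
```

Spam only pays if nearly every packet is accepted; a spammer whose packets
are mostly rejected bleeds bounties, while a legitimate sender pays nothing.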

    >> and starlink will do very well with e2e communications, but the potential
    >> bottlenecks (and therefore potential buffering) aren't going to show up in e2e
    >> communications, they will show up where lots of endpoints are pulling data from
    >> servers not directly connected to starlink.

    > I hope neither Starlink nor the applications using it choose to
    > "optimize" themselves for their first usage. That would be suicidal -
    > it's what killed Iridium, which could ONLY carry 14.4 kb/sec per
    > connection, by design. Optimized for compressed voice only. That's why
    > Negroponte and Papert and I couldn't use it to build 2B1, and went with
    > Tachyon, despite Iridium being available for firesale prices and
    > Nicholas's being on Motorola's board. Of course 2B1 was way too early
    > in the satellite game, back in the 1990's. Interesting story there.

I agree: they need to have the ability to support a variety of services,
particularly ones that we have no clue about.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr at sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


