General list for discussing Bufferbloat
* [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
@ 2020-06-11 16:03 David P. Reed
  2020-06-11 16:14 ` Jonathan Morton
  0 siblings, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-11 16:03 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 7568 bytes --]


So, what do you think the latency (including bloat in the satellites) will be? My guess is > 2000 msec, based on the experience with Apple on ATT Wireless back when it was rolled out (at 10 am, in each of 5 cities I tested, repeatedly with smokeping, for 24 hour periods, the ATT Wireless access network's ping time grew to 2000 msec, and then to 4000 by midday - true lag-under-load, with absolutely zero lost packets!)
 
I get that SpaceX is predicting low latency by estimating physical distance and perfect routing in their LEO constellation. Possibly it is feasible to achieve this if there is zero load over a fixed path. But networks aren't physical, though hardware designers seem to think they are.
 
Anyone know ANY reason to expect better from Musk's clown car parade?
 
 
On Thursday, June 11, 2020 6:17am, "Dewayne Hendricks" <dewayne@warpspeed.com> said:



> [Note: This item comes from friend Robert Berger. DLH]
> 
> Ajit Pai caves to SpaceX but is still skeptical of Musk’s latency claims
> SpaceX wins FCC funding battle but must prove it can deliver low latencies.
> By JON BRODKIN
> Jun 10 2020
> <https://arstechnica.com/tech-policy/2020/06/ajit-pai-caves-to-spacex-but-is-still-skeptical-of-musks-latency-claims/>
> 
> The Federal Communications Commission has reversed course on whether to let SpaceX
> and other satellite providers apply for rural-broadband funding as low-latency
> providers. But Chairman Ajit Pai said companies like SpaceX will have to prove
> they can offer low latencies, as the FCC does not plan to "fund untested
> technologies."
> 
> Pai's original proposal classified SpaceX and all other satellite operators as
> high-latency providers for purposes of the funding distribution, saying the
> companies haven't proven they can deliver latencies below the FCC standard of
> 100ms. Pai's plan to shut satellite companies out of the low-latency category
> would have put them at a disadvantage in a reverse auction that will distribute
> $16 billion from the Rural Digital Opportunity Fund (RDOF).
> 
> But SpaceX is launching low-Earth-orbit (LEO) satellites in altitudes ranging from
> 540km to 570km, a fraction of the 35,000km used with geostationary satellites,
> providing much lower latency than traditional satellite service. SpaceX told the
> FCC that its Starlink service will easily clear the 100ms cutoff, and FCC
> Commissioner Michael O'Rielly urged Pai to let LEO companies apply in the
> low-latency tier.
> 
> The FCC voted to approve the updated auction rules yesterday. The final order
> isn't public yet, but it's clear from statements by Pai and other commissioners
> that SpaceX and other LEO companies will be allowed to apply in the low-latency
> tier. The satellite companies won't gain automatic entry into the low-latency
> tier, but they will be given a chance to prove that they can deliver latencies
> below 100ms.
> 
> "I am grateful to the chairman for agreeing to expand eligibility for the
> low-latency performance tier and change language that was prejudicial to certain
> providers," O'Rielly said at yesterday's FCC meeting. "While a technology-neutral
> policy across the board may have been more effective in promoting innovation and
> maximizing the value of ratepayer investments, I recognize that a balancing act
> was necessary to reach the current disposition."
> 
> Pai: FCC will apply “very close scrutiny”
> 
> Pai said that he agreed to the change "at the request of one of my fellow
> commissioners." The final rules "don't entirely close the door on low-Earth orbit
> satellite providers bidding in the low-latency tier," Pai continued. "However, it
> is also important to keep in mind the following point: The purpose of the Rural
> Digital Opportunity Fund is to ensure that Americans have access to broadband, no
> matter where they live. It is not a technology incubator to fund untested
> technologies. And we will not allow taxpayer funding to be wasted. A new
> technology may sound good in theory and look great on paper. But this
> multi-billion-dollar broadband program will require 't's to be crossed—not
> fingers. So any such application will be given very close scrutiny."
> 
> When contacted by Ars today, Pai's office confirmed that "the commission modified
> the draft to permit LEO service providers to apply to bid in the low-latency tier
> instead of limiting them to the high-latency tier, and staff will be closely
> reviewing all applications to ensure they can meet the FCC's performance
> requirements for service providers."
> 
> SpaceX is aiming to provide service later this year, and CEO Elon Musk has said the
> company is aiming for latency below 20ms.
> 
> Commissioner Geoffrey Starks supported the low-latency change in his statement:
> 
> I appreciate Commissioner O'Rielly's work in revising this Public Notice to
> eliminate the categorical bar on low-Earth orbit satellite systems bidding in the
> low-latency tier, especially now that we have evidence in the record that those
> systems can meet the 100-millisecond latency standard. At the same time, I see no
> need for the Public Notice's predictive judgments about the merits of short-form
> applications from low-Earth orbit satellite operators. As I have stated
> previously, next-generation satellite broadband holds tremendous technological
> promise for addressing the digital divide and is led by strong American companies
> with a lengthy record of success. Commission staff should evaluate those
> applications on their own merits.
> 
> Pai and O'Rielly are Republicans, while Starks is part of the FCC's Democratic
> minority.
> 
> SpaceX excluded from gigabit tier
> 
> The $16 billion in phone and broadband subsidies will be distributed in a reverse
> auction scheduled to begin on October 29. ISPs can seek funding in census blocks
> where no provider offers home-Internet speeds of at least 25Mbps downstream and
> 3Mbps upstream. The $16 billion will be distributed over 10 years, so ISPs that
> get funded will collect a total of about $1.6 billion a year and face requirements
> to deploy broadband service to a certain number of homes and businesses.
> 
> Pai's auction rules also shut SpaceX and other satellite operators out of applying
> for funding in the gigabit tier. SpaceX could still apply for funding in the
> 100Mbps-and-below categories, but the auction will prioritize applications in the
> gigabit category. SpaceX has said in the past that it would offer gigabit speeds,
> but the company seems to have only objected to the latency restriction.
> 
> The commissioners' statements did not mention any change to the policy excluding
> all satellite providers from the gigabit tier. The gigabit tier requires 1Gbps
> download speeds and 500Mbps upload speeds, which in practice may restrict the
> category primarily to fiber-to-the-home providers or cable companies that adopt
> full-duplex DOCSIS technology.
> 
> Pai said the FCC is allowing fixed-wireless and DSL providers to apply in the
> gigabit tier but said that "commission staff will conduct a careful, case-by-case
> review of applications to ensure that bidders will be able to meet required
> performance obligations." There's apparently still no allowance for LEO-satellite
> providers to bid in the gigabit tier.
> 
> [snip]
> 
> Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
> Twitter: https://twitter.com/wa8dzp
> 
> 
> 

[-- Attachment #2: Type: text/html, Size: 9359 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 16:03 [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims David P. Reed
@ 2020-06-11 16:14 ` Jonathan Morton
  2020-06-11 18:46   ` David P. Reed
  2020-06-14 11:23   ` Roland Bless
  0 siblings, 2 replies; 22+ messages in thread
From: Jonathan Morton @ 2020-06-11 16:14 UTC (permalink / raw)
  To: David P. Reed; +Cc: Dave Taht, bloat

> On 11 Jun, 2020, at 7:03 pm, David P. Reed <dpreed@deepplum.com> wrote:
> 
> So, what do you think the latency (including bloat in the satellites) will be? My guess is > 2000 msec, based on the experience with Apple on ATT Wireless back when it was rolled out (at 10 am, in each of 5 cities I tested, repeatedly with smokeping, for 24 hour periods, the ATT Wireless access network experienced ping time grew to 2000 msec., and then to 4000 by mid day - true lag-under-load, with absolutely zero lost packets!)
>  
> I get that SpaceX is predicting low latency by estimating physical distance and perfect routing in their LEO constellation. Possibly it is feasible to achieve this if there is zero load over a fixed path. But networks aren't physical, though hardware designers seem to think they are.
>  
> Anyone know ANY reason to expect better from Musk's clown car parade?

Speaking strictly from a theoretical perspective, I don't see any reason why they shouldn't be able to offer latency that is "normally" below 100ms (to a regional PoP, not between two arbitrary points on the globe).  The satellites will be much closer to any given ground station than a GEO satellite, the latter typically adding 500ms to the path due mostly to physical distance.  All that is needed is to keep queue delays reasonably under control, and there's any number of AQMs that can help with that.  Clearly ATT Wireless did not perform any bufferbloat mitigation at all.
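
A rough back-of-the-envelope for just the physical-distance part (illustrative only: satellite assumed straight overhead, vacuum light speed, no queueing):

# Propagation-only RTT contribution of a bent-pipe hop, ground -> satellite -> ground.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s

def bent_pipe_rtt_ms(altitude_km: float) -> float:
    # up + down for the request, up + down again for the reply
    return 4 * altitude_km / C_KM_S * 1000

print(f"LEO (~550 km):   {bent_pipe_rtt_ms(550):6.1f} ms")    # about 7 ms
print(f"GEO (~35786 km): {bent_pipe_rtt_ms(35786):6.1f} ms")  # about 477 ms

That is where the familiar half-second GEO penalty comes from; any queueing delay is on top of these numbers.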

I have no insight or visibility into anything they're *actually* doing, though.  Can anyone dig up anything about that?

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 16:14 ` Jonathan Morton
@ 2020-06-11 18:46   ` David P. Reed
  2020-06-11 18:56     ` David Lang
  2020-06-12 15:30     ` Michael Richardson
  2020-06-14 11:23   ` Roland Bless
  1 sibling, 2 replies; 22+ messages in thread
From: David P. Reed @ 2020-06-11 18:46 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Dave Taht, bloat

[-- Attachment #1: Type: text/plain, Size: 4073 bytes --]


On Thursday, June 11, 2020 12:14pm, "Jonathan Morton" <chromatix99@gmail.com> said:
 

> > On 11 Jun, 2020, at 7:03 pm, David P. Reed <dpreed@deepplum.com>
> wrote:
> >
> > So, what do you think the latency (including bloat in the satellites) will
> be? My guess is > 2000 msec, based on the experience with Apple on ATT Wireless
> back when it was rolled out (at 10 am, in each of 5 cities I tested, repeatedly
> with smokeping, for 24 hour periods, the ATT Wireless access network experienced
> ping time grew to 2000 msec., and then to 4000 by mid day - true lag-under-load,
> with absolutely zero lost packets!)
> >
> > I get that SpaceX is predicting low latency by estimating physical distance
> and perfect routing in their LEO constellation. Possibly it is feasible to achieve
> this if there is zero load over a fixed path. But networks aren't physical, though
> hardware designers seem to think they are.
> >
> > Anyone know ANY reason to expect better from Musk's clown car parade?
> 
> Speaking strictly from a theoretical perspective, I don't see any reason why they
> shouldn't be able to offer latency that is "normally" below 100ms (to a regional
> PoP, not between two arbitrary points on the globe). The satellites will be much
> closer to any given ground station than a GEO satellite, the latter typically
> adding 500ms to the path due mostly to physical distance. All that is needed is
> to keep queue delays reasonably under control, and there's any number of AQMs that
> can help with that. Clearly ATT Wireless did not perform any bufferbloat
> mitigation at all.
> 
> I have no insight or visibility into anything they're *actually* doing, though. 
> Can anyone dig up anything about that?
> 
> - Jonathan Morton
> 
> 
They seem to be radio silent on anything about their architecture. Given that they are hardware guys for the most part, and given that even the bulk of IETF membership are clueless on the congestion control topic, and given that LEO satellite constellation management for high-speed packet networks has never been demonstrated at scale, I'm predicting this issue.
 
1. Why ATT's HSPA+ (4G) network back when the iPhone was introduced matters: ATT consistently denied, at the VP level, that there was a problem. They blamed it at first on Apple! (John Donovan produced all this blaming rhetoric.) What was new? HSPA+ was a packet network - prior cellular was circuit-switched. Circuit-switched networks don't usually introduce bloat into a circuit - though Frame Relay could be set up so it stored rather than dropped data, providing "no loss" at the cost of delay. And the actual suppliers of ATT's network were well-known companies (like Cisco and ALU) who denied at IETF that bufferbloat existed at all.
Eventually, I got access to talk to ATT's lower level Network Operations folks, who replicated my findings. But up to that point, I was a target of Donovan's smear campaign secondary to Apple.
 
2. Is there any research at all on congestion management with constantly changing routing, and its stability? Remember, TCP is tuned tightly to assume that every packet takes the identical route, and therefore it doesn't back off quickly. I believe there is no solid research on this.
 
Now if the satellite manages each flow from source to destination as a "constant bitrate" virtual circuit, like Iridium did (in their case 14.4 kb/sec was the circuit rate, great for crappy voice, bad for data), the Internet might work over a set of wired-up circuits (like MPLS) where the circuits would be frequently rebuilt (inside the satellite constellation, transparent to the Internet) so queuing delay would be limited to endpoints of the CBR circuits.
 
But I doubt that is where they are going. Instead, I suspect they haven't thought about anything other than a packet at a time, with no thought to reporting congestion by drops or ECN.
 
And it's super easy to build up seconds of lag on TCP if you don't signal congestion. TCP just keeps opening its window, happy as a clam.
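
A toy model of that runaway, with made-up numbers (this is not real TCP, just a window that keeps growing each round because the bottleneck never drops or marks anything):

# Toy model: sender window doubles each round; nothing ever signals congestion.
rate_bps = 10e6             # assumed bottleneck rate: 10 Mbit/s
base_rtt_s = 0.05           # assumed unloaded path RTT: 50 ms
bdp_bits = rate_bps * base_rtt_s

window_bits = 64 * 1500 * 8     # arbitrary starting window (64 full-size packets)
for rnd in range(1, 9):
    window_bits *= 2                              # nothing tells the sender to back off
    queue_bits = max(0.0, window_bits - bdp_bits)
    print(f"round {rnd}: queue delay = {queue_bits / rate_bps * 1000:8.1f} ms")

With these invented numbers the standing queue passes two seconds of delay by the fifth round, without a single packet lost.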

[-- Attachment #2: Type: text/html, Size: 5901 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 18:46   ` David P. Reed
@ 2020-06-11 18:56     ` David Lang
  2020-06-11 19:16       ` David P. Reed
  2020-06-12 15:30     ` Michael Richardson
  1 sibling, 1 reply; 22+ messages in thread
From: David Lang @ 2020-06-11 18:56 UTC (permalink / raw)
  To: David P. Reed; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 1462 bytes --]

On Thu, 11 Jun 2020, David P. Reed wrote:

> But I doubt that is where they are going. Instead, I suspect they haven't 
> thought about anything other than a packet at a time, with no thought to 
> reporting congestion by drops or ECN.
> 
> And it's super easy to build up seconds of lag on TCP if you don't signal 
> congestion. TCP just keeps opening its window, happy as a clam.

I expect that the bottleneck is going to be in the connection to the Internet.

starlink station to starlink station is one issue

starlink station to Internet is a different issue.

given the download heavy nature of most use, the biggest bottleneck is probably 
going to be at their internet connected uplink stations (which I do not expect 
to be the consumer stations connected to the internet, but something different)

as for the station-to-station communications, as I understand it, each satellite 
has 4-5 satellite-satellite connections plus one upload/download connection, so 
it's going to depend on how many satellite hops the packet has to take, but there's 
a really good chance that there will be excess bandwidth available in the 
satellite mesh and it will not be the bottleneck.

We will see, but since the answer to satellite-satellite communication being the 
bottleneck is to launch more satellites, this boils down to investment vs 
service quality. Since they are making a big deal about the latency, I expect 
them to work to keep it acceptable.

David Lang

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 18:56     ` David Lang
@ 2020-06-11 19:16       ` David P. Reed
  2020-06-11 19:28         ` David Lang
  0 siblings, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-11 19:16 UTC (permalink / raw)
  To: David Lang; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 2679 bytes --]


 
On Thursday, June 11, 2020 2:56pm, "David Lang" <david@lang.hm> said:



> We will see, but since the answer to satellite-satellite communication being the
> bottleneck is to launch more satellites, this boils down to investment vs
> service quality. Since they are making a big deal about the latency, I expect
> them to work to keep it acceptable.


We'll see. I should have mentioned that the ATT network actually had adequate capacity. As did Comcast's network when it was bloated like crazy (as Jim Gettys will verify).
 
As I have said way too often - the problem isn't throughput related, and can't be measured by achieved throughput, nor can it be controlled by adding capacity alone.
 
The problem is the lack of congestion signalling that can stop the *source* from sending more than its share.
 
That's all that matters. I see bufferbloat in 10-100 GigE datacenter networks quite frequently (esp. Arista switches!).  Some would think that "fat pipes" solve this problem. They don't. Some think priority eliminates the problem. It doesn't, unless there is congestion signalling in operation.
 
Yes, using a single TCP end-to-end connection over an unloaded network you can get 100% throughput. The problem isn't the hardware at all. It's the switching logic that just builds up queues till they are intolerably long, at which point the queues cannot drain, so they stay full as long as the load remains.
 
In the iPhone case, when a page didn't download in a flash, what do users do? Well, they click on the link again. Underneath it all, then, all the packets that were stuffed in the pipe toward that user remain queued. And a whole lot more get shoved in. And the user keeps hitting the button. If the queue holds 2 seconds of data at the bottleneck rate, it continues to be full as long as users keep clicking on the link.
 
You REALLY must think about this scenario, and get it in your mind that throughput doesn't eliminate congestion, especially when computers can do a lot of work on your behalf every time you ask them.
 
One request packet - thousands of response packets, and no one telling the sources that they should slow down.
 
For all of this, there is a known fix: don't queue packets more than 2xRTTx"bottleneck rate" in any switch anywhere. That's been in a best practice RFC forever, and it is ignored almost always. Cake and other algorithms do even better, by queuing less than that in any bottleneck-adjacent queue.
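
Worked through with assumed numbers (50 ms RTT, 20 Mbit/s bottleneck, neither of which is anyone's real deployment), the rule caps the standing queue like this:

# "No more than 2 x RTT x bottleneck rate" of queued data, illustrative numbers only.
rtt_s = 0.050            # assumed path RTT
rate_bps = 20e6          # assumed bottleneck rate, 20 Mbit/s

max_queue_bits = 2 * rtt_s * rate_bps
max_queue_bytes = max_queue_bits / 8
worst_case_delay_ms = max_queue_bits / rate_bps * 1000   # = 2 x RTT by construction

print(f"queue cap: {max_queue_bytes/1000:.0f} kB "
      f"(~{max_queue_bytes/1500:.0f} full-size packets), "
      f"worst-case queue delay {worst_case_delay_ms:.0f} ms")

With those numbers the cap is about 250 kB, roughly 167 full-size packets, and at most 100 ms of added queue delay.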
 
But the known fix (known ever since the first screwed-up Frame Relay hops were set to never lose a packet) is deliberately ignored by hardware know-it-alls.
 

[-- Attachment #2: Type: text/html, Size: 5111 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 19:16       ` David P. Reed
@ 2020-06-11 19:28         ` David Lang
  2020-06-12 15:39           ` Michael Richardson
  0 siblings, 1 reply; 22+ messages in thread
From: David Lang @ 2020-06-11 19:28 UTC (permalink / raw)
  To: David P. Reed; +Cc: David Lang, Jonathan Morton, bloat

On Thu, 11 Jun 2020, David P. Reed wrote:

> On Thursday, June 11, 2020 2:56pm, "David Lang" <david@lang.hm> said:
>
>
>
>> We will see, but since the answer to satellite-satellite communication being the
>> bottleneck is to launch more satellites, this boils down to investment vs
>> service quality. Since they are making a big deal about the latency, I expect
>> them to work to keep it acceptable.
>
>
> We'll see. I should have mentioned that the ATT network actually had adequate capacity. As did Comcast's network when it was bloated like cracy (as Jim Gettys will verify).
> 
> As I have said way too often - the problem isn't throughput related, and can't be measured by achieved throughput, nor can it be controlled by adding capacity alone.
> 
> The problem is the lack of congestion signalling that can stop the *source* from sending more than its share.
> 
> That's all that matters. I see bufferbloat in 10-100 GigE datacenter networks quite frequently (esp. Arista switches!).  Some would think that "fat pipes" solve this problem. It doesn't. Some think pirority eliminates the problem. It doesn't, unless there is congestion signalling in operation.
> 
> Yes, using a single TCP end-to-end connection over an unloaded network you can get 100% throughput on an unloaded network. The problem isn't the hardware at all. It's the switching logic that just builds up queues till they are intolerably long, at which point the queues cannot drain, so they stay full as long as the load remains.
> 
> In the iPhone case, when a page didn't download in a flash, what do users do? Well, they click on the link again. Underneath it all, then, all the packets that were stuffed in the pipe toward that user remain queued. And a whole lot more get shoved in. And the user keeps hitting the button. If the queue holds 2 seconds of data at the bottleneck rate, it continues to be full as long as users keep clicking on the link.
> 
> You REALLY must think about this scenario, and get it in your mind that throughput doesn't eliminate congestion, especially when computers can do a lot of work on your behalf every time you ask them.
> 
> One request packet - thousands of response packets, and no one telling the sources that they should slow down.
> 
> For all of this, there is a known fix: don't queue packets more than 2xRTTx"bottleneck rate" in any switch anywhere. That's been in a best practice RFC forever, and it is ignored almost always. Cake and other algorithms do even better, by queuing less than that in any bottleneck-adjacent queue.
> 
> But instead of the known fix (known ever since the first screwed up Frame Relay hops were set to never lose a packet) is deliberately ignored, by hardware know-it-alls.

my point is that if the satellite links are not the bottleneck, no queuing 
will happen there.

I expect the queuing to happen at the internet-satellite gateways, which is much 
more conventional software and if they do something sane at the gateways it will 
address the problem for pretty much everyone. They don't have to implement 
anything special on the satellites.

and if he is promising good latency, but there isn't good latency, it isn't 
going to be swept under the rug the way incumbent ISPs were able to.

David Lang

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 18:46   ` David P. Reed
  2020-06-11 18:56     ` David Lang
@ 2020-06-12 15:30     ` Michael Richardson
  2020-06-12 19:50       ` David P. Reed
  1 sibling, 1 reply; 22+ messages in thread
From: Michael Richardson @ 2020-06-12 15:30 UTC (permalink / raw)
  To: David P. Reed; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 1229 bytes --]


David P. Reed <dpreed@deepplum.com> wrote:
    > Now if the satellite manages each flow from source to destination as a
    > "constant bitrate" virtual circuit, like Iridium did (in their case
    > 14.4 kb/sec was the circuit rate, great for crappy voice, bad for
    > data), the Internet might work over a set of wired-up circuits (lke
    > MPLS) where the circuits would be frequently rebuilt (inside the
    > satellite constellation, transparent to the Internet) so queuing delay
    > would be limited to endpoints of the CBR circuits.

That's what I think they will do.
But, it might be SPRING rather than MPLS.

    > But I doubt that is where they are going. Instead, I suspect they
    > haven't thought about anything other than a packet at a time, with no
    > thought to reporting congestion by drops or ECN.

Agreed.

    > And it's super easy to build up seconds of lag on TCP if you don't
    > signal congestion. TCP just keeps opening its window, happy as a clam.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 19:28         ` David Lang
@ 2020-06-12 15:39           ` Michael Richardson
  2020-06-13  5:43             ` David Lang
  0 siblings, 1 reply; 22+ messages in thread
From: Michael Richardson @ 2020-06-12 15:39 UTC (permalink / raw)
  To: David Lang, David P. Reed, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 1839 bytes --]


David Lang <david@lang.hm> wrote:
    > my point is that the if the satellite links are not the bottleneck, no
    > queuing will happen there.

It's a mesh of satellites.

If you build it into a DODAG (RFC6550 would work well), then you will either have
a bottleneck at the top of the tree (where the downlink to the DC is), or you
will have significant under-utilization at the edges, which might encourage
them to buffer.

Now, the satellites are always moving, so which satellite is next to the DC
will change, and this quite possibly could be exploited such that it's
always a different buffer that you bloat, so the accumulated backlog that
David P spoke about in his message might get to drain.

But, the right way to use this mesh is, in my opinion, to have a lot of
downlinks, and ideally, to do as much e2e connection as possible.
Don't connect *to* the Internet, *become* an Internet.
That is, routing in the satellite mesh, not just creation of circuits to DCs.

Anything less, and it's just a faster Iridium.

    > I expect the queuing to happen at the internet-satellite gateways, which is
    > much more conventional software and if they do something sane at the gateways
    > it will address the problem for pretty much everyone. They don't have to
    > implement anything special on the satellites.

    > and if he is promising good latency, but there isn't good latency, it isn't
    > going to be swept under the rug the way incumbant ISPs were able to.

I would expect to use SDN to create virtual satellites which appear to be
"stationary", and then do routing on top of that.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-12 15:30     ` Michael Richardson
@ 2020-06-12 19:50       ` David P. Reed
  2020-06-13 21:15         ` Michael Richardson
  0 siblings, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-12 19:50 UTC (permalink / raw)
  To: Michael Richardson; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 4168 bytes --]


SPRING (I thought) was just packet routing over a set of connected nodes that avoids creating routing tables. I.e. Source Packet RoutING.
Now I happen to have written one of the earliest papers on source routing, and also authored the IP source routing options, explaining the advantages of using Source Routing of several kinds. So I'm pretty familiar with Source Routing in general as a concept.
But though it does create a kind of short-term path stability as well as efficient node level switching, there are two things that affect congestion control that source routing doesn't deal with very well.
 
TCP's congestion control requires congestion signals, which are an IP header function, not the switching layer underneath. So how will SPRING identify congestion and report it? I assume that the entry and exit from the satellite mesh touches the IP header, and can also drop packets, allowing, in principle, packet drop and ECN to be provided. However, intermediate SPRING nodes may develop congestion, so they need to signal congestion "up" to the endpoints somehow, or avoid congestion entirely by never over-allocating the nodes on a path.  That requires global knowledge in SPRING, and would be a "control plane" function.
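
For concreteness, the ECN signal lives in the two low-order bits of the IP TOS / Traffic Class byte (RFC 3168), so whatever box sits at a congested point would have to do something like the following - a minimal sketch of the standard marking rule, not anything SpaceX has described:

# ECN codepoints in the low two bits of the IPv4 TOS / IPv6 Traffic Class byte (RFC 3168).
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def mark_or_drop(tos_byte: int, queue_delay_ms: float, threshold_ms: float = 5.0):
    """Illustrative AQM decision: if the standing queue exceeds the threshold,
    signal congestion by marking CE when the flow is ECN-capable, else drop."""
    if queue_delay_ms <= threshold_ms:
        return tos_byte, "forward"
    if (tos_byte & 0b11) in (ECT_0, ECT_1):
        return (tos_byte & ~0b11) | CE, "forward (CE marked)"
    return tos_byte, "drop"

If the SPRING encapsulation hides that byte, or fails to copy it back at the egress, the endpoints never see the signal.
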
But if latency must be kept low, the edges of the satellite network must respond very quickly to changes in capacity demand. [This is why I suggested that each end-to-end flow would be restricted to CBR, which underutilizes, potentially severely, the capacity of the network if low latency guarantees are required.]
 
Maybe Musk's crew don't have ANY intention of providing low latency, or they will go for slowly varying CBR routes that only admit packets at the rate pre-committed for each path.
 
Nailed up circuits in a dynamic satellite network can be made to work, but don't do well with highly dynamic traffic, like for example QUIC/HTTP/3. I'm assuming that the dreamers inspired by SpaceX's excited believers figure that "everything" will just be fast and low latency (I call 15 msec RTT within the NA continent low latency, some expect lower than that).
 
To me, SpaceX's satellite constellation is the modern Iridium. A concept car built by a billionaire who hopes it will work out. (Motorola's Iridium was a billionaire's dream, which eventually didn't succeed, and was sold for scrap). We'll learn from it, and NSF doesn't have the budget, nor does the Space Force (great TV series, hope the second season is produced).
 
Iridium didn't have congestion control problems - it had battery issues so it didn't work on the dark side of earth very well - but worked for a few very expensive phone calls at a time. But it couldn't recoup its investment, helping Motorola as a company fail. Bets are great, but counting on a roulette wheel to produce 00 and pay out in one spin - yeah, I'd bet five bucks.
 
 
On Friday, June 12, 2020 11:30am, "Michael Richardson" <mcr@sandelman.ca> said:



> 
> David P. Reed <dpreed@deepplum.com> wrote:
> > Now if the satellite manages each flow from source to destination as a
> > "constant bitrate" virtual circuit, like Iridium did (in their case
> > 14.4 kb/sec was the circuit rate, great for crappy voice, bad for
> > data), the Internet might work over a set of wired-up circuits (lke
> > MPLS) where the circuits would be frequently rebuilt (inside the
> > satellite constellation, transparent to the Internet) so queuing delay
> > would be limited to endpoints of the CBR circuits.
> 
> That's what I think they will do.
> But, it might be SPRING rather than MPLS.
> 
> > But I doubt that is where they are going. Instead, I suspect they
> > haven't thought about anything other than a packet at a time, with no
> > thought to reporting congestion by drops or ECN.
> 
> Agreed.
> 
> > And it's super easy to build up seconds of lag on TCP if you don't
> > signal congestion. TCP just keeps opening its window, happy as a clam.
> 
> --
> ] Never tell me the odds! | ipv6 mesh networks [
> ] Michael Richardson, Sandelman Software Works | IoT architect [
> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
> 
> 

[-- Attachment #2: Type: text/html, Size: 6203 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-12 15:39           ` Michael Richardson
@ 2020-06-13  5:43             ` David Lang
  2020-06-13 18:41               ` David P. Reed
  2020-06-14  0:36               ` Michael Richardson
  0 siblings, 2 replies; 22+ messages in thread
From: David Lang @ 2020-06-13  5:43 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David Lang, David P. Reed, Jonathan Morton, bloat

On Fri, 12 Jun 2020, Michael Richardson wrote:

> David Lang <david@lang.hm> wrote:
>    > my point is that the if the satellite links are not the bottleneck, no
>    > queuing will happen there.
>
> It's a mesh of satellites.
>
> If you build it into a DODAG (RFC6550 would work well), then you will either
> a bottleneck at the top of tree (where the downlink to the DC is), or you
> will have significant under utilitization at the edges, which might encourage
> them to buffer.
>
> Now, the satellites are always moving, so which satellite is next to the DC
> will change, and this quite possibly could be exploited such that it's
> always a different buffer that you bloat, so the accumulated backlog that
> David P spoke about in his message might get to drain.
>
> But, the right way to use this mesh is, in my opinion, to have a lot of
> downlinks, and ideally, to do as much e2e connection as possible.
> Don't connect *to* the Internet, *become* an Internet.
> That is, routing in the satellite mesh, not just creation of circuits to DCs.

realistically, the vast majority of the people who have the mobile endpoints are 
going to be talking to standard websites and services, and those are going to be 
on the Internet, not on starlink nodes.

'normal' traffic is highly asymmetric, but the radio links do not seem to be, so 
you can have a lot of mobile units talking to the Internet on a smallish number 
of downlinks without having a bottleneck in the 'upstream' direction from the 
mobile units.

it's the replies that are far more likely to be a problem (the 'downlink' 
direction as far as the user is concerned), but they don't have to do anything 
super special there, if they just turn on fq_codel on the uplink routers between 
the Internet and the satellites, it will do a good job of managing that link.
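
For reference, the control law at the heart of fq_codel looks roughly like this - a simplified sketch of CoDel, not the real Linux qdisc, which adds per-flow queues and many details omitted here:

from math import sqrt

TARGET = 0.005      # 5 ms: acceptable standing queue delay
INTERVAL = 0.100    # 100 ms: how long delay must stay above TARGET before acting

class CoDelSketch:
    """Simplified sketch of the CoDel dropping law used inside fq_codel."""
    def __init__(self):
        self.first_above_time = 0.0   # when delay first stayed above TARGET
        self.drop_next = 0.0          # scheduled time of the next drop
        self.count = 0                # drops in the current dropping episode
        self.dropping = False

    def should_drop(self, sojourn: float, now: float) -> bool:
        # sojourn = how long the packet just dequeued sat in the queue, in seconds
        if sojourn < TARGET:
            self.first_above_time = 0.0
            self.dropping = False
            return False
        if self.first_above_time == 0.0:
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False

The point is simply that the drop/mark decision is driven by how long packets sit in the queue, not by how many bytes the buffer happens to hold.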

Is it possible to saturate that and run into grief? sure. It's possible to 
oversubscribe any service. But it's also possible to run a service without 
oversubscribing it.

> Anything less, and it's just a faster Iridium.
>
>    > I expect the queuing to happen at the internet-satellite gateways, which is
>    > much more conventional software and if they do something sane at the gateways
>    > it will address the problem for pretty much everyone. They don't have to
>    > implement anything special on the satellites.
>
>    > and if he is promising good latency, but there isn't good latency, it isn't
>    > going to be swept under the rug the way incumbant ISPs were able to.
>
> I would expect to use SDN to create virtual satellites which appear to be
> "stationary", and then do routing on top of that.

I expect that it will be more dynamic than that. Given that there could be 
mobile stations anywhere, creating virtual satellites is going to be a poor 
choice for many locations. I expect it will be something along the lines of each 
satellite having an ongoing broadcast of how busy it is and the ground stations 
sending based on this.

do I expect them to get it right immediately? no.

but SpaceX has a need for good communication in odd areas (like their recovery 
ships and their spacecraft), so when they do run into grief, I expect them to 
fix the problem fairly quickly, not pretend it isn't there.

Remember, Musk already sacked the starlink leadership once for being too stuck in 
'the way satellites are always built' so if it doesn't work well under load and 
they can't fix it, he will find people who can.

David Lang

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-13  5:43             ` David Lang
@ 2020-06-13 18:41               ` David P. Reed
  2020-06-14  0:03                 ` David Lang
  2020-06-14  0:36               ` Michael Richardson
  1 sibling, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-13 18:41 UTC (permalink / raw)
  To: David Lang; +Cc: Michael Richardson, David Lang, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 813 bytes --]


On Saturday, June 13, 2020 1:43am, "David Lang" <david@lang.hm> said:
 

> Remember, Musk already sacked the starlink leadership once for being to stuck in
> 'the way satellites are always built' so if it doesn't work well under load and
> they can't fix it, he will find people who can.
 
He just might. Depends on who he asks. The fact that ATT literally couldn't fix its network for at least a year, and spent most of that year blaming Apple and the design of the iPhone, shows that asking the right people isn't what arrogant organizations are good at. And firing people appeals to those who think Trump is brilliant as a manager - he appeals because he says "you're fired". I think sacking whole teams is an indication of someone who has risen to his level of incompetence, but that's just my opinion.
 
 
 

[-- Attachment #2: Type: text/html, Size: 1717 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-12 19:50       ` David P. Reed
@ 2020-06-13 21:15         ` Michael Richardson
  2020-06-13 23:02           ` Jonathan Morton
  2020-06-14  0:06           ` David Lang
  0 siblings, 2 replies; 22+ messages in thread
From: Michael Richardson @ 2020-06-13 21:15 UTC (permalink / raw)
  To: David P. Reed, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 4582 bytes --]


David P. Reed <dpreed@deepplum.com> wrote:
    > SPRING (I thought) was just packet routing over a set of connected
    > nodes that avoids creating routing tables. I.e. Source Packet RoutING.

I'm no expert on SPRING.  I read enough to understand that I didn't care
about the debate in 6man (during RFC8200).  I now just kill-file the thread.
So, yes it is, but I think it is loose source routing, so the micro-jumps can
take redundant paths.  I could be wrong.
I think that the win for the silicon vendors is that everything is now an IPv6
forwarding engine: no more MPLS, VLANs, IPv4, etc.

    > Now I happen to have written one of the earliest papers on source
    > routing, and also authored the IP source routing options, explaining
    > the advantages of using Source Routing of several kinds. So I'm pretty
    > familiar with Source Routing in general as a concept.

    > But though it does create a kind of short-term path stability as well
    > as efficient node level switching, there are two things that affect
    > congestion control that source routing doesn't deal with very well.

    > TCP's congestion control requires congestion signals, which are an IP
    > header function, not the switching layer undertneath. So how will
    > SPRING identify congestion and report it? I assume that the entry and
    > exit from the satellite mesh touches the IP header, and can also drop
    > packets, allowing, in principle packet drop and ECN to be
    > provided. However, intermediate SPRING nodes may develop congestion, so
    > they need to signal congestion "up" to the endpoints somehow, or avoid
    > congestion entirely by never over-allocating the nodes on a path.  That
    > requires global knowledge in SPRING, and would be a "control plane"
    > function.

I think that because SPRING does not (according to how some want to use it)
introduce a layer of tunnelling, the ECN bits are exactly where they need to
be.  As long as the SPRING header is correctly undone at the edge (filling
the correct destination address back in), then the end node sees exactly what
we expect.

    > But if latency must be sustained as low,  the edges of the satellite
    > network must respond very quickly to the changes in capacity
    > demand. [this is why I suggested that each end-to-end flow would be
    > restricted to CBR, which underutilizes, potentially severely, the
    > capacity of the network if low latency guarantees are required.]

Agreed.

    > Maybe Musk's crew don't have ANY intention of providing low latency, or
    > they will go for slowly varying CBR routes that only admit packets at
    > the rate pre-committed for each path.

They claim they will be able to play p2p first person shooters.
I don't know if this means e2e games, or ones that middlebox everything into
a server in a DC.  That's what I keep asking.

    > Nailed up circuits in a dynamic satellite network can be made to work,
    > but don't do well with highly dynamic traffic, like for example
    > QUIC/HTTP/3. I'm assuming that the dreamers inspired by SpaceX's
    > excited believers figure that "everything" will just be fast and low
    > latency (I call 15 msec RTT withing the NA continent low latency, some
    > expect lower than that).

Agreed.

    > To me, SpaceX's satellite constellation is the modern Iridium. A
    > concept car built by a billionaire.who hopes it will work
    > out. (Motorola's Iridium was a billionaire's dream, which eventually
    > didn't succeed, and was sold for scrap). We'll learn from it, and NSF
    > doesn't have the budget, nor does the Space Force (great TV series,
    > hope the second season is produced).

    > Iridium didn't have congestion control problems - it had battery issues
    > so it didn't work on the dark side of earth very well - bot worked for
    > a few very expensive phone calls at a time. But coultn't recoup its
    > investment, helping Motorola as a company fail. Bets are great, but
    > counting on a roulette wheel to produce 00 and pay out in one spin -
    > yeah, I'd bet rive bucks.

Right, if you overprovision the network, and under-utilize it with CBR links,
then clearly you can get quality, at a high cost.

I don't think you can make a lot of money at this, because ultimately
terrestrial providers came in and ate that lunch.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-13 21:15         ` Michael Richardson
@ 2020-06-13 23:02           ` Jonathan Morton
  2020-06-14  0:06           ` David Lang
  1 sibling, 0 replies; 22+ messages in thread
From: Jonathan Morton @ 2020-06-13 23:02 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David P. Reed, bloat

> On 14 Jun, 2020, at 12:15 am, Michael Richardson <mcr@sandelman.ca> wrote:
> 
> They claim they will be able to play p2p first person shooters.
> I don't know if this means e2e games, or ones that middlebox everything into
> a server in a DC.  That's what I keep asking.

I think P2P implies that there is *not* a central server in the loop, at least not on the latency-critical path.

But that's not how PvP multiplayer games are typically architected these days, largely due to the need to carefully manage the "fog of war" to prevent cheating; each client is supposed to receive only the information it needs to accurately render a (predicted) view of the game world from that player's perspective.  So other players that are determined by the server to be "out of sight" cannot be rendered by x-ray type cheat mods, because the information about where they are is not available.  The central server has full information and performs the appropriate filtering before replicating game state to each player.
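
In miniature, the server-side filtering looks something like this (names and the distance rule are invented for illustration; real engines use far more elaborate interest management):

from dataclasses import dataclass

@dataclass
class Player:
    name: str
    x: float
    y: float

VIEW_RANGE = 100.0   # invented visibility radius

def visible_state(viewer: Player, others: list[Player]) -> list[Player]:
    """The server, which holds full game state, decides per client what to replicate."""
    return [p for p in others
            if p is not viewer
            and (p.x - viewer.x) ** 2 + (p.y - viewer.y) ** 2 <= VIEW_RANGE ** 2]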

Furthermore, in a PvP game it's wise to hide information about other players' IP addresses, as that often leads to "griefing" tactics such as a DoS attack.  If you can force an opposing player to experience lag at a crucial moment, you gain a big advantage over him.  And there are players who are perfectly happy to "grief" members of their own team; I could dig up some World of Tanks videos demonstrating that.

It might be more reasonable to implement a P2P communication strategy for a PvE game.  The central server is then only responsible for coordinating enemy movements.

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-13 18:41               ` David P. Reed
@ 2020-06-14  0:03                 ` David Lang
  0 siblings, 0 replies; 22+ messages in thread
From: David Lang @ 2020-06-14  0:03 UTC (permalink / raw)
  To: David P. Reed; +Cc: David Lang, Michael Richardson, Jonathan Morton, bloat

On Sat, 13 Jun 2020, David P. Reed wrote:

> On Saturday, June 13, 2020 1:43am, "David Lang" <david@lang.hm> said:
> 
>
>> Remember, Musk already sacked the starlink leadership once for being to stuck in
>> 'the way satellites are always built' so if it doesn't work well under load and
>> they can't fix it, he will find people who can.
> 
> He just might. Depends on who he asks. The fact that ATT literally couldn't 
> fix its network for at least a year, and spent most of that year blaming Apple 
> and the design of the iPhone, asking the right people isn't what arrogant 
> organizations are good at.

Musk has shown that he is not stuck on "this is the way we've always done 
things" and conventional wisdom. He's also shown a willingness to scrap existing 
plans and infrastructure when it is shown not to work.

> And firing people appeals to those who think Trump 
> is brilliant as a manager - he appeals because he says "you're fired". I think 
> sacking whole teams is an indication of someone who has risen to his level of 
> incompetence, but that's just my opinion.

he sacked the management team, not everyone, and when you are doing new stuff 
and you have people playing the "we'll just ignore the boss's instructions 
because we know the industry better" game, they deserve to get sacked.

take a look at the first pair of starlink test satellites (before the change) 
and what they are launching now. What they are doing now breaks a TON of 
'industry best practices' in satellite design, but the result is much cheaper, 
and much cheaper to launch, which allows scalability that wasn't possible with the 
old model.

traditionally, satellites are very expensive, because the cost to launch them was 
so high that it made sense to put lots of money into each satellite. As launch 
costs plummet, you can afford a slightly higher failure rate in exchange for a 
drastic cost reduction.

Musk is overly optimistic, but that's a good thing because without that 
optimism he wouldn't even try the things he's doing. He has a solid track 
record of failing to meet his initial goal/deadline, but continuing to work and 
eventually exceeding his initial goals (even if it's a bit later than he 
planned).

I see no reason that starlink would be any different.

David Lang



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-13 21:15         ` Michael Richardson
  2020-06-13 23:02           ` Jonathan Morton
@ 2020-06-14  0:06           ` David Lang
  1 sibling, 0 replies; 22+ messages in thread
From: David Lang @ 2020-06-14  0:06 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David P. Reed, Jonathan Morton, bloat

> Right, if you overprovision the network, and under-utilize it with CBR links, 
> then clearly you can get quality, at a high cost.

> I don't think you can make a lot of money at this, because ultimately 
> terrestrial providers came in and ate that lunch.

only in urban areas, there are still large parts of the US (let alone the rest 
of the world) where cell coverage is spotty and Internet options are few and 
slow. running wire/fiber is expensive and doesn't make sense if the customer 
density is too low.

wireless ISPs are partially filling that gap, but only partially

David Lang

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-13  5:43             ` David Lang
  2020-06-13 18:41               ` David P. Reed
@ 2020-06-14  0:36               ` Michael Richardson
  2020-06-14  1:17                 ` David Lang
  1 sibling, 1 reply; 22+ messages in thread
From: Michael Richardson @ 2020-06-14  0:36 UTC (permalink / raw)
  To: David Lang, David P. Reed, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 2131 bytes --]


David Lang <david@lang.hm> wrote:
    >> David Lang <david@lang.hm> wrote:
    >> > my point is that the if the satellite links are not the bottleneck, no
    >> > queuing will happen there.
    >>
    >> It's a mesh of satellites.
    >>
    >> If you build it into a DODAG (RFC6550 would work well), then you will either
    >> a bottleneck at the top of tree (where the downlink to the DC is), or you
    >> will have significant under utilitization at the edges, which might encourage
    >> them to buffer.
    >>
    >> Now, the satellites are always moving, so which satellite is next to the DC
    >> will change, and this quite possibly could be exploited such that it's
    >> always a different buffer that you bloat, so the accumulated backlog that
    >> David P spoke about in his message might get to drain.
    >>
    >> But, the right way to use this mesh is, in my opinion, to have a lot of
    >> downlinks, and ideally, to do as much e2e connection as possible.
    >> Don't connect *to* the Internet, *become* an Internet.
    >> That is, routing in the satellite mesh, not just creation of circuits to DCs.

    > realistically, the vast majority of the people who have the mobile endpoints
    > are going to be talking to standard websites and services, and those are
    > going to be on the Internet, not on starlink nodes.

Well, as long as we continue to build NATworks on the assumption that
everyone is a consumer, not a citizen, that pattern will continue to happen.

I think that when FACEBOOK suggested such a thing, explaining how they could
accelerate everything through their servers, it was a major problem.

Had this been the attitude in 1989, then the Internet would never have
happened, and WWW would not have been a thing.

The lockdown has shown that actual low-latency e2e communication matters.
The gaming community has known this for awhile.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-14  0:36               ` Michael Richardson
@ 2020-06-14  1:17                 ` David Lang
  2020-06-14 15:40                   ` David P. Reed
  0 siblings, 1 reply; 22+ messages in thread
From: David Lang @ 2020-06-14  1:17 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David Lang, David P. Reed, Jonathan Morton, bloat

On Sat, 13 Jun 2020, Michael Richardson wrote:

> David Lang <david@lang.hm> wrote:
>    >> David Lang <david@lang.hm> wrote:
>    >> > my point is that the if the satellite links are not the bottleneck, no
>    >> > queuing will happen there.
>    >>
>    >> It's a mesh of satellites.
>    >>
>    >> If you build it into a DODAG (RFC6550 would work well), then you will either
>    >> a bottleneck at the top of tree (where the downlink to the DC is), or you
>    >> will have significant under utilitization at the edges, which might encourage
>    >> them to buffer.
>    >>
>    >> Now, the satellites are always moving, so which satellite is next to the DC
>    >> will change, and this quite possibly could be exploited such that it's
>    >> always a different buffer that you bloat, so the accumulated backlog that
>    >> David P spoke about in his message might get to drain.
>    >>
>    >> But, the right way to use this mesh is, in my opinion, to have a lot of
>    >> downlinks, and ideally, to do as much e2e connection as possible.
>    >> Don't connect *to* the Internet, *become* an Internet.
>    >> That is, routing in the satellite mesh, not just creation of circuits to DCs.
>
>    > realistically, the vast majority of the people who have the mobile endpoints
>    > are going to be talking to standard websites and services, and those are
>    > going to be on the Internet, not on starlink nodes.
>
> Well, as along as we continue to build NATworks on the assumption that
> everyone is a consumer, not a citizen, that pattern will continue to happen.
>
> I think that when FACEBOOK suggested such a thing, explaining how they could
> accelerate everything through their servers, it was a major problem.
>
> Had this been the attitude in 1989, then the Internet would never have
> happened, and WWW would not have been a thing.
>
> The lockdown has shown that actual low-latency e2e communication matters.
> The gaming community has known this for awhile.

how has the lockdown shown this? video conferencing is seldom e2e

and starlink will do very well with e2e communications, but the potential 
bottlenecks (and therefore potential buffering) aren't going to show up in e2e 
communications, they will show up where lots of endpoints are pulling data from 
servers not directly connected to starlink.

David Lang

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-11 16:14 ` Jonathan Morton
  2020-06-11 18:46   ` David P. Reed
@ 2020-06-14 11:23   ` Roland Bless
  1 sibling, 0 replies; 22+ messages in thread
From: Roland Bless @ 2020-06-14 11:23 UTC (permalink / raw)
  To: Jonathan Morton, David P. Reed; +Cc: bloat

Hi Jonathan.

On 11.06.20 at 18:14 Jonathan Morton wrote:
>> On 11 Jun, 2020, at 7:03 pm, David P. Reed <dpreed@deepplum.com> wrote:
>>
>> So, what do you think the latency (including bloat in the satellites) will be? My guess is > 2000 msec, based on the experience with Apple on ATT Wireless back when it was rolled out (at 10 am, in each of 5 cities I tested, repeatedly with smokeping, for 24 hour periods, the ATT Wireless access network experienced ping time grew to 2000 msec., and then to 4000 by mid day - true lag-under-load, with absolutely zero lost packets!)
>>  
>> I get that SpaceX is predicting low latency by estimating physical distance and perfect routing in their LEO constellation. Possibly it is feasible to achieve this if there is zero load over a fixed path. But networks aren't physical, though hardware designers seem to think they are.
>>  
>> Anyone know ANY reason to expect better from Musk's clown car parade?
> 
> Speaking strictly from a theoretical perspective, I don't see any reason why they shouldn't be able to offer latency that is "normally" below 100ms (to a regional PoP, not between two arbitrary points on the globe).  The satellites will be much closer to any given ground station than a GEO satellite, the latter typically adding 500ms to the path due mostly to physical distance.  All that is needed is to keep queue delays reasonably under control, and there's any number of AQMs that can help with that.  Clearly ATT Wireless did not perform any bufferbloat mitigation at all.
> 
> I have no insight or visibility into anything they're *actually* doing, though.  Can anyone dig up anything about that?

I think the claims about low latency are driven by lower _propagation_
delays. For long enough distances it may be more efficient to route up
to a LEO satellite, use inter-satellite links (there is a saving
because light propagates faster in free space than in fiber), and
eventually come back down to Earth.
This was mainly described in Mark Handley's HotNets'18 paper here:
https://dl.acm.org/doi/10.1145/3286062.3286075
(see also http://nrg.cs.ucl.ac.uk/mjh/starlink/)
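
To put rough numbers on the "long enough distances" point, here is a
small back-of-the-envelope sketch (Python). The ~550 km shell altitude,
the 1.3x path stretch for the inter-satellite route, and fiber at
roughly 2/3 of c are assumed illustrative values, not Starlink
specifics:

# Back-of-the-envelope propagation-delay comparison (assumed numbers):
# light travels ~2.0e5 km/s in fiber vs ~3.0e5 km/s in vacuum, the LEO
# shell sits at ~550 km, and the space path is taken as 1.3x the
# great-circle distance to cover the up/down legs and ISL routing.

C_VACUUM_KM_S = 299_792          # km/s in free space
C_FIBER_KM_S = 204_000           # km/s, roughly c/1.47 in silica fiber
ALTITUDE_KM = 550                # assumed LEO shell altitude
STRETCH = 1.3                    # assumed path stretch for the ISL route

def fiber_one_way_ms(distance_km):
    """One-way propagation delay over fiber laid along the great circle."""
    return distance_km / C_FIBER_KM_S * 1000

def leo_one_way_ms(distance_km):
    """One-way delay: up to the shell, across the ISLs, and back down."""
    space_path_km = 2 * ALTITUDE_KM + STRETCH * distance_km
    return space_path_km / C_VACUUM_KM_S * 1000

for d_km in (1_000, 5_000, 10_000):   # roughly regional to transoceanic
    print(f"{d_km:>6} km: fiber {fiber_one_way_ms(d_km):5.1f} ms,"
          f" LEO path {leo_one_way_ms(d_km):5.1f} ms")

# At 1,000 km the extra up/down legs make the space path slower; by
# ~10,000 km it wins even against ideal great-circle fiber, and real
# fiber routes are considerably longer than the great circle.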

Another recent paper is here, discussing several deployment options:
Giacomo Giuliari, Tobias Klenze, Markus Legner, David Basin, Adrian
Perrig, and Ankit Singla. 2020. Internet backbones in space. SIGCOMM
Comput. Commun. Rev. 50, 1 (January 2020), 25–37.
https://dl.acm.org/doi/abs/10.1145/3390251.3390256
(Note that there is an error in the results of Section 6.1 and Figure
2: the stretch factor was chosen a factor of 1.5 too large; an erratum
is under submission.)

However, queueing delay has not been considered so far. Furthermore, one may
expect packet reordering due to changing inter-satellite routes, etc.
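
As a toy illustration of that reordering effect (assumed delays, not
real Starlink routing): if the constellation switches a flow onto a
shorter path mid-stream, later packets can overtake earlier ones.

# Packets are sent 5 ms apart; halfway through, the one-way delay drops
# from 40 ms to 25 ms after a (hypothetical) inter-satellite reroute,
# so packets sent on the new path arrive ahead of older in-flight ones.

SEND_INTERVAL_MS = 5
OLD_PATH_DELAY_MS = 40   # assumed one-way delay before the reroute
NEW_PATH_DELAY_MS = 25   # assumed one-way delay on the new, shorter route

arrivals = []
for seq in range(8):
    send_time = seq * SEND_INTERVAL_MS
    delay = OLD_PATH_DELAY_MS if seq < 4 else NEW_PATH_DELAY_MS
    arrivals.append((send_time + delay, seq))

# Sort by arrival time to see the sequence numbers the receiver observes.
received_order = [seq for _, seq in sorted(arrivals)]
print("receive order:", received_order)   # prints [0, 1, 4, 2, 5, 3, 6, 7]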

Regards,
 Roland

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-14  1:17                 ` David Lang
@ 2020-06-14 15:40                   ` David P. Reed
  2020-06-14 15:57                     ` Michael Richardson
  0 siblings, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-14 15:40 UTC (permalink / raw)
  To: David Lang; +Cc: Michael Richardson, David Lang, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 3312 bytes --]


On Saturday, June 13, 2020 9:17pm, "David Lang" <david@lang.hm> said:
> > The lockdown has shown that actual low-latency e2e communication matters.

> > The gaming community has known this for awhile.
> 
> how has the lockdown shown this? video conferencing is seldom e2e
 
Well, it's seldom peer-to-peer, and for good reason: the number of streams to each endpoint would grow linearly in complexity, if not bandwidth, in peer-to-peer implementations of conferencing, and quickly become unusable. In principle, one could have the video/audio sources transmit multiple resolution versions of their camera/mic capture to each destination, and each destination could composite screens and mix audio itself, with a tightly coupled "decentralized" control algorithm.
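
To make that growth concrete, here is a quick counting sketch (Python).
The full-mesh, forwarding-server, and compositing-server stream counts
below are the usual back-of-the-envelope arithmetic, not the design of
any particular product; the per-endpoint figure is the one that matters
for a participant's uplink.

# Rough stream arithmetic for the scaling point above (illustrative only):
# in a full p2p mesh every endpoint sends to and receives from everyone
# else, while a conference server either forwards streams (SFU-style) or
# composites them into one stream per receiver (MCU-style).

def full_mesh(n):
    """(streams per endpoint, total streams) for an n-way p2p mesh."""
    return 2 * (n - 1), n * (n - 1)

def forwarding_server(n):
    """(per endpoint, total): one uplink each, n-1 forwarded downlinks."""
    return 1 + (n - 1), n * n

def compositing_server(n):
    """(per endpoint, total): one uplink and one composited downlink each."""
    return 2, 2 * n

for n in (3, 5, 15, 50):
    print(f"{n:>3} participants:"
          f" mesh {full_mesh(n)},"
          f" forwarding {forwarding_server(n)},"
          f" compositing {compositing_server(n)}")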
 
But, nonetheless, the application-server architectures of Zoom and WebEx are pretty distributed on the conference-server side, though a conference server definitely needs higher capacity than each endpoint, and it *is* end-to-end at the network level. It would be relatively simple to scatter this load out into many more conference-server endpoints, because of the basic e2e argument that separates the IP layer from the application layer. Blizzard Entertainment pioneered this kind of solution - scattering its gaming servers out close to the edge - and did so in an "end-to-end" way.
 
With a system like Starlink it seems important to me to distinguish peer-to-peer from end-to-end, something I have had a hard time explaining to people since 1978, when the first end-to-end arguments had their impact on the Internet design. Yes, I'm a big fan of moving function to the human-located endpoints where possible. But I also fought against multicasting in the routers/switches, because very few applications benefit from multicasting of packets alone by the network. Instead, almost all multi-endpoint systems need to coordinate, and that coordination is often best done (most scalably) by a network of endpoints that do the coordinated functions needed for a system. However, deciding what those functions must be in order to put them in the basic routers seems basically wrong - it blocks evolution of the application functionality, and puts lots of crap in the transport network that is at best suboptimal, and at worst gets actively in the way. (Billing by the packet on each link is the classic example of a "feature" that destroyed the Bell System architecture as a useful technology.)

> 
> and starlink will do very well with e2e communications, but the potential
> bottlenecks (and therefor potential buffering) aren't going to show up in e2e
> communications, they will show up where lots of endpoints are pulling data from
> servers not directly connected to starlink.
 
I hope neither Starlink nor the applications using it choose to "optimize" themselves for their first usage. That would be suicidal - it's what killed Iridium, which could ONLY carry 14.4 kb/sec per connection, by design, optimized for compressed voice only. That's why Negroponte and Papert and I couldn't use it to build 2B1, and went with Tachyon, despite Iridium being available at firesale prices and Nicholas's being on Motorola's board. Of course 2B1 was way too early in the satellite game, back in the 1990s. Interesting story there.

> 
> David Lang
> 

[-- Attachment #2: Type: text/html, Size: 4766 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-14 15:40                   ` David P. Reed
@ 2020-06-14 15:57                     ` Michael Richardson
  2020-06-14 21:04                       ` David P. Reed
  0 siblings, 1 reply; 22+ messages in thread
From: Michael Richardson @ 2020-06-14 15:57 UTC (permalink / raw)
  To: David P. Reed, David Lang, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 5134 bytes --]


David P. Reed <dpreed@deepplum.com> wrote:
    >> > The lockdown has shown that actual low-latency e2e communication matters.

    >> > The gaming community has known this for awhile.
    >>
    >> how has the lockdown shown this? video conferencing is seldom e2e

    > Well, it's seldom peer-to-peer (and for good reason, the number of
    > streams to each endpoint would grow linearly in complexity, if not
    > bandwidth, in peer-to-peer implementations of conferencing, and quickly
    > become unusable. In principle, one could have the video/audio sources
    > transmit multiple resolution versions of their camera/mic capture to
    > each destination, and each destination could composite screens and mix
    > audio itself, with a tightly coupled "decentralized" control
    > algorithm.)

Jitsi, Whereby, and BlueJeans are all p2p, for example.  There are n+1 WebRTC
streams (+1 because of the server).  It's a significantly better experience.
Yes, it doesn't scale to large groups.  So what?

My Karate class of ~15 people uses Zoom ... it is TERRIBLE in so many ways.
All that command and control, yet my Sensei can't take a question while demonstrating.
With all the other services, at least I can lock my view on him.

My nieces and my mom and I are not a 100-person conference.
For a group that small, p2p is more secure, lower latency, more resilient,
and does not require quite so much management BS to operate.

    > But, nonetheless, the application server architecture of Zoom and WebEx
    > are pretty distributed on the conference-server end, though it
    > definitely needs higher capacity than each endpoint, And it *is*
    > end-to-end at the network level. It would be relatively simple to
    > scatter this load out into many more conference-server endpoints,
    > because of the basic e2e argument that separates the IP layer from the
    > application layer. Blizzard Entertainment pioneered this kind of
    > solution - scattering its gaming servers out close to the edge, and did
    > so in an "end-to-end" way.

Yup.

    > With a system like starlink it seems important to me to distinguish
    > peer-to-peer from end-to-end, something I have had a hard time
    > explaining to people since 1978 when the first end-to-end arguments had
    > their impact on the Internet design. Yes, I'm a big fan of moving
    > function to the human-located endpoints where possible. But I also
    > fought against multicasting in the routers/switches, because very few
    > applications benefit from multi-casting of packets alone by the
    > network. Instead, almost all multi-endpoint systems need to coordinate,
    > and that coordination is often best done (most scalably) by a network
    > of endpoints that do the coordinated functions needed for a
    > system.

I see your point.  You jump from e2e vs p2p to multicast, and I think that
there might be an intermediate part of the argument that I've missed.

    > However, deciding what those functions must be, to put them in
    > the basic routers seems basically wrong - it blocks evolution of the
    > application functionality, and puts lots of crap in the transport
    > network that is at best suboptimal, ,and at worst gets actively in the
    > way. (Billing by the packet in each link being the classic example of a
    > "feature" that destroyed the Bell System architecture as a useful
    > technology).

I'd like to go the other way: while I don't want to bring back the Bell
System architecture, where only the network could innovate, I do think that
being able to bill by the packet is an important feature, and that we now
have the crypto and CPU power to do it right.
Consider the effect on spam and DDoS that such a thing would have.
We don't even have to bill for good packets :-)
There could be a bounty that every packet comes with, and if the packet is
rejected, then the bounty is collected.
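
One entirely hypothetical reading of that mechanism, as a toy sketch
with made-up units and no real crypto, protocol, or settlement system
behind it: each packet escrows a small bounty from its sender; an
accepted packet refunds the bounty, a rejected packet pays it to the
receiver, so unwanted traffic bleeds the sender.

from dataclasses import dataclass

BOUNTY = 1   # made-up cost units escrowed per packet (integers keep it exact)

@dataclass
class Packet:
    sender: str
    payload: bytes
    bounty: int = BOUNTY

balances = {"alice": 10_000, "spammer": 10_000, "bob": 0}

def deliver(pkt, receiver, wanted):
    """Escrow the bounty; refund it on acceptance, pay the receiver on rejection."""
    balances[pkt.sender] -= pkt.bounty
    if wanted:
        balances[pkt.sender] += pkt.bounty    # accepted: sender gets it back
    else:
        balances[receiver] += pkt.bounty      # rejected: receiver collects

deliver(Packet("alice", b"hello"), "bob", wanted=True)
for _ in range(1000):                          # a small flood of junk
    deliver(Packet("spammer", b"junk"), "bob", wanted=False)

print(balances)   # {'alice': 10000, 'spammer': 9000, 'bob': 1000}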

    >> and starlink will do very well with e2e communications, but the potential
    >> bottlenecks (and therefor potential buffering) aren't going to show up in e2e
    >> communications, they will show up where lots of endpoints are pulling data from
    >> servers not directly connected to starlink.

    > I hope neither Starlink or the applications using it choose to
    > "optimize" themselves for their first usage. That would be suicidal -
    > it's what killed Iridium, which could ONLY carry 14.4 kb/sec per
    > connection, by design. Optimized for compressed voice only. That's why
    > Negroponte and Papert and I couldn't use it to build 2B1, and went with
    > Tachyon, despite Iridium being available for firesale prices and
    > Nicholas's being on Motorola's board. Of course 2B1 was way too early
    > in the satellite game, back in the 1990's. Interesting story there.

I agree: they need to have the ability to support a variety of services,
particularly ones that we have no clue about.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-14 15:57                     ` Michael Richardson
@ 2020-06-14 21:04                       ` David P. Reed
  2020-06-14 23:13                         ` Michael Richardson
  0 siblings, 1 reply; 22+ messages in thread
From: David P. Reed @ 2020-06-14 21:04 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David Lang, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 8149 bytes --]


I have no problem with WebRTC-based videoconferencing. In fact, I think it is pretty good for 2-4 endpoints. But when you have lower-cost laptops, they drag down the conferencing because of the compositing, mixing, and multiple-stream transmission load, even with 2-3 other participants.
 
[Why do you think I and others fought for datagrams back in 1976? If it hadn't been for our little cabal, you would have TCP virtual circuits. Period. And it took a lot of arguing - a whole year of nearly losing - until we forced the creation of the IP datagram layer separate from TCP. It was easier to disentangle Telnet from TCP - the initial TCP was going to have "break" and "formatting" characters defined in it, "line at a time" options, etc., because it was thought that Telnet was the primary use of the Internet, and if you wanted binary octets, you could just quote them.]
 
So, WebRTC is fine, and it works as well as it works. However, the engineering thought needed to make peer-to-peer *conferencing* scale well (up to interactive webinars with meeting rooms that can be split off, etc) is simplified by having resources other than just "peers" to do that work. Simplifying scalability is a good thing.
 
I don't think forcing everyone to do media over WebRTC is any better than forcing everyone to use centralized servers. Each approach has tradeoffs. The point of the end-to-end argument was to enable those tradeoffs.
 
As far as billing by packets goes: no, I hope it never happens. 70% of the opex of the Bell System was billing-related, because of micro-specific billing.
To bill, you need auditability of the bills. It's not simple at all - you can't just send billing packets to accounts associated with endpoints in an Internet. Every packet on every link is potentially (and likely) billed differently.
 
If you want the amount you pay your access provider to be proportional to traffic you generate or receive, that may be feasible. Desirable? That's another issue not relevant here. I was talking about billing every packet back to "some responsible party" from every link to reimburse the investor who built that link, bit by bit. That's what the Bell System actually did (outside a local exchange, and between states). And that's why 70% of opex was billing, which would have soared if it were by packet rather than by "call".
 
You have the Internet ONLY because of Carterfone and other decisions that forced interfaces into the Bell System that they didn't want, because it wrecked their monopoly business model. Data was charged at an incredibly high rate, separate from voice, because to bill for it cost more. The UK got the Internet only because they had to follow the US (rather than billing outrageously). Europe got it last, because PTT's were essentially government revenue generators, so the government hated the idea of losing the money.
 
 
On Sunday, June 14, 2020 11:57am, "Michael Richardson" <mcr@sandelman.ca> said:



> 
> David P. Reed <dpreed@deepplum.com> wrote:
> >> > The lockdown has shown that actual low-latency e2e communication
> matters.
> 
> >> > The gaming community has known this for awhile.
> >>
> >> how has the lockdown shown this? video conferencing is seldom e2e
> 
> > Well, it's seldom peer-to-peer (and for good reason, the number of
> > streams to each endpoint would grow linearly in complexity, if not
> > bandwidth, in peer-to-peer implementations of conferencing, and quickly
> > become unusable. In principle, one could have the video/audio sources
> > transmit multiple resolution versions of their camera/mic capture to
> > each destination, and each destination could composite screens and mix
> > audio itself, with a tightly coupled "decentralized" control
> > algorithm.)
> 
> JITSI, whereby, and bluejeans are all p2p for example. There are n+1 webrtc
> streams (+1 because server). It's a significantly better experience.
> Yes, it doesn't scale to large groups. So what?
> 
> My Karate class of ~15 people uses Zoom ... it is TERRIBLE in so many ways.
> All that command and control, yet my Sensei can't take a question while
> demonstrating.
> With all the other services, at least I can lock my view on him.
> 
> My nieces and my mom and I are not a 100 person conference.
> It's more secure, lower latency, more resilient and does not require quite so
> much management BS to operate.
> 
> > But, nonetheless, the application server architecture of Zoom and WebEx
> > are pretty distributed on the conference-server end, though it
> > definitely needs higher capacity than each endpoint, And it *is*
> > end-to-end at the network level. It would be relatively simple to
> > scatter this load out into many more conference-server endpoints,
> > because of the basic e2e argument that separates the IP layer from the
> > application layer. Blizzard Entertainment pioneered this kind of
> > solution - scattering its gaming servers out close to the edge, and did
> > so in an "end-to-end" way.
> 
> Yup.
> 
> > With a system like starlink it seems important to me to distinguish
> > peer-to-peer from end-to-end, something I have had a hard time
> > explaining to people since 1978 when the first end-to-end arguments had
> > their impact on the Internet design. Yes, I'm a big fan of moving
> > function to the human-located endpoints where possible. But I also
> > fought against multicasting in the routers/switches, because very few
> > applications benefit from multi-casting of packets alone by the
> > network. Instead, almost all multi-endpoint systems need to coordinate,
> > and that coordination is often best done (most scalably) by a network
> > of endpoints that do the coordinated functions needed for a
> > system.
> 
> I see your point. You jump from e2e vs p2p to multicast, and I think that
> there might be an intermediate part of the argument that I've missed.
> 
> > However, deciding what those functions must be, to put them in
> > the basic routers seems basically wrong - it blocks evolution of the
> > application functionality, and puts lots of crap in the transport
> > network that is at best suboptimal, ,and at worst gets actively in the
> > way. (Billing by the packet in each link being the classic example of a
> > "feature" that destroyed the Bell System architecture as a useful
> > technology).
> 
> I'd like to go the other way: while I don't want to bring back the Bell
> System architecture, where only the network could innovate, I do think that
> being able to bill by the packet is an important feature that I think we now
> have the crypto and CPU power to do right.
> Consider the affect on spam and DDoS that such a thing would have.
> We don't even have to bill for good packets :-)
> There could be a bounty that every packet comes from, and if it is rejected,
> then the bounty is collected.
> 
> >> and starlink will do very well with e2e communications, but the
> potential
> >> bottlenecks (and therefor potential buffering) aren't going to show
> up in e2e
> >> communications, they will show up where lots of endpoints are pulling
> data from
> >> servers not directly connected to starlink.
> 
> > I hope neither Starlink or the applications using it choose to
> > "optimize" themselves for their first usage. That would be suicidal -
> > it's what killed Iridium, which could ONLY carry 14.4 kb/sec per
> > connection, by design. Optimized for compressed voice only. That's why
> > Negroponte and Papert and I couldn't use it to build 2B1, and went with
> > Tachyon, despite Iridium being available for firesale prices and
> > Nicholas's being on Motorola's board. Of course 2B1 was way too early
> > in the satellite game, back in the 1990's. Interesting story there.
> 
> I agree: they need to have the ability to support a variety of services,
> particularly ones that we have no clue about.
> 
> --
> ] Never tell me the odds! | ipv6 mesh networks [
> ] Michael Richardson, Sandelman Software Works | IoT architect [
> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
> 
> 

[-- Attachment #2: Type: text/html, Size: 10931 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims
  2020-06-14 21:04                       ` David P. Reed
@ 2020-06-14 23:13                         ` Michael Richardson
  0 siblings, 0 replies; 22+ messages in thread
From: Michael Richardson @ 2020-06-14 23:13 UTC (permalink / raw)
  To: David P. Reed; +Cc: David Lang, Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 1888 bytes --]


David P. Reed <dpreed@deepplum.com> wrote:
    > I have no problem with WebRTC based videoconferencing. In fact, I think
    > it is pretty good for 2-4 endpoints. But when you have lower cost
    > laptops, they drag down the conferencing because of the compositing and
    > mixing and multiple stream transmission load, even with 2-3 other
    > participants.

As for "low-end" --- that definition is changing  :-)
The advantage of compositing locally is that the end point can actually do
what the end user wants.  Not the other way around.  Smart end points, rather
than smart networks.

    > [What do you think I and others fought for datagrams back in 1976 for?

Yeah, I wasn't old enough to watch, but I did read of that debate.

    > As far as billing by packets, no, I hope it never happens. 70% of the
    > opex of the Bell System was billing-related, because of micro-specific
    > billing.

I'm not so young (at 49, being a precocious teenager in 1984) as to not understand this :-)

In fact, with VoIP business systems,  there are still ridiculous attempts to
do billing.  A lot of effort for very little ROI.

But, it turns out that the billing data lets you get at other interesting results:
  like when are your busy times,
  and do you have enough of the right sales representatives present?
  how many days after that first 30C day do the calls to the pool store start?
  (I did have to figure that out...)

I think that we can come up with different models that would help with the
endless undesirable packets.  It's not trivial, and I sure don't want to go
back down the same road.  But I think there is something here.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2020-06-14 23:13 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-11 16:03 [Bloat] FW: [Dewayne-Net] Ajit Pai caves to SpaceX but is still skeptical of Musk's latency claims David P. Reed
2020-06-11 16:14 ` Jonathan Morton
2020-06-11 18:46   ` David P. Reed
2020-06-11 18:56     ` David Lang
2020-06-11 19:16       ` David P. Reed
2020-06-11 19:28         ` David Lang
2020-06-12 15:39           ` Michael Richardson
2020-06-13  5:43             ` David Lang
2020-06-13 18:41               ` David P. Reed
2020-06-14  0:03                 ` David Lang
2020-06-14  0:36               ` Michael Richardson
2020-06-14  1:17                 ` David Lang
2020-06-14 15:40                   ` David P. Reed
2020-06-14 15:57                     ` Michael Richardson
2020-06-14 21:04                       ` David P. Reed
2020-06-14 23:13                         ` Michael Richardson
2020-06-12 15:30     ` Michael Richardson
2020-06-12 19:50       ` David P. Reed
2020-06-13 21:15         ` Michael Richardson
2020-06-13 23:02           ` Jonathan Morton
2020-06-14  0:06           ` David Lang
2020-06-14 11:23   ` Roland Bless

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox