Many ISPs need the kinds of quality shaping cake can do
* Re: [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA
       [not found]   ` <F4CA66DA-516C-438A-8D8A-5F172E5DFA75@cable.comcast.com>
@ 2023-01-09 15:26     ` Dave Taht
  2023-01-09 17:00       ` Sebastian Moeller
                         ` (3 more replies)
  0 siblings, 4 replies; 183+ messages in thread
From: Dave Taht @ 2023-01-09 15:26 UTC (permalink / raw)
  To: Livingood, Jason
  Cc: David P. Reed, starlink, mike.reynolds, Rpm, bloat, libreqos

I have many kvetches about the new latency under load tests being
designed and distributed over the past year. I am delighted! that they
are happening, but most really need third party evaluation, and
calibration, and a solid explanation of what network pathologies they
do and don't cover. Also a RED team attitude towards them, as well as
thinking hard about what you are not measuring (operations research).

I actually rather love the new cloudflare speedtest, because it tests
a single TCP connection, rather than dozens, and at the same time folk
are complaining that it doesn't find the actual "speed!". yet... the
test itself more closely emulates a user experience than speedtest.net
does. I am personally pretty convinced that the fewer flows a web
page opens, the better the likelihood of a good user experience, but
lack data on it.

To try to tackle the evaluation and calibration part, I've reached out
to all the new test designers in the hope that we could get together
and produce a report of what each new test is actually doing. I've
tweeted, linked in, emailed, and spammed every measurement list I know
of, with only some response. Please reach out to other test designer
folks and have them join the rpm email list?

My principal kvetches in the new tests so far are:

0) None of the tests last long enough.

Ideally there should be a mode where they at least run to "time of
first loss", or periodically, just run longer than the
industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
there! It's really bad science to optimize the internet for 20
seconds. It's like optimizing a car, to handle well, for just 20
seconds.

1) Not testing up + down + ping at the same time

None of the new tests actually test the same thing that the infamous
rrul test does - all the others still test up, then down, and ping. It
was/remains my hope that the simpler parts of the flent test suite -
such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
tests would provide calibration to the test designers.

we've got zillions of flent results in the archive published here:
https://blog.cerowrt.org/post/found_in_flent/

The new tests have all added up + ping and down + ping, but not up +
down + ping. Why??

The behaviors of what happens in that case are really non-intuitive, I
know, but... it's just one more phase to add to any one of those new
tests. I'd be deliriously happy if someone(s) new to the field
started doing that, even optionally, and boggled at how it defeated
their assumptions.

Among other things that would show...

It's the home router industry's dirty secret that darn few "gigabit"
home routers can actually forward in both directions at a gigabit. I'd
like to smash that perception thoroughly, but given that our starting
point was a "gigabit" router that was really a "gigabit switch" - and
historically something that couldn't even forward at 200Mbit - we have
a long way to go there.

Only in the past year have non-x86 home routers appeared that could
actually do a gbit in both directions.

2) Few are actually testing within-stream latency

Apple's rpm project is making a stab in that direction. It looks
highly likely, that with a little more work, crusader and
go-responsiveness can finally start sampling the tcp RTT, loss and
markings, more directly. As for the rest... sampling TCP_INFO on
windows, and Linux, at least, always appeared simple to me, but I'm
discovering how hard it is by delving deep into the rust behind
crusader.
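
A minimal sketch of that sampling on Linux (Python; the field offsets
follow struct tcp_info in linux/tcp.h, so treat them as an assumption
to be checked against your kernel headers):

```python
import socket
import struct

def tcp_rtt_info(sock):
    """Sample smoothed RTT, RTT variance, and loss/retransmit counters
    for a connected TCP socket via TCP_INFO (Linux only).

    Offsets follow struct tcp_info in linux/tcp.h: eight single-byte
    fields, then u32 counters (tcpi_rto .. tcpi_rcv_ssthresh), then
    tcpi_rtt and tcpi_rttvar in microseconds.
    """
    buf = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    u32 = struct.unpack_from("15I", buf, 8)   # tcpi_rto .. tcpi_rcv_ssthresh
    lost, retrans = u32[6], u32[7]            # tcpi_lost, tcpi_retrans
    srtt_us, rttvar_us = struct.unpack_from("2I", buf, 68)
    return srtt_us, rttvar_us, lost, retrans
```

Polling that once per RTT per flow is cheap; the hard part, as the
crusader internals show, is doing it portably and mid-transfer without
perturbing the test itself.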

the goresponsiveness thing is also IMHO running WAY too many streams
at the same time, I guess motivated by an attempt to have the test
complete quickly?

B) To try and tackle the validation problem:

In the libreqos.io project we've established a testbed where tests can
be plunked through various ISP plan network emulations. It's here:
https://payne.taht.net (run bandwidth test for what's currently hooked
up)

We could rather use an AS number and at least an IPv4 /24 and an IPv6
/48 to leverage with that, so I don't have to NAT the various
emulations. (and funding, anyone got funding?) Or, as the code is
GPLv2 licensed, I'd love to see more test designers set up a testbed
like this to calibrate their own stuff.

Presently we're able to test:
flent
netperf
iperf2
iperf3
speedtest-cli
crusader
the broadband forum udp based test:
https://github.com/BroadbandForum/obudpst
trexx

There's also a virtual machine setup that we can remotely drive a web
browser from (but I didn't want to nat the results to the world) to
test other web services.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 15:26     ` [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
@ 2023-01-09 17:00       ` Sebastian Moeller
  2023-01-09 17:04       ` Jeremy Austin
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-09 17:00 UTC (permalink / raw)
  To: Dave Täht
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

Hi Dave,


just a data point: Apple's networkQuality on Monterey (12.6.2, x86) defaults to bi-directionally saturating traffic. Your argument about the duration still holds, though; the test is really short. While I understand the motivation behind that, I think it would do the internet much good if all such tests randomly offered users an extended test duration of, say, a minute. Users would need to opt in, but that would at least collect some longer-duration data. Now, I have no idea whether Apple actually keeps results on their server side (Ookla sure does, but given Apple's laudable privacy stance they might not), in which case extended tests would do little good there; but for "players" like Ookla that do keep some logs, interspersing longer-running tests would offer a great way to test ISPs outside the "magic 20 seconds".


> On Jan 9, 2023, at 16:26, Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> I have many kvetches about the new latency under load tests being
> designed and distributed over the past year. I am delighted! that they
> are happening, but most really need third party evaluation, and
> calibration, and a solid explanation of what network pathologies they
> do and don't cover. Also a RED team attitude towards them, as well as
> thinking hard about what you are not measuring (operations research).

	[SM] RED as in RED/BLUE team or as in random early detection? ;)

> 
> I actually rather love the new cloudflare speedtest, because it tests
> a single TCP connection, rather than dozens, and at the same time folk
> are complaining that it doesn't find the actual "speed!".

	[SM] Ookla's on-line test can be toggled between multi- and single-flow mode (which is good; the default is multi), but e.g. the official macOS client application from Ookla does not offer this toggle and defaults to multi-flow (which is less good). Fast.com can be configured for single-flow tests, but defaults to multi-flow.


> yet... the
> test itself more closely emulates a user experience than speedtest.net
> does.

	[SM] I like the separate reporting for transfer rates for objects of different sizes. I would argue that both single and multi-flow tests have merit, but I agree with you that if only one test is performed a single-flow test seems somewhat better.

> I am personally pretty convinced that the fewer numbers of flows
> that a web page opens improves the likelihood of a good user
> experience, but lack data on it.
> 
> To try to tackle the evaluation and calibration part, I've reached out
> to all the new test designers in the hope that we could get together
> and produce a report of what each new test is actually doing.

	[SM] +1; and probably part of your questionnaire already: what measures are actually reported back to the user.


> I've
> tweeted, linked in, emailed, and spammed every measurement list I know
> of, and only to some response, please reach out to other test designer
> folks and have them join the rpm email list?
> 
> My principal kvetches in the new tests so far are:
> 
> 0) None of the tests last long enough.
> 
> Ideally there should be a mode where they at least run to "time of
> first loss", or periodically, just run longer than the
> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> there! It's really bad science to optimize the internet for 20
> seconds. It's like optimizing a car, to handle well, for just 20
> seconds.

	[SM] ++1

> 1) Not testing up + down + ping at the same time
> 
> None of the new tests actually test the same thing that the infamous
> rrul test does - all the others still test up, then down, and ping. It
> was/remains my hope that the simpler parts of the flent test suite -
> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> tests would provide calibration to the test designers.
> 
> we've got zillions of flent results in the archive published here:
> https://blog.cerowrt.org/post/found_in_flent/
> 
> The new tests have all added up + ping and down + ping, but not up +
> down + ping. Why??

	[SM] I think at least on Monterey Apple's networkQuality does bidirectional tests (I just confirmed that via packet capture, and it is already visible in iftop, albeit hobbled by iftop's relatively high default hysteresis). You actually need to manually intervene to get a sequential test:

laptop:~ user$ networkQuality -h
USAGE: networkQuality [-C <configuration_url>] [-c] [-h] [-I <interfaceName>] [-s] [-v]
    -C: override Configuration URL
    -c: Produce computer-readable output
    -h: Show help (this message)
    -I: Bind test to interface (e.g., en0, pdp_ip0,...)
    -s: Run tests sequentially instead of parallel upload/download
    -v: Verbose output

laptop:~ user $ networkQuality -v
==== SUMMARY ====                                                                                         
Upload capacity: 194.988 Mbps
Download capacity: 894.162 Mbps
Upload flows: 16
Download flows: 12
Responsiveness: High (2782 RPM)
Base RTT: 8
Start: 1/9/23, 17:45:57
End: 1/9/23, 17:46:12
OS Version: Version 12.6.2 (Build 21G320)

laptop:~ user $ networkQuality -v -s
==== SUMMARY ====                                                                                         
Upload capacity: 641.206 Mbps
Download capacity: 883.787 Mbps
Upload flows: 16
Download flows: 12
Upload Responsiveness: High (3529 RPM)
Download Responsiveness: High (1939 RPM)
Base RTT: 8
Start: 1/9/23, 17:46:17
End: 1/9/23, 17:46:41
OS Version: Version 12.6.2 (Build 21G320)

(this is alas not my home link...)
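
As an aside, since RPM is defined as round trips per minute, the
responsiveness figures above convert directly into a working latency
in milliseconds:

```python
def rpm_to_ms(rpm: float) -> float:
    """Convert responsiveness (round trips per minute) to ms per round trip."""
    return 60_000.0 / rpm

# Parallel run above: 2782 RPM is ~21.6 ms of working latency;
# sequential upload phase: 3529 RPM is ~17.0 ms.
```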


> 
> The behaviors of what happens in that case are really non-intuitive, I
> know, but... it's just one more phase to add to any one of those new
> tests. I'd be deliriously happy if someone(s) new to the field
> started doing that, even optionally, and boggled at how it defeated
> their assumptions.

	[SM] Someone at Apple apparently listened ;)


> 
> Among other things that would show...
> 
> It's the home router industry's dirty secret than darn few "gigabit"
> home routers can actually forward in both directions at a gigabit.

	[SM] That is going to be remedied in the near future. The first batch of nominal gigabit links were mostly asymmetric, e.g. often something like 1000/50 over DOCSIS or 1000/500 over GPON (reflecting the asymmetric nature of these media in the field). But with symmetric XGS-PON being deployed by more and more ISPs (still a low absolute number), symmetric performance is going to move into the spotlight. However, my guess is that the first few generations of home routers for these speed grades will rely heavily on accelerator engines.


> I'd
> like to smash that perception thoroughly, but given our starting point
> is a gigabit router was a "gigabit switch" - and historically been
> something that couldn't even forward at 200Mbit - we have a long way
> to go there.
> 
> Only in the past year have non-x86 home routers appeared that could
> actually do a gbit in both directions.
> 
> 2) Few are actually testing within-stream latency
> 
> Apple's rpm project is making a stab in that direction. It looks
> highly likely, that with a little more work, crusader and
> go-responsiveness can finally start sampling the tcp RTT, loss and
> markings, more directly. As for the rest... sampling TCP_INFO on
> windows, and Linux, at least, always appeared simple to me, but I'm
> discovering how hard it is by delving deep into the rust behind
> crusader.

	[SM] I think go-responsiveness looks at TCP_INFO already (on request) but will report an aggregate info block over all flows, which can get interesting, as in my testing I often see a mix of IPv4 and IPv6 flows within individual tests, with noticeably different numbers for e.g. MSS. (Yes, MSS is not what you are asking for here, but I think flent does it right by diligently reporting all such measures flow-by-flow; that will explode pretty quickly, though, if say a test uses 32/32 flows per direction.)


> 
> the goresponsiveness thing is also IMHO running WAY too many streams
> at the same time, I guess motivated by an attempt to have the test
> complete quickly?

	[SM] I can only guess, but the goal is to saturate the link persistently (and to get to that state fast), and for that goal parallel flows seem to be OK, especially as that will reduce the server load for each of these flows a bit, no?


> 
> B) To try and tackle the validation problem:
> 
> In the libreqos.io project we've established a testbed where tests can
> be plunked through various ISP plan network emulations. It's here:
> https://payne.taht.net (run bandwidth test for what's currently hooked
> up)
> 
> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> leverage with that, so I don't have to nat the various emulations.
> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> to see more test designers setup a testbed like this to calibrate
> their own stuff.
> 
> Presently we're able to test:
> flent
> netperf
> iperf2
> iperf3
> speedtest-cli
> crusader
> the broadband forum udp based test:
> https://github.com/BroadbandForum/obudpst
> trexx
> 
> There's also a virtual machine setup that we can remotely drive a web
> browser from (but I didn't want to nat the results to the world) to
> test other web services.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 15:26     ` [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
  2023-01-09 17:00       ` Sebastian Moeller
@ 2023-01-09 17:04       ` Jeremy Austin
  2023-01-09 18:33         ` Dave Taht
  2023-01-09 18:54       ` [LibreQoS] [EXTERNAL] " Livingood, Jason
  2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
  3 siblings, 1 reply; 183+ messages in thread
From: Jeremy Austin @ 2023-01-09 17:04 UTC (permalink / raw)
  To: Dave Taht
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat


On Mon, Jan 9, 2023 at 10:26 AM Dave Taht via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:

> I
>
> 2) Few are actually testing within-stream latency
>
>
If some kind of consensus can be generated around how latency under load
should be reported, and bearing in mind that to date Preseem measures
non-destructively, i.e., not generating synthetic flows, we would be
happy to help by adding that analysis to our regular reporting.

We have some FWA-specific latency numbers in our reports, but will be
adding more granular reporting for other access tech as well. A
single-dimension histogram isn't sufficient, IMO, but do we really need to
teach everyone to read CDFs? Maybe.

--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More: https://preseem.com/stay-connected/


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 17:04       ` Jeremy Austin
@ 2023-01-09 18:33         ` Dave Taht
  0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2023-01-09 18:33 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

On Mon, Jan 9, 2023 at 9:05 AM Jeremy Austin <jeremy@aterlo.com> wrote:
>
>
>
> On Mon, Jan 9, 2023 at 10:26 AM Dave Taht via LibreQoS <libreqos@lists.bufferbloat.net> wrote:
>>
>> I
>>
>> 2) Few are actually testing within-stream latency
>>
>
> If some kind of consensus can be generated around how latency under load should be reported, and bearing in mind that to date Preseem measures non-destructively, i.e., not generating synthetic flows, we would be happy to help by adding that analysis to our regular reporting.

Yes, it is presently too vague a term. What load? And (my kvetch,
mostly) over what time period?

> We have some FWA-specific latency numbers in our reports, but will be adding more granular reporting for other access tech as well. A single-dimension histogram isn't sufficient, IMO, but do we really need to teach everyone to read CFS? Maybe.

In writing a really ranty blog entry about my new chromebook over the
holiday (feel free to subject yourself here:
https://blog.cerowrt.org/post/carping_on_a_chromebook/ ) I realized
how different my workloads are from most people's, and why latency
under load matters so much to me(!) -

I regularly use ssh from the front of my boat to aft, suffer from
running out of LTE bandwidth, use X to remotely screen share, do big
backups, git pulls and pushes, live 24/7 in 15+ mosh terminal tabs to
machines all over the world, play interactive network games, and do
massive compiles of huge source code bases.

I realized, today, after venting my spleen in that blog, that it was
highly unlikely that the vast majority of people out there used their
networks as I do, and it was irrational of me to project my needs on
theirs. Despite identifying new applications, like cloud gaming, and
edge computing, that would benefit if we smashed the LUL there, I am
selfishly in this game to make my DISPLAY variable "just work" for
emacs as well as it did in the 90s.

But then again, I'm pretty sure most people, at least occasionally,
push a big file up or down and get burned by bufferbloat. The subset
of gamers, and to some extent videoconferencers, also - but the
majority?

So a really interesting piece of data that I'd like to acquire from an
ISP-facing network is not even the bloat but a histogram of the
durations from SYN to FIN for more normal users. I imagine that 99%
of all TCP transactions to/from home users are very, very short.
Uploads, longer.
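
A sketch of how such a histogram could be built, assuming SYN/FIN
timestamps per flow are available (the event format here is
hypothetical - e.g. derived from conntrack or flow logs):

```python
def flow_duration_histogram(events, bin_edges=(0.1, 1.0, 10.0, 60.0)):
    """Histogram of SYN-to-FIN durations.

    events: iterable of (flow_id, timestamp_s, flag), flag in {'syn', 'fin'}.
    Returns per-bucket counts; the last bucket is open-ended (> 60 s).
    Flows that never FIN within the capture are simply skipped.
    """
    syn, fin = {}, {}
    for fid, ts, flag in events:
        if flag == 'syn':
            syn.setdefault(fid, ts)     # first SYN wins (retransmits ignored)
        elif flag == 'fin':
            fin[fid] = ts               # last FIN wins
    counts = [0] * (len(bin_edges) + 1)
    for fid, t0 in syn.items():
        if fid in fin:
            duration = fin[fid] - t0
            counts[sum(duration > edge for edge in bin_edges)] += 1
    return counts
```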


>
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
>
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy@preseem.com
>
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 15:26     ` [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
  2023-01-09 17:00       ` Sebastian Moeller
  2023-01-09 17:04       ` Jeremy Austin
@ 2023-01-09 18:54       ` Livingood, Jason
  2023-01-09 19:19         ` [LibreQoS] [Rpm] " rjmcmahon
  2023-01-09 20:49         ` [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
  2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
  3 siblings, 2 replies; 183+ messages in thread
From: Livingood, Jason @ 2023-01-09 18:54 UTC (permalink / raw)
  To: Dave Taht; +Cc: starlink, Rpm, bloat, libreqos

> 0) None of the tests last long enough.

The user-initiated ones tend to be shorter - likely because the average user does not want to wait several minutes for a test to complete. But IMO this is where a test platform like SamKnows, Ookla's embedded client, NetMicroscope, and others can come in - since they run in the background on some randomized schedule w/o user intervention. Thus, the user's time-sensitivity is no longer a factor and a longer duration test can be performed.

> 1) Not testing up + down + ping at the same time

You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in IPPM...

JL


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 15:26     ` [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
                         ` (2 preceding siblings ...)
  2023-01-09 18:54       ` [LibreQoS] [EXTERNAL] " Livingood, Jason
@ 2023-01-09 19:13       ` rjmcmahon
  2023-01-09 19:47         ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
                           ` (2 more replies)
  3 siblings, 3 replies; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 19:13 UTC (permalink / raw)
  To: Dave Taht
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

My biggest barrier is the lack of clock sync by the devices, i.e. very 
limited support for PTP in data centers and in end devices. This limits 
the ability to measure one way delays (OWD), and most assume that OWD is 
1/2 the RTT, which typically is a mistake. We know this intuitively from 
airplane flight times or even car commute times, where the one way time 
is not 1/2 a round trip time. Google maps & directions provide a time 
estimate for the one way link; it doesn't compute a round trip and 
divide by two.
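
To make the asymmetry concrete: with synchronized clocks, both OWDs
fall straight out of four timestamps, and RTT/2 matches neither on an
asymmetric path (the numbers below are made-up illustration values):

```python
def one_way_delays(t_send, t_remote_rx, t_remote_tx, t_recv):
    """With synchronized clocks: client send time, server receive time,
    server reply time, client receive time -> both OWDs and the network
    RTT (server processing time excluded)."""
    owd_up = t_remote_rx - t_send
    owd_down = t_recv - t_remote_tx
    return owd_up, owd_down, owd_up + owd_down

# 30 ms up, 10 ms down: RTT = 40 ms, so "RTT/2" = 20 ms misstates both.
```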

For those that can get clock sync working, the iperf 2 --trip-times 
options is useful.

--trip-times
   enable the measurement of end to end write to read latencies (client 
and server clocks must be synchronized)

Bob
> I have many kvetches about the new latency under load tests being
> designed and distributed over the past year. I am delighted! that they
> are happening, but most really need third party evaluation, and
> calibration, and a solid explanation of what network pathologies they
> do and don't cover. Also a RED team attitude towards them, as well as
> thinking hard about what you are not measuring (operations research).
> 
> I actually rather love the new cloudflare speedtest, because it tests
> a single TCP connection, rather than dozens, and at the same time folk
> are complaining that it doesn't find the actual "speed!". yet... the
> test itself more closely emulates a user experience than speedtest.net
> does. I am personally pretty convinced that the fewer numbers of flows
> that a web page opens improves the likelihood of a good user
> experience, but lack data on it.
> 
> To try to tackle the evaluation and calibration part, I've reached out
> to all the new test designers in the hope that we could get together
> and produce a report of what each new test is actually doing. I've
> tweeted, linked in, emailed, and spammed every measurement list I know
> of, and only to some response, please reach out to other test designer
> folks and have them join the rpm email list?
> 
> My principal kvetches in the new tests so far are:
> 
> 0) None of the tests last long enough.
> 
> Ideally there should be a mode where they at least run to "time of
> first loss", or periodically, just run longer than the
> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> there! It's really bad science to optimize the internet for 20
> seconds. It's like optimizing a car, to handle well, for just 20
> seconds.
> 
> 1) Not testing up + down + ping at the same time
> 
> None of the new tests actually test the same thing that the infamous
> rrul test does - all the others still test up, then down, and ping. It
> was/remains my hope that the simpler parts of the flent test suite -
> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> tests would provide calibration to the test designers.
> 
> we've got zillions of flent results in the archive published here:
> https://blog.cerowrt.org/post/found_in_flent/
ps. Misinformation about iperf 2 impacts my ability to do this.

> The new tests have all added up + ping and down + ping, but not up +
> down + ping. Why??
> 
> The behaviors of what happens in that case are really non-intuitive, I
> know, but... it's just one more phase to add to any one of those new
> tests. I'd be deliriously happy if someone(s) new to the field
> started doing that, even optionally, and boggled at how it defeated
> their assumptions.
> 
> Among other things that would show...
> 
> It's the home router industry's dirty secret than darn few "gigabit"
> home routers can actually forward in both directions at a gigabit. I'd
> like to smash that perception thoroughly, but given our starting point
> is a gigabit router was a "gigabit switch" - and historically been
> something that couldn't even forward at 200Mbit - we have a long way
> to go there.
> 
> Only in the past year have non-x86 home routers appeared that could
> actually do a gbit in both directions.
> 
> 2) Few are actually testing within-stream latency
> 
> Apple's rpm project is making a stab in that direction. It looks
> highly likely, that with a little more work, crusader and
> go-responsiveness can finally start sampling the tcp RTT, loss and
> markings, more directly. As for the rest... sampling TCP_INFO on
> windows, and Linux, at least, always appeared simple to me, but I'm
> discovering how hard it is by delving deep into the rust behind
> crusader.
> 
> the goresponsiveness thing is also IMHO running WAY too many streams
> at the same time, I guess motivated by an attempt to have the test
> complete quickly?
> 
> B) To try and tackle the validation problem:

ps. Misinformation about iperf 2 impacts my ability to do this.

> 
> In the libreqos.io project we've established a testbed where tests can
> be plunked through various ISP plan network emulations. It's here:
> https://payne.taht.net (run bandwidth test for what's currently hooked
> up)
> 
> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> leverage with that, so I don't have to nat the various emulations.
> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> to see more test designers setup a testbed like this to calibrate
> their own stuff.
> 
> Presently we're able to test:
> flent
> netperf
> iperf2
> iperf3
> speedtest-cli
> crusader
> the broadband forum udp based test:
> https://github.com/BroadbandForum/obudpst
> trexx
> 
> There's also a virtual machine setup that we can remotely drive a web
> browser from (but I didn't want to nat the results to the world) to
> test other web services.
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 18:54       ` [LibreQoS] [EXTERNAL] " Livingood, Jason
@ 2023-01-09 19:19         ` rjmcmahon
  2023-01-09 19:56           ` dan
  2023-01-09 20:49         ` [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 19:19 UTC (permalink / raw)
  To: Livingood, Jason; +Cc: Dave Taht, starlink, Rpm, libreqos, bloat

User-based, long-duration tests seem fundamentally flawed. QoE for users 
is driven by user expectations, and if a user won't wait on a long test, 
they for sure aren't going to wait minutes for a web page download. If 
it's a long-duration use case, e.g. a file download, then latency isn't 
typically driving QoE.

Note: Even for internal tests, we try to keep our automated tests down 
to 2 seconds. There are reasons to test for minutes (things like PHY 
cals in our chips) but it's more the exception than the rule.

Bob
>> 0) None of the tests last long enough.
> 
> The user-initiated ones tend to be shorter - likely because the
> average user does not want to wait several minutes for a test to
> complete. But IMO this is where a test platform like SamKnows, Ookla's
> embedded client, NetMicroscope, and others can come in - since they
> run in the background on some randomized schedule w/o user
> intervention. Thus, the user's time-sensitivity is no longer a factor
> and a longer duration test can be performed.
> 
>> 1) Not testing up + down + ping at the same time
> 
> You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in 
> IPPM...
> 
> JL
> 
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
@ 2023-01-09 19:47         ` Sebastian Moeller
  2023-01-11 18:32           ` Rodney W. Grimes
  2023-01-09 20:20         ` [LibreQoS] [Rpm] [Starlink] " Dave Taht
  2023-01-10 17:36         ` [LibreQoS] [Rpm] [Starlink] " David P. Reed
  2 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-09 19:47 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Dave Täht, Dave Taht via Starlink, mike.reynolds, libreqos,
	David P. Reed, Rpm, bloat

Hi Bob,


> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
> 
> For those that can get clock sync working, the iperf 2 --trip-times options is useful.

	[SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load, and that can be done per direction. Sure, this is far inferior to real, reliably measured OWDs, but if life/the internet deals you lemons....
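
The trick is that the unknown clock offset is constant over a short
test, so it cancels out of latency *changes*; a minimal sketch (the
timestamp lists are hypothetical):

```python
def queue_delay_per_packet(tx_ts, rx_ts):
    """Per-direction queueing-delay estimate with unsynchronized clocks.

    tx_ts / rx_ts: sender and receiver timestamps for the same packets.
    Each pseudo-OWD = true OWD + constant clock offset; subtracting the
    minimum removes the offset (and the base path delay), leaving the
    latency *increase* under load for each packet.
    """
    pseudo = [rx - tx for tx, rx in zip(tx_ts, rx_ts)]
    base = min(pseudo)
    return [p - base for p in pseudo]
```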


> 
> --trip-times
>  enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)

	[SM] Sweet!

Regards
	Sebastian

> 
> Bob
>> I have many kvetches about the new latency under load tests being
>> designed and distributed over the past year. I am delighted! that they
>> are happening, but most really need third party evaluation, and
>> calibration, and a solid explanation of what network pathologies they
>> do and don't cover. Also a RED team attitude towards them, as well as
>> thinking hard about what you are not measuring (operations research).
>> I actually rather love the new cloudflare speedtest, because it tests
>> a single TCP connection, rather than dozens, and at the same time folk
>> are complaining that it doesn't find the actual "speed!". yet... the
>> test itself more closely emulates a user experience than speedtest.net
>> does. I am personally pretty convinced that the fewer numbers of flows
>> that a web page opens improves the likelihood of a good user
>> experience, but lack data on it.
>> To try to tackle the evaluation and calibration part, I've reached out
>> to all the new test designers in the hope that we could get together
>> and produce a report of what each new test is actually doing. I've
>> tweeted, linked in, emailed, and spammed every measurement list I know
>> of, and only to some response, please reach out to other test designer
>> folks and have them join the rpm email list?
>> My principal kvetches in the new tests so far are:
>> 0) None of the tests last long enough.
>> Ideally there should be a mode where they at least run to "time of
>> first loss", or periodically, just run longer than the
>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> there! It's really bad science to optimize the internet for 20
>> seconds. It's like optimizing a car, to handle well, for just 20
>> seconds.
>> 1) Not testing up + down + ping at the same time
>> None of the new tests actually test the same thing that the infamous
>> rrul test does - all the others still test up, then down, and ping. It
>> was/remains my hope that the simpler parts of the flent test suite -
>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> tests would provide calibration to the test designers.
>> we've got zillions of flent results in the archive published here:
>> https://blog.cerowrt.org/post/found_in_flent/
>> ps. Misinformation about iperf 2 impacts my ability to do this.
> 
>> The new tests have all added up + ping and down + ping, but not up +
>> down + ping. Why??
>> The behaviors of what happens in that case are really non-intuitive, I
>> know, but... it's just one more phase to add to any one of those new
>> tests. I'd be deliriously happy if someone(s) new to the field
>> started doing that, even optionally, and boggled at how it defeated
>> their assumptions.
>> Among other things that would show...
>> It's the home router industry's dirty secret that darn few "gigabit"
>> home routers can actually forward in both directions at a gigabit. I'd
>> like to smash that perception thoroughly, but given our starting point
>> is a gigabit router was a "gigabit switch" - and historically been
>> something that couldn't even forward at 200Mbit - we have a long way
>> to go there.
>> Only in the past year have non-x86 home routers appeared that could
>> actually do a gbit in both directions.
>> 2) Few are actually testing within-stream latency
>> Apple's rpm project is making a stab in that direction. It looks
>> highly likely, that with a little more work, crusader and
>> go-responsiveness can finally start sampling the tcp RTT, loss and
>> markings, more directly. As for the rest... sampling TCP_INFO on
>> windows, and Linux, at least, always appeared simple to me, but I'm
>> discovering how hard it is by delving deep into the rust behind
>> crusader.
>> the goresponsiveness thing is also IMHO running WAY too many streams
>> at the same time, I guess motivated by an attempt to have the test
>> complete quickly?
>> B) To try and tackle the validation problem:
>> ps. Misinformation about iperf 2 impacts my ability to do this.
> 
>> In the libreqos.io project we've established a testbed where tests can
>> be plunked through various ISP plan network emulations. It's here:
>> https://payne.taht.net (run bandwidth test for what's currently hooked
>> up)
>> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>> leverage with that, so I don't have to nat the various emulations.
>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>> to see more test designers setup a testbed like this to calibrate
>> their own stuff.
>> Presently we're able to test:
>> flent
>> netperf
>> iperf2
>> iperf3
>> speedtest-cli
>> crusader
>> the broadband forum udp based test:
>> https://github.com/BroadbandForum/obudpst
>> trexx
>> There's also a virtual machine setup that we can remotely drive a web
>> browser from (but I didn't want to nat the results to the world) to
>> test other web services.
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:19         ` [LibreQoS] [Rpm] " rjmcmahon
@ 2023-01-09 19:56           ` dan
  2023-01-09 21:00             ` rjmcmahon
  2023-03-13 10:02             ` Sebastian Moeller
  0 siblings, 2 replies; 183+ messages in thread
From: dan @ 2023-01-09 19:56 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Livingood, Jason, starlink, Rpm, bloat, libreqos

I'm not offering a complete solution here....  I'm not so keen on
speed tests.  It's akin to testing your car's performance by flooring
it til you hit the governor and hard braking til you stop *while in
traffic*.   That doesn't demonstrate the utility of the car.

Data is already being transferred, let's measure that.    Doing some
routine simple tests intentionally during low, mid, high congestion
periods to see how the service is actually performing for the end
user.  You don't need to generate the traffic on a link to measure how
much traffic a link can handle.  And a fairly rudimentary way to
determine congestion on a service would be frequent latency tests to a
'known good' service, i.e. high capacity services that are unlikely to
experience congestion.
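A sketch of how rudimentary that detector can be (the target, numbers, and thresholds here are hypothetical, not a real deployment):

```python
from statistics import median

def congestion_fraction(probes_ms, baseline_ms, threshold_ms=10.0):
    """Fraction of latency probes (to a 'known good', high-capacity target)
    that exceed the off-peak baseline by more than threshold_ms."""
    over = [p for p in probes_ms if p - baseline_ms > threshold_ms]
    return len(over) / len(probes_ms)

idle_baseline = median([12, 11, 13])        # ms, sampled at 3am
evening_probes = [13, 14, 55, 60, 12, 48]   # ms, sampled 7-8pm
print(congestion_fraction(evening_probes, idle_baseline))  # 0.5
```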

There are few use cases that match a 2 minute speed test outside of
'wonder what my internet connection can do'.  And in those few use
cases such as a big file download, a routine latency test is a really
great measure of the quality of a service.  Sure, troubleshooting by
the ISP might include a full bore multi-minute speed test but that's
really not useful for the consumer.

Further, exposing this data to the end users, IMO, is likely better as
a chart of congestion and flow durations and some scoring.  E.g., slice
out 7-8pm: during this segment you were able to pull 427Mbps without
congestion; netflix or streaming services used approximately 6% of
capacity; your service was busy for 100% of this time (likely
measuring bufferbloat).  Expressed as a pretty chart with consumer
friendly language.


When you guys are talking about per segment latency testing, you're
really talking about metrics for operators to be concerned with, not
end users.  It's useless information for them.  I had a woman about 2
months ago complain about her frame rates because her internet
connection was "15 emm ess's" (15 ms) and that was terrible and I needed to fix
it.  (slow computer was the problem, obviously) but that data from
speedtest.net didn't actually help her at all, it just confused her.

Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
pointless.  Running speed tests during busy hours is a little bit
harmful overall considering it's pushing into oversells on every ISP.

I could talk endlessly about how useless speed tests are to end user experience.


On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
<libreqos@lists.bufferbloat.net> wrote:
>
> User based, long duration tests seem fundamentally flawed. QoE for users
> is driven by user expectations. And if a user won't wait on a long test
> they for sure aren't going to wait minutes for a web page download. If
> it's a long duration use case, e.g. a file download, then latency isn't
> typically driving QoE.
>
> Note: Even for internal tests, we try to keep our automated tests down to
> 2 seconds. There are reasons to test for minutes (things like phy cals
> in our chips) but it's more of the exception than the rule.
>
> Bob
> >> 0) None of the tests last long enough.
> >
> > The user-initiated ones tend to be shorter - likely because the
> > average user does not want to wait several minutes for a test to
> > complete. But IMO this is where a test platform like SamKnows, Ookla's
> > embedded client, NetMicroscope, and others can come in - since they
> > run in the background on some randomized schedule w/o user
> > intervention. Thus, the user's time-sensitivity is no longer a factor
> > and a longer duration test can be performed.
> >
> >> 1) Not testing up + down + ping at the same time
> >
> > You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
> > IPPM...
> >
> > JL
> >
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
  2023-01-09 19:47         ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
@ 2023-01-09 20:20         ` Dave Taht
  2023-01-09 20:46           ` rjmcmahon
  2023-01-10 17:36         ` [LibreQoS] [Rpm] [Starlink] " David P. Reed
  2 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-01-09 20:20 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

The DC that so graciously loaned us 3 machines for the testbed (thx
equinix!), does support ptp, but we have not configured it yet. In ntp
tests between these hosts we seem to be within 500us, and certainly
50us would be great, in the future.
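For reference, the offset/delay math behind numbers like those is NTP's four-timestamp exchange (RFC 5905); the offset estimate is only as good as the path's symmetry, which is one reason sub-50us sync usually means PTP:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """t1 = client send, t2 = server receive, t3 = server send,
    t4 = client receive; all in ms on their respective local clocks."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # client clock error vs server
    delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
    return offset, delay

# Illustration: server clock 0.3 ms ahead, symmetric 2 ms path each way.
offset, delay = ntp_offset_delay(10.0, 12.3, 12.4, 14.1)
print(round(offset, 3), round(delay, 3))  # 0.3 4.0
```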

I note that in all my kvetching about the new tests' needing
validation today... I kind of elided that I'm pretty happy with
iperf2's new tests that landed last august, and are now appearing in
linux package managers around the world. I hope more folk use them.
(sorry robert, it's been a long time since last august!)

Our new testbed has multiple setups. In one setup - basically the
machine name is equal to a given ISP plan, and a key testing point is
looking at the differences between the FCC 25-3 and 100/20 plans in
the real world. However at our scale (25gbit) it turned out that
emulating the delay realistically has been problematic.

Anyway, here's a 25/3 result for iperf (other results and iperf test
type requests gladly accepted)

root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
------------------------------------------------------------
Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
Write buffer size: 131072 Byte
TOS set to 0x0 (Nagle on)
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
on 2023-01-09 20:13:37 (UTC)
[ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry
   Cwnd/RTT(var)        NetPwr
[  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0          0
     19K/6066(262) us  562
[  1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     15K/4671(207) us  673
[  1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     13K/5538(280) us  568
[  1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
     16K/6244(355) us  525
[  1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     19K/6152(216) us  511
[  1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     22K/6764(529) us  465
[  1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
     15K/5918(605) us  554
[  1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     18K/5178(327) us  608
[  1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
     19K/5758(473) us  546
[  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
      16K/6141(280) us  512
[  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
0       19K/5924(491) us  537
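For anyone scripting against that output, a sketch of pulling the RTT column back out of an interval line (the regex assumes the Cwnd/RTT(var) layout shown above; iperf 2's exact field format may differ between versions):

```python
import re

# One interval line from the run above (rejoined onto a single line).
line = ("[  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0"
        "          0      19K/6066(262) us  562")

# The Cwnd/RTT(var) field prints as <cwnd>K/<rtt>(<var>) us.
m = re.search(r"(\d+)K/(\d+)\((\d+)\)\s+us", line)
cwnd_kb, rtt_us, rttvar_us = map(int, m.groups())
print(rtt_us / 1000.0, rttvar_us / 1000.0)  # RTT 6.066 ms, ~0.262 ms spread
```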


On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> My biggest barrier is the lack of clock sync by the devices, i.e. very
> limited support for PTP in data centers and in end devices. This limits
> the ability to measure one way delays (OWD) and most assume that OWD is
> 1/2 of RTT, which typically is a mistake. We know this intuitively with
> airplane flight times or even car commute times where the one way time
> is not 1/2 a round trip time. Google maps & directions provide a time
> estimate for the one way link. It doesn't compute a round trip and
> divide by two.
>
> For those that can get clock sync working, the iperf 2 --trip-times
> option is useful.
>
> --trip-times
>    enable the measurement of end to end write to read latencies (client
> and server clocks must be synchronized)
>
> Bob
> > I have many kvetches about the new latency under load tests being
> > designed and distributed over the past year. I am delighted! that they
> > are happening, but most really need third party evaluation, and
> > calibration, and a solid explanation of what network pathologies they
> > do and don't cover. Also a RED team attitude towards them, as well as
> > thinking hard about what you are not measuring (operations research).
> >
> > I actually rather love the new cloudflare speedtest, because it tests
> > a single TCP connection, rather than dozens, and at the same time folk
> > are complaining that it doesn't find the actual "speed!". yet... the
> > test itself more closely emulates a user experience than speedtest.net
> > does. I am personally pretty convinced that the fewer numbers of flows
> > that a web page opens improves the likelihood of a good user
> > experience, but lack data on it.
> >
> > To try to tackle the evaluation and calibration part, I've reached out
> > to all the new test designers in the hope that we could get together
> > and produce a report of what each new test is actually doing. I've
> > tweeted, linked in, emailed, and spammed every measurement list I know
> > of, and only to some response, please reach out to other test designer
> > folks and have them join the rpm email list?
> >
> > My principal kvetches in the new tests so far are:
> >
> > 0) None of the tests last long enough.
> >
> > Ideally there should be a mode where they at least run to "time of
> > first loss", or periodically, just run longer than the
> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> > there! It's really bad science to optimize the internet for 20
> > seconds. It's like optimizing a car, to handle well, for just 20
> > seconds.
> >
> > 1) Not testing up + down + ping at the same time
> >
> > None of the new tests actually test the same thing that the infamous
> > rrul test does - all the others still test up, then down, and ping. It
> > was/remains my hope that the simpler parts of the flent test suite -
> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> > tests would provide calibration to the test designers.
> >
> > we've got zillions of flent results in the archive published here:
> > https://blog.cerowrt.org/post/found_in_flent/
> > ps. Misinformation about iperf 2 impacts my ability to do this.
>
> > The new tests have all added up + ping and down + ping, but not up +
> > down + ping. Why??
> >
> > The behaviors of what happens in that case are really non-intuitive, I
> > know, but... it's just one more phase to add to any one of those new
> > tests. I'd be deliriously happy if someone(s) new to the field
> > started doing that, even optionally, and boggled at how it defeated
> > their assumptions.
> >
> > Among other things that would show...
> >
> > It's the home router industry's dirty secret that darn few "gigabit"
> > home routers can actually forward in both directions at a gigabit. I'd
> > like to smash that perception thoroughly, but given our starting point
> > is a gigabit router was a "gigabit switch" - and historically been
> > something that couldn't even forward at 200Mbit - we have a long way
> > to go there.
> >
> > Only in the past year have non-x86 home routers appeared that could
> > actually do a gbit in both directions.
> >
> > 2) Few are actually testing within-stream latency
> >
> > Apple's rpm project is making a stab in that direction. It looks
> > highly likely, that with a little more work, crusader and
> > go-responsiveness can finally start sampling the tcp RTT, loss and
> > markings, more directly. As for the rest... sampling TCP_INFO on
> > windows, and Linux, at least, always appeared simple to me, but I'm
> > discovering how hard it is by delving deep into the rust behind
> > crusader.
> >
> > the goresponsiveness thing is also IMHO running WAY too many streams
> > at the same time, I guess motivated by an attempt to have the test
> > complete quickly?
> >
> > B) To try and tackle the validation problem:
> > ps. Misinformation about iperf 2 impacts my ability to do this.
>
> >
> > In the libreqos.io project we've established a testbed where tests can
> > be plunked through various ISP plan network emulations. It's here:
> > https://payne.taht.net (run bandwidth test for what's currently hooked
> > up)
> >
> > We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> > leverage with that, so I don't have to nat the various emulations.
> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> > to see more test designers setup a testbed like this to calibrate
> > their own stuff.
> >
> > Presently we're able to test:
> > flent
> > netperf
> > iperf2
> > iperf3
> > speedtest-cli
> > crusader
> > the broadband forum udp based test:
> > https://github.com/BroadbandForum/obudpst
> > trexx
> >
> > There's also a virtual machine setup that we can remotely drive a web
> > browser from (but I didn't want to nat the results to the world) to
> > test other web services.
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 20:20         ` [LibreQoS] [Rpm] [Starlink] " Dave Taht
@ 2023-01-09 20:46           ` rjmcmahon
  2023-01-09 20:59             ` Dave Taht
  2023-01-09 21:02             ` [LibreQoS] [Starlink] [Rpm] " Dick Roy
  0 siblings, 2 replies; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 20:46 UTC (permalink / raw)
  To: Dave Taht
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

The write to read latencies (OWD) are on the server side in CLT form. 
Use --histograms on the server side to enable them.

Your client side sampled TCP RTT is 6 ms with less than 1 ms of
variation (the sqrt of the variance, as variance is in squared units).
No retries suggests the network isn't dropping packets.

All the newer bounceback code is only on master and requires a compile
from source. It will be released in 2.1.9 after testing cycles,
hopefully in early March 2023.

Bob

https://sourceforge.net/projects/iperf2/

> The DC that so graciously loaned us 3 machines for the testbed (thx
> equinix!), does support ptp, but we have not configured it yet. In ntp
> tests between these hosts we seem to be within 500us, and certainly
> 50us would be great, in the future.
> 
> I note that in all my kvetching about the new tests' needing
> validation today... I kind of elided that I'm pretty happy with
> iperf2's new tests that landed last august, and are now appearing in
> linux package managers around the world. I hope more folk use them.
> (sorry robert, it's been a long time since last august!)
> 
> Our new testbed has multiple setups. In one setup - basically the
> machine name is equal to a given ISP plan, and a key testing point is
> looking at the differences between the FCC 25-3 and 100/20 plans in
> the real world. However at our scale (25gbit) it turned out that
> emulating the delay realistically has been problematic.
> 
> Anyway, here's a 25/3 result for iperf (other results and iperf test
> type requests gladly accepted)
> 
> root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
> ------------------------------------------------------------
> Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 (Nagle on)
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
> 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
> on 2023-01-09 20:13:37 (UTC)
> [ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry
>    Cwnd/RTT(var)        NetPwr
> [  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0          0
>      19K/6066(262) us  562
> [  1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      15K/4671(207) us  673
> [  1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      13K/5538(280) us  568
> [  1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>      16K/6244(355) us  525
> [  1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      19K/6152(216) us  511
> [  1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      22K/6764(529) us  465
> [  1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>      15K/5918(605) us  554
> [  1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      18K/5178(327) us  608
> [  1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>      19K/5758(473) us  546
> [  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>       16K/6141(280) us  512
> [  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
> 0       19K/5924(491) us  537
> 
> 
> On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com> 
> wrote:
>> 
>> My biggest barrier is the lack of clock sync by the devices, i.e. very
>> limited support for PTP in data centers and in end devices. This 
>> limits
>> the ability to measure one way delays (OWD) and most assume that OWD 
>> is
>> 1/2 of RTT, which typically is a mistake. We know this intuitively 
>> with
>> airplane flight times or even car commute times where the one way time
>> is not 1/2 a round trip time. Google maps & directions provide a time
>> estimate for the one way link. It doesn't compute a round trip and
>> divide by two.
>> 
>> For those that can get clock sync working, the iperf 2 --trip-times
>> option is useful.
>> 
>> --trip-times
>>    enable the measurement of end to end write to read latencies 
>> (client
>> and server clocks must be synchronized)
>> 
>> Bob
>> > I have many kvetches about the new latency under load tests being
>> > designed and distributed over the past year. I am delighted! that they
>> > are happening, but most really need third party evaluation, and
>> > calibration, and a solid explanation of what network pathologies they
>> > do and don't cover. Also a RED team attitude towards them, as well as
>> > thinking hard about what you are not measuring (operations research).
>> >
>> > I actually rather love the new cloudflare speedtest, because it tests
>> > a single TCP connection, rather than dozens, and at the same time folk
>> > are complaining that it doesn't find the actual "speed!". yet... the
>> > test itself more closely emulates a user experience than speedtest.net
>> > does. I am personally pretty convinced that the fewer numbers of flows
>> > that a web page opens improves the likelihood of a good user
>> > experience, but lack data on it.
>> >
>> > To try to tackle the evaluation and calibration part, I've reached out
>> > to all the new test designers in the hope that we could get together
>> > and produce a report of what each new test is actually doing. I've
>> > tweeted, linked in, emailed, and spammed every measurement list I know
>> > of, and only to some response, please reach out to other test designer
>> > folks and have them join the rpm email list?
>> >
>> > My principal kvetches in the new tests so far are:
>> >
>> > 0) None of the tests last long enough.
>> >
>> > Ideally there should be a mode where they at least run to "time of
>> > first loss", or periodically, just run longer than the
>> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> > there! It's really bad science to optimize the internet for 20
>> > seconds. It's like optimizing a car, to handle well, for just 20
>> > seconds.
>> >
>> > 1) Not testing up + down + ping at the same time
>> >
>> > None of the new tests actually test the same thing that the infamous
>> > rrul test does - all the others still test up, then down, and ping. It
>> > was/remains my hope that the simpler parts of the flent test suite -
>> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> > tests would provide calibration to the test designers.
>> >
>> > we've got zillions of flent results in the archive published here:
>> > https://blog.cerowrt.org/post/found_in_flent/
>> > ps. Misinformation about iperf 2 impacts my ability to do this.
>> 
>> > The new tests have all added up + ping and down + ping, but not up +
>> > down + ping. Why??
>> >
>> > The behaviors of what happens in that case are really non-intuitive, I
>> > know, but... it's just one more phase to add to any one of those new
>> > tests. I'd be deliriously happy if someone(s) new to the field
>> > started doing that, even optionally, and boggled at how it defeated
>> > their assumptions.
>> >
>> > Among other things that would show...
>> >
>> > It's the home router industry's dirty secret that darn few "gigabit"
>> > home routers can actually forward in both directions at a gigabit. I'd
>> > like to smash that perception thoroughly, but given our starting point
>> > is a gigabit router was a "gigabit switch" - and historically been
>> > something that couldn't even forward at 200Mbit - we have a long way
>> > to go there.
>> >
>> > Only in the past year have non-x86 home routers appeared that could
>> > actually do a gbit in both directions.
>> >
>> > 2) Few are actually testing within-stream latency
>> >
>> > Apple's rpm project is making a stab in that direction. It looks
>> > highly likely, that with a little more work, crusader and
>> > go-responsiveness can finally start sampling the tcp RTT, loss and
>> > markings, more directly. As for the rest... sampling TCP_INFO on
>> > windows, and Linux, at least, always appeared simple to me, but I'm
>> > discovering how hard it is by delving deep into the rust behind
>> > crusader.
>> >
>> > the goresponsiveness thing is also IMHO running WAY too many streams
>> > at the same time, I guess motivated by an attempt to have the test
>> > complete quickly?
>> >
>> > B) To try and tackle the validation problem:
>> > ps. Misinformation about iperf 2 impacts my ability to do this.
>> 
>> >
>> > In the libreqos.io project we've established a testbed where tests can
>> > be plunked through various ISP plan network emulations. It's here:
>> > https://payne.taht.net (run bandwidth test for what's currently hooked
>> > up)
>> >
>> > We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>> > leverage with that, so I don't have to nat the various emulations.
>> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>> > to see more test designers setup a testbed like this to calibrate
>> > their own stuff.
>> >
>> > Presently we're able to test:
>> > flent
>> > netperf
>> > iperf2
>> > iperf3
>> > speedtest-cli
>> > crusader
>> > the broadband forum udp based test:
>> > https://github.com/BroadbandForum/obudpst
>> > trexx
>> >
>> > There's also a virtual machine setup that we can remotely drive a web
>> > browser from (but I didn't want to nat the results to the world) to
>> > test other web services.
>> > _______________________________________________
>> > Rpm mailing list
>> > Rpm@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 18:54       ` [LibreQoS] [EXTERNAL] " Livingood, Jason
  2023-01-09 19:19         ` [LibreQoS] [Rpm] " rjmcmahon
@ 2023-01-09 20:49         ` Dave Taht
  1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2023-01-09 20:49 UTC (permalink / raw)
  To: Livingood, Jason; +Cc: starlink, Rpm, bloat, libreqos

On Mon, Jan 9, 2023 at 10:54 AM Livingood, Jason
<Jason_Livingood@comcast.com> wrote:
>
> > 0) None of the tests last long enough.
>
> The user-initiated ones tend to be shorter - likely because the average user does not want to wait several minutes for a test to complete. But IMO this is where a test platform like SamKnows, Ookla's embedded client, NetMicroscope, and others can come in - since they run in the background on some randomized schedule w/o user intervention. Thus, the user's time-sensitivity is no longer a factor and a longer duration test can be performed.

I would be so happy if someone independent ( and not necessarily me!)
was validating those more private tests, and retaining reference
packet captures of the various behaviors observed.

Bloat is just one problem among many, that shows up on a speedtest. I
have, for example, been working for months, on a very difficult
problem occurring on at least one wifi6 chipset... where the block-ack
window is being violated, leading to a ton of jitter under certain
loads, and a bandwidth reduction, that doesn't show up in summary
data.

Samknows published a really good blog recently,
here: https://samknows.com/blog/testing-principles

about how they are going about things... however...

>
> > 1) Not testing up + down + ping at the same time
>
> You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in IPPM...

I have cc'd ippm a few times on these threads, and am on that mailing
list. It's pretty moribund compared to around here.

I am primarily interested in correct code (be it from a specification,
or not), and in looking at packet captures, to validate that it is
doing what it says on the tin, and moreover getting more stuff onto
that tin that I already know should be tested for.

I agree that someday trying to nail down what latency under load
means would be good to do. I'd settle at the moment for a single flow,
simultaneous tcp up, down, ping, and within-stream latencies... all
plotted on the same chart.
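The flent suite mentioned earlier already gets close to this; a sketch of the usual invocation (the hostname is a placeholder for a reachable netserver, and flent must be installed):

```shell
# rrul: simultaneous TCP up + down + latency probes for 60 seconds,
# rendered onto a single chart.
flent rrul -p all_scaled -l 60 -H netperf.example.com \
      -t "my-isp-plan" -o rrul.png
```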

> JL
>


-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 20:46           ` rjmcmahon
@ 2023-01-09 20:59             ` Dave Taht
  2023-01-09 21:06               ` rjmcmahon
  2023-01-09 21:02             ` [LibreQoS] [Starlink] [Rpm] " Dick Roy
  1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-01-09 20:59 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> The write to read latencies (OWD) are on the server side in CLT form.
> Use --histograms on the server side to enable them.

Thx. It is far more difficult to instrument things on the server side
of the testbed but we will tackle it.

> Your client side sampled TCP RTT is 6ms with less than 1 ms of
> variation (strictly the standard deviation, the square root of the
> variance, since variance is in squared units). No
> retries suggest the network isn't dropping packets.

Thank you for analyzing that result. The cake AQM, set for a 5ms
target, with RFC3168-style ECN, is enabled on this path, on this
setup, at the moment. So the result is correct: with ECN, cake marks
packets rather than dropping them, hence zero retries.

A second test with ecn off showed the expected retries.
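Bob's read of the RTT can be reproduced directly from the per-interval Cwnd/RTT(var) column of the iperf2 log quoted below (values in microseconds); a quick sketch:

```python
import statistics

# per-interval RTT samples (us) from the iperf2 log's Cwnd/RTT(var) column
rtt_us = [6066, 4671, 5538, 6244, 6152, 6764, 5918, 5178, 5758, 6141]

mean_ms = statistics.mean(rtt_us) / 1000.0
stdev_ms = statistics.stdev(rtt_us) / 1000.0

print(f"mean RTT {mean_ms:.2f} ms, std dev {stdev_ms:.2f} ms")
# roughly 6 ms RTT with well under 1 ms of standard deviation
```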

I have emulations also of fifos, pie, fq-pie, fq-codel, red, blue,
sfq, with various realworld delays, and so on... but this is a bit
distracting at the moment from our focus, which is optimizing the
XDP + eBPF based bridge and epping-based sampling tools to crack
25Gbit.

I think iperf2 will be great for us after that settles down.

> All the newer bounceback code is only master and requires a compile from
> source. It will be released in 2.1.9 after testing cycles. Hopefully, in
> early March 2023

I would like to somehow parse and present those histograms.
>
> Bob
>
> https://sourceforge.net/projects/iperf2/
>
> > The DC that so graciously loaned us 3 machines for the testbed (thx
> > equinix!), does support ptp, but we have not configured it yet. In ntp
> > tests between these hosts we seem to be within 500us, and certainly
> > 50us would be great, in the future.
> >
> > I note that in all my kvetching about the new tests' needing
> > validation today... I kind of elided that I'm pretty happy with
> > iperf2's new tests that landed last august, and are now appearing in
> > linux package managers around the world. I hope more folk use them.
> > (sorry robert, it's been a long time since last august!)
> >
> > Our new testbed has multiple setups. In one setup - basically the
> > machine name is equal to a given ISP plan, and a key testing point is
> > looking at the differences between the FCC 25-3 and 100/20 plans in
> > the real world. However, at our scale (25Gbit), it turned out that
> > emulating the delay realistically was problematic.
> >
> > Anyway, here's a 25/3 result for iperf (other results and iperf test
> > type requests gladly accepted)
> >
> > root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
> > ------------------------------------------------------------
> > Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
> > Write buffer size: 131072 Byte
> > TOS set to 0x0 (Nagle on)
> > TCP window size: 85.3 KByte (default)
> > ------------------------------------------------------------
> > [  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
> > 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
> > on 2023-01-09 20:13:37 (UTC)
> > [ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry
> >    Cwnd/RTT(var)        NetPwr
> > [  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0          0
> >      19K/6066(262) us  562
> > [  1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      15K/4671(207) us  673
> > [  1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      13K/5538(280) us  568
> > [  1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
> >      16K/6244(355) us  525
> > [  1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      19K/6152(216) us  511
> > [  1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      22K/6764(529) us  465
> > [  1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
> >      15K/5918(605) us  554
> > [  1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      18K/5178(327) us  608
> > [  1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >      19K/5758(473) us  546
> > [  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
> >       16K/6141(280) us  512
> > [  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
> > 0       19K/5924(491) us  537
> >
> >
> > On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com>
> > wrote:
> >>
> >> My biggest barrier is the lack of clock sync by the devices, i.e. very
> >> limited support for PTP in data centers and in end devices. This
> >> limits
> >> the ability to measure one way delays (OWD), and most assume that OWD is
> >> 1/2 of RTT, which typically is a mistake. We know this intuitively
> >> with
> >> airplane flight times or even car commute times where the one way time
> >> is not 1/2 a round trip time. Google maps & directions provide a time
> >> estimate for the one way link. It doesn't compute a round trip and
> >> divide by two.
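A toy illustration of why halving the RTT misestimates OWD (the delay values below are invented, e.g. a heavily bloated uplink):

```python
# hypothetical asymmetric path; numbers are illustrative only
owd_up_ms = 20.0    # one way delay, client -> server
owd_down_ms = 4.0   # one way delay, server -> client
rtt_ms = owd_up_ms + owd_down_ms

# the common (and usually wrong) assumption: OWD = RTT / 2
naive_owd_ms = rtt_ms / 2
print(naive_owd_ms)  # 12.0 -- triple the real downlink delay, far below the uplink
```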
> >>
> >> For those that can get clock sync working, the iperf 2 --trip-times
> >> option is useful.
> >>
> >> --trip-times
> >>    enable the measurement of end to end write to read latencies
> >> (client
> >> and server clocks must be synchronized)
> >>
> >> Bob
> >> > I have many kvetches about the new latency under load tests being
> >> > designed and distributed over the past year. I am delighted! that they
> >> > are happening, but most really need third party evaluation, and
> >> > calibration, and a solid explanation of what network pathologies they
> >> > do and don't cover. Also a RED team attitude towards them, as well as
> >> > thinking hard about what you are not measuring (operations research).
> >> >
> >> > I actually rather love the new cloudflare speedtest, because it tests
> >> > a single TCP connection, rather than dozens, and at the same time folk
> >> > are complaining that it doesn't find the actual "speed!". yet... the
> >> > test itself more closely emulates a user experience than speedtest.net
> >> > does. I am personally pretty convinced that the fewer numbers of flows
> >> > that a web page opens improves the likelihood of a good user
> >> > experience, but lack data on it.
> >> >
> >> > To try to tackle the evaluation and calibration part, I've reached out
> >> > to all the new test designers in the hope that we could get together
> >> > and produce a report of what each new test is actually doing. I've
> >> > tweeted, linked in, emailed, and spammed every measurement list I know
> >> > of, and only to some response, please reach out to other test designer
> >> > folks and have them join the rpm email list?
> >> >
> >> > My principal kvetches in the new tests so far are:
> >> >
> >> > 0) None of the tests last long enough.
> >> >
> >> > Ideally there should be a mode where they at least run to "time of
> >> > first loss", or periodically, just run longer than the
> >> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> >> > there! It's really bad science to optimize the internet for 20
> >> > seconds. It's like optimizing a car, to handle well, for just 20
> >> > seconds.
> >> >
> >> > 1) Not testing up + down + ping at the same time
> >> >
> >> > None of the new tests actually test the same thing that the infamous
> >> > rrul test does - all the others still test up, then down, and ping. It
> >> > was/remains my hope that the simpler parts of the flent test suite -
> >> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> >> > tests would provide calibration to the test designers.
> >> >
> >> > we've got zillions of flent results in the archive published here:
> >> > https://blog.cerowrt.org/post/found_in_flent/
> >> > ps. Misinformation about iperf 2 impacts my ability to do this.
> >>
> >> > The new tests have all added up + ping and down + ping, but not up +
> >> > down + ping. Why??
> >> >
> >> > The behaviors of what happens in that case are really non-intuitive, I
> >> > know, but... it's just one more phase to add to any one of those new
> >> > tests. I'd be deliriously happy if someone(s) new to the field
> >> > started doing that, even optionally, and boggled at how it defeated
> >> > their assumptions.
> >> >
> >> > Among other things that would show...
> >> >
> >> > It's the home router industry's dirty secret that darn few "gigabit"
> >> > home routers can actually forward in both directions at a gigabit. I'd
> >> > like to smash that perception thoroughly, but given that our starting
> >> > point for a "gigabit router" was a "gigabit switch" - historically
> >> > something that couldn't even forward at 200Mbit - we have a long way
> >> > to go there.
> >> >
> >> > Only in the past year have non-x86 home routers appeared that could
> >> > actually do a gbit in both directions.
> >> >
> >> > 2) Few are actually testing within-stream latency
> >> >
> >> > Apple's rpm project is making a stab in that direction. It looks
> >> > highly likely, that with a little more work, crusader and
> >> > go-responsiveness can finally start sampling the tcp RTT, loss and
> >> > markings, more directly. As for the rest... sampling TCP_INFO on
> >> > windows, and Linux, at least, always appeared simple to me, but I'm
> >> > discovering how hard it is by delving deep into the rust behind
> >> > crusader.
> >> >
> >> > the goresponsiveness thing is also IMHO running WAY too many streams
> >> > at the same time, I guess motivated by an attempt to have the test
> >> > complete quickly?
> >> >
> >> > B) To try and tackle the validation problem:
> >> > ps. Misinformation about iperf 2 impacts my ability to do this.
> >>
> >> >
> >> > In the libreqos.io project we've established a testbed where tests can
> >> > be plunked through various ISP plan network emulations. It's here:
> >> > https://payne.taht.net (run bandwidth test for what's currently hooked
> >> > up)
> >> >
> >> > We could rather use an AS number and at least an IPv4 /24 and IPv6 /48 to
> >> > leverage with that, so I don't have to nat the various emulations.
> >> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> >> > to see more test designers setup a testbed like this to calibrate
> >> > their own stuff.
> >> >
> >> > Presently we're able to test:
> >> > flent
> >> > netperf
> >> > iperf2
> >> > iperf3
> >> > speedtest-cli
> >> > crusader
> >> > the broadband forum udp based test:
> >> > https://github.com/BroadbandForum/obudpst
> >> > trexx
> >> >
> >> > There's also a virtual machine setup that we can remotely drive a web
> >> > browser from (but I didn't want to nat the results to the world) to
> >> > test other web services.
> >> > _______________________________________________
> >> > Rpm mailing list
> >> > Rpm@lists.bufferbloat.net
> >> > https://lists.bufferbloat.net/listinfo/rpm



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:56           ` dan
@ 2023-01-09 21:00             ` rjmcmahon
  2023-03-13 10:02             ` Sebastian Moeller
  1 sibling, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 21:00 UTC (permalink / raw)
  To: dan; +Cc: Livingood, Jason, starlink, Rpm, bloat, libreqos

The target audience for iperf 2 latency metrics is network engineers and 
not end users. My belief is that a latency complaint from an end user is 
a defect escape, i.e. it should have been caught earlier by experts in 
our industry. That's part of the reason why I think open source tooling 
that is accurate and trustworthy is critical to our industry moving 
forward & improving. Minimize barriers to measuring & understanding 
issues so to speak.

I do hope one day we move to segment routing where latency telemetry 
drives forwarding planes. The early days of the internet were about 
connectivity. Then came capacity as demand grew. Now we need to improve 
the speed of causality per what's become a massively distributed 
computer system owned by no one single entity.

https://www.segment-routing.net/tutorials/2018-03-06-sr-delay-measurement/

Unfortunately, the performance of e2e latency experiences a form of 
tragedy of the commons as each segment tends to be unaware of the full 
path and their relative contributions.

The ancient Greek philosopher Aristotle pointed out the problem with 
common resources: ‘What is common to many is taken least care of, for 
all men have greater regard for what is their own than for what they 
possess in common with others.’

Bob
> I'm not offering a complete solution here....  I'm not so keen on
> speed tests.  It's akin to testing your car's performance by flooring
> it til you hit the governor and hard braking til you stop *while in
> traffic*.   That doesn't demonstrate the utility of the car.
> 
> Data is already being transferred, let's measure that.    Doing some
> routine simple tests intentionally during low, mid, high congestion
> periods to see how the service is actually performing for the end
> user.  You don't need to generate the traffic on a link to measure how
> much traffic a link can handle.  And determining congestion on a
> service in a fairly rudimentary way would be frequent latency tests to
> 'known good' services, i.e. high capacity services that are unlikely to
> experience congestion.
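A minimal sketch of that kind of routine, low-impact probe - timing TCP handshakes to a known-good endpoint rather than saturating the link (the function name, host/port, and sample counts are illustrative, not from any tool in this thread):

```python
import socket
import statistics
import time

def connect_rtt_ms(host, port, samples=5, timeout=2.0):
    """Estimate path latency by timing TCP handshakes.

    Unlike a speed test, this adds almost no load, so it can run on a
    schedule through low, mid, and high congestion periods.
    """
    rtts = []
    for _ in range(samples):
        t0 = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # handshake done; close immediately, we only wanted the timing
        rtts.append((time.monotonic() - t0) * 1000.0)
    return {"min": min(rtts), "median": statistics.median(rtts), "max": max(rtts)}
```

Comparing the median during busy hours against an off-peak baseline gives the rudimentary congestion signal described above.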
> 
> There are few use cases that match a 2 minute speed test outside of
> 'wonder what my internet connection can do'.  And in those few use
> cases such as a big file download, a routine latency test is a really
> great measure of the quality of a service.  Sure, troubleshooting by
> the ISP might include a full bore multi-minute speed test but that's
> really not useful for the consumer.
> 
> Further, exposing this data to the end users, IMO, is likely better as
> a chart of congestion and flow durations and some scoring.  i.e., slice
> out 7-8pm: during this segment you were able to pull 427Mbps without
> congestion; netflix or a streaming service used approximately 6% of
> capacity; your service was busy for 100% of this time (likely
> measuring bufferbloat).  Expressed as a pretty chart with consumer
> friendly language.
> 
> 
> When you guys are talking about per segment latency testing, you're
> really talking about metrics for operators to be concerned with, not
> end users.  It's useless information for them.  I had a woman about 2
> months ago complain about her frame rates because her internet
> connection was 15 emm ess's and that was terrible and I needed to fix
> it.  (slow computer was the problem, obviously) but that data from
> speedtest.net didn't actually help her at all, it just confused her.
> 
> Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
> pointless.  Running speed tests during busy hours is a little bit
> harmful overall considering it's pushing into oversells on every ISP.
> 
> I could talk endlessly about how useless speed tests are to end user 
> experience.
> 
> 
> On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
> <libreqos@lists.bufferbloat.net> wrote:
>> 
>> User based, long duration tests seem fundamentally flawed. QoE for 
>> users
>> is driven by user expectations. And if a user won't wait on a long 
>> test
>> they for sure aren't going to wait minutes for a web page download. If
>> it's a long duration use case, e.g. a file download, then latency 
>> isn't
>> typically driving QoE.
>> 
>> Note: Even for internal tests, we try to keep our automated tests down
>> to
>> 2 seconds. There are reasons to test for minutes (things like phy cals
>> in our chips) but it's more of the exception than the rule.
>> 
>> Bob
>> >> 0) None of the tests last long enough.
>> >
>> > The user-initiated ones tend to be shorter - likely because the
>> > average user does not want to wait several minutes for a test to
>> > complete. But IMO this is where a test platform like SamKnows, Ookla's
>> > embedded client, NetMicroscope, and others can come in - since they
>> > run in the background on some randomized schedule w/o user
>> > intervention. Thus, the user's time-sensitivity is no longer a factor
>> > and a longer duration test can be performed.
>> >
>> >> 1) Not testing up + down + ping at the same time
>> >
>> > You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
>> > IPPM...
>> >
>> > JL
>> >
>> > _______________________________________________
>> > Rpm mailing list
>> > Rpm@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> LibreQoS mailing list
>> LibreQoS@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/libreqos

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-09 20:46           ` rjmcmahon
  2023-01-09 20:59             ` Dave Taht
@ 2023-01-09 21:02             ` Dick Roy
  1 sibling, 0 replies; 183+ messages in thread
From: Dick Roy @ 2023-01-09 21:02 UTC (permalink / raw)
  To: 'rjmcmahon', 'Dave Taht'
  Cc: mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 10301 bytes --]

-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Starlink
Sent: Monday, January 9, 2023 12:47 PM
To: Dave Taht
Cc: starlink@lists.bufferbloat.net; mike.reynolds@netforecast.com; libreqos; David P. Reed; Rpm; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

The write to read latencies (OWD) are on the server side in CLT form.
Use --histograms on the server side to enable them.

Your client side sampled TCP RTT is 6ms with less than a 1 ms of variance (or sqrt of variance as variance is typically squared)

[RR] or standard deviation (std for short) :-)

No retries suggest the network isn't dropping packets.

All the newer bounceback code is only master and requires a compile from source. It will be released in 2.1.9 after testing cycles. Hopefully, in early March 2023

Bob

https://sourceforge.net/projects/iperf2/

 

> [...]

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 40564 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 20:59             ` Dave Taht
@ 2023-01-09 21:06               ` rjmcmahon
  2023-01-09 21:18                 ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 21:06 UTC (permalink / raw)
  To: Dave Taht
  Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
	starlink, bloat

A peer likes gnuplot and sed. There are many, many visualization tools. 
An excerpt below:

My quick hack one-line parser was based on just a single line from the 
iperf output, not the entire log:

[  1] 0.00-1.00 sec T8-PDF: 
bin(w=1ms):cnt(849)=1:583,2:112,3:9,4:8,5:11,6:10,7:7,8:8,9:7,10:2,11:3,12:2,13:2,14:2,15:2,16:3,17:2,18:3,19:1,21:2,22:2,23:3,24:2,26:3,27:2,28:3,29:2,30:2,31:3,32:2,33:2,34:2,35:5,37:1,39:1,40:3,41:5,42:2,43:3,44:3,45:3,46:3,47:3,48:1,49:2,50:3,51:2,52:1,53:1 
(50.00/99.7/99.80/%=1/51/52,Outliers=0,obl/obu=0/0)

Your log contains 30 such histograms.  A very crude approach would be to 
filter only the lines that have T8-PDF:

plot "< sed -n '/T8-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}' 
lat.txt" with lp

or

plot "< sed -n '/T8(f)-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}' 
lat.txt" with lp

http://www.gnuplot.info/
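For the record, the same filtering is also a few lines of Python; this sketch parses one of iperf 2's `bin:count` histogram payloads (using the T8-PDF sample line quoted above) into plottable pairs:

```python
import re

# the T8-PDF sample line from the log above
line = ("[  1] 0.00-1.00 sec T8-PDF: bin(w=1ms):cnt(849)="
        "1:583,2:112,3:9,4:8,5:11,6:10,7:7,8:8,9:7,10:2,11:3,12:2,13:2,"
        "14:2,15:2,16:3,17:2,18:3,19:1,21:2,22:2,23:3,24:2,26:3,27:2,"
        "28:3,29:2,30:2,31:3,32:2,33:2,34:2,35:5,37:1,39:1,40:3,41:5,"
        "42:2,43:3,44:3,45:3,46:3,47:3,48:1,49:2,50:3,51:2,52:1,53:1 "
        "(50.00/99.7/99.80/%=1/51/52,Outliers=0,obl/obu=0/0)")

def parse_pdf(line):
    """Return {bin_ms: count} from an iperf 2 *-PDF histogram line."""
    payload = re.search(r"\)=(\S+)", line).group(1)
    return {int(b): int(c) for b, c in
            (pair.split(":") for pair in payload.split(","))}

hist = parse_pdf(line)
print(sum(hist.values()))  # 849, matching the cnt(849) field
```

From there the dict feeds straight into whatever plotting tool you prefer, gnuplot included.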

Bob

> On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon <rjmcmahon@rjmcmahon.com> 
> wrote:
>> 
>> The write to read latencies (OWD) are on the server side in CLT form.
>> Use --histograms on the server side to enable them.
> 
> Thx. It is far more difficult to instrument things on the server side
> of the testbed but we will tackle it.
> 
>> Your client side sampled TCP RTT is 6ms with less than a 1 ms of
>> variance (or sqrt of variance as variance is typically squared)  No
>> retries suggest the network isn't dropping packets.
> 
> Thank you for analyzing that result. the cake aqm, set for a 5ms
> target, with RFC3168-style ECN, is enabled on this path, on this
> setup, at the moment. So the result is correct.
> 
> A second test with ecn off showed the expected retries.
> 
> I have emulations also of fifos, pie, fq-pie, fq-codel, red, blue,
> sfq, with various realworld delays, and so on... but this is a bit
> distracting at the moment from our focus, which was in optimizing the
> XDP + ebpf based bridge and epping based sampling tools to crack
> 25Gbit.
> 
> I think iperf2 will be great for us after that settles down.
> 
>> All the newer bounceback code is only master and requires a compile 
>> from
>> source. It will be released in 2.1.9 after testing cycles. Hopefully, 
>> in
>> early March 2023
> 
> I would like to somehow parse and present those histograms.
>> 
>> Bob
>> 
>> https://sourceforge.net/projects/iperf2/
>> 
>> > The DC that so graciously loaned us 3 machines for the testbed (thx
>> > equinix!), does support ptp, but we have not configured it yet. In ntp
>> > tests between these hosts we seem to be within 500us, and certainly
>> > 50us would be great, in the future.
>> >
>> > I note that in all my kvetching about the new tests' needing
>> > validation today... I kind of elided that I'm pretty happy with
>> > iperf2's new tests that landed last august, and are now appearing in
>> > linux package managers around the world. I hope more folk use them.
>> > (sorry robert, it's been a long time since last august!)
>> >
>> > Our new testbed has multiple setups. In one setup - basically the
>> > machine name is equal to a given ISP plan, and a key testing point is
>> > looking at the differences between the FCC 25-3 and 100/20 plans in
>> > the real world. However, at our scale (25Gbit), it turned out that
>> > emulating the delay realistically was problematic.
>> >
>> > Anyway, here's a 25/3 result for iperf (other results and iperf test
>> > type requests gladly accepted)
>> >
>> > root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
>> > ------------------------------------------------------------
>> > Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
>> > Write buffer size: 131072 Byte
>> > TOS set to 0x0 (Nagle on)
>> > TCP window size: 85.3 KByte (default)
>> > ------------------------------------------------------------
>> > [  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
>> > 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
>> > on 2023-01-09 20:13:37 (UTC)
>> > [ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry
>> >    Cwnd/RTT(var)        NetPwr
>> > [  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0          0
>> >      19K/6066(262) us  562
>> > [  1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      15K/4671(207) us  673
>> > [  1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      13K/5538(280) us  568
>> > [  1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>> >      16K/6244(355) us  525
>> > [  1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      19K/6152(216) us  511
>> > [  1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      22K/6764(529) us  465
>> > [  1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>> >      15K/5918(605) us  554
>> > [  1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      18K/5178(327) us  608
>> > [  1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >      19K/5758(473) us  546
>> > [  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>> >       16K/6141(280) us  512
>> > [  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
>> > 0       19K/5924(491) us  537
>> >
>> >
>> > On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com>
>> > wrote:
>> >>
>> >> My biggest barrier is the lack of clock sync by the devices, i.e. very
>> >> limited support for PTP in data centers and in end devices. This
>> >> limits
>> >> the ability to measure one way delays (OWD) and most assume that OWD
>> >> is
>> >> 1/2 an RTT, which typically is a mistake. We know this intuitively
>> >> with
>> >> airplane flight times or even car commute times where the one way time
>> >> is not 1/2 a round trip time. Google maps & directions provide a time
>> >> estimate for the one way link. It doesn't compute a round trip and
>> >> divide by two.
>> >>
>> >> For those that can get clock sync working, the iperf 2 --trip-times
>> >> options is useful.
>> >>
>> >> --trip-times
>> >>    enable the measurement of end to end write to read latencies
>> >> (client
>> >> and server clocks must be synchronized)
>> >>
>> >> Bob
>> >> > I have many kvetches about the new latency under load tests being
>> >> > designed and distributed over the past year. I am delighted! that they
>> >> > are happening, but most really need third party evaluation, and
>> >> > calibration, and a solid explanation of what network pathologies they
>> >> > do and don't cover. Also a RED team attitude towards them, as well as
>> >> > thinking hard about what you are not measuring (operations research).
>> >> >
>> >> > I actually rather love the new cloudflare speedtest, because it tests
>> >> > a single TCP connection, rather than dozens, and at the same time folk
>> >> > are complaining that it doesn't find the actual "speed!". yet... the
>> >> > test itself more closely emulates a user experience than speedtest.net
>> >> > does. I am personally pretty convinced that the fewer numbers of flows
>> >> > that a web page opens improves the likelihood of a good user
>> >> > experience, but lack data on it.
>> >> >
>> >> > To try to tackle the evaluation and calibration part, I've reached out
>> >> > to all the new test designers in the hope that we could get together
>> >> > and produce a report of what each new test is actually doing. I've
>> >> > tweeted, linked in, emailed, and spammed every measurement list I know
>> >> > of, with only some response; please reach out to other test designer
>> >> > folks and have them join the rpm email list?
>> >> >
>> >> > My principal kvetches in the new tests so far are:
>> >> >
>> >> > 0) None of the tests last long enough.
>> >> >
>> >> > Ideally there should be a mode where they at least run to "time of
>> >> > first loss", or periodically, just run longer than the
>> >> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> >> > there! It's really bad science to optimize the internet for 20
>> >> > seconds. It's like optimizing a car, to handle well, for just 20
>> >> > seconds.
>> >> >
>> >> > 1) Not testing up + down + ping at the same time
>> >> >
>> >> > None of the new tests actually test the same thing that the infamous
>> >> > rrul test does - all the others still test up, then down, and ping. It
>> >> > was/remains my hope that the simpler parts of the flent test suite -
>> >> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> >> > tests would provide calibration to the test designers.
>> >> >
>> >> > we've got zillions of flent results in the archive published here:
>> >> > https://blog.cerowrt.org/post/found_in_flent/
>> >> > ps. Misinformation about iperf 2 impacts my ability to do this.
>> >>
>> >> > The new tests have all added up + ping and down + ping, but not up +
>> >> > down + ping. Why??
>> >> >
>> >> > The behaviors of what happens in that case are really non-intuitive, I
>> >> > know, but... it's just one more phase to add to any one of those new
>> >> > tests. I'd be deliriously happy if someone(s) new to the field
>> >> > started doing that, even optionally, and boggled at how it defeated
>> >> > their assumptions.
>> >> >
>> >> > Among other things that would show...
>> >> >
>> >> > It's the home router industry's dirty secret that darn few "gigabit"
>> >> > home routers can actually forward in both directions at a gigabit. I'd
>> >> > like to smash that perception thoroughly, but given our starting point
>> >> > was a "gigabit" router that was really a gigabit switch - historically
>> >> > something that couldn't even forward at 200Mbit - we have a long way
>> >> > to go there.
>> >> >
>> >> > Only in the past year have non-x86 home routers appeared that could
>> >> > actually do a gbit in both directions.
>> >> >
>> >> > 2) Few are actually testing within-stream latency
>> >> >
>> >> > Apple's rpm project is making a stab in that direction. It looks
>> >> > highly likely, that with a little more work, crusader and
>> >> > go-responsiveness can finally start sampling the tcp RTT, loss and
>> >> > markings, more directly. As for the rest... sampling TCP_INFO on
>> >> > windows, and Linux, at least, always appeared simple to me, but I'm
>> >> > discovering how hard it is by delving deep into the rust behind
>> >> > crusader.
>> >> >
>> >> > the goresponsiveness thing is also IMHO running WAY too many streams
>> >> > at the same time, I guess motivated by an attempt to have the test
>> >> > complete quickly?
>> >> >
>> >> > B) To try and tackle the validation problem:
>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>> >>
>> >> >
>> >> > In the libreqos.io project we've established a testbed where tests can
>> >> > be plunked through various ISP plan network emulations. It's here:
>> >> > https://payne.taht.net (run bandwidth test for what's currently hooked
>> >> > up)
>> >> >
>> >> > We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>> >> > leverage with that, so I don't have to nat the various emulations.
>> >> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>> >> > to see more test designers setup a testbed like this to calibrate
>> >> > their own stuff.
>> >> >
>> >> > Presently we're able to test:
>> >> > flent
>> >> > netperf
>> >> > iperf2
>> >> > iperf3
>> >> > speedtest-cli
>> >> > crusader
>> >> > the broadband forum udp based test:
>> >> > https://github.com/BroadbandForum/obudpst
>> >> > trexx
>> >> >
>> >> > There's also a virtual machine setup that we can remotely drive a web
>> >> > browser from (but I didn't want to nat the results to the world) to
>> >> > test other web services.
>> >> > _______________________________________________
>> >> > Rpm mailing list
>> >> > Rpm@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 21:06               ` rjmcmahon
@ 2023-01-09 21:18                 ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-01-09 21:18 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Dave Taht, starlink, mike.reynolds, libreqos, David P. Reed, Rpm,
	Livingood, Jason, bloat

Also released is python code. It's based on python 3's asyncio. It just 
needs password-less ssh to be able to create the pipes. This opens up 
the stats processing to the vast majority of tools used by data 
scientists at large.

https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/
https://docs.python.org/3/library/asyncio.html

Creating a traffic profile is basically instantiate, then run. Here is 
an example FaceTime-like test.


# ssh_node and iperf_flow come from the iperf2 flows package;
# args is the argparse namespace set up earlier in the script.
import gc
import logging

# instantiate DUT host and NIC devices
wifi1 = ssh_node(name='WiFi_A', ipaddr=args.host_wifi1, device='eth1',
                 devip='192.168.1.58')
wifi2 = ssh_node(name='WiFi_B', ipaddr=args.host_wifi2, device='eth1',
                 devip='192.168.1.70')

# instantiate traffic objects or flows
video = iperf_flow(name='VIDEO_FACETIME_UDP', user='root', server=wifi2,
                   client=wifi1, dstip=wifi2.devip, proto='UDP', interval=1,
                   debug=False, srcip=wifi1.devip, srcport='6001',
                   dstport='6001', offered_load='30:600K', trip_times=True,
                   tos='ac_vi', latency=True, fullduplex=True)
audio = iperf_flow(name='AUDIO_FACETIME_UDP', user='root', server=wifi2,
                   client=wifi1, dstip=wifi2.devip, proto='UDP', interval=1,
                   debug=False, srcip=wifi1.devip, srcport='6002',
                   dstport='6002', offered_load='50:25K', trip_times=True,
                   tos='ac_vo', latency=True, fullduplex=True)

ssh_node.open_consoles(silent_mode=True)

traffic_flows = iperf_flow.get_instances()
try:
    if traffic_flows:
        for runid in range(args.runcount):
            for traffic_flow in traffic_flows:
                print("Running ({}/{}) {} traffic client={} server={} "
                      "dest={} with load {} for {} seconds".format(
                          str(runid + 1), str(args.runcount),
                          traffic_flow.name, traffic_flow.client,
                          traffic_flow.server, traffic_flow.dstip,
                          traffic_flow.offered_load, args.time))
            gc.disable()
            iperf_flow.run(time=args.time, flows='all', epoch_sync=True)
            gc.enable()
            try:
                gc.collect()
            except Exception:
                pass

        for traffic_flow in traffic_flows:
            traffic_flow.compute_ks_table(directory=args.output_directory,
                                          title=args.test_name)
    else:
        print("No traffic flows instantiated per test {}".format(args.test_name))

finally:
    ssh_node.close_consoles()
    if traffic_flows:
        iperf_flow.close_loop()
    logging.shutdown()


Bob
> A peer likes gnuplot and sed. There are many, many visualization
> tools. An excerpt below:
> 
> My quick hack one-line parser was based on just a single line from the
> iperf output, not the entire log:
> 
> [  1] 0.00-1.00 sec T8-PDF:
> bin(w=1ms):cnt(849)=1:583,2:112,3:9,4:8,5:11,6:10,7:7,8:8,9:7,10:2,11:3,12:2,13:2,14:2,15:2,16:3,17:2,18:3,19:1,21:2,22:2,23:3,24:2,26:3,27:2,28:3,29:2,30:2,31:3,32:2,33:2,34:2,35:5,37:1,39:1,40:3,41:5,42:2,43:3,44:3,45:3,46:3,47:3,48:1,49:2,50:3,51:2,52:1,53:1
> (50.00/99.7/99.80/%=1/51/52,Outliers=0,obl/obu=0/0)
> 
> Your log contains 30 such histograms.  A very crude approach would be
> to filter only the lines that have T8-PDF:
> 
> plot "< sed -n '/T8-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}'
> lat.txt" with lp
> 
> or
> 
> plot "< sed -n '/T8(f)-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}'
> lat.txt" with lp
> 
> http://www.gnuplot.info/
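For those who would rather stay in Python than pipe through sed, here is a rough equivalent of the one-liner above, assuming the T8-PDF line format shown (the helper name is mine, not from iperf):

```python
import re

def parse_pdf_line(line):
    """Extract (bin, count) pairs from an iperf 2 latency histogram
    line of the form:
      ... T8-PDF: bin(w=1ms):cnt(849)=1:583,2:112,... (summary)
    Returns [] for lines without a histogram."""
    m = re.search(r'\)=([\d:,]+)', line)
    if not m:
        return []
    return [tuple(map(int, pair.split(':')))
            for pair in m.group(1).split(',')]
```

The resulting pairs from all 30 histograms in a log can then be summed or fed to matplotlib instead of gnuplot.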
> 
> Bob
> 
>> On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon <rjmcmahon@rjmcmahon.com> 
>> wrote:
>>> 
>>> The write to read latencies (OWD) are on the server side in CLT form.
>>> Use --histograms on the server side to enable them.
>> 
>> Thx. It is far more difficult to instrument things on the server side
>> of the testbed but we will tackle it.
>> 
>>> Your client side sampled TCP RTT is 6ms with less than a 1 ms of
>>> variance (or sqrt of variance as variance is typically squared)  No
>>> retries suggest the network isn't dropping packets.
>> 
>> Thank you for analyzing that result. the cake aqm, set for a 5ms
>> target, with RFC3168-style ECN, is enabled on this path, on this
>> setup, at the moment. So the result is correct.
>> 
>> A second test with ecn off showed the expected retries.
>> 
>> I have emulations also of fifos, pie, fq-pie, fq-codel, red, blue,
>> sfq, with various realworld delays, and so on... but this is a bit
>> distracting at the moment from our focus, which was in optimizing the
>> XDP + ebpf based bridge and epping based sampling tools to crack
>> 25Gbit.
>> 
>> I think iperf2 will be great for us after that settles down.
>> 
>>> All the newer bounceback code is only in master and requires a
>>> compile from source. It will be released in 2.1.9 after testing
>>> cycles. Hopefully, in early March 2023.
>> 
>> I would like to somehow parse and present those histograms.
>>> 
>>> Bob
>>> 
>>> https://sourceforge.net/projects/iperf2/
>>> 
>>> > The DC that so graciously loaned us 3 machines for the testbed (thx
>>> > equinix!), does support ptp, but we have not configured it yet. In ntp
>>> > tests between these hosts we seem to be within 500us, and certainly
>>> > 50us would be great, in the future.
>>> >
>>> > I note that in all my kvetching about the new tests' needing
>>> > validation today... I kind of elided that I'm pretty happy with
>>> > iperf2's new tests that landed last august, and are now appearing in
>>> > linux package managers around the world. I hope more folk use them.
>>> > (sorry robert, it's been a long time since last august!)
>>> >
>>> > Our new testbed has multiple setups. In one setup - basically the
>>> > machine name is equal to a given ISP plan, and a key testing point is
>>> > looking at the differences between the FCC 25-3 and 100/20 plans in
>>> > the real world. However at our scale (25gbit) it turned out that
>>> > emulating the delay realistically has proven problematic.
>>> >
>>> > Anyway, here's a 25/3 result for iperf (other results and iperf test
>>> > type requests gladly accepted)
>>> >
>>> > root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
>>> > ------------------------------------------------------------
>>> > Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
>>> > Write buffer size: 131072 Byte
>>> > TOS set to 0x0 (Nagle on)
>>> > TCP window size: 85.3 KByte (default)
>>> > ------------------------------------------------------------
>>> > [  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
>>> > 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
>>> > on 2023-01-09 20:13:37 (UTC)
>>> > [ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry
>>> >    Cwnd/RTT(var)        NetPwr
>>> > [  1] 0.0000-1.0000 sec  3.25 MBytes  27.3 Mbits/sec  26/0          0
>>> >      19K/6066(262) us  562
>>> > [  1] 1.0000-2.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      15K/4671(207) us  673
>>> > [  1] 2.0000-3.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      13K/5538(280) us  568
>>> > [  1] 3.0000-4.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>>> >      16K/6244(355) us  525
>>> > [  1] 4.0000-5.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      19K/6152(216) us  511
>>> > [  1] 5.0000-6.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      22K/6764(529) us  465
>>> > [  1] 6.0000-7.0000 sec  3.12 MBytes  26.2 Mbits/sec  25/0          0
>>> >      15K/5918(605) us  554
>>> > [  1] 7.0000-8.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      18K/5178(327) us  608
>>> > [  1] 8.0000-9.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >      19K/5758(473) us  546
>>> > [  1] 9.0000-10.0000 sec  3.00 MBytes  25.2 Mbits/sec  24/0          0
>>> >       16K/6141(280) us  512
>>> > [  1] 0.0000-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
>>> > 0       19K/5924(491) us  537
>>> >
>>> >
>>> > On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon <rjmcmahon@rjmcmahon.com>
>>> > wrote:
>>> >>
>>> >> My biggest barrier is the lack of clock sync by the devices, i.e. very
>>> >> limited support for PTP in data centers and in end devices. This
>>> >> limits
>>> >> the ability to measure one way delays (OWD) and most assume that OWD
>>> >> is
>>> >> 1/2 an RTT, which typically is a mistake. We know this intuitively
>>> >> with
>>> >> airplane flight times or even car commute times where the one way time
>>> >> is not 1/2 a round trip time. Google maps & directions provide a time
>>> >> estimate for the one way link. It doesn't compute a round trip and
>>> >> divide by two.
>>> >>
>>> >> For those that can get clock sync working, the iperf 2 --trip-times
>>> >> options is useful.
>>> >>
>>> >> --trip-times
>>> >>    enable the measurement of end to end write to read latencies
>>> >> (client
>>> >> and server clocks must be synchronized)
>>> >>
>>> >> Bob
>>> >> > I have many kvetches about the new latency under load tests being
>>> >> > designed and distributed over the past year. I am delighted! that they
>>> >> > are happening, but most really need third party evaluation, and
>>> >> > calibration, and a solid explanation of what network pathologies they
>>> >> > do and don't cover. Also a RED team attitude towards them, as well as
>>> >> > thinking hard about what you are not measuring (operations research).
>>> >> >
>>> >> > I actually rather love the new cloudflare speedtest, because it tests
>>> >> > a single TCP connection, rather than dozens, and at the same time folk
>>> >> > are complaining that it doesn't find the actual "speed!". yet... the
>>> >> > test itself more closely emulates a user experience than speedtest.net
>>> >> > does. I am personally pretty convinced that the fewer numbers of flows
>>> >> > that a web page opens improves the likelihood of a good user
>>> >> > experience, but lack data on it.
>>> >> >
>>> >> > To try to tackle the evaluation and calibration part, I've reached out
>>> >> > to all the new test designers in the hope that we could get together
>>> >> > and produce a report of what each new test is actually doing. I've
>>> >> > tweeted, linked in, emailed, and spammed every measurement list I know
>>> >> > of, with only some response; please reach out to other test designer
>>> >> > folks and have them join the rpm email list?
>>> >> >
>>> >> > My principal kvetches in the new tests so far are:
>>> >> >
>>> >> > 0) None of the tests last long enough.
>>> >> >
>>> >> > Ideally there should be a mode where they at least run to "time of
>>> >> > first loss", or periodically, just run longer than the
>>> >> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>>> >> > there! It's really bad science to optimize the internet for 20
>>> >> > seconds. It's like optimizing a car, to handle well, for just 20
>>> >> > seconds.
>>> >> >
>>> >> > 1) Not testing up + down + ping at the same time
>>> >> >
>>> >> > None of the new tests actually test the same thing that the infamous
>>> >> > rrul test does - all the others still test up, then down, and ping. It
>>> >> > was/remains my hope that the simpler parts of the flent test suite -
>>> >> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>>> >> > tests would provide calibration to the test designers.
>>> >> >
>>> >> > we've got zillions of flent results in the archive published here:
>>> >> > https://blog.cerowrt.org/post/found_in_flent/
>>> >> > ps. Misinformation about iperf 2 impacts my ability to do this.
>>> >>
>>> >> > The new tests have all added up + ping and down + ping, but not up +
>>> >> > down + ping. Why??
>>> >> >
>>> >> > The behaviors of what happens in that case are really non-intuitive, I
>>> >> > know, but... it's just one more phase to add to any one of those new
>>> >> > tests. I'd be deliriously happy if someone(s) new to the field
>>> >> > started doing that, even optionally, and boggled at how it defeated
>>> >> > their assumptions.
>>> >> >
>>> >> > Among other things that would show...
>>> >> >
>>> >> > It's the home router industry's dirty secret that darn few "gigabit"
>>> >> > home routers can actually forward in both directions at a gigabit. I'd
>>> >> > like to smash that perception thoroughly, but given our starting point
>>> >> > was a "gigabit" router that was really a gigabit switch - historically
>>> >> > something that couldn't even forward at 200Mbit - we have a long way
>>> >> > to go there.
>>> >> >
>>> >> > Only in the past year have non-x86 home routers appeared that could
>>> >> > actually do a gbit in both directions.
>>> >> >
>>> >> > 2) Few are actually testing within-stream latency
>>> >> >
>>> >> > Apple's rpm project is making a stab in that direction. It looks
>>> >> > highly likely, that with a little more work, crusader and
>>> >> > go-responsiveness can finally start sampling the tcp RTT, loss and
>>> >> > markings, more directly. As for the rest... sampling TCP_INFO on
>>> >> > windows, and Linux, at least, always appeared simple to me, but I'm
>>> >> > discovering how hard it is by delving deep into the rust behind
>>> >> > crusader.
>>> >> >
>>> >> > the goresponsiveness thing is also IMHO running WAY too many streams
>>> >> > at the same time, I guess motivated by an attempt to have the test
>>> >> > complete quickly?
>>> >> >
>>> >> > B) To try and tackle the validation problem:
>>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>>> >>
>>> >> >
>>> >> > In the libreqos.io project we've established a testbed where tests can
>>> >> > be plunked through various ISP plan network emulations. It's here:
>>> >> > https://payne.taht.net (run bandwidth test for what's currently hooked
>>> >> > up)
>>> >> >
>>> >> > We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>>> >> > leverage with that, so I don't have to nat the various emulations.
>>> >> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>>> >> > to see more test designers setup a testbed like this to calibrate
>>> >> > their own stuff.
>>> >> >
>>> >> > Presently we're able to test:
>>> >> > flent
>>> >> > netperf
>>> >> > iperf2
>>> >> > iperf3
>>> >> > speedtest-cli
>>> >> > crusader
>>> >> > the broadband forum udp based test:
>>> >> > https://github.com/BroadbandForum/obudpst
>>> >> > trexx
>>> >> >
>>> >> > There's also a virtual machine setup that we can remotely drive a web
>>> >> > browser from (but I didn't want to nat the results to the world) to
>>> >> > test other web services.
>>> >> > _______________________________________________
>>> >> > Rpm mailing list
>>> >> > Rpm@lists.bufferbloat.net
>>> >> > https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
  2023-01-09 19:47         ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
  2023-01-09 20:20         ` [LibreQoS] [Rpm] [Starlink] " Dave Taht
@ 2023-01-10 17:36         ` David P. Reed
  2 siblings, 0 replies; 183+ messages in thread
From: David P. Reed @ 2023-01-10 17:36 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Dave Taht, Livingood, Jason, Rpm, mike.reynolds, libreqos,
	starlink, bloat



On time-sync: Every smartphone sold today can have its clock synced, both in rate and in absolute value, using the GPS receiver that every smartphone has.
 
So I think the problem of no clock sync is based on the fact that NTP and PTP are so very, very ancient. And the tooling (iperf and netperf) doesn't have much ambition (mcmahon being the exception).
 
I speak on this based on my experience implementing nanosecond accuracy clock sync among multiple computers in a datacenter at TidalScale (not using PTP, because the hardware is so unstandardized and the driver support for PTP is terrible, even in Linux), and also using clock sync to test software defined radio transceivers in my hobby radios. You can do better than GPS, but frankly, GPS is enough to get a stable synchronized clock outside the datacenter context.
 
The real issue is that the gear that ISP's provide for SMB and residential access is pretty cheap and minimal - they don't provide GPS-accurate clock timestamps. They could, but this is typical industry penny-pinching.
 
Maybe Comcast Research might fund development of devices that are inexpensive, use GPS timesync, and provide end-to-end performance characterization tools in the form of a $100 SBC plus a good performance testing suite?
 
This would let us characterize Starlink, too.
 
 
On Monday, January 9, 2023 2:13pm, "rjmcmahon" <rjmcmahon@rjmcmahon.com> said:



> My biggest barrier is the lack of clock sync by the devices, i.e. very
> limited support for PTP in data centers and in end devices. This limits
> the ability to measure one way delays (OWD) and most assume that OWD is
> 1/2 an RTT, which typically is a mistake. We know this intuitively with
> airplane flight times or even car commute times where the one way time
> is not 1/2 a round trip time. Google maps & directions provide a time
> estimate for the one way link. It doesn't compute a round trip and
> divide by two.
> 
> For those that can get clock sync working, the iperf 2 --trip-times
> options is useful.
> 
> --trip-times
> enable the measurement of end to end write to read latencies (client
> and server clocks must be synchronized)
> 
> Bob
> > I have many kvetches about the new latency under load tests being
> > designed and distributed over the past year. I am delighted! that they
> > are happening, but most really need third party evaluation, and
> > calibration, and a solid explanation of what network pathologies they
> > do and don't cover. Also a RED team attitude towards them, as well as
> > thinking hard about what you are not measuring (operations research).
> >
> > I actually rather love the new cloudflare speedtest, because it tests
> > a single TCP connection, rather than dozens, and at the same time folk
> > are complaining that it doesn't find the actual "speed!". yet... the
> > test itself more closely emulates a user experience than speedtest.net
> > does. I am personally pretty convinced that the fewer numbers of flows
> > that a web page opens improves the likelihood of a good user
> > experience, but lack data on it.
> >
> > To try to tackle the evaluation and calibration part, I've reached out
> > to all the new test designers in the hope that we could get together
> > and produce a report of what each new test is actually doing. I've
> > tweeted, linked in, emailed, and spammed every measurement list I know
> > of, with only some response; please reach out to other test designer
> > folks and have them join the rpm email list?
> >
> > My principal kvetches in the new tests so far are:
> >
> > 0) None of the tests last long enough.
> >
> > Ideally there should be a mode where they at least run to "time of
> > first loss", or periodically, just run longer than the
> > industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> > there! It's really bad science to optimize the internet for 20
> > seconds. It's like optimizing a car, to handle well, for just 20
> > seconds.
> >
> > 1) Not testing up + down + ping at the same time
> >
> > None of the new tests actually test the same thing that the infamous
> > rrul test does - all the others still test up, then down, and ping. It
> > was/remains my hope that the simpler parts of the flent test suite -
> > such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> > tests would provide calibration to the test designers.
> >
> > we've got zillions of flent results in the archive published here:
> > https://blog.cerowrt.org/post/found_in_flent/
> > ps. Misinformation about iperf 2 impacts my ability to do this.
> 
> > The new tests have all added up + ping and down + ping, but not up +
> > down + ping. Why??
> >
> > The behaviors of what happens in that case are really non-intuitive, I
> > know, but... it's just one more phase to add to any one of those new
> > tests. I'd be deliriously happy if someone(s) new to the field
> > started doing that, even optionally, and boggled at how it defeated
> > their assumptions.
> >
> > Among other things that would show...
> >
> > It's the home router industry's dirty secret that darn few "gigabit"
> > home routers can actually forward in both directions at a gigabit. I'd
> > like to smash that perception thoroughly, but given our starting point
> > was a "gigabit" router that was really a gigabit switch - historically
> > something that couldn't even forward at 200Mbit - we have a long way
> > to go there.
> >
> > Only in the past year have non-x86 home routers appeared that could
> > actually do a gbit in both directions.
> >
> > 2) Few are actually testing within-stream latency
> >
> > Apple's rpm project is making a stab in that direction. It looks
> > highly likely, that with a little more work, crusader and
> > go-responsiveness can finally start sampling the tcp RTT, loss and
> > markings, more directly. As for the rest... sampling TCP_INFO on
> > windows, and Linux, at least, always appeared simple to me, but I'm
> > discovering how hard it is by delving deep into the rust behind
> > crusader.
> >
> > the goresponsiveness thing is also IMHO running WAY too many streams
> > at the same time, I guess motivated by an attempt to have the test
> > complete quickly?
> >
> > B) To try and tackle the validation problem:
> ps. Misinformation about iperf 2 impacts my ability to do this.
> 
> >
> > In the libreqos.io project we've established a testbed where tests can
> > be plunked through various ISP plan network emulations. It's here:
> > https://payne.taht.net (run bandwidth test for what's currently hooked
> > up)
> >
> > We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> > leverage with that, so I don't have to nat the various emulations.
> > (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> > to see more test designers setup a testbed like this to calibrate
> > their own stuff.
> >
> > Presently we're able to test:
> > flent
> > netperf
> > iperf2
> > iperf3
> > speedtest-cli
> > crusader
> > the broadband forum udp based test:
> > https://github.com/BroadbandForum/obudpst
> > trexx
> >
> > There's also a virtual machine setup that we can remotely drive a web
> > browser from (but I didn't want to nat the results to the world) to
> > test other web services.
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
> 


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:47         ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
@ 2023-01-11 18:32           ` Rodney W. Grimes
  2023-01-11 20:01             ` Sebastian Moeller
  2023-01-11 20:09             ` rjmcmahon
  0 siblings, 2 replies; 183+ messages in thread
From: Rodney W. Grimes @ 2023-01-11 18:32 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: rjmcmahon, Rpm, mike.reynolds, David P. Reed, libreqos,
	Dave Taht via Starlink, bloat

Hello,

	Y'all can call me crazy if you want... but... see below [RWG]
> Hi Bib,
> 
> 
> > On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
> > 
> > My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
> > 
> > For those that can get clock sync working, the iperf 2 --trip-times options is useful.
> 
> 	[SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load and that can be done per direction. Sure this is far inferior to real reliably measured OWDs, but if life/the internet deals you lemons....

 [RWG] iperf2/iperf3, etc. are already moving large amounts of data back and forth, for that matter any rate test, why not abuse some of that data and add the fundamental NTP clock sync data and bidirectionally pass each other's concept of "current time".  IIRC (it's been 25 years since I worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between each end, and then use that info and time stamps in the data stream to compute OWDs.  You need to put 4 time stamps in the packet, and with that you can compute "offset".
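[Editor's note: the four-timestamp exchange [RWG] describes is the classic NTP on-wire calculation (t1 = client transmit, t2 = server receive, t3 = server transmit, t4 = client receive; cf. RFC 5905). A minimal sketch of the arithmetic, in plain Python, illustrative only and not iperf code:]

```python
# Classic NTP four-timestamp calculation (RFC 5905 on-wire protocol).
# t1: client transmit, t2: server receive,
# t3: server transmit, t4: client receive.
def ntp_offset_delay(t1, t2, t3, t4):
    # Clock offset of the server relative to the client, assuming
    # the forward and return path delays are symmetric.
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    # Round-trip delay, excluding server processing time.
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: server clock runs 100 ms ahead, 20 ms on the wire each way.
offset, delay = ntp_offset_delay(0.000, 0.120, 0.121, 0.041)
# offset ~= 0.100 s, delay ~= 0.040 s
```

[With the offset in hand, a per-packet one-way delay estimate is simply receive_time - send_time - offset; note that the offset formula itself bakes in a symmetric-path assumption, which is exactly where the "OWD = RTT/2" error discussed above can creep back in.]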

> 
> 
> > 
> > --trip-times
> >  enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
 [RWG] --clock-skew
	enable the measurement of the wall clock difference between sender and receiver

> 
> 	[SM] Sweet!
> 
> Regards
> 	Sebastian
> 
> > 
> > Bob
> >> I have many kvetches about the new latency under load tests being
> >> designed and distributed over the past year. I am delighted! that they
> >> are happening, but most really need third party evaluation, and
> >> calibration, and a solid explanation of what network pathologies they
> >> do and don't cover. Also a RED team attitude towards them, as well as
> >> thinking hard about what you are not measuring (operations research).
> >> I actually rather love the new cloudflare speedtest, because it tests
> >> a single TCP connection, rather than dozens, and at the same time folk
> >> are complaining that it doesn't find the actual "speed!". yet... the
> >> test itself more closely emulates a user experience than speedtest.net
> >> does. I am personally pretty convinced that the fewer numbers of flows
> >> that a web page opens improves the likelihood of a good user
> >> experience, but lack data on it.
> >> To try to tackle the evaluation and calibration part, I've reached out
> >> to all the new test designers in the hope that we could get together
> >> and produce a report of what each new test is actually doing. I've
> >> tweeted, linked in, emailed, and spammed every measurement list I know
> >> of, and only to some response, please reach out to other test designer
> >> folks and have them join the rpm email list?
> >> My principal kvetches in the new tests so far are:
> >> 0) None of the tests last long enough.
> >> Ideally there should be a mode where they at least run to "time of
> >> first loss", or periodically, just run longer than the
> >> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> >> there! It's really bad science to optimize the internet for 20
> >> seconds. It's like optimizing a car, to handle well, for just 20
> >> seconds.
> >> 1) Not testing up + down + ping at the same time
> >> None of the new tests actually test the same thing that the infamous
> >> rrul test does - all the others still test up, then down, and ping. It
> >> was/remains my hope that the simpler parts of the flent test suite -
> >> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> >> tests would provide calibration to the test designers.
> >> we've got zillions of flent results in the archive published here:
> >> https://blog.cerowrt.org/post/found_in_flent/
> >> ps. Misinformation about iperf 2 impacts my ability to do this.
> > 
> >> The new tests have all added up + ping and down + ping, but not up +
> >> down + ping. Why??
> >> The behaviors of what happens in that case are really non-intuitive, I
> >> know, but... it's just one more phase to add to any one of those new
> >> tests. I'd be deliriously happy if someone(s) new to the field
> >> started doing that, even optionally, and boggled at how it defeated
> >> their assumptions.
> >> Among other things that would show...
> >> It's the home router industry's dirty secret than darn few "gigabit"
> >> home routers can actually forward in both directions at a gigabit. I'd
> >> like to smash that perception thoroughly, but given our starting point
> >> is a gigabit router was a "gigabit switch" - and historically been
> >> something that couldn't even forward at 200Mbit - we have a long way
> >> to go there.
> >> Only in the past year have non-x86 home routers appeared that could
> >> actually do a gbit in both directions.
> >> 2) Few are actually testing within-stream latency
> >> Apple's rpm project is making a stab in that direction. It looks
> >> highly likely, that with a little more work, crusader and
> >> go-responsiveness can finally start sampling the tcp RTT, loss and
> >> markings, more directly. As for the rest... sampling TCP_INFO on
> >> windows, and Linux, at least, always appeared simple to me, but I'm
> >> discovering how hard it is by delving deep into the rust behind
> >> crusader.
> >> the goresponsiveness thing is also IMHO running WAY too many streams
> >> at the same time, I guess motivated by an attempt to have the test
> >> complete quickly?
> >> B) To try and tackle the validation problem:
> >>
> >> ps. Misinformation about iperf 2 impacts my ability to do this.
> > 
> >> In the libreqos.io project we've established a testbed where tests can
> >> be plunked through various ISP plan network emulations. It's here:
> >> https://payne.taht.net (run bandwidth test for what's currently hooked
> >> up)
> >> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> >> leverage with that, so I don't have to nat the various emulations.
> >> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> >> to see more test designers setup a testbed like this to calibrate
> >> their own stuff.
> >> Presently we're able to test:
> >> flent
> >> netperf
> >> iperf2
> >> iperf3
> >> speedtest-cli
> >> crusader
> >> the broadband forum udp based test:
> >> https://github.com/BroadbandForum/obudpst
> >> trexx
> >> There's also a virtual machine setup that we can remotely drive a web
> >> browser from (but I didn't want to nat the results to the world) to
> >> test other web services.
> >> _______________________________________________
> >> Rpm mailing list
> >> Rpm@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/rpm
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> 
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
> 

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-11 18:32           ` Rodney W. Grimes
@ 2023-01-11 20:01             ` Sebastian Moeller
  2023-01-11 21:46               ` Dick Roy
  2023-01-11 20:09             ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-11 20:01 UTC (permalink / raw)
  To: Rodney W. Grimes
  Cc: rjmcmahon, Rpm, mike.reynolds, David P. Reed, libreqos,
	Dave Taht via Starlink, bloat

Hi Rodney,




> On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net> wrote:
> 
> Hello,
> 
> 	Yall can call me crazy if you want.. but... see below [RWG]
>> Hi Bib,
>> 
>> 
>>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> 
>>> My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
>>> 
>>> For those that can get clock sync working, the iperf 2 --trip-times options is useful.
>> 
>> 	[SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load and that can be done per direction. Sure this is far inferior to real reliably measured OWDs, but if life/the internet deals you lemons....
> 
> [RWG] iperf2/iperf3, etc are already moving large amounts of data back and forth, for that matter any rate test, why not abuse some of that data and add the fundemental NTP clock sync data and bidirectionally pass each others concept of "current time".  IIRC (its been 25 years since I worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between each end, and then use that info and time stamps in the data stream to compute OWD's.  You need to put 4 time stamps in the packet, and with that you can compute "offset".

	[SM] Nice idea. I would guess that all timeslot based access technologies (so starlink, docsis, GPON, LTE?) distribute "high quality time" carefully to the "modems", so maybe all that would be needed is to expose that high quality time to the LAN side of those modems, dressed up as an NTP server?


> 
>> 
>> 
>>> 
>>> --trip-times
>>> enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
> [RWG] --clock-skew
> 	enable the measurement of the wall clock difference between sender and receiver
> 
>> 
>> 	[SM] Sweet!
>> 
>> Regards
>> 	Sebastian
>> 
>>> 
>>> Bob
>>>> I have many kvetches about the new latency under load tests being
>>>> designed and distributed over the past year. I am delighted! that they
>>>> are happening, but most really need third party evaluation, and
>>>> calibration, and a solid explanation of what network pathologies they
>>>> do and don't cover. Also a RED team attitude towards them, as well as
>>>> thinking hard about what you are not measuring (operations research).
>>>> I actually rather love the new cloudflare speedtest, because it tests
>>>> a single TCP connection, rather than dozens, and at the same time folk
>>>> are complaining that it doesn't find the actual "speed!". yet... the
>>>> test itself more closely emulates a user experience than speedtest.net
>>>> does. I am personally pretty convinced that the fewer numbers of flows
>>>> that a web page opens improves the likelihood of a good user
>>>> experience, but lack data on it.
>>>> To try to tackle the evaluation and calibration part, I've reached out
>>>> to all the new test designers in the hope that we could get together
>>>> and produce a report of what each new test is actually doing. I've
>>>> tweeted, linked in, emailed, and spammed every measurement list I know
>>>> of, and only to some response, please reach out to other test designer
>>>> folks and have them join the rpm email list?
>>>> My principal kvetches in the new tests so far are:
>>>> 0) None of the tests last long enough.
>>>> Ideally there should be a mode where they at least run to "time of
>>>> first loss", or periodically, just run longer than the
>>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>>>> there! It's really bad science to optimize the internet for 20
>>>> seconds. It's like optimizing a car, to handle well, for just 20
>>>> seconds.
>>>> 1) Not testing up + down + ping at the same time
>>>> None of the new tests actually test the same thing that the infamous
>>>> rrul test does - all the others still test up, then down, and ping. It
>>>> was/remains my hope that the simpler parts of the flent test suite -
>>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>>>> tests would provide calibration to the test designers.
>>>> we've got zillions of flent results in the archive published here:
>>>> https://blog.cerowrt.org/post/found_in_flent/
>>>> ps. Misinformation about iperf 2 impacts my ability to do this.
>>> 
>>>> The new tests have all added up + ping and down + ping, but not up +
>>>> down + ping. Why??
>>>> The behaviors of what happens in that case are really non-intuitive, I
>>>> know, but... it's just one more phase to add to any one of those new
>>>> tests. I'd be deliriously happy if someone(s) new to the field
>>>> started doing that, even optionally, and boggled at how it defeated
>>>> their assumptions.
>>>> Among other things that would show...
>>>> It's the home router industry's dirty secret than darn few "gigabit"
>>>> home routers can actually forward in both directions at a gigabit. I'd
>>>> like to smash that perception thoroughly, but given our starting point
>>>> is a gigabit router was a "gigabit switch" - and historically been
>>>> something that couldn't even forward at 200Mbit - we have a long way
>>>> to go there.
>>>> Only in the past year have non-x86 home routers appeared that could
>>>> actually do a gbit in both directions.
>>>> 2) Few are actually testing within-stream latency
>>>> Apple's rpm project is making a stab in that direction. It looks
>>>> highly likely, that with a little more work, crusader and
>>>> go-responsiveness can finally start sampling the tcp RTT, loss and
>>>> markings, more directly. As for the rest... sampling TCP_INFO on
>>>> windows, and Linux, at least, always appeared simple to me, but I'm
>>>> discovering how hard it is by delving deep into the rust behind
>>>> crusader.
>>>> the goresponsiveness thing is also IMHO running WAY too many streams
>>>> at the same time, I guess motivated by an attempt to have the test
>>>> complete quickly?
>>>> B) To try and tackle the validation problem:
>>>>
>>>> ps. Misinformation about iperf 2 impacts my ability to do this.
>>> 
>>>> In the libreqos.io project we've established a testbed where tests can
>>>> be plunked through various ISP plan network emulations. It's here:
>>>> https://payne.taht.net (run bandwidth test for what's currently hooked
>>>> up)
>>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>>>> leverage with that, so I don't have to nat the various emulations.
>>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>>>> to see more test designers setup a testbed like this to calibrate
>>>> their own stuff.
>>>> Presently we're able to test:
>>>> flent
>>>> netperf
>>>> iperf2
>>>> iperf3
>>>> speedtest-cli
>>>> crusader
>>>> the broadband forum udp based test:
>>>> https://github.com/BroadbandForum/obudpst
>>>> trexx
>>>> There's also a virtual machine setup that we can remotely drive a web
>>>> browser from (but I didn't want to nat the results to the world) to
>>>> test other web services.
>>>> _______________________________________________
>>>> Rpm mailing list
>>>> Rpm@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/rpm
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>> 
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-11 18:32           ` Rodney W. Grimes
  2023-01-11 20:01             ` Sebastian Moeller
@ 2023-01-11 20:09             ` rjmcmahon
  2023-01-12  8:14               ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-01-11 20:09 UTC (permalink / raw)
  To: Rodney W. Grimes
  Cc: Sebastian Moeller, Rpm, mike.reynolds, David P. Reed, libreqos,
	Dave Taht via Starlink, bloat

Iperf 2 is designed to measure network i/o. Note: it doesn't have to 
move large amounts of data. It can, for example, support data profiles 
that don't drive TCP's CCA.

Two things I've been asked for and avoided:

1) Integrate clock sync into iperf's test traffic
2) Measure and output CPU usage

I think both of these are outside the scope of a tool designed to test 
network i/o over sockets; rather, these should be developed & validated 
independently of a network i/o tool.

Clock error really isn't about the amount or frequency of traffic but 
rather about getting a periodic high-quality reference. I tend to lock 
the local system oscillator to a GPS pulse-per-second signal. As David 
says, most every modern handheld computer already has the GPS chips to 
do this. So to me it seems more a policy choice between data center 
operators and device mfgs and less a technical issue.

Bob
> Hello,
> 
> 	Yall can call me crazy if you want.. but... see below [RWG]
>> Hi Bib,
>> 
>> 
>> > On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >
>> > My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
>> >
>> > For those that can get clock sync working, the iperf 2 --trip-times options is useful.
>> 
>> 	[SM] +1; and yet even with unsynchronized clocks one can try to 
>> measure how latency changes under load and that can be done per 
>> direction. Sure this is far inferior to real reliably measured OWDs, 
>> but if life/the internet deals you lemons....
> 
>  [RWG] iperf2/iperf3, etc are already moving large amounts of data
> back and forth, for that matter any rate test, why not abuse some of
> that data and add the fundemental NTP clock sync data and
> bidirectionally pass each others concept of "current time".  IIRC (its
> been 25 years since I worked on NTP at this level) you *should* be
> able to get a fairly accurate clock delta between each end, and then
> use that info and time stamps in the data stream to compute OWD's.
> You need to put 4 time stamps in the packet, and with that you can
> compute "offset".
> 
>> 
>> 
>> >
>> > --trip-times
>> >  enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
>  [RWG] --clock-skew
> 	enable the measurement of the wall clock difference between sender and 
> receiver
> 
>> 
>> 	[SM] Sweet!
>> 
>> Regards
>> 	Sebastian
>> 
>> >
>> > Bob
>> >> I have many kvetches about the new latency under load tests being
>> >> designed and distributed over the past year. I am delighted! that they
>> >> are happening, but most really need third party evaluation, and
>> >> calibration, and a solid explanation of what network pathologies they
>> >> do and don't cover. Also a RED team attitude towards them, as well as
>> >> thinking hard about what you are not measuring (operations research).
>> >> I actually rather love the new cloudflare speedtest, because it tests
>> >> a single TCP connection, rather than dozens, and at the same time folk
>> >> are complaining that it doesn't find the actual "speed!". yet... the
>> >> test itself more closely emulates a user experience than speedtest.net
>> >> does. I am personally pretty convinced that the fewer numbers of flows
>> >> that a web page opens improves the likelihood of a good user
>> >> experience, but lack data on it.
>> >> To try to tackle the evaluation and calibration part, I've reached out
>> >> to all the new test designers in the hope that we could get together
>> >> and produce a report of what each new test is actually doing. I've
>> >> tweeted, linked in, emailed, and spammed every measurement list I know
>> >> of, and only to some response, please reach out to other test designer
>> >> folks and have them join the rpm email list?
>> >> My principal kvetches in the new tests so far are:
>> >> 0) None of the tests last long enough.
>> >> Ideally there should be a mode where they at least run to "time of
>> >> first loss", or periodically, just run longer than the
>> >> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> >> there! It's really bad science to optimize the internet for 20
>> >> seconds. It's like optimizing a car, to handle well, for just 20
>> >> seconds.
>> >> 1) Not testing up + down + ping at the same time
>> >> None of the new tests actually test the same thing that the infamous
>> >> rrul test does - all the others still test up, then down, and ping. It
>> >> was/remains my hope that the simpler parts of the flent test suite -
>> >> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> >> tests would provide calibration to the test designers.
>> >> we've got zillions of flent results in the archive published here:
>> >> https://blog.cerowrt.org/post/found_in_flent/
>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>> >
>> >> The new tests have all added up + ping and down + ping, but not up +
>> >> down + ping. Why??
>> >> The behaviors of what happens in that case are really non-intuitive, I
>> >> know, but... it's just one more phase to add to any one of those new
>> >> tests. I'd be deliriously happy if someone(s) new to the field
>> >> started doing that, even optionally, and boggled at how it defeated
>> >> their assumptions.
>> >> Among other things that would show...
>> >> It's the home router industry's dirty secret than darn few "gigabit"
>> >> home routers can actually forward in both directions at a gigabit. I'd
>> >> like to smash that perception thoroughly, but given our starting point
>> >> is a gigabit router was a "gigabit switch" - and historically been
>> >> something that couldn't even forward at 200Mbit - we have a long way
>> >> to go there.
>> >> Only in the past year have non-x86 home routers appeared that could
>> >> actually do a gbit in both directions.
>> >> 2) Few are actually testing within-stream latency
>> >> Apple's rpm project is making a stab in that direction. It looks
>> >> highly likely, that with a little more work, crusader and
>> >> go-responsiveness can finally start sampling the tcp RTT, loss and
>> >> markings, more directly. As for the rest... sampling TCP_INFO on
>> >> windows, and Linux, at least, always appeared simple to me, but I'm
>> >> discovering how hard it is by delving deep into the rust behind
>> >> crusader.
>> >> the goresponsiveness thing is also IMHO running WAY too many streams
>> >> at the same time, I guess motivated by an attempt to have the test
>> >> complete quickly?
>> >> B) To try and tackle the validation problem:
>> >>
>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>> >
>> >> In the libreqos.io project we've established a testbed where tests can
>> >> be plunked through various ISP plan network emulations. It's here:
>> >> https://payne.taht.net (run bandwidth test for what's currently hooked
>> >> up)
>> >> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>> >> leverage with that, so I don't have to nat the various emulations.
>> >> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>> >> to see more test designers setup a testbed like this to calibrate
>> >> their own stuff.
>> >> Presently we're able to test:
>> >> flent
>> >> netperf
>> >> iperf2
>> >> iperf3
>> >> speedtest-cli
>> >> crusader
>> >> the broadband forum udp based test:
>> >> https://github.com/BroadbandForum/obudpst
>> >> trexx
>> >> There's also a virtual machine setup that we can remotely drive a web
>> >> browser from (but I didn't want to nat the results to the world) to
>> >> test other web services.
>> >> _______________________________________________
>> >> Rpm mailing list
>> >> Rpm@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/rpm
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>> 
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>> 
>> 

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-11 20:01             ` Sebastian Moeller
@ 2023-01-11 21:46               ` Dick Roy
  2023-01-12  8:22                 ` Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: Dick Roy @ 2023-01-11 21:46 UTC (permalink / raw)
  To: 'Sebastian Moeller', 'Rodney W. Grimes'
  Cc: mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'rjmcmahon', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 9881 bytes --]

 

 

-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Sebastian Moeller via Starlink
Sent: Wednesday, January 11, 2023 12:01 PM
To: Rodney W. Grimes
Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
P. Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi Rodney,

 

 

 

 

> On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net>
wrote:

> 

> Hello,

> 

>     Yall can call me crazy if you want.. but... see below [RWG]

>> Hi Bib,

>> 

>> 

>>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:

>>> 

>>> My biggest barrier is the lack of clock sync by the devices, i.e. very
limited support for PTP in data centers and in end devices. This limits the
ability to measure one way delays (OWD) and most assume that OWD is 1/2 and
RTT which typically is a mistake. We know this intuitively with airplane
flight times or even car commute times where the one way time is not 1/2 a
round trip time. Google maps & directions provide a time estimate for the
one way link. It doesn't compute a round trip and divide by two.

>>> 

>>> For those that can get clock sync working, the iperf 2 --trip-times
options is useful.

>> 

>>    [SM] +1; and yet even with unsynchronized clocks one can try to
measure how latency changes under load and that can be done per direction.
Sure this is far inferior to real reliably measured OWDs, but if life/the
internet deals you lemons....

> 

> [RWG] iperf2/iperf3, etc are already moving large amounts of data back and
forth, for that matter any rate test, why not abuse some of that data and
add the fundemental NTP clock sync data and bidirectionally pass each others
concept of "current time".  IIRC (its been 25 years since I worked on NTP at
this level) you *should* be able to get a fairly accurate clock delta
between each end, and then use that info and time stamps in the data stream
to compute OWD's.  You need to put 4 time stamps in the packet, and with
that you can compute "offset".

[RR] For this to work at a reasonable level of accuracy, the timestamping
circuits on both ends need to be deterministic and repeatable as I recall.
Any uncertainty in that process adds to synchronization
errors/uncertainties.

 

      [SM] Nice idea. I would guess that all timeslot based access
technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality
time" carefully to the "modems", so maybe all that would be needed is to
expose that high quality time to the LAN side of those modems, dressed up as
NTP server?

[RR] It's not that simple!  Distributing "high-quality time", i.e.
"synchronizing all clocks" does not solve the communication problem in
synchronous slotted MAC/PHYs!  All the technologies you mentioned above are
essentially P2P, not intended for broadcast.  Point is, there is a point
controller (aka PoC) often called a base station (eNodeB, gNodeB, etc.) that
actually "controls everything that is necessary to control" at the UE
including time, frequency and sampling time offsets, and these are critical
to get right if you want to communicate, and they are ALL subject to the
laws of physics (cf. the speed of light)! Turns out that what is necessary
for the system to function anywhere near capacity, is for all the clocks
governing transmissions from the UEs to be "unsynchronized" such that all
the UE transmissions arrive at the PoC at the same (prescribed) time! For
some technologies, in particular 5G!, these considerations are ESSENTIAL.
Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you don't believe
me! :-)   
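[Editor's note: to put a number on [RR]'s point, in LTE the base station deliberately skews each UE's uplink transmit clock with a "timing advance" (granularity 16*Ts, Ts = 1/(15000*2048) s, per 3GPP TS 36.211/36.213) so that transmissions from UEs at different ranges arrive slot-aligned. A rough back-of-the-envelope sketch, illustrative only:]

```python
# Rough sketch of LTE uplink timing advance, using the 16*Ts
# granularity from 3GPP TS 36.213 (simplified; ignores TA update
# commands, processing delays, etc.).
C = 299_792_458.0            # speed of light, m/s
TS = 1.0 / (15000 * 2048)    # LTE basic time unit, ~32.55 ns
TA_STEP = 16 * TS            # timing-advance granularity, ~0.52 us

def timing_advance_index(distance_m):
    """Index N such that advancing the UE's transmit time by N*16*Ts
    compensates round-trip propagation to a UE distance_m away."""
    round_trip = 2.0 * distance_m / C
    return round(round_trip / TA_STEP)

# A UE ~5 km from the eNodeB must transmit ~33 us "early":
n = timing_advance_index(5000.0)
advance_us = n * TA_STEP * 1e6
```

[One TA step is ~0.52 us, i.e. ~78 m of one-way range, so every UE's notion of "slot start" is intentionally different; this is the sense in which distributing a common wall clock alone doesn't solve the slotted-MAC timing problem.]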

 

 

> 

>> 

>> 

>>> 

>>> --trip-times

>>> enable the measurement of end to end write to read latencies (client and
server clocks must be synchronized)

> [RWG] --clock-skew

>     enable the measurement of the wall clock difference between sender and
receiver

> 

>> 

>>    [SM] Sweet!

>> 

>> Regards

>>    Sebastian

>> 

>>> 

>>> Bob

>>>> I have many kvetches about the new latency under load tests being

>>>> designed and distributed over the past year. I am delighted! that they

>>>> are happening, but most really need third party evaluation, and

>>>> calibration, and a solid explanation of what network pathologies they

>>>> do and don't cover. Also a RED team attitude towards them, as well as

>>>> thinking hard about what you are not measuring (operations research).

>>>> I actually rather love the new cloudflare speedtest, because it tests

>>>> a single TCP connection, rather than dozens, and at the same time folk

>>>> are complaining that it doesn't find the actual "speed!". yet... the

>>>> test itself more closely emulates a user experience than speedtest.net

>>>> does. I am personally pretty convinced that the fewer numbers of flows

>>>> that a web page opens improves the likelihood of a good user

>>>> experience, but lack data on it.

>>>> To try to tackle the evaluation and calibration part, I've reached out

>>>> to all the new test designers in the hope that we could get together

>>>> and produce a report of what each new test is actually doing. I've

>>>> tweeted, linked in, emailed, and spammed every measurement list I know

>>>> of, and only to some response, please reach out to other test designer

>>>> folks and have them join the rpm email list?

>>>> My principal kvetches in the new tests so far are:

>>>> 0) None of the tests last long enough.

>>>> Ideally there should be a mode where they at least run to "time of

>>>> first loss", or periodically, just run longer than the

>>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons

>>>> there! It's really bad science to optimize the internet for 20

>>>> seconds. It's like optimizing a car, to handle well, for just 20

>>>> seconds.

>>>> 1) Not testing up + down + ping at the same time

>>>> None of the new tests actually test the same thing that the infamous

>>>> rrul test does - all the others still test up, then down, and ping. It

>>>> was/remains my hope that the simpler parts of the flent test suite -

>>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair

>>>> tests would provide calibration to the test designers.

>>>> we've got zillions of flent results in the archive published here:

>>>> https://blog.cerowrt.org/post/found_in_flent/

>>>> ps. Misinformation about iperf 2 impacts my ability to do this.

>>> 

>>>> The new tests have all added up + ping and down + ping, but not up +

>>>> down + ping. Why??

>>>> The behaviors of what happens in that case are really non-intuitive, I

>>>> know, but... it's just one more phase to add to any one of those new

>>>> tests. I'd be deliriously happy if someone(s) new to the field

>>>> started doing that, even optionally, and boggled at how it defeated

>>>> their assumptions.

>>>> Among other things that would show...

>>>> It's the home router industry's dirty secret that darn few "gigabit"

>>>> home routers can actually forward in both directions at a gigabit. I'd

>>>> like to smash that perception thoroughly, but given our starting point

>>>> (a gigabit router that was really a "gigabit switch", and historically

>>>> something that couldn't even forward at 200Mbit) we have a long way

>>>> to go there.

>>>> Only in the past year have non-x86 home routers appeared that could

>>>> actually do a gbit in both directions.

>>>> 2) Few are actually testing within-stream latency

>>>> Apple's rpm project is making a stab in that direction. It looks

>>>> highly likely, that with a little more work, crusader and

>>>> go-responsiveness can finally start sampling the tcp RTT, loss and

>>>> markings, more directly. As for the rest... sampling TCP_INFO on

>>>> windows, and Linux, at least, always appeared simple to me, but I'm

>>>> discovering how hard it is by delving deep into the rust behind

>>>> crusader.
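Sampling TCP_INFO on Linux is indeed simple in principle; a minimal sketch follows, assuming Linux's common struct tcp_info layout (tcpi_rtt as a 32-bit microsecond counter at byte offset 68). Those offsets are a Linux convention, not a cross-platform ABI, which is part of why this gets hard inside a portable tool like crusader.

```python
# Minimal sketch: read the kernel's smoothed TCP RTT for a connected
# socket via TCP_INFO. Assumes Linux; tcpi_rtt is a u32 (microseconds)
# at byte offset 68 in the common struct tcp_info layout.
import socket
import struct

def tcp_rtt_us(sock: socket.socket) -> int:
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    (rtt_us,) = struct.unpack_from("I", info, 68)  # tcpi_rtt
    return rtt_us

if __name__ == "__main__":
    # Loopback demo: connect to a local listener and sample its RTT.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    print(tcp_rtt_us(cli))  # loopback RTT estimate in microseconds
    cli.close(); conn.close(); srv.close()
```

A real tool would sample this periodically during the transfer (alongside tcpi_lost and the ECN-related fields) rather than once.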

>>>> the goresponsiveness thing is also IMHO running WAY too many streams

>>>> at the same time, I guess motivated by an attempt to have the test

>>>> complete quickly?

>>>> B) To try and tackle the validation problem:

>>>> ps. Misinformation about iperf 2 impacts my ability to do this.

>>> 

>>>> In the libreqos.io project we've established a testbed where tests can

>>>> be plunked through various ISP plan network emulations. It's here:

>>>> https://payne.taht.net (run bandwidth test for what's currently hooked

>>>> up)

>>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to

>>>> leverage with that, so I don't have to nat the various emulations.

>>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,

>>>> to see more test designers setup a testbed like this to calibrate

>>>> their own stuff.

>>>> Presently we're able to test:

>>>> flent

>>>> netperf

>>>> iperf2

>>>> iperf3

>>>> speedtest-cli

>>>> crusader

>>>> the broadband forum udp based test:

>>>> https://github.com/BroadbandForum/obudpst

>>>> trexx

>>>> There's also a virtual machine setup that we can remotely drive a web

>>>> browser from (but I didn't want to nat the results to the world) to

>>>> test other web services.

>>>> _______________________________________________

>>>> Rpm mailing list

>>>> Rpm@lists.bufferbloat.net

>>>> https://lists.bufferbloat.net/listinfo/rpm

>>> _______________________________________________

>>> Starlink mailing list

>>> Starlink@lists.bufferbloat.net

>>> https://lists.bufferbloat.net/listinfo/starlink

>> 

>> _______________________________________________

>> Starlink mailing list

>> Starlink@lists.bufferbloat.net

>> https://lists.bufferbloat.net/listinfo/starlink

 

_______________________________________________

Starlink mailing list

Starlink@lists.bufferbloat.net

https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 32121 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-11 20:09             ` rjmcmahon
@ 2023-01-12  8:14               ` Sebastian Moeller
  2023-01-12 17:49                 ` Robert McMahon
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-12  8:14 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Rodney W. Grimes, Rpm, mike.reynolds, David P. Reed, libreqos,
	Dave Taht via Starlink, bloat

Hi Bob,


> On Jan 11, 2023, at 21:09, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
> Iperf 2 is designed to measure network i/o. Note: It doesn't have to move large amounts of data. It can support data profiles that don't drive TCP's CCA as an example.
> 
> Two things I've been asked for and avoided:
> 
> 1) Integrate clock sync into iperf's test traffic

	[SM] This I understand; measurement conditions can be unsuited for tight time synchronization...


> 2) Measure and output CPU usages

	[SM] This one puzzles me. As far as I understand, the only way to properly diagnose network issues is to rule out other causes, like CPU overload, that can have symptoms similar to network issues. As an example, when CPU cycles become tight the cake qdisc will first increase its internal queueing delay and jitter (not consciously; it is just an observation that once cake does not get access to the CPU as timely as it wants, queueing latency and variability increase) and only later show reduced throughput. These are the same symptoms that can appear along an e2e network path for completely different reasons, e.g. lower-level retransmissions or a variable-rate link. So I would think that at least a coarse check of CPU load would be within the scope of network testing tools, no?
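A coarse check of the kind suggested here can be sketched from /proc/stat (a Linux-only assumption; field order per the proc man page): sample the aggregate "cpu" line before and after the test and report busy time as a fraction of the total.

```python
# Hypothetical sketch of a coarse CPU-load check around a network test.
# Assumes Linux /proc/stat, whose "cpu" line is:
#   user nice system idle iowait irq softirq steal ...
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    return idle, sum(fields)

def cpu_busy_fraction(run_test, *args):
    idle0, total0 = cpu_times()
    result = run_test(*args)              # the actual network test
    idle1, total1 = cpu_times()
    busy = 1.0 - (idle1 - idle0) / max(1, total1 - total0)
    return result, busy

if __name__ == "__main__":
    # Stand-in workload; a real tool would run its traffic phase here.
    _, busy = cpu_busy_fraction(time.sleep, 0.2)
    print(f"CPU busy during test: {busy:.0%}")
```

Even this coarse system-wide number would let a tool flag results taken while the host was saturated for unrelated reasons.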

Regards
	Sebastian




> I think both of these are outside the scope of a tool designed to test network i/o over sockets, rather these should be developed & validated independently of a network i/o tool.
> 
> Clock error really isn't about amount/frequency of traffic but rather getting a periodic high-quality reference. I tend to use GPS pulse per second to lock the local system oscillator to. As David says, most every modern handheld computer has the GPS chips to do this already. So to me it seems more of a policy choice between data center operators and device mfgs and less of a technical issue.
> 
> Bob
>> Hello,
>> 	Yall can call me crazy if you want.. but... see below [RWG]
>>> Hi Bob,
>>> > On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> >
>>> > My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
>>> >
>>> > For those that can get clock sync working, the iperf 2 --trip-times options is useful.
>>> 	[SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load and that can be done per direction. Sure this is far inferior to real reliably measured OWDs, but if life/the internet deals you lemons....
>> [RWG] iperf2/iperf3, etc are already moving large amounts of data
>> back and forth, for that matter any rate test, why not abuse some of
>> that data and add the fundamental NTP clock sync data and
>> bidirectionally pass each other's concept of "current time".  IIRC (it's
>> been 25 years since I worked on NTP at this level) you *should* be
>> able to get a fairly accurate clock delta between each end, and then
>> use that info and time stamps in the data stream to compute OWD's.
>> You need to put 4 time stamps in the packet, and with that you can
>> compute "offset".
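The four-timestamp scheme [RWG] recalls is the classic NTP on-wire exchange: client transmit t1, server receive t2, server transmit t3, client receive t4. A sketch of the arithmetic (note the offset is only exact under the symmetric-path assumption, which is precisely what one-way-delay measurement tries not to rely on):

```python
# Classic NTP four-timestamp offset/delay computation (sketch).
# t1: client transmit, t2: server receive,
# t3: server transmit, t4: client receive.

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # round-trip path delay
    return offset, delay

if __name__ == "__main__":
    # Invented numbers: server clock 5 s ahead, 30 ms out, 20 ms back.
    off, d = ntp_offset_delay(100.000, 105.030, 105.031, 100.051)
    # The 10 ms path asymmetry shows up as a 5 ms error in the offset.
    print(off, d)
```

With the offset estimated this way, per-packet send timestamps in the data stream give approximate OWDs in each direction, which is exactly the abuse of the test traffic being proposed.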
>>> >
>>> > --trip-times
>>> >  enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
>> [RWG] --clock-skew
>> 	enable the measurement of the wall clock difference between sender and receiver
>>> 	[SM] Sweet!
>>> Regards
>>> 	Sebastian
>>> >
>>> > Bob
>>> >> I have many kvetches about the new latency under load tests being
>>> >> designed and distributed over the past year. I am delighted! that they
>>> >> are happening, but most really need third party evaluation, and
>>> >> calibration, and a solid explanation of what network pathologies they
>>> >> do and don't cover. Also a RED team attitude towards them, as well as
>>> >> thinking hard about what you are not measuring (operations research).
>>> >> I actually rather love the new cloudflare speedtest, because it tests
>>> >> a single TCP connection, rather than dozens, and at the same time folk
>>> >> are complaining that it doesn't find the actual "speed!". yet... the
>>> >> test itself more closely emulates a user experience than speedtest.net
>>> >> does. I am personally pretty convinced that the fewer numbers of flows
>>> >> that a web page opens improves the likelihood of a good user
>>> >> experience, but lack data on it.
>>> >> To try to tackle the evaluation and calibration part, I've reached out
>>> >> to all the new test designers in the hope that we could get together
>>> >> and produce a report of what each new test is actually doing. I've
>>> >> tweeted, linked in, emailed, and spammed every measurement list I know
>>> >> of, and only to some response, please reach out to other test designer
>>> >> folks and have them join the rpm email list?
>>> >> My principal kvetches in the new tests so far are:
>>> >> 0) None of the tests last long enough.
>>> >> Ideally there should be a mode where they at least run to "time of
>>> >> first loss", or periodically, just run longer than the
>>> >> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>>> >> there! It's really bad science to optimize the internet for 20
>>> >> seconds. It's like optimizing a car, to handle well, for just 20
>>> >> seconds.
>>> >> 1) Not testing up + down + ping at the same time
>>> >> None of the new tests actually test the same thing that the infamous
>>> >> rrul test does - all the others still test up, then down, and ping. It
>>> >> was/remains my hope that the simpler parts of the flent test suite -
>>> >> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>>> >> tests would provide calibration to the test designers.
>>> >> we've got zillions of flent results in the archive published here:
>>> >> https://blog.cerowrt.org/post/found_in_flent/
>>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>>> >
>>> >> The new tests have all added up + ping and down + ping, but not up +
>>> >> down + ping. Why??
>>> >> The behaviors of what happens in that case are really non-intuitive, I
>>> >> know, but... it's just one more phase to add to any one of those new
>>> >> tests. I'd be deliriously happy if someone(s) new to the field
>>> >> started doing that, even optionally, and boggled at how it defeated
>>> >> their assumptions.
>>> >> Among other things that would show...
>>> >> It's the home router industry's dirty secret that darn few "gigabit"
>>> >> home routers can actually forward in both directions at a gigabit. I'd
>>> >> like to smash that perception thoroughly, but given our starting point
>>> >> is a gigabit router was a "gigabit switch" - and historically been
>>> >> something that couldn't even forward at 200Mbit - we have a long way
>>> >> to go there.
>>> >> Only in the past year have non-x86 home routers appeared that could
>>> >> actually do a gbit in both directions.
>>> >> 2) Few are actually testing within-stream latency
>>> >> Apple's rpm project is making a stab in that direction. It looks
>>> >> highly likely, that with a little more work, crusader and
>>> >> go-responsiveness can finally start sampling the tcp RTT, loss and
>>> >> markings, more directly. As for the rest... sampling TCP_INFO on
>>> >> windows, and Linux, at least, always appeared simple to me, but I'm
>>> >> discovering how hard it is by delving deep into the rust behind
>>> >> crusader.
>>> >> the goresponsiveness thing is also IMHO running WAY too many streams
>>> >> at the same time, I guess motivated by an attempt to have the test
>>> >> complete quickly?
>>> >> B) To try and tackle the validation problem:
>>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>>> >
>>> >> In the libreqos.io project we've established a testbed where tests can
>>> >> be plunked through various ISP plan network emulations. It's here:
>>> >> https://payne.taht.net (run bandwidth test for what's currently hooked
>>> >> up)
>>> >> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>>> >> leverage with that, so I don't have to nat the various emulations.
>>> >> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>>> >> to see more test designers setup a testbed like this to calibrate
>>> >> their own stuff.
>>> >> Presently we're able to test:
>>> >> flent
>>> >> netperf
>>> >> iperf2
>>> >> iperf3
>>> >> speedtest-cli
>>> >> crusader
>>> >> the broadband forum udp based test:
>>> >> https://github.com/BroadbandForum/obudpst
>>> >> trexx
>>> >> There's also a virtual machine setup that we can remotely drive a web
>>> >> browser from (but I didn't want to nat the results to the world) to
>>> >> test other web services.
>>> >> _______________________________________________
>>> >> Rpm mailing list
>>> >> Rpm@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/rpm
>>> > _______________________________________________
>>> > Starlink mailing list
>>> > Starlink@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/starlink
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-11 21:46               ` Dick Roy
@ 2023-01-12  8:22                 ` Sebastian Moeller
  2023-01-12 18:02                   ` rjmcmahon
  2023-01-12 20:39                   ` Dick Roy
  0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-12  8:22 UTC (permalink / raw)
  To: Dick Roy
  Cc: Rodney W. Grimes, mike.reynolds, libreqos, David P. Reed, Rpm,
	rjmcmahon, bloat

Hi RR,


> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:
> 
>  
>  
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of Sebastian Moeller via Starlink
> Sent: Wednesday, January 11, 2023 12:01 PM
> To: Rodney W. Grimes
> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David P. Reed; Rpm; rjmcmahon; bloat
> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
>  
> Hi Rodney,
>  
>  
>  
>  
> > On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net> wrote:
> > 
> > Hello,
> > 
> >     Yall can call me crazy if you want.. but... see below [RWG]
> >> Hi Bob,
> >> 
> >> 
> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>> 
> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
> >>> 
> >>> For those that can get clock sync working, the iperf 2 --trip-times options is useful.
> >> 
> >>    [SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load and that can be done per direction. Sure this is far inferior to real reliably measured OWDs, but if life/the internet deals you lemons....
> > 
> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back and forth, for that matter any rate test, why not abuse some of that data and add the fundamental NTP clock sync data and bidirectionally pass each other's concept of "current time".  IIRC (it's been 25 years since I worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between each end, and then use that info and time stamps in the data stream to compute OWD's.  You need to put 4 time stamps in the packet, and with that you can compute "offset".
> [RR] For this to work at a reasonable level of accuracy, the timestamping circuits on both ends need to be deterministic and repeatable as I recall. Any uncertainty in that process adds to synchronization errors/uncertainties.
>  
>       [SM] Nice idea. I would guess that all timeslot based access technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality time" carefully to the "modems", so maybe all that would be needed is to expose that high quality time to the LAN side of those modems, dressed up as NTP server?
> [RR] It’s not that simple!  Distributing “high-quality time”, i.e. “synchronizing all clocks” does not solve the communication problem in synchronous slotted MAC/PHYs!

	[SM] I happily believe you, but the same idea of a "time slot" needs to be shared by all nodes, no? So the clocks need to run at reasonably similar rates, aka be synchronized (see below).


>  All the technologies you mentioned above are essentially P2P, not intended for broadcast.  Point is, there is a point controller (aka PoC) often called a base station (eNodeB, gNodeB, …) that actually “controls everything that is necessary to control” at the UE including time, frequency and sampling time offsets, and these are critical to get right if you want to communicate, and they are ALL subject to the laws of physics (cf. the speed of light)! Turns out that what is necessary for the system to function anywhere near capacity, is for all the clocks governing transmissions from the UEs to be “unsynchronized” such that all the UE transmissions arrive at the PoC at the same (prescribed) time!
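In other words, each UE deliberately advances its transmit clock by its own propagation delay so that every uplink burst lands on the slot boundary at the point controller. A toy illustration (numbers invented, everything else abstracted away):

```python
# Toy illustration of uplink timing advance: each UE starts transmitting
# early by its propagation delay so all arrivals align at the base station.
C = 299_792_458.0  # speed of light, m/s

def tx_start(slot_start, distance_m):
    # Deliberately "unsynchronized" transmit clock: earlier for far UEs.
    return slot_start - distance_m / C

if __name__ == "__main__":
    slot = 1.0  # prescribed arrival time at the PoC, seconds
    for d in (300.0, 3_000.0, 30_000.0):
        t = tx_start(slot, d)
        arrival = t + d / C  # identical for every UE, by construction
        print(f"{d:>8.0f} m: tx at {t:.9f}, arrives {arrival:.9f}")
```

This is why "all clocks agree" and "all transmissions arrive on the slot boundary" are different requirements: the second forces per-UE offsets on top of any common time-base.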

	[SM] Fair enough. I would call clocks that are "in sync", albeit with individual offsets, synchronized, but I am a layman and that might sound offensively wrong to experts in the field. Naming aside, my point is that all systems that depend on some idea of a shared time-base are halfway toward exposing that time to end users, by translating it into an NTP time source at the modem.


> For some technologies, in particular 5G!, these considerations are ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you don’t believe me! J   

	[SM] Far be it from me not to believe you, so thanks for the pointers. Yet I still think that unless different nodes of a shared segment move at significantly different speeds, there should be a common "tick-duration" for all clocks even if each clock runs at an offset... (I naively would try to implement something like that by fully synchronizing the clocks and maintaining a local offset value to convert from "absolute" time to "network" time, but coming from the outside I am likely blissfully unaware of the detail challenges that need to be solved.)
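The naive "local offset" idea [SM] mentions can be sketched as a two-parameter fit: collect (local_time, network_time) timestamp pairs and estimate rate skew and offset by least squares, then convert between the two time bases. This is a deliberately simplified stand-in for what real clock-discipline algorithms do.

```python
# Hypothetical sketch: fit network_time ~= skew * local_time + offset
# from observed timestamp pairs, by ordinary least squares.

def fit_clock(pairs):
    n = len(pairs)
    sx = sum(l for l, _ in pairs)
    sy = sum(nt for _, nt in pairs)
    sxx = sum(l * l for l, _ in pairs)
    sxy = sum(l * nt for l, nt in pairs)
    skew = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - skew * sx) / n
    return skew, offset

def to_network_time(local, skew, offset):
    return skew * local + offset

if __name__ == "__main__":
    # Invented example: network time runs 100 ppm fast relative to the
    # local clock and is 2.5 s ahead.
    pairs = [(t, t * 1.0001 + 2.5) for t in (0.0, 1.0, 2.0, 3.0)]
    skew, offset = fit_clock(pairs)
    print(round(skew, 6), round(offset, 6))
```

Real implementations additionally have to reject outliers and track the skew as it drifts with temperature, which is where the detail challenges live.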

Regards & Thanks
	Sebastian


>  
>  
> > 
> >> 
> >> 
> >>> 
> >>> --trip-times
> >>> enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
> > [RWG] --clock-skew
> >     enable the measurement of the wall clock difference between sender and receiver
> > 
> >> 
> >>    [SM] Sweet!
> >> 
> >> Regards
> >>    Sebastian
> >> 
> >>> 
> >>> Bob
> >>>> I have many kvetches about the new latency under load tests being
> >>>> designed and distributed over the past year. I am delighted! that they
> >>>> are happening, but most really need third party evaluation, and
> >>>> calibration, and a solid explanation of what network pathologies they
> >>>> do and don't cover. Also a RED team attitude towards them, as well as
> >>>> thinking hard about what you are not measuring (operations research).
> >>>> I actually rather love the new cloudflare speedtest, because it tests
> >>>> a single TCP connection, rather than dozens, and at the same time folk
> >>>> are complaining that it doesn't find the actual "speed!". yet... the
> >>>> test itself more closely emulates a user experience than speedtest.net
> >>>> does. I am personally pretty convinced that the fewer numbers of flows
> >>>> that a web page opens improves the likelihood of a good user
> >>>> experience, but lack data on it.
> >>>> To try to tackle the evaluation and calibration part, I've reached out
> >>>> to all the new test designers in the hope that we could get together
> >>>> and produce a report of what each new test is actually doing. I've
> >>>> tweeted, linked in, emailed, and spammed every measurement list I know
> >>>> of, and only to some response, please reach out to other test designer
> >>>> folks and have them join the rpm email list?
> >>>> My principal kvetches in the new tests so far are:
> >>>> 0) None of the tests last long enough.
> >>>> Ideally there should be a mode where they at least run to "time of
> >>>> first loss", or periodically, just run longer than the
> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> >>>> there! It's really bad science to optimize the internet for 20
> >>>> seconds. It's like optimizing a car, to handle well, for just 20
> >>>> seconds.
> >>>> 1) Not testing up + down + ping at the same time
> >>>> None of the new tests actually test the same thing that the infamous
> >>>> rrul test does - all the others still test up, then down, and ping. It
> >>>> was/remains my hope that the simpler parts of the flent test suite -
> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> >>>> tests would provide calibration to the test designers.
> >>>> we've got zillions of flent results in the archive published here:
> >>>> https://blog.cerowrt.org/post/found_in_flent/
> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
> >>> 
> >>>> The new tests have all added up + ping and down + ping, but not up +
> >>>> down + ping. Why??
> >>>> The behaviors of what happens in that case are really non-intuitive, I
> >>>> know, but... it's just one more phase to add to any one of those new
> >>>> tests. I'd be deliriously happy if someone(s) new to the field
> >>>> started doing that, even optionally, and boggled at how it defeated
> >>>> their assumptions.
> >>>> Among other things that would show...
> >>>> It's the home router industry's dirty secret that darn few "gigabit"
> >>>> home routers can actually forward in both directions at a gigabit. I'd
> >>>> like to smash that perception thoroughly, but given our starting point
> >>>> is a gigabit router was a "gigabit switch" - and historically been
> >>>> something that couldn't even forward at 200Mbit - we have a long way
> >>>> to go there.
> >>>> Only in the past year have non-x86 home routers appeared that could
> >>>> actually do a gbit in both directions.
> >>>> 2) Few are actually testing within-stream latency
> >>>> Apple's rpm project is making a stab in that direction. It looks
> >>>> highly likely, that with a little more work, crusader and
> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and
> >>>> markings, more directly. As for the rest... sampling TCP_INFO on
> >>>> windows, and Linux, at least, always appeared simple to me, but I'm
> >>>> discovering how hard it is by delving deep into the rust behind
> >>>> crusader.
> >>>> the goresponsiveness thing is also IMHO running WAY too many streams
> >>>> at the same time, I guess motivated by an attempt to have the test
> >>>> complete quickly?
> >>>> B) To try and tackle the validation problem:
> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
> >>> 
> >>>> In the libreqos.io project we've established a testbed where tests can
> >>>> be plunked through various ISP plan network emulations. It's here:
> >>>> https://payne.taht.net (run bandwidth test for what's currently hooked
> >>>> up)
> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
> >>>> leverage with that, so I don't have to nat the various emulations.
> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> >>>> to see more test designers setup a testbed like this to calibrate
> >>>> their own stuff.
> >>>> Presently we're able to test:
> >>>> flent
> >>>> netperf
> >>>> iperf2
> >>>> iperf3
> >>>> speedtest-cli
> >>>> crusader
> >>>> the broadband forum udp based test:
> >>>> https://github.com/BroadbandForum/obudpst
> >>>> trexx
> >>>> There's also a virtual machine setup that we can remotely drive a web
> >>>> browser from (but I didn't want to nat the results to the world) to
> >>>> test other web services.
> >>>> _______________________________________________
> >>>> Rpm mailing list
> >>>> Rpm@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/rpm
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink
> >> 
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
>  
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12  8:14               ` Sebastian Moeller
@ 2023-01-12 17:49                 ` Robert McMahon
  2023-01-12 21:57                   ` Dick Roy
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-01-12 17:49 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Rodney W. Grimes, Rpm, mike.reynolds, David P. Reed, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 10817 bytes --]

Hi Sebastian,

You make a good point. What I did was issue a warning if the tool found it was CPU limited vs i/o limited. This indicates the i/o test is likely inaccurate from an i/o perspective, and the results are suspect. It does this crudely by comparing the CPU thread doing stats against the traffic threads doing i/o, to see which thread is waiting on the others. There is no attempt to assess the CPU load itself. So it's designed with the singular purpose of making sure the i/o threads only block on the read and write syscalls.

I probably should revisit this both in design and implementation. Thanks for bringing it up and all input is truly appreciated.
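The crude check Bob describes above could be approximated from outside the tool by comparing process CPU time to wall time over the measurement interval: if the ratio approaches a full core, the i/o threads were likely CPU limited rather than blocked in read/write. A hypothetical sketch, not iperf 2's actual implementation:

```python
# Hypothetical heuristic (not iperf 2's code): flag a test run as
# CPU limited when process CPU time consumed most of the wall time.
import os
import time

def run_with_cpu_check(work, threshold=0.9):
    t0 = time.monotonic()
    c0 = sum(os.times()[:2])              # process user + system CPU time
    work()                                # the traffic phase goes here
    cpu = sum(os.times()[:2]) - c0
    wall = time.monotonic() - t0
    ratio = cpu / wall if wall > 0 else 0.0
    if ratio > threshold:
        print(f"warning: CPU limited (cpu/wall = {ratio:.2f}); "
              "i/o results are suspect")
    return ratio

if __name__ == "__main__":
    # A sleeping workload is i/o-bound-like: ratio stays near zero.
    ratio = run_with_cpu_check(lambda: time.sleep(0.2))
    print(f"cpu/wall = {ratio:.2f}")
```

Per-thread accounting (as iperf 2 apparently does between the stats and traffic threads) is sharper, but this process-level ratio already catches the worst "the test box was the bottleneck" cases.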

Bob

On Jan 12, 2023, 12:14 AM, at 12:14 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>Hi Bob,
>
>
>> On Jan 11, 2023, at 21:09, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>>
>> Iperf 2 is designed to measure network i/o. Note: It doesn't have to
>move large amounts of data. It can support data profiles that don't
>drive TCP's CCA as an example.
>>
>> Two things I've been asked for and avoided:
>>
>> 1) Integrate clock sync into iperf's test traffic
>
>	[SM] This I understand, measurement conditions can be unsuited for
>tight time synchronization...
>
>
>> 2) Measure and output CPU usages
>
>	[SM] This one puzzles me, as far as I understand the only way to
>properly diagnose network issues is to rule out other things like CPU
>overload that can have symptoms similar to network issues. As an
>example, the cake qdisc will if CPU cycles become tight first increases
>its internal queueing and jitter (not consciously, it is just an
>observation that once cake does not get access to the CPU as timely as
>it wants, queuing latency and variability increases) and then later
>also shows reduced throughput, so similar things that can happen along
>an e2e network path for completely different reasons, e.g. lower level
>retransmissions or a variable rate link. So i would think that checking
>the CPU load at least coarse would be within the scope of network
>testing tools, no?
>
>Regards
>	Sebastian
>
>
>
>
>> I think both of these are outside the scope of a tool designed to
>test network i/o over sockets, rather these should be developed &
>validated independently of a network i/o tool.
>>
>> Clock error really isn't about amount/frequency of traffic but rather
>getting a periodic high-quality reference. I tend to use GPS pulse per
>second to lock the local system oscillator to. As David says, most
>every modern handheld computer has the GPS chips to do this already. So
>to me it seems more of a policy choice between data center operators
>and device mfgs and less of a technical issue.
>>
>> Bob
>>> Hello,
>>> 	Yall can call me crazy if you want.. but... see below [RWG]
>>>> Hi Bob,
>>>> > On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
><starlink@lists.bufferbloat.net> wrote:
>>>> >
>>>> > My biggest barrier is the lack of clock sync by the devices, i.e.
>very limited support for PTP in data centers and in end devices. This
>limits the ability to measure one way delays (OWD) and most assume that
>OWD is 1/2 and RTT which typically is a mistake. We know this
>intuitively with airplane flight times or even car commute times where
>the one way time is not 1/2 a round trip time. Google maps & directions
>provide a time estimate for the one way link. It doesn't compute a
>round trip and divide by two.
>>>> >
>>>> > For those that can get clock sync working, the iperf 2
>--trip-times options is useful.
>>>> 	[SM] +1; and yet even with unsynchronized clocks one can try to
>measure how latency changes under load and that can be done per
>direction. Sure this is far inferior to real reliably measured OWDs,
>but if life/the internet deals you lemons....
>>> [RWG] iperf2/iperf3, etc are already moving large amounts of data
>>> back and forth, for that matter any rate test, why not abuse some of
>>> that data and add the fundamental NTP clock sync data and
>>> bidirectionally pass each other's concept of "current time".  IIRC
>(its
>>> been 25 years since I worked on NTP at this level) you *should* be
>>> able to get a fairly accurate clock delta between each end, and then
>>> use that info and time stamps in the data stream to compute OWD's.
>>> You need to put 4 time stamps in the packet, and with that you can
>>> compute "offset".
>>>> >
>>>> > --trip-times
>>>> >  enable the measurement of end to end write to read latencies
>(client and server clocks must be synchronized)
>>> [RWG] --clock-skew
>>> 	enable the measurement of the wall clock difference between sender
>and receiver
>>>> 	[SM] Sweet!
>>>> Regards
>>>> 	Sebastian
>>>> >
>>>> > Bob
>>>> >> I have many kvetches about the new latency under load tests
>>>> >> being designed and distributed over the past year. I am
>>>> >> delighted! that they are happening, but most really need third
>>>> >> party evaluation, and calibration, and a solid explanation of
>>>> >> what network pathologies they do and don't cover. Also a RED
>>>> >> team attitude towards them, as well as thinking hard about what
>>>> >> you are not measuring (operations research).
>>>> >> I actually rather love the new cloudflare speedtest, because it
>>>> >> tests a single TCP connection, rather than dozens, and at the
>>>> >> same time folk are complaining that it doesn't find the actual
>>>> >> "speed!". yet... the test itself more closely emulates a user
>>>> >> experience than speedtest.net does. I am personally pretty
>>>> >> convinced that the fewer numbers of flows that a web page opens
>>>> >> improves the likelihood of a good user experience, but lack
>>>> >> data on it.
>>>> >> To try to tackle the evaluation and calibration part, I've
>>>> >> reached out to all the new test designers in the hope that we
>>>> >> could get together and produce a report of what each new test
>>>> >> is actually doing. I've tweeted, linked in, emailed, and
>>>> >> spammed every measurement list I know of, and only to some
>>>> >> response, please reach out to other test designer folks and
>>>> >> have them join the rpm email list?
>>>> >> My principal kvetches in the new tests so far are:
>>>> >> 0) None of the tests last long enough.
>>>> >> Ideally there should be a mode where they at least run to "time
>>>> >> of first loss", or periodically, just run longer than the
>>>> >> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be
>>>> >> dragons there! It's really bad science to optimize the internet
>>>> >> for 20 seconds. It's like optimizing a car, to handle well, for
>>>> >> just 20 seconds.
>>>> >> 1) Not testing up + down + ping at the same time
>>>> >> None of the new tests actually test the same thing that the
>>>> >> infamous rrul test does - all the others still test up, then
>>>> >> down, and ping. It was/remains my hope that the simpler parts
>>>> >> of the flent test suite - such as the tcp_up_squarewave tests,
>>>> >> the rrul test, and the rtt_fair tests would provide calibration
>>>> >> to the test designers.
>>>> >> we've got zillions of flent results in the archive published
>>>> >> here: https://blog.cerowrt.org/post/found_in_flent/
>>>> >> ps. Misinformation about iperf 2 impacts my ability to do this.
>>>> >
>>>> >> The new tests have all added up + ping and down + ping, but not
>>>> >> up + down + ping. Why??
>>>> >> The behaviors of what happens in that case are really
>>>> >> non-intuitive, I know, but... it's just one more phase to add
>>>> >> to any one of those new tests. I'd be deliriously happy if
>>>> >> someone(s) new to the field started doing that, even
>>>> >> optionally, and boggled at how it defeated their assumptions.
>>>> >> Among other things that would show...
>>>> >> It's the home router industry's dirty secret that darn few
>>>> >> "gigabit" home routers can actually forward in both directions
>>>> >> at a gigabit. I'd like to smash that perception thoroughly, but
>>>> >> given our starting point is a gigabit router was a "gigabit
>>>> >> switch" - and historically been something that couldn't even
>>>> >> forward at 200Mbit - we have a long way to go there.
>>>> >> Only in the past year have non-x86 home routers appeared that
>>>> >> could actually do a gbit in both directions.
>>>> >> 2) Few are actually testing within-stream latency
>>>> >> Apple's rpm project is making a stab in that direction. It
>>>> >> looks highly likely, that with a little more work, crusader and
>>>> >> go-responsiveness can finally start sampling the tcp RTT, loss
>>>> >> and markings, more directly. As for the rest... sampling
>>>> >> TCP_INFO on windows, and Linux, at least, always appeared
>>>> >> simple to me, but I'm discovering how hard it is by delving
>>>> >> deep into the rust behind crusader.
>>>> >> the goresponsiveness thing is also IMHO running WAY too many
>>>> >> streams at the same time, I guess motivated by an attempt to
>>>> >> have the test complete quickly?
>>>> >> B) To try and tackle the validation problem:
>>>> >> In the libreqos.io project we've established a testbed where
>>>> >> tests can be plunked through various ISP plan network
>>>> >> emulations. It's here:
>>>> >> https://payne.taht.net (run bandwidth test for what's currently
>>>> >> hooked up)
>>>> >> We could rather use an AS number and at least a ipv4/24 and
>>>> >> ipv6/48 to leverage with that, so I don't have to nat the
>>>> >> various emulations. (and funding, anyone got funding?) Or, as
>>>> >> the code is GPLv2 licensed, to see more test designers setup a
>>>> >> testbed like this to calibrate their own stuff.
>>>> >> Presently we're able to test:
>>>> >> flent
>>>> >> netperf
>>>> >> iperf2
>>>> >> iperf3
>>>> >> speedtest-cli
>>>> >> crusader
>>>> >> the broadband forum udp based test:
>>>> >> https://github.com/BroadbandForum/obudpst
>>>> >> trexx
>>>> >> There's also a virtual machine setup that we can remotely drive
>>>> >> a web browser from (but I didn't want to nat the results to the
>>>> >> world) to test other web services.
>>>> >> _______________________________________________
>>>> >> Rpm mailing list
>>>> >> Rpm@lists.bufferbloat.net
>>>> >> https://lists.bufferbloat.net/listinfo/rpm
>>>> > _______________________________________________
>>>> > Starlink mailing list
>>>> > Starlink@lists.bufferbloat.net
>>>> > https://lists.bufferbloat.net/listinfo/starlink
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 12665 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12  8:22                 ` Sebastian Moeller
@ 2023-01-12 18:02                   ` rjmcmahon
  2023-01-12 21:34                     ` Dick Roy
  2023-01-12 20:39                   ` Dick Roy
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-01-12 18:02 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Dick Roy, Rodney W. Grimes, mike.reynolds, libreqos,
	David P. Reed, Rpm, bloat

For WiFi there is the TSF

https://en.wikipedia.org/wiki/Timing_synchronization_function

We in test & measurement use that in our internal telemetry. The TSF of 
a WiFi device only needs frequency sync for some things, typically 
related to access to the medium; a phase-locked loop handles that. A 
device that decides to go to sleep, as an example, will also stop its 
TSF, creating a non-linearity. It's difficult to synchronize the TSF to 
the system clock or to a GPS atomic clock - though we do this for 
internal testing reasons, so it can be done.

What's mostly missing for T&M with WiFi is a GPS atomic clock, as 
that's a convenient time domain to use as the canonical domain.
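As a rough illustration of relating a TSF (a microsecond counter) to a reference clock, one can estimate rate and offset with a least-squares fit per linear segment, splitting wherever the counter resets. This is not actual T&M code; names and method are assumptions:

```python
# Map WiFi TSF ticks (microseconds) onto a reference clock with a
# least-squares linear fit: ref ~= rate * tsf + offset.
# A backward jump in the TSF marks a reset (e.g. the device slept),
# i.e. the non-linearity mentioned above.

def fit_tsf_to_ref(samples):
    """samples: list of (tsf_us, ref_us) pairs from one linear segment."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(r for _, r in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * r for t, r in samples)
    rate = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - rate * sx) / n
    return rate, offset

def split_on_reset(samples):
    """Split the sample stream wherever the TSF goes backwards."""
    segments, current, last = [], [], None
    for t, r in samples:
        if last is not None and t < last:
            segments.append(current)
            current = []
        current.append((t, r))
        last = t
    segments.append(current)
    return segments

# Example: a TSF running 50 ppm fast relative to the reference clock,
# with the reference 1 s ahead at TSF zero.
data = [(t, t / 1.00005 + 1_000_000) for t in range(0, 10_000_000, 1_000_000)]
rate, offset = fit_tsf_to_ref(data)
```

Within a segment the fit recovers the frequency error; across a sleep/reset the offset must be re-estimated, which is why the non-linearity matters.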

Bob
> Hi RR,
> 
> 
>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:
>> 
>> 
>> 
>> -----Original Message-----
>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On 
>> Behalf Of Sebastian Moeller via Starlink
>> Sent: Wednesday, January 11, 2023 12:01 PM
>> To: Rodney W. Grimes
>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; 
>> David P. Reed; Rpm; rjmcmahon; bloat
>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in 
>> USA
>> 
>> Hi Rodney,
>> 
>> 
>> 
>> 
>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net> wrote:
>> >
>> > Hello,
>> >
>> >     Yall can call me crazy if you want.. but... see below [RWG]
>> >> Hi Bib,
>> >>
>> >>
>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >>>
>> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very limited support for PTP in data centers and in end devices. This limits the ability to measure one way delays (OWD) and most assume that OWD is 1/2 the RTT, which typically is a mistake. We know this intuitively with airplane flight times or even car commute times where the one way time is not 1/2 a round trip time. Google maps & directions provide a time estimate for the one way link. It doesn't compute a round trip and divide by two.
>> >>>
>> >>> For those that can get clock sync working, the iperf 2 --trip-times options is useful.
>> >>
>> >>    [SM] +1; and yet even with unsynchronized clocks one can try to measure how latency changes under load and that can be done per direction. Sure this is far inferior to real reliably measured OWDs, but if life/the internet deals you lemons....
>> >
>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back and forth, for that matter any rate test, why not abuse some of that data and add the fundamental NTP clock sync data and bidirectionally pass each other's concept of "current time".  IIRC (its been 25 years since I worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between each end, and then use that info and time stamps in the data stream to compute OWD's.  You need to put 4 time stamps in the packet, and with that you can compute "offset".
>> [RR] For this to work at a reasonable level of accuracy, the 
>> timestamping circuits on both ends need to be deterministic and 
>> repeatable as I recall. Any uncertainty in that process adds to 
>> synchronization errors/uncertainties.
>> 
>>       [SM] Nice idea. I would guess that all timeslot based access 
>> technologies (so starlink, docsis, GPON, LTE?) all distribute "high 
>> quality time" carefully to the "modems", so maybe all that would be 
>> needed is to expose that high quality time to the LAN side of those 
>> modems, dressed up as NTP server?
>> [RR] It’s not that simple!  Distributing “high-quality time”, i.e. 
>> “synchronizing all clocks” does not solve the communication problem in 
>> synchronous slotted MAC/PHYs!
> 
> 	[SM] I happily believe you, but the same idea of "time slot" needs to
> be shared by all nodes, no? So the clocks need to be reasonably
> similar rate, aka synchronized (see below).
> 
> 
>>  All the technologies you mentioned above are essentially P2P, not 
>> intended for broadcast.  Point is, there is a point controller (aka 
>> PoC) often called a base station (eNodeB, gNodeB, …) that actually 
>> “controls everything that is necessary to control” at the UE including 
>> time, frequency and sampling time offsets, and these are critical to 
>> get right if you want to communicate, and they are ALL subject to the 
>> laws of physics (cf. the speed of light)! Turns out that what is 
>> necessary for the system to function anywhere near capacity, is for 
>> all the clocks governing transmissions from the UEs to be 
>> “unsynchronized” such that all the UE transmissions arrive at the PoC 
>> at the same (prescribed) time!
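A toy sketch of the arrival-alignment idea described above (all numbers invented; real LTE timing advance is signalled by the base station in discrete steps):

```python
# Toy illustration of timing advance in a slotted uplink: each UE
# transmits *early* by its own propagation delay so that all
# transmissions arrive at the base station at the slot boundary.

C = 2.99792458e8  # speed of light, m/s

def tx_time(slot_start, distance_m):
    """When a UE at distance_m must transmit for a slot_start arrival."""
    return slot_start - distance_m / C

def arrival(slot_start, distance_m):
    return tx_time(slot_start, distance_m) + distance_m / C

slot = 1.0  # slot boundary at t = 1 s (arbitrary)
for d in (100.0, 5_000.0, 30_000.0):  # UEs at 100 m, 5 km, 30 km
    # Each UE's local transmit clock is offset differently, yet every
    # transmission lands at the prescribed instant at the controller.
    assert abs(arrival(slot, d) - slot) < 1e-12
```

This is the sense in which the UE clocks are deliberately "unsynchronized": they share a tick rate but carry different, distance-dependent offsets.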
> 
> 	[SM] Fair enough. I would call clocks that are "in sync" albeit with
> individual offsets as synchronized, but I am a layman and that might
> sound offensively wrong to experts in the field. But even without the
> naming my point is that all systems that depend on some idea of shared
> time-base are halfway there to exposing that time to end users, by
> "translating" it into an NTP time source at the modem.
> 
> 
>> For some technologies, in particular 5G!, these considerations are 
>> ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you 
> don’t believe me! :-)
> 
> 	[SM] Far be it from me not to believe you, so thanks for the pointers.
> Yet, I still think that unless different nodes of a shared segment
> move at significantly different speeds, that there should be a common
> "tick-duration" for all clocks even if each clock runs at an offset...
> (I naively would try to implement something like that by trying to
> fully synchronize clocks and maintain a local offset value to convert
> from "absolute" time to "network" time, but likely because coming from
> the outside I am blissfully unaware of the detail challenges that need
> to be solved).
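The scheme sketched above (one fully synchronized time-base plus a per-node local offset) might look like this in miniature (purely illustrative):

```python
# Purely illustrative: a node keeps its clock synchronized to a common
# time-base and converts to "network" time with a local offset value,
# as suggested above.

class NodeClock:
    def __init__(self, network_offset_s):
        # Offset assigned to this node (e.g. by a point controller).
        self.network_offset_s = network_offset_s

    def network_time(self, absolute_time_s):
        """Convert shared 'absolute' time to this node's network time."""
        return absolute_time_s + self.network_offset_s

# Two nodes with different offsets still agree on tick duration:
a, b = NodeClock(-0.0001), NodeClock(0.00025)
dt = 1.0  # one second of absolute time elapses
assert (a.network_time(dt) - a.network_time(0)) == \
       (b.network_time(dt) - b.network_time(0))
```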
> 
> Regards & Thanks
> 	Sebastian
> 
> 
>> 
>> 
>> >
>> >>
>> >>
>> >>>
>> >>> --trip-times
>> >>> enable the measurement of end to end write to read latencies (client and server clocks must be synchronized)
>> > [RWG] --clock-skew
>> >     enable the measurement of the wall clock difference between sender and receiver
>> >
>> >>
>> >>    [SM] Sweet!
>> >>
>> >> Regards
>> >>    Sebastian
>> >>
>> >>>
>> >>> Bob
>> >>>> I have many kvetches about the new latency under load tests being
>> >>>> designed and distributed over the past year. I am delighted! that they
>> >>>> are happening, but most really need third party evaluation, and
>> >>>> calibration, and a solid explanation of what network pathologies they
>> >>>> do and don't cover. Also a RED team attitude towards them, as well as
>> >>>> thinking hard about what you are not measuring (operations research).
>> >>>> I actually rather love the new cloudflare speedtest, because it tests
>> >>>> a single TCP connection, rather than dozens, and at the same time folk
>> >>>> are complaining that it doesn't find the actual "speed!". yet... the
>> >>>> test itself more closely emulates a user experience than speedtest.net
>> >>>> does. I am personally pretty convinced that the fewer numbers of flows
>> >>>> that a web page opens improves the likelihood of a good user
>> >>>> experience, but lack data on it.
>> >>>> To try to tackle the evaluation and calibration part, I've reached out
>> >>>> to all the new test designers in the hope that we could get together
>> >>>> and produce a report of what each new test is actually doing. I've
>> >>>> tweeted, linked in, emailed, and spammed every measurement list I know
>> >>>> of, and only to some response, please reach out to other test designer
>> >>>> folks and have them join the rpm email list?
>> >>>> My principal kvetches in the new tests so far are:
>> >>>> 0) None of the tests last long enough.
>> >>>> Ideally there should be a mode where they at least run to "time of
>> >>>> first loss", or periodically, just run longer than the
>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> >>>> there! It's really bad science to optimize the internet for 20
>> >>>> seconds. It's like optimizing a car, to handle well, for just 20
>> >>>> seconds.
>> >>>> 1) Not testing up + down + ping at the same time
>> >>>> None of the new tests actually test the same thing that the infamous
>> >>>> rrul test does - all the others still test up, then down, and ping. It
>> >>>> was/remains my hope that the simpler parts of the flent test suite -
>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> >>>> tests would provide calibration to the test designers.
>> >>>> we've got zillions of flent results in the archive published here:
>> >>>> https://blog.cerowrt.org/post/found_in_flent/
>> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
>> >>>
>> >>>> The new tests have all added up + ping and down + ping, but not up +
>> >>>> down + ping. Why??
>> >>>> The behaviors of what happens in that case are really non-intuitive, I
>> >>>> know, but... it's just one more phase to add to any one of those new
>> >>>> tests. I'd be deliriously happy if someone(s) new to the field
>> >>>> started doing that, even optionally, and boggled at how it defeated
>> >>>> their assumptions.
>> >>>> Among other things that would show...
>> >>>> It's the home router industry's dirty secret that darn few "gigabit"
>> >>>> home routers can actually forward in both directions at a gigabit. I'd
>> >>>> like to smash that perception thoroughly, but given our starting point
>> >>>> is a gigabit router was a "gigabit switch" - and historically been
>> >>>> something that couldn't even forward at 200Mbit - we have a long way
>> >>>> to go there.
>> >>>> Only in the past year have non-x86 home routers appeared that could
>> >>>> actually do a gbit in both directions.
>> >>>> 2) Few are actually testing within-stream latency
>> >>>> Apple's rpm project is making a stab in that direction. It looks
>> >>>> highly likely, that with a little more work, crusader and
>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and
>> >>>> markings, more directly. As for the rest... sampling TCP_INFO on
>> >>>> windows, and Linux, at least, always appeared simple to me, but I'm
>> >>>> discovering how hard it is by delving deep into the rust behind
>> >>>> crusader.
>> >>>> the goresponsiveness thing is also IMHO running WAY too many streams
>> >>>> at the same time, I guess motivated by an attempt to have the test
>> >>>> complete quickly?
>> >>>> B) To try and tackle the validation problem:
>> >>>
>> >>>> In the libreqos.io project we've established a testbed where tests can
>> >>>> be plunked through various ISP plan network emulations. It's here:
>> >>>> https://payne.taht.net (run bandwidth test for what's currently hooked
>> >>>> up)
>> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>> >>>> leverage with that, so I don't have to nat the various emulations.
>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>> >>>> to see more test designers setup a testbed like this to calibrate
>> >>>> their own stuff.
>> >>>> Presently we're able to test:
>> >>>> flent
>> >>>> netperf
>> >>>> iperf2
>> >>>> iperf3
>> >>>> speedtest-cli
>> >>>> crusader
>> >>>> the broadband forum udp based test:
>> >>>> https://github.com/BroadbandForum/obudpst
>> >>>> trexx
>> >>>> There's also a virtual machine setup that we can remotely drive a web
>> >>>> browser from (but I didn't want to nat the results to the world) to
>> >>>> test other web services.
>> >>>> _______________________________________________
>> >>>> Rpm mailing list
>> >>>> Rpm@lists.bufferbloat.net
>> >>>> https://lists.bufferbloat.net/listinfo/rpm
>> >>> _______________________________________________
>> >>> Starlink mailing list
>> >>> Starlink@lists.bufferbloat.net
>> >>> https://lists.bufferbloat.net/listinfo/starlink
>> >>
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/starlink
>> 
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12  8:22                 ` Sebastian Moeller
  2023-01-12 18:02                   ` rjmcmahon
@ 2023-01-12 20:39                   ` Dick Roy
  2023-01-13  7:33                     ` Sebastian Moeller
  2023-01-13  7:40                     ` rjmcmahon
  1 sibling, 2 replies; 183+ messages in thread
From: Dick Roy @ 2023-01-12 20:39 UTC (permalink / raw)
  To: 'Sebastian Moeller'
  Cc: 'Rodney W. Grimes', mike.reynolds, 'libreqos',
	'David P. Reed', 'Rpm', 'rjmcmahon',
	'bloat'

[-- Attachment #1: Type: text/plain, Size: 16054 bytes --]

Hi Sebastian (et. al.),

 

[I'll comment up here instead of inline.]  

 

Let me start by saying that I have not been intimately involved with the
IEEE 1588 effort (PTP), however I was involved in the 802.11 efforts along a
similar vein, just adding the wireless first-hop component and its effects
on PTP.  

 

What was apparent from the outset was that there was a lack of understanding
of what the terms "to synchronize" or "to be synchronized" actually mean.
It's not trivial ... because we live in a (approximately, that's another
story!) 4-D space-time continuum where the Lorentz metric plays a critical
role. Therein, simultaneity (aka "things happening at the same time") means
the "distance" between two such events is zero and that distance is given by
sqrt(x^2 + y^2 + z^2 - (ct)^2) and the "thing happening" can be the tick of
a clock somewhere. Now since everything is relative (time with respect to
what? / location with respect to where?) it's pretty easy to see that "if
you don't know where you are, you can't know what time it is!" (English
sailors of the 18th century knew this well!) Add to this the fact that if
everything were stationary, nothing would happen (as Einstein said "Nothing
happens until something moves!"), and special relativity also plays a role.
Clocks on GPS satellites run approx. 7 usec/day slower than those on earth
due to their "speed" (8700 mph roughly)! Then add the consequence that
without mass we wouldn't exist (in these forms at least :-)), and
gravitational effects (aka General Relativity) come into play. Those turn
out to make clocks on GPS satellites run 45 usec/day faster than those on
earth!  The net effect is that GPS clocks run about 38 usec/day faster than
clocks on earth.  So what does it mean to "synchronize to GPS"?  Point is:
it's a non-trivial question with a very complicated answer.  The reason it
is important to get all this right is that the "what" that ties time and
space together is the speed of light, and that turns out to be about a
foot-per-nanosecond in a vacuum (roughly 300 m/usec).  This means if I am
uncertain about my location by, say, 300 meters, then I also am not sure
what time it is to within a usec, AND vice-versa!
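The relativistic rates quoted above can be sanity-checked with textbook constants (the orbital radius and GM values below are standard figures, not from this thread):

```python
import math

# Back-of-the-envelope check of the relativistic clock rates for GPS.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_gps = 2.6561e7      # GPS semi-major axis (~20,200 km altitude), m
day = 86400.0         # seconds per day

# Special relativity: orbital speed makes the satellite clock run slow.
v = math.sqrt(GM / r_gps)                        # ~3.87 km/s
sr_us = (v**2 / (2 * c**2)) * day * 1e6          # us/day slower

# General relativity: weaker gravity aloft makes it run fast.
gr_us = (GM * (1 / R_earth - 1 / r_gps) / c**2) * day * 1e6  # us/day faster

net_us = gr_us - sr_us
# Roughly: SR -7.2, GR +45.7, net +38.5 us/day, matching the figures above.
```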

 

All that said, the simplest explanation of synchronization is probably: Two
clocks are synchronized if, when they are brought (slowly) into physical
proximity ("sat next to each other") in the same (quasi-)inertial frame and
the same gravitational potential (not so obvious BTW ... see the FYI below!),
an observer of both would say "they are keeping time identically". Since
this experiment is rarely possible, one can never be "sure" that his clock
is synchronized to any other clock elsewhere. And what does it mean to say
they "were synchronized" when brought together, but now they are not because
they are now in different gravitational potentials! (FYI, there are land
mine detectors being developed on this very principle! I know someone who
actually worked on such a project!) 

 

This all gets even more complicated when dealing with large networks of
networks in which the "speed of information transmission" can vary depending
on the medium (cf. coaxial cables versus fiber versus microwave links!) In
fact, the atmosphere is one of those media and variations therein result in
the need for "GPS corrections" (cf. RTCM GPS correction messages, RTK, etc.)
in order to get to sub-nsec/cm accuracy.  Point is if you have a set of
nodes distributed across the country all with GPS and all "synchronized to
GPS time", and a second identical set of nodes (with no GPS) instead
connected with a network of cables and fiber links, all of different lengths
and composition using different carrier frequencies (dielectric constants
vary with frequency!), "synchronized" to some clock somewhere using NTP or
PTP, the synchronization of the two sets will be different unless a common
reference clock is used AND all the above effects are taken into account,
and good luck with that! :-) 

 

In conclusion, if anyone tells you that clock synchronization in
communication networks is simple ("Just use GPS!"), you should feel free to
chuckle (under your breath if necessary:-)) 

 

Cheers,

 

RR

 

 

  

 

 

 

-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de] 
Sent: Thursday, January 12, 2023 12:23 AM
To: Dick Roy
Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David P.
Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi RR,

 

 

> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:

> 

>  

>  

> -----Original Message-----

> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
Of Sebastian Moeller via Starlink

> Sent: Wednesday, January 11, 2023 12:01 PM

> To: Rodney W. Grimes

> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
P. Reed; Rpm; rjmcmahon; bloat

> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

>  

> Hi Rodney,

>  

>  

>  

>  

> > On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net>
wrote:

> > 

> > Hello,

> > 

> >     Yall can call me crazy if you want.. but... see below [RWG]

> >> Hi Bib,

> >> 

> >> 

> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:

> >>> 

> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very
limited support for PTP in data centers and in end devices. This limits the
ability to measure one way delays (OWD) and most assume that OWD is 1/2 the
RTT which typically is a mistake. We know this intuitively with airplane
flight times or even car commute times where the one way time is not 1/2 a
round trip time. Google maps & directions provide a time estimate for the
one way link. It doesn't compute a round trip and divide by two.

> >>> 

> >>> For those that can get clock sync working, the iperf 2 --trip-times
options is useful.

> >> 

> >>    [SM] +1; and yet even with unsynchronized clocks one can try to
measure how latency changes under load and that can be done per direction.
Sure this is far inferior to real reliably measured OWDs, but if life/the
internet deals you lemons....

> > 

> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back
and forth, for that matter any rate test, why not abuse some of that data
and add the fundamental NTP clock sync data and bidirectionally pass each
other's concept of "current time".  IIRC (its been 25 years since I worked on
NTP at this level) you *should* be able to get a fairly accurate clock delta
between each end, and then use that info and time stamps in the data stream
to compute OWD's.  You need to put 4 time stamps in the packet, and with
that you can compute "offset".

> [RR] For this to work at a reasonable level of accuracy, the timestamping
circuits on both ends need to be deterministic and repeatable as I recall.
Any uncertainty in that process adds to synchronization
errors/uncertainties.

>  

>       [SM] Nice idea. I would guess that all timeslot based access
technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality
time" carefully to the "modems", so maybe all that would be needed is to
expose that high quality time to the LAN side of those modems, dressed up as
NTP server?

> [RR] It's not that simple!  Distributing "high-quality time", i.e.
"synchronizing all clocks" does not solve the communication problem in
synchronous slotted MAC/PHYs!

 

      [SM] I happily believe you, but the same idea of "time slot" needs to
be shared by all nodes, no? So the clocks need to be reasonably similar
rate, aka synchronized (see below).

 

 

>  All the technologies you mentioned above are essentially P2P, not
intended for broadcast.  Point is, there is a point controller (aka PoC)
often called a base station (eNodeB, gNodeB, ...) that actually "controls
everything that is necessary to control" at the UE including time, frequency
and sampling time offsets, and these are critical to get right if you want
to communicate, and they are ALL subject to the laws of physics (cf. the
speed of light)! Turns out that what is necessary for the system to function
anywhere near capacity, is for all the clocks governing transmissions from
the UEs to be "unsynchronized" such that all the UE transmissions arrive at
the PoC at the same (prescribed) time!

 

      [SM] Fair enough. I would call clocks that are "in sync" albeit with
individual offsets as synchronized, but I am a layman and that might sound
offensively wrong to experts in the field. But even without the naming my
point is that all systems that depend on some idea of shared time-base are
halfway there to exposing that time to end users, by "translating" it into an
NTP time source at the modem.

 

 

> For some technologies, in particular 5G!, these considerations are
ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you don't
believe me! :-)

 

      [SM] Far be it from me not to believe you, so thanks for the pointers.
Yet, I still think that unless different nodes of a shared segment move at
significantly different speeds, that there should be a common
"tick-duration" for all clocks even if each clock runs at an offset... (I
naively would try to implement something like that by trying to fully
synchronize clocks and maintain a local offset value to convert from
"absolute" time to "network" time, but likely because coming from the
outside I am blissfully unaware of the detail challenges that need to be
solved).

 

Regards & Thanks

      Sebastian

 

 

>  

>  

> > 

> >> 

> >> 

> >>> 

> >>> --trip-times

> >>> enable the measurement of end to end write to read latencies (client
and server clocks must be synchronized)

> > [RWG] --clock-skew

> >     enable the measurement of the wall clock difference between sender
and receiver

> > 

> >> 

> >>    [SM] Sweet!

> >> 

> >> Regards

> >>    Sebastian

> >> 

> >>> 

> >>> Bob

> >>>> I have many kvetches about the new latency under load tests being

> >>>> designed and distributed over the past year. I am delighted! that
they

> >>>> are happening, but most really need third party evaluation, and

> >>>> calibration, and a solid explanation of what network pathologies they

> >>>> do and don't cover. Also a RED team attitude towards them, as well as

> >>>> thinking hard about what you are not measuring (operations research).

> >>>> I actually rather love the new cloudflare speedtest, because it tests

> >>>> a single TCP connection, rather than dozens, and at the same time
folk

> >>>> are complaining that it doesn't find the actual "speed!". yet... the

> >>>> test itself more closely emulates a user experience than
speedtest.net

> >>>> does. I am personally pretty convinced that the fewer numbers of
flows

> >>>> that a web page opens improves the likelihood of a good user

> >>>> experience, but lack data on it.

> >>>> To try to tackle the evaluation and calibration part, I've reached
out

> >>>> to all the new test designers in the hope that we could get together

> >>>> and produce a report of what each new test is actually doing. I've

> >>>> tweeted, linked in, emailed, and spammed every measurement list I
know

> >>>> of, and only to some response, please reach out to other test
designer

> >>>> folks and have them join the rpm email list?

> >>>> My principal kvetches in the new tests so far are:

> >>>> 0) None of the tests last long enough.

> >>>> Ideally there should be a mode where they at least run to "time of

> >>>> first loss", or periodically, just run longer than the

> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons

> >>>> there! It's really bad science to optimize the internet for 20

> >>>> seconds. It's like optimizing a car, to handle well, for just 20

> >>>> seconds.

> >>>> 1) Not testing up + down + ping at the same time

> >>>> None of the new tests actually test the same thing that the infamous

> >>>> rrul test does - all the others still test up, then down, and ping.
It

> >>>> was/remains my hope that the simpler parts of the flent test suite -

> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair

> >>>> tests would provide calibration to the test designers.

> >>>> we've got zillions of flent results in the archive published here:

> >>>> https://blog.cerowrt.org/post/found_in_flent/

> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.

> >>> 

> >>>> The new tests have all added up + ping and down + ping, but not up +

> >>>> down + ping. Why??

> >>>> The behaviors of what happens in that case are really non-intuitive,
I

> >>>> know, but... it's just one more phase to add to any one of those new

> >>>> tests. I'd be deliriously happy if someone(s) new to the field

> >>>> started doing that, even optionally, and boggled at how it defeated

> >>>> their assumptions.

> >>>> Among other things that would show...

> >>>> It's the home router industry's dirty secret that darn few "gigabit"

> >>>> home routers can actually forward in both directions at a gigabit.
I'd

> >>>> like to smash that perception thoroughly, but given our starting
point

> >>>> is a gigabit router was a "gigabit switch" - and historically been

> >>>> something that couldn't even forward at 200Mbit - we have a long way

> >>>> to go there.

> >>>> Only in the past year have non-x86 home routers appeared that could

> >>>> actually do a gbit in both directions.

> >>>> 2) Few are actually testing within-stream latency

> >>>> Apple's rpm project is making a stab in that direction. It looks

> >>>> highly likely, that with a little more work, crusader and

> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and

> >>>> markings, more directly. As for the rest... sampling TCP_INFO on

> >>>> windows, and Linux, at least, always appeared simple to me, but I'm

> >>>> discovering how hard it is by delving deep into the rust behind

> >>>> crusader.
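For what it's worth, on Linux the kernel's smoothed RTT for a connected TCP socket can be sampled from user space via TCP_INFO in a few lines of Python. The struct offset below follows the Linux tcp_info layout (kernel-dependent, and not portable to Windows or macOS), so treat this as a sketch of the general idea, not as what crusader actually does:

```python
import socket
import struct

def tcp_rtt_us(sock: socket.socket) -> int:
    """Return the kernel's smoothed RTT estimate (microseconds) for a
    connected TCP socket, read via the Linux TCP_INFO sockopt."""
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    # struct tcp_info: 8 leading byte-sized fields, then u32 fields;
    # tcpi_rtt is the 16th u32, i.e. byte offset 8 + 15*4 = 68.
    (rtt,) = struct.unpack_from("I", info, 68)
    return rtt

if __name__ == "__main__":
    srv = socket.create_server(("127.0.0.1", 0))
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    cli.sendall(b"x" * 1024)   # move a little data first
    conn.recv(2048)
    print("smoothed rtt (us):", tcp_rtt_us(cli))
```

Sampling this periodically during a transfer yields the within-stream latency series that most of the tests above still lack.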

> >>>> the goresponsiveness thing is also IMHO running WAY too many streams

> >>>> at the same time, I guess motivated by an attempt to have the test

> >>>> complete quickly?

> >>>> B) To try and tackle the validation problem:

> >>> 

> >>>> In the libreqos.io project we've established a testbed where tests
can

> >>>> be plunked through various ISP plan network emulations. It's here:

> >>>> https://payne.taht.net (run bandwidth test for what's currently
hooked

> >>>> up)

> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48
to

> >>>> leverage with that, so I don't have to nat the various emulations.

> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,

> >>>> to see more test designers setup a testbed like this to calibrate

> >>>> their own stuff.

> >>>> Presently we're able to test:

> >>>> flent

> >>>> netperf

> >>>> iperf2

> >>>> iperf3

> >>>> speedtest-cli

> >>>> crusader

> >>>> the broadband forum udp based test:

> >>>> https://github.com/BroadbandForum/obudpst

> >>>> trexx

> >>>> There's also a virtual machine setup that we can remotely drive a web

> >>>> browser from (but I didn't want to nat the results to the world) to

> >>>> test other web services.

> >>>> _______________________________________________

> >>>> Rpm mailing list

> >>>> Rpm@lists.bufferbloat.net

> >>>> https://lists.bufferbloat.net/listinfo/rpm

> >>> _______________________________________________

> >>> Starlink mailing list

> >>> Starlink@lists.bufferbloat.net

> >>> https://lists.bufferbloat.net/listinfo/starlink

> >> 

> >> _______________________________________________

> >> Starlink mailing list

> >> Starlink@lists.bufferbloat.net

> >> https://lists.bufferbloat.net/listinfo/starlink

>  

> _______________________________________________

> Starlink mailing list

> Starlink@lists.bufferbloat.net

> https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 45666 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12 18:02                   ` rjmcmahon
@ 2023-01-12 21:34                     ` Dick Roy
  0 siblings, 0 replies; 183+ messages in thread
From: Dick Roy @ 2023-01-12 21:34 UTC (permalink / raw)
  To: 'rjmcmahon', 'Sebastian Moeller'
  Cc: 'Rodney W. Grimes', mike.reynolds, 'libreqos',
	'David P. Reed', 'Rpm', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 13155 bytes --]

 

 

-----Original Message-----
From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com] 
Sent: Thursday, January 12, 2023 10:03 AM
To: Sebastian Moeller
Cc: Dick Roy; Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos;
David P. Reed; Rpm; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

For WiFi there is the TSF

 

https://en.wikipedia.org/wiki/Timing_synchronization_function

[RR] There is also a TimingAdvertisement function which can be used to
synchronize STAs to UTC time (or other specified time references; see the
802.11 standard for details, or ask me offline). It was added in the
802.11p amendment along with OCB operation, if you care to know :-)

 

We in test & measurement use that in our internal telemetry. The TSF of 

a Wifi device only needs frequency-sync for some things typically 

related to access to the medium. A phase locked loop does it. A device 

that decides to go to sleep, as an example, will also stop its TSF 

creating a non-linearity. It's difficult to synchronize it to the system 

clock or the GPS atomic clock - though we do this for internal testing 

reasons so it can be done.

 

What's mostly missing for T&M with WiFi is the GPS atomic clock as 

that's a convenient time domain to use as the canonical domain.

 

Bob

> Hi RR,

> 

> 

>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:

>> 

>> 

>> 

>> -----Original Message-----

>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On 

>> Behalf Of Sebastian Moeller via Starlink

>> Sent: Wednesday, January 11, 2023 12:01 PM

>> To: Rodney W. Grimes

>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; 

>> David P. Reed; Rpm; rjmcmahon; bloat

>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in 

>> USA

>> 

>> Hi Rodney,

>> 

>> 

>> 

>> 

>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes
<starlink@gndrsh.dnsmgr.net> wrote:

>> >

>> > Hello,

>> >

>> >     Yall can call me crazy if you want.. but... see below [RWG]

> >> >> Hi Bob,

>> >>

>> >>

>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:

>> >>>

>> >>> My biggest barrier is the lack of clock sync by the devices, i.e.
very limited support for PTP in data centers and in end devices. This limits
the ability to measure one way delays (OWD); most assume that OWD is 1/2 the
RTT, which typically is a mistake. We know this intuitively with airplane
flight times or even car commute times where the one way time is not 1/2 a
round trip time. Google maps & directions provide a time estimate for the
one way link. It doesn't compute a round trip and divide by two.
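A toy numeric illustration of that point, with made-up (hypothetical) one-way delays:

```python
# Hypothetical asymmetric path, e.g. a loaded uplink: dividing the RTT
# by two misestimates *both* one-way delays.
owd_up_ms = 40.0     # client -> server
owd_down_ms = 10.0   # server -> client
rtt_ms = owd_up_ms + owd_down_ms       # 50 ms

naive_owd_ms = rtt_ms / 2              # 25 ms: wrong in both directions
print(rtt_ms, naive_owd_ms)
```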

>> >>>

>> >>> For those that can get clock sync working, the iperf 2 --trip-times
options is useful.

>> >>

>> >>    [SM] +1; and yet even with unsynchronized clocks one can try to
measure how latency changes under load and that can be done per direction.
Sure this is far inferior to real reliably measured OWDs, but if life/the
internet deals you lemons....
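The per-direction trick works because a (constant) clock offset cancels when you subtract the idle from the loaded one-way timestamp difference; a minimal sketch with made-up numbers:

```python
# One-way "delay" measured with unsynchronized clocks: recv_time - send_time
# includes an unknown constant offset between the two clocks.
CLOCK_OFFSET = 123.456  # seconds; unknown in practice

def apparent_owd(true_owd):
    """What a receiver with an offset clock reports."""
    return true_owd + CLOCK_OFFSET

idle = apparent_owd(0.020)     # 20 ms true OWD, unloaded
loaded = apparent_owd(0.180)   # 180 ms true OWD under load (bufferbloat)

# The offset cancels in the difference: latency increase under load.
delta = loaded - idle
print(round(delta, 3))  # 0.16 -> 160 ms of induced queueing delay
```

This only measures the *change* in latency per direction, not the absolute OWD, which is exactly the limitation conceded above.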

>> >

>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back
and forth, for that matter any rate test, why not abuse some of that data
and add the fundamental NTP clock sync data and bidirectionally pass each
others concept of "current time".  IIRC (its been 25 years since I worked on
NTP at this level) you *should* be able to get a fairly accurate clock delta
between each end, and then use that info and time stamps in the data stream
to compute OWD's.  You need to put 4 time stamps in the packet, and with
that you can compute "offset".
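The four-timestamp computation RWG refers to is the standard NTP on-wire arithmetic (RFC 5905 style); a sketch with synthetic numbers:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client
    receive (t2/t3 on the server's clock). Returns the estimated clock
    offset and round-trip delay - exact when the path is symmetric,
    biased by half the asymmetry otherwise."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Synthetic example: server clock is +5 ahead, 10 units of delay each way.
offset, delay = ntp_offset_delay(t1=0, t2=15, t3=16, t4=21)
print(offset, delay)  # 5.0 20
```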

>> [RR] For this to work at a reasonable level of accuracy, the 

>> timestamping circuits on both ends need to be deterministic and 

>> repeatable as I recall. Any uncertainty in that process adds to 

>> synchronization errors/uncertainties.

>> 

>>       [SM] Nice idea. I would guess that all timeslot based access 

>> technologies (so starlink, docsis, GPON, LTE?) all distribute "high 

>> quality time" carefully to the "modems", so maybe all that would be 

>> needed is to expose that high quality time to the LAN side of those 

>> modems, dressed up as NTP server?

>> [RR] It's not that simple!  Distributing "high-quality time", i.e. 

>> "synchronizing all clocks" does not solve the communication problem in 

>> synchronous slotted MAC/PHYs!

> 

>     [SM] I happily believe you, but the same idea of "time slot" needs to

> be shared by all nodes, no? So the clocks need to be reasonably

> similar rate, aka synchronized (see below).

> 

> 

>>  All the technologies you mentioned above are essentially P2P, not 

>> intended for broadcast.  Point is, there is a point controller (aka 

>> PoC) often called a base station (eNodeB, gNodeB, ...) that actually 

>> "controls everything that is necessary to control" at the UE including 

>> time, frequency and sampling time offsets, and these are critical to 

>> get right if you want to communicate, and they are ALL subject to the 

>> laws of physics (cf. the speed of light)! Turns out that what is 

>> necessary for the system to function anywhere near capacity, is for 

>> all the clocks governing transmissions from the UEs to be 

>> "unsynchronized" such that all the UE transmissions arrive at the PoC 

>> at the same (prescribed) time!

> 

>     [SM] Fair enough. I would call clocks that are "in sync" albeit with

> individual offsets as synchronized, but I am a layman and that might

> sound offensively wrong to experts in the field. But even without the

> naming my point is that all systems that depend on some idea of shared

> time-base are halfway there of exposing that time to end users, by

> translating it into an NTP time source at the modem.

> 

> 

>> For some technologies, in particular 5G!, these considerations are 

>> ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you 

>> don't believe me! :-)

> 

>     [SM] Far be it from me not to believe you, so thanks for the pointers.

> Yet, I still think that unless different nodes of a shared segment

> move at significantly different speeds, that there should be a common

> "tick-duration" for all clocks even if each clock runs at an offset...

> (I naively would try to implement something like that by trying to

> fully synchronize clocks and maintain a local offset value to convert

> from "absolute" time to "network" time, but likely because coming from

> the outside I am blissfully unaware of the detail challenges that need

> to be solved).

> 

> Regards & Thanks

>     Sebastian

> 

> 

>> 

>> 

>> >

>> >>

>> >>

>> >>>

>> >>> --trip-times

>> >>> enable the measurement of end to end write to read latencies (client
and server clocks must be synchronized)

>> > [RWG] --clock-skew

>> >     enable the measurement of the wall clock difference between sender
and receiver

>> >

>> >>

>> >>    [SM] Sweet!

>> >>

>> >> Regards

>> >>    Sebastian

>> >>

>> >>>

>> >>> Bob

>> >>>> I have many kvetches about the new latency under load tests being

>> >>>> designed and distributed over the past year. I am delighted! that
they

>> >>>> are happening, but most really need third party evaluation, and

>> >>>> calibration, and a solid explanation of what network pathologies
they

>> >>>> do and don't cover. Also a RED team attitude towards them, as well
as

>> >>>> thinking hard about what you are not measuring (operations
research).

>> >>>> I actually rather love the new cloudflare speedtest, because it
tests

>> >>>> a single TCP connection, rather than dozens, and at the same time
folk

>> >>>> are complaining that it doesn't find the actual "speed!". yet... the

>> >>>> test itself more closely emulates a user experience than
speedtest.net

>> >>>> does. I am personally pretty convinced that the fewer numbers of
flows

>> >>>> that a web page opens improves the likelihood of a good user

>> >>>> experience, but lack data on it.

>> >>>> To try to tackle the evaluation and calibration part, I've reached
out

>> >>>> to all the new test designers in the hope that we could get together

>> >>>> and produce a report of what each new test is actually doing. I've

>> >>>> tweeted, linked in, emailed, and spammed every measurement list I
know

>> >>>> of, and only to some response, please reach out to other test
designer

>> >>>> folks and have them join the rpm email list?

>> >>>> My principal kvetches in the new tests so far are:

>> >>>> 0) None of the tests last long enough.

>> >>>> Ideally there should be a mode where they at least run to "time of

>> >>>> first loss", or periodically, just run longer than the

>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons

>> >>>> there! It's really bad science to optimize the internet for 20

>> >>>> seconds. It's like optimizing a car, to handle well, for just 20

>> >>>> seconds.

>> >>>> 1) Not testing up + down + ping at the same time

>> >>>> None of the new tests actually test the same thing that the infamous

>> >>>> rrul test does - all the others still test up, then down, and ping.
It

>> >>>> was/remains my hope that the simpler parts of the flent test suite -

>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair

>> >>>> tests would provide calibration to the test designers.

>> >>>> we've got zillions of flent results in the archive published here:

>> >>>> https://blog.cerowrt.org/post/found_in_flent/

>> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.

>> >>>

>> >>>> The new tests have all added up + ping and down + ping, but not up +

>> >>>> down + ping. Why??

>> >>>> The behaviors of what happens in that case are really non-intuitive,
I

>> >>>> know, but... it's just one more phase to add to any one of those new

>> >>>> tests. I'd be deliriously happy if someone(s) new to the field

>> >>>> started doing that, even optionally, and boggled at how it defeated

>> >>>> their assumptions.

>> >>>> Among other things that would show...

>> >>>> It's the home router industry's dirty secret that darn few "gigabit"

>> >>>> home routers can actually forward in both directions at a gigabit.
I'd

>> >>>> like to smash that perception thoroughly, but given our starting
point

>> >>>> was that a gigabit router was a "gigabit switch" - and had historically been

>> >>>> something that couldn't even forward at 200Mbit - we have a long way

>> >>>> to go there.

>> >>>> Only in the past year have non-x86 home routers appeared that could

>> >>>> actually do a gbit in both directions.

>> >>>> 2) Few are actually testing within-stream latency

>> >>>> Apple's rpm project is making a stab in that direction. It looks

>> >>>> highly likely, that with a little more work, crusader and

>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and

>> >>>> markings, more directly. As for the rest... sampling TCP_INFO on

>> >>>> windows, and Linux, at least, always appeared simple to me, but I'm

>> >>>> discovering how hard it is by delving deep into the rust behind

>> >>>> crusader.

>> >>>> the goresponsiveness thing is also IMHO running WAY too many streams

>> >>>> at the same time, I guess motivated by an attempt to have the test

>> >>>> complete quickly?

>> >>>> B) To try and tackle the validation problem:

>> >>>

>> >>>> In the libreqos.io project we've established a testbed where tests
can

>> >>>> be plunked through various ISP plan network emulations. It's here:

>> >>>> https://payne.taht.net (run bandwidth test for what's currently
hooked

>> >>>> up)

>> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48
to

>> >>>> leverage with that, so I don't have to nat the various emulations.

>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2
licensed,

>> >>>> to see more test designers setup a testbed like this to calibrate

>> >>>> their own stuff.

>> >>>> Presently we're able to test:

>> >>>> flent

>> >>>> netperf

>> >>>> iperf2

>> >>>> iperf3

>> >>>> speedtest-cli

>> >>>> crusader

>> >>>> the broadband forum udp based test:

>> >>>> https://github.com/BroadbandForum/obudpst

>> >>>> trexx

>> >>>> There's also a virtual machine setup that we can remotely drive a
web

>> >>>> browser from (but I didn't want to nat the results to the world) to

>> >>>> test other web services.

>> >>>> _______________________________________________

>> >>>> Rpm mailing list

>> >>>> Rpm@lists.bufferbloat.net

>> >>>> https://lists.bufferbloat.net/listinfo/rpm

>> >>> _______________________________________________

>> >>> Starlink mailing list

>> >>> Starlink@lists.bufferbloat.net

>> >>> https://lists.bufferbloat.net/listinfo/starlink

>> >>

>> >> _______________________________________________

>> >> Starlink mailing list

>> >> Starlink@lists.bufferbloat.net

>> >> https://lists.bufferbloat.net/listinfo/starlink

>> 

>> _______________________________________________

>> Starlink mailing list

>> Starlink@lists.bufferbloat.net

>> https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 47179 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12 17:49                 ` Robert McMahon
@ 2023-01-12 21:57                   ` Dick Roy
  2023-01-13  7:44                     ` Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: Dick Roy @ 2023-01-12 21:57 UTC (permalink / raw)
  To: 'Robert McMahon', 'Sebastian Moeller'
  Cc: mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 10313 bytes --]

FYI ...

 

https://www.fiercewireless.com/tech/cbrs-based-fwa-beats-starlink-performance-madden

 

Nothing earth-shaking :-)


RR

 

  _____  

From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Robert McMahon via Starlink
Sent: Thursday, January 12, 2023 9:50 AM
To: Sebastian Moeller
Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
P. Reed; Rpm; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi Sebastian,

You make a good point. What I did was issue a warning if the tool found it
was being CPU limited vs i/o limited. This indicates the i/o test likely is
inaccurate from an i/o perspective, and the results are suspect. It does
this crudely by comparing the cpu thread doing stats against the traffic
threads doing i/o, to see which thread is waiting on the others. There is no
attempt to assess the cpu load itself. So it's designed with a singular
purpose of making sure i/o threads only block on syscalls of write and read.
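One crude user-space approximation of that check is to compare time spent blocked in the write syscall against wall time. This is a sketch in that spirit, not iperf 2's actual heuristic (which compares its stats thread against its traffic threads):

```python
import socket
import threading
import time

def io_limited_fraction(sock, payload=b"x" * 65536, duration=0.5):
    """Fraction of elapsed time the sender spends inside sendall().
    Near 1.0 suggests the test is i/o-bound (good); much lower suggests
    the sender is CPU-limited and its throughput numbers are suspect."""
    in_syscall = 0.0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        t0 = time.perf_counter()
        sock.sendall(payload)
        in_syscall += time.perf_counter() - t0
    return in_syscall / (time.perf_counter() - start)

if __name__ == "__main__":
    a, b = socket.socketpair()
    # Drain in the background so the sender blocks on socket buffers,
    # not on a stalled peer.
    t = threading.Thread(
        target=lambda: [b.recv(65536) for _ in iter(int, 1)], daemon=True)
    t.start()
    print(f"i/o fraction: {io_limited_fraction(a, duration=0.2):.2f}")
```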

I probably should revisit this both in design and implementation. Thanks for
bringing it up and all input is truly appreciated. 

Bob

On Jan 12, 2023, at 12:14 AM, Sebastian Moeller <moeller0@gmx.de> wrote:

Hi Bob,






 On Jan 11, 2023, at 21:09, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:


 


 Iperf 2 is designed to measure network i/o. Note: It doesn't have to move
large amounts of data. It can support data profiles that don't drive TCP's
CCA as an example.


 


 Two things I've been asked for and avoided:


 


 1) Integrate clock sync into iperf's test traffic



 [SM] This I understand, measurement conditions can be unsuited for tight
time synchronization...






 2) Measure and output CPU usages



 [SM] This one puzzles me: as far as I understand, the only way to properly
diagnose network issues is to rule out other things, like CPU overload, that
can have symptoms similar to network issues. As an example, the cake qdisc
will, if CPU cycles become tight, first increase its internal queueing and
jitter (not consciously; it is just an observation that once cake does not
get access to the CPU as timely as it wants, queueing latency and variability
increase) and then later also show reduced throughput: similar symptoms can
arise along an e2e network path for completely different reasons,
e.g. lower-level retransmissions or a variable rate link. So I would think
that checking the CPU load at least coarsely would be within the scope of
network testing tools, no?





Regards


 Sebastian












 I think both of these are outside the scope of a tool designed to test
network i/o over sockets; rather, these should be developed & validated
independently of a network i/o tool.


 


 Clock error really isn't about amount/frequency of traffic but rather
getting a periodic high-quality reference. I tend to use GPS pulse per
second to lock the local system oscillator to. As David says, most every
modern handheld computer has the GPS chips to do this already. So to me it
seems more of a policy choice between data center operators and device mfgs
and less of a technical issue.


 


 Bob
 Hello,


  Yall can call me crazy if you want.. but... see below [RWG]
 Hi Bob,
 On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:





 My biggest barrier is the lack of clock sync by the devices, i.e. very
limited support for PTP in data centers and in end devices. This limits the
ability to measure one way delays (OWD); most assume that OWD is 1/2 the
RTT, which typically is a mistake. We know this intuitively with airplane
flight times or even car commute times where the one way time is not 1/2 a
round trip time. Google maps & directions provide a time estimate for the
one way link. It doesn't compute a round trip and divide by two.





 For those that can get clock sync working, the iperf 2 --trip-times options
is useful.
  [SM] +1; and yet even with unsynchronized clocks one can try to measure
how latency changes under load and that can be done per direction. Sure this
is far inferior to real reliably measured OWDs, but if life/the internet
deals you lemons....
 [RWG] iperf2/iperf3, etc are already moving large amounts of data


 back and forth, for that matter any rate test, why not abuse some of


 that data and add the fundamental NTP clock sync data and


 bidirectionally pass each others concept of "current time".  IIRC (its


 been 25 years since I worked on NTP at this level) you *should* be


 able to get a fairly accurate clock delta between each end, and then


 use that info and time stamps in the data stream to compute OWD's.


 You need to put 4 time stamps in the packet, and with that you can


 compute "offset".




 --trip-times


  enable the measurement of end to end write to read latencies (client and
server clocks must be synchronized)

 [RWG] --clock-skew


  enable the measurement of the wall clock difference between sender and
receiver
  [SM] Sweet!


 Regards


  Sebastian



 Bob
 I have many kvetches about the new latency under load tests being


 designed and distributed over the past year. I am delighted! that they


 are happening, but most really need third party evaluation, and


 calibration, and a solid explanation of what network pathologies they


 do and don't cover. Also a RED team attitude towards them, as well as


 thinking hard about what you are not measuring (operations research).


 I actually rather love the new cloudflare speedtest, because it tests


 a single TCP connection, rather than dozens, and at the same time folk


 are complaining that it doesn't find the actual "speed!". yet... the


 test itself more closely emulates a user experience than speedtest.net


 does. I am personally pretty convinced that the fewer numbers of flows


 that a web page opens improves the likelihood of a good user


 experience, but lack data on it.


 To try to tackle the evaluation and calibration part, I've reached out


 to all the new test designers in the hope that we could get together


 and produce a report of what each new test is actually doing. I've


 tweeted, linked in, emailed, and spammed every measurement list I know


 of, and only to some response, please reach out to other test designer


 folks and have them join the rpm email list?


 My principal kvetches in the new tests so far are:


 0) None of the tests last long enough.


 Ideally there should be a mode where they at least run to "time of


 first loss", or periodically, just run longer than the


 industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons


 there! It's really bad science to optimize the internet for 20


 seconds. It's like optimizing a car, to handle well, for just 20


 seconds.


 1) Not testing up + down + ping at the same time


 None of the new tests actually test the same thing that the infamous


 rrul test does - all the others still test up, then down, and ping. It


 was/remains my hope that the simpler parts of the flent test suite -


 such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair


 tests would provide calibration to the test designers.


 we've got zillions of flent results in the archive published here:


 https://blog.cerowrt.org/post/found_in_flent/


 ps. Misinformation about iperf 2 impacts my ability to do this.

 
 The new tests have all added up + ping and down + ping, but not up +


 down + ping. Why??


 The behaviors of what happens in that case are really non-intuitive, I


 know, but... it's just one more phase to add to any one of those new


 tests. I'd be deliriously happy if someone(s) new to the field


 started doing that, even optionally, and boggled at how it defeated


 their assumptions.


 Among other things that would show...


 It's the home router industry's dirty secret that darn few "gigabit"


 home routers can actually forward in both directions at a gigabit. I'd


 like to smash that perception thoroughly, but given our starting point


 was that a gigabit router was a "gigabit switch" - and had historically been


 something that couldn't even forward at 200Mbit - we have a long way


 to go there.


 Only in the past year have non-x86 home routers appeared that could


 actually do a gbit in both directions.


 2) Few are actually testing within-stream latency


 Apple's rpm project is making a stab in that direction. It looks


 highly likely, that with a little more work, crusader and


 go-responsiveness can finally start sampling the tcp RTT, loss and


 markings, more directly. As for the rest... sampling TCP_INFO on


 windows, and Linux, at least, always appeared simple to me, but I'm


 discovering how hard it is by delving deep into the rust behind


 crusader.


 the goresponsiveness thing is also IMHO running WAY too many streams


 at the same time, I guess motivated by an attempt to have the test


 complete quickly?


 B) To try and tackle the validation problem:

 
 In the libreqos.io project we've established a testbed where tests can


 be plunked through various ISP plan network emulations. It's here:


 https://payne.taht.net (run bandwidth test for what's currently hooked


 up)


 We could rather use an AS number and at least a ipv4/24 and ipv6/48 to


 leverage with that, so I don't have to nat the various emulations.


 (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,


 to see more test designers setup a testbed like this to calibrate


 their own stuff.


 Presently we're able to test:


 flent


 netperf


 iperf2


 iperf3


 speedtest-cli


 crusader


 the broadband forum udp based test:


 https://github.com/BroadbandForum/obudpst


 trexx


 There's also a virtual machine setup that we can remotely drive a web


 browser from (but I didn't want to nat the results to the world) to


 test other web services.





  _____  






 Rpm mailing list


 Rpm@lists.bufferbloat.net


 https://lists.bufferbloat.net/listinfo/rpm






  _____  






 Starlink mailing list


 Starlink@lists.bufferbloat.net


 https://lists.bufferbloat.net/listinfo/starlink





  _____  






 Starlink mailing list


 Starlink@lists.bufferbloat.net


 https://lists.bufferbloat.net/listinfo/starlink
 

[-- Attachment #2: Type: text/html, Size: 20462 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12 20:39                   ` Dick Roy
@ 2023-01-13  7:33                     ` Sebastian Moeller
  2023-01-13  8:26                       ` Dick Roy
  2023-01-13  7:40                     ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-13  7:33 UTC (permalink / raw)
  To: dickroy, Dick Roy
  Cc: 'Rodney W. Grimes', mike.reynolds, 'libreqos',
	'David P. Reed', 'Rpm', 'rjmcmahon',
	'bloat'

[-- Attachment #1: Type: text/plain, Size: 18467 bytes --]

Hi RR,

Thanks for the detailed response below; since my point is somewhat orthogonal I opted for top-posting.

Let me take a step back here and rephrase: synchronising clocks to within a range acceptable for the purpose at hand is neither rocket science nor witchcraft. For measuring internet traffic, 'millisecond' range seems acceptable; local networks can probably profit from finer time resolution. So I am not after, e.g., clock synchronisation good enough to participate in SDH/SONET. Heck, in the toy project I am active in we operate on load-dependent delay deltas, so we even ignore different time offsets and are tolerant to (mildly) different tick rates and clock skew, but it would certainly be nice to have some acceptable measure of UTC from endpoints, to be able to interpret timestamps as 'absolute'. Mind you, I am fine with them not being veridically absolute, just good enough for my measurement purpose, and I guess that should be within the range of the achievable. Heck, if all servers we query timestamps of were NTP-'synchronized' and followed the RFC recommendation to report timestamps in milliseconds past midnight UTC, I would be happy.

Regards
        Sebastian

On 12 January 2023 21:39:21 CET, Dick Roy <dickroy@alum.mit.edu> wrote:
>Hi Sebastian (et. al.),
>
> 
>
>[I'll comment up here instead of inline.]  
>
> 
>
>Let me start by saying that I have not been intimately involved with the
>IEEE 1588 effort (PTP); however, I was involved in the 802.11 efforts in a
>similar vein, just adding the wireless first-hop component and its effects
>on PTP.  
>
> 
>
>What was apparent from the outset was that there was a lack of understanding
>what the terms "to synchronize" or "to be synchronized" actually mean.  It's
>not trivial, because we live in a (approximately, that's another story!)
>4-D space-time continuum where the Lorentz metric plays a critical role.
>Therein, simultaneity (aka "things happening at the same time") means the
>"distance" between two such events is zero and that distance is given by
>sqrt(x^2 + y^2 + z^2 - (ct)^2) and the "thing happening" can be the tick of
>a clock somewhere. Now since everything is relative (time with respect to
>what? / location with respect to where?) it's pretty easy to see that "if
>you don't know where you are, you can't know what time it is!" (English
>sailors of the 18th century knew this well!) Add to this the fact that if
>everything were stationary, nothing would happen (as Einstein said "Nothing
>happens until something moves!"), special relativity also plays a role.
>Clocks on GPS satellites run approx. 7usecs/day slower than those on earth
>due to their "speed" (8700 mph roughly)! Then add the consequence that
>without mass we wouldn't exist (in these forms at least:-)), and
>gravitational effects (aka General Relativity) come into play. Those turn
>out to make clocks on GPS satellites run 45usec/day faster than those on
>earth!  The net effect is that GPS clocks run about 38usec/day faster than
>clocks on earth.  So what does it mean to "synchronize to GPS"?  Point is:
>it's a non-trivial question with a very complicated answer.  The reason it
>is important to get all this right is that the "thing" that ties time and
>space together" is the speed of light and that turns out to be a
>"foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means if I am
>uncertain about my location to say 300 meters, then I also am not sure what
>time it is to a usec AND vice-versa! 
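Doing the arithmetic on those two effects with the rounded figures quoted above (values as stated, not derived here from first principles):

```python
# Approximate figures as quoted above: special relativity (orbital
# velocity) slows GPS satellite clocks, general relativity (weaker
# gravity at altitude) speeds them up; the net is what GPS corrects for.
special_rel_us_per_day = -7    # slower: velocity time dilation
general_rel_us_per_day = +45   # faster: gravitational effect
net_us_per_day = special_rel_us_per_day + general_rel_us_per_day

# And the space<->time conversion: light covers roughly 300 m per microsecond.
c_m_per_s = 299_792_458
m_per_us = c_m_per_s * 1e-6

print(net_us_per_day, round(m_per_us))  # 38 300
```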
>
> 
>
>All that said, the simplest explanation of synchronization is probably: Two
>clocks are synchronized if, when they are brought (slowly) into physical
>proximity ("sat next to each other") in the same (quasi-)inertial frame and
>the same gravitational potential (not so obvious BTW; see the FYI below!),
>an observer of both would say "they are keeping time identically". Since
>this experiment is rarely possible, one can never be "sure" that his clock
>is synchronized to any other clock elsewhere. And what does it mean to say
>they "were synchronized" when brought together, but now they are not because
>they are now in different gravitational potentials! (FYI, there are land
>mine detectors being developed on this very principle! I know someone who
>actually worked on such a project!) 
>
> 
>
>This all gets even more complicated when dealing with large networks of
>networks in which the "speed of information transmission" can vary depending
>on the medium (cf. coaxial cables versus fiber versus microwave links!) In
>fact, the atmosphere is one of those media and variations therein result in
>the need for "GPS corrections" (cf. RTCM GPS correction messages, RTK, etc.)
>in order to get to sub-nsec/cm accuracy.  Point is if you have a set of
>nodes distributed across the country all with GPS and all "synchronized to
>GPS time", and a second identical set of nodes (with no GPS) instead
>connected with a network of cables and fiber links, all of different lengths
>and composition using different carrier frequencies (dielectric constants
>vary with frequency!) "synchronized" to some clock somewhere using NTP or
>PTP), the synchronization of the two sets will be different unless a common
>reference clock is used AND all the above effects are taken into account,
>and good luck with that! :-) 
>
> 
>
>In conclusion, if anyone tells you that clock synchronization in
>communication networks is simple ("Just use GPS!"), you should feel free to
>chuckle (under your breath if necessary:-)) 
>
> 
>
>Cheers,
>
> 
>
>RR
>
> 
>
> 
>
>  
>
> 
>
> 
>
> 
>
>-----Original Message-----
>From: Sebastian Moeller [mailto:moeller0@gmx.de] 
>Sent: Thursday, January 12, 2023 12:23 AM
>To: Dick Roy
>Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David P.
>Reed; Rpm; rjmcmahon; bloat
>Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
>
> 
>
>Hi RR,
>
> 
>
> 
>
>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:
>
>> 
>
>>  
>
>>  
>
>> -----Original Message-----
>
>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
>Of Sebastian Moeller via Starlink
>
>> Sent: Wednesday, January 11, 2023 12:01 PM
>
>> To: Rodney W. Grimes
>
>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
>P. Reed; Rpm; rjmcmahon; bloat
>
>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
>
>>  
>
>> Hi Rodney,
>
>>  
>
>>  
>
>>  
>
>>  
>
>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net>
>wrote:
>
>> > 
>
>> > Hello,
>
>> > 
>
>> >     Yall can call me crazy if you want.. but... see below [RWG]
>
>> >> Hi Bob,
>
>> >> 
>
>> >> 
>
>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
><starlink@lists.bufferbloat.net> wrote:
>
>> >>> 
>
>> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very
>limited support for PTP in data centers and in end devices. This limits the
>ability to measure one way delays (OWD) and most assume that OWD is 1/2 of
>RTT, which typically is a mistake. We know this intuitively with airplane
>flight times or even car commute times where the one way time is not 1/2 a
>round trip time. Google maps & directions provide a time estimate for the
>one way link. It doesn't compute a round trip and divide by two.
>
>> >>> 
>
>> >>> For those that can get clock sync working, the iperf 2 --trip-times
>options is useful.
>
>> >> 
>
>> >>    [SM] +1; and yet even with unsynchronized clocks one can try to
>measure how latency changes under load and that can be done per direction.
>Sure this is far inferior to real reliably measured OWDs, but if life/the
>internet deals you lemons....
>
>> > 
>
>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back
>and forth, for that matter any rate test, why not abuse some of that data
>and add the fundamental NTP clock sync data and bidirectionally pass each
>other's concept of "current time".  IIRC (it's been 25 years since I worked on
>NTP at this level) you *should* be able to get a fairly accurate clock delta
>between each end, and then use that info and time stamps in the data stream
>to compute OWD's.  You need to put 4 time stamps in the packet, and with
>that you can compute "offset".
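The four-timestamp exchange RWG recalls can be sketched as follows. This is a minimal illustration of the classic NTP on-wire computation; the example numbers are invented, and the offset estimate assumes a symmetric path delay (exactly the assumption that breaks on asymmetric links):

```python
# A minimal sketch of the four-timestamp exchange RWG recalls from NTP.
# t1 = client send, t2 = server receive, t3 = server send, t4 = client
# receive (t1/t4 on the client clock, t2/t3 on the server clock).

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip wire delay
    return offset, delay

# Example: server clock runs 5 units ahead, 2 units of delay each way.
# Client sends at t1=10; server stamps t2=17 and t3=18; reply lands at t4=15.
offset, delay = ntp_offset_delay(10, 17, 18, 15)
print(offset, delay)  # 5.0 4
```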
>
>> [RR] For this to work at a reasonable level of accuracy, the timestamping
>circuits on both ends need to be deterministic and repeatable as I recall.
>Any uncertainty in that process adds to synchronization
>errors/uncertainties.
>
>>  
>
>>       [SM] Nice idea. I would guess that all timeslot based access
>technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality
>time" carefully to the "modems", so maybe all that would be needed is to
>expose that high quality time to the LAN side of those modems, dressed up as
>NTP server?
>
>> [RR] It's not that simple!  Distributing "high-quality time", i.e.
>"synchronizing all clocks" does not solve the communication problem in
>synchronous slotted MAC/PHYs!
>
> 
>
>      [SM] I happily believe you, but the same idea of "time slot" needs to
>be shared by all nodes, no? So the clocks need to be reasonably similar
>rate, aka synchronized (see below).
>
> 
>
> 
>
>>  All the technologies you mentioned above are essentially P2P, not
>intended for broadcast.  Point is, there is a point controller (aka PoC)
>often called a base station (eNodeB, gNodeB, …) that actually "controls
>everything that is necessary to control" at the UE including time, frequency
>and sampling time offsets, and these are critical to get right if you want
>to communicate, and they are ALL subject to the laws of physics (cf. the
>speed of light)! Turns out that what is necessary for the system to function
>anywhere near capacity, is for all the clocks governing transmissions from
>the UEs to be "unsynchronized" such that all the UE transmissions arrive at
>the PoC at the same (prescribed) time!
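RR's point about "unsynchronized" UE transmit clocks is essentially timing-advance arithmetic: each UE starts early by its own propagation delay so all bursts land at the PoC at the prescribed instant. A sketch with invented distances and times:

```python
# Illustrative timing-advance arithmetic: each UE starts its uplink
# transmission early by its own propagation delay so that all bursts
# arrive at the point controller at the prescribed instant.
# Distances and times are invented for the example.

C_M_PER_US = 300.0  # speed of light, roughly 300 m per microsecond

def tx_start_us(prescribed_arrival_us, distance_m):
    return prescribed_arrival_us - distance_m / C_M_PER_US

# Two UEs 3 km and 30 km from the PoC, both targeting arrival at t = 1000 us:
print(tx_start_us(1000, 3_000))   # 990.0 -> starts 10 us early
print(tx_start_us(1000, 30_000))  # 900.0 -> starts 100 us early
```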
>
> 
>
>      [SM] Fair enough. I would call clocks that are "in sync" albeit with
>individual offsets as synchronized, but I am a layman and that might sound
>offensively wrong to experts in the field. But even without the naming my
>point is that all systems that depend on some idea of shared time-base are
>halfway there to exposing that time to end users, by "translating" it into an
>NTP time source at the modem.
>
> 
>
> 
>
>> For some technologies, in particular 5G!, these considerations are
>ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you don't
>believe me! :-)
>
> 
>
>      [SM] Far be it from me not to believe you, so thanks for the pointers.
>Yet, I still think that unless different nodes of a shared segment move at
>significantly different speeds, that there should be a common
>"tick-duration" for all clocks even if each clock runs at an offset... (I
>naively would try to implement something like that by trying to fully
>synchronize clocks and maintain a local offset value to convert from
>"absolute" time to "network" time, but likely because coming from the
>outside I am blissfully unaware of the detail challenges that need to be
>solved).
>
> 
>
>Regards & Thanks
>
>      Sebastian
>
> 
>
> 
>
>>  
>
>>  
>
>> > 
>
>> >> 
>
>> >> 
>
>> >>> 
>
>> >>> --trip-times
>
>> >>> enable the measurement of end to end write to read latencies (client
>and server clocks must be synchronized)
>
>> > [RWG] --clock-skew
>
>> >     enable the measurement of the wall clock difference between sender
>and receiver
>
>> > 
>
>> >> 
>
>> >>    [SM] Sweet!
>
>> >> 
>
>> >> Regards
>
>> >>    Sebastian
>
>> >> 
>
>> >>> 
>
>> >>> Bob
>
>> >>>> I have many kvetches about the new latency under load tests being
>
>> >>>> designed and distributed over the past year. I am delighted! that
>they
>
>> >>>> are happening, but most really need third party evaluation, and
>
>> >>>> calibration, and a solid explanation of what network pathologies they
>
>> >>>> do and don't cover. Also a RED team attitude towards them, as well as
>
>> >>>> thinking hard about what you are not measuring (operations research).
>
>> >>>> I actually rather love the new cloudflare speedtest, because it tests
>
>> >>>> a single TCP connection, rather than dozens, and at the same time
>folk
>
>> >>>> are complaining that it doesn't find the actual "speed!". yet... the
>
>> >>>> test itself more closely emulates a user experience than
>speedtest.net
>
>> >>>> does. I am personally pretty convinced that the fewer numbers of
>flows
>
>> >>>> that a web page opens improves the likelihood of a good user
>
>> >>>> experience, but lack data on it.
>
>> >>>> To try to tackle the evaluation and calibration part, I've reached
>out
>
>> >>>> to all the new test designers in the hope that we could get together
>
>> >>>> and produce a report of what each new test is actually doing. I've
>
>> >>>> tweeted, linked in, emailed, and spammed every measurement list I
>know
>
>> >>>> of, and only to some response, please reach out to other test
>designer
>
>> >>>> folks and have them join the rpm email list?
>
>> >>>> My principal kvetches in the new tests so far are:
>
>> >>>> 0) None of the tests last long enough.
>
>> >>>> Ideally there should be a mode where they at least run to "time of
>
>> >>>> first loss", or periodically, just run longer than the
>
>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>
>> >>>> there! It's really bad science to optimize the internet for 20
>
>> >>>> seconds. It's like optimizing a car, to handle well, for just 20
>
>> >>>> seconds.
>
>> >>>> 1) Not testing up + down + ping at the same time
>
>> >>>> None of the new tests actually test the same thing that the infamous
>
>> >>>> rrul test does - all the others still test up, then down, and ping.
>It
>
>> >>>> was/remains my hope that the simpler parts of the flent test suite -
>
>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>
>> >>>> tests would provide calibration to the test designers.
>
>> >>>> we've got zillions of flent results in the archive published here:
>
>> >>>> https://blog.cerowrt.org/post/found_in_flent/
>
>> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
>
>> >>> 
>
>> >>>> The new tests have all added up + ping and down + ping, but not up +
>
>> >>>> down + ping. Why??
>
>> >>>> The behaviors of what happens in that case are really non-intuitive,
>I
>
>> >>>> know, but... it's just one more phase to add to any one of those new
>
>> >>>> tests. I'd be deliriously happy if someone(s) new to the field
>
>> >>>> started doing that, even optionally, and boggled at how it defeated
>
>> >>>> their assumptions.
>
>> >>>> Among other things that would show...
>
>> >>>> It's the home router industry's dirty secret that darn few "gigabit"
>
>> >>>> home routers can actually forward in both directions at a gigabit.
>I'd
>
>> >>>> like to smash that perception thoroughly, but given our starting
>point
>
>> >>>> is a gigabit router was a "gigabit switch" - and historically been
>
>> >>>> something that couldn't even forward at 200Mbit - we have a long way
>
>> >>>> to go there.
>
>> >>>> Only in the past year have non-x86 home routers appeared that could
>
>> >>>> actually do a gbit in both directions.
>
>> >>>> 2) Few are actually testing within-stream latency
>
>> >>>> Apple's rpm project is making a stab in that direction. It looks
>
>> >>>> highly likely, that with a little more work, crusader and
>
>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and
>
>> >>>> markings, more directly. As for the rest... sampling TCP_INFO on
>
>> >>>> windows, and Linux, at least, always appeared simple to me, but I'm
>
>> >>>> discovering how hard it is by delving deep into the rust behind
>
>> >>>> crusader.
>
>> >>>> the goresponsiveness thing is also IMHO running WAY too many streams
>
>> >>>> at the same time, I guess motivated by an attempt to have the test
>
>> >>>> complete quickly?
>
>> >>>> B) To try and tackle the validation problem:
>
>> >>> 
>
>> >>>> In the libreqos.io project we've established a testbed where tests
>can
>
>> >>>> be plunked through various ISP plan network emulations. It's here:
>
>> >>>> https://payne.taht.net (run bandwidth test for what's currently
>hooked
>
>> >>>> up)
>
>> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48
>to
>
>> >>>> leverage with that, so I don't have to nat the various emulations.
>
>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>
>> >>>> to see more test designers setup a testbed like this to calibrate
>
>> >>>> their own stuff.
>
>> >>>> Presently we're able to test:
>
>> >>>> flent
>
>> >>>> netperf
>
>> >>>> iperf2
>
>> >>>> iperf3
>
>> >>>> speedtest-cli
>
>> >>>> crusader
>
>> >>>> the broadband forum udp based test:
>
>> >>>> https://github.com/BroadbandForum/obudpst
>
>> >>>> trexx
>
>> >>>> There's also a virtual machine setup that we can remotely drive a web
>
>> >>>> browser from (but I didn't want to nat the results to the world) to
>
>> >>>> test other web services.
>
>> >>>> _______________________________________________
>
>> >>>> Rpm mailing list
>
>> >>>> Rpm@lists.bufferbloat.net
>
>> >>>> https://lists.bufferbloat.net/listinfo/rpm
>
>> >>> _______________________________________________
>
>> >>> Starlink mailing list
>
>> >>> Starlink@lists.bufferbloat.net
>
>> >>> https://lists.bufferbloat.net/listinfo/starlink
>
>> >> 
>
>> >> _______________________________________________
>
>> >> Starlink mailing list
>
>> >> Starlink@lists.bufferbloat.net
>
>> >> https://lists.bufferbloat.net/listinfo/starlink
>
>>  
>
>> _______________________________________________
>
>> Starlink mailing list
>
>> Starlink@lists.bufferbloat.net
>
>> https://lists.bufferbloat.net/listinfo/starlink
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 47677 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12 20:39                   ` Dick Roy
  2023-01-13  7:33                     ` Sebastian Moeller
@ 2023-01-13  7:40                     ` rjmcmahon
  2023-01-13  8:10                       ` Dick Roy
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-01-13  7:40 UTC (permalink / raw)
  To: dickroy
  Cc: 'Sebastian Moeller', 'Rodney W. Grimes',
	mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

Hi RR,

I believe quality GPS chips compensate for relativity in pulse per 
second which is needed to get position accuracy.

Bob
> Hi Sebastian (et. al.),
> 
> [I'll comment up here instead of inline.]
> 
> Let me start by saying that I have not been intimately involved with
> the IEEE 1588 effort (PTP), however I was involved in the 802.11
> efforts along a similar vein, just adding the wireless first hop
> component and it's effects on PTP.
> 
> What was apparent from the outset was that there was a lack of
> understanding what the terms "to synchronize" or "to be synchronized"
> actually mean.  It's not trivial … because we live in a
> (approximately, that's another story!) 4-D space-time continuum where
> the Lorentz metric plays a critical role.  Therein, simultaneity (aka
> "things happening at the same time") means the "distance" between two
> such events is zero and that distance is given by sqrt(x^2 + y^2 + z^2
> - (ct)^2) and the "thing happening" can be the tick of a clock
> somewhere. Now since everything is relative (time with respect to
> what? / location with respect to where?) it's pretty easy to see that
> "if you don't know where you are, you can't know what time it is!"
> (English sailors of the 18th century knew this well!) Add to this the
> fact that if everything were stationary, nothing would happen (as
> Einstein said "Nothing happens until something moves!"), special
> relativity also plays a role.  Clocks on GPS satellites run approx.
> 7usecs/day slower than those on earth due to their "speed" (8700 mph
> roughly)! Then add the consequence that without mass we wouldn't exist
> (in these forms at least :-)), and gravitational effects (aka General
> Relativity) come into play. Those turn out to make clocks on GPS
> satellites run 45usec/day faster than those on earth!  The net effect
> is that GPS clocks run about 38usec/day faster than clocks on earth.
> So what does it mean to "synchronize to GPS"?  Point is: it's a
> non-trivial question with a very complicated answer.  The reason it is
> important to get all this right is that the "thing that ties time and
> space together" is the speed of light and that turns out to be a
> "foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means if
> I am uncertain about my location to say 300 meters, then I also am not
> sure what time it is to a usec AND vice-versa!
> 
> All that said, the simplest explanation of synchronization is
> probably: Two clocks are synchronized if, when they are brought
> (slowly) into physical proximity ("sat next to each other") in the
> same (quasi-)inertial frame and the same gravitational potential (not
> so obvious BTW … see the FYI below!), an observer of both would say
> "they are keeping time identically". Since this experiment is rarely
> possible, one can never be "sure" that his clock is synchronized to
> any other clock elsewhere. And what does it mean to say they "were
> synchronized" when brought together, but now they are not because they
> are now in different gravitational potentials! (FYI, there are land
> mine detectors being developed on this very principle! I know someone
> who actually worked on such a project!)
> 
> This all gets even more complicated when dealing with large networks
> of networks in which the "speed of information transmission" can vary
> depending on the medium (cf. coaxial cables versus fiber versus
> microwave links!) In fact, the atmosphere is one of those media and
> variations therein result in the need for "GPS corrections" (cf. RTCM
> GPS correction messages, RTK, etc.) in order to get to sub-nsec/cm
> accuracy.  Point is if you have a set of nodes distributed across the
> country all with GPS and all "synchronized to GPS time", and a second
> identical set of nodes (with no GPS) instead connected with a network
> of cables and fiber links, all of different lengths and composition
> using different carrier frequencies (dielectric constants vary with
> frequency!) "synchronized" to some clock somewhere using NTP or PTP),
> the synchronization of the two sets will be different unless a common
> reference clock is used AND all the above effects are taken into
> account, and good luck with that! :-)
> 
> In conclusion, if anyone tells you that clock synchronization in
> communication networks is simple ("Just use GPS!"), you should feel
> free to chuckle (under your breath if necessary :-))
> 
> Cheers,
> 
> RR
> 
> -----Original Message-----
> From: Sebastian Moeller [mailto:moeller0@gmx.de]
> Sent: Thursday, January 12, 2023 12:23 AM
> To: Dick Roy
> Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David
> P. Reed; Rpm; rjmcmahon; bloat
> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in
> USA
> 
> Hi RR,
> 
>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:
> 
>> 
> 
>> 
> 
>> 
> 
>> -----Original Message-----
> 
>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On
> Behalf Of Sebastian Moeller via Starlink
> 
>> Sent: Wednesday, January 11, 2023 12:01 PM
> 
>> To: Rodney W. Grimes
> 
>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos;
> David P. Reed; Rpm; rjmcmahon; bloat
> 
>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers
> in USA
> 
>> 
> 
>> Hi Rodney,
> 
>> 
> 
>> 
> 
>> 
> 
>> 
> 
>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes
> <starlink@gndrsh.dnsmgr.net> wrote:
> 
>> >
> 
>> > Hello,
> 
>> >
> 
>> >     Yall can call me crazy if you want.. but... see below [RWG]
> 
>> >> Hi Bob,
> 
>> >>
> 
>> >>
> 
>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> 
>> >>>
> 
>> >>> My biggest barrier is the lack of clock sync by the devices,
> i.e. very limited support for PTP in data centers and in end devices.
> This limits the ability to measure one way delays (OWD) and most
> assume that OWD is 1/2 of RTT, which typically is a mistake. We know
> this intuitively with airplane flight times or even car commute times
> where the one way time is not 1/2 a round trip time. Google maps &
> directions provide a time estimate for the one way link. It doesn't
> compute a round trip and divide by two.
> 
>> >>>
> 
>> >>> For those that can get clock sync working, the iperf 2
> --trip-times options is useful.
> 
>> >>
> 
>> >>    [SM] +1; and yet even with unsynchronized clocks one can try
> to measure how latency changes under load and that can be done per
> direction. Sure this is far inferior to real reliably measured OWDs,
> but if life/the internet deals you lemons....
> 
>> >
> 
>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data
> back and forth, for that matter any rate test, why not abuse some of
> that data and add the fundamental NTP clock sync data and
> bidirectionally pass each other's concept of "current time".  IIRC (it's
> been 25 years since I worked on NTP at this level) you *should* be
> able to get a fairly accurate clock delta between each end, and then
> use that info and time stamps in the data stream to compute OWD's.
> You need to put 4 time stamps in the packet, and with that you can
> compute "offset".
> 
>> [RR] For this to work at a reasonable level of accuracy, the
> timestamping circuits on both ends need to be deterministic and
> repeatable as I recall. Any uncertainty in that process adds to
> synchronization errors/uncertainties.
> 
>> 
> 
>>       [SM] Nice idea. I would guess that all timeslot based access
> technologies (so starlink, docsis, GPON, LTE?) all distribute "high
> quality time" carefully to the "modems", so maybe all that would be
> needed is to expose that high quality time to the LAN side of those
> modems, dressed up as NTP server?
> 
>> [RR] It's not that simple!  Distributing "high-quality time", i.e.
> "synchronizing all clocks" does not solve the communication problem in
> synchronous slotted MAC/PHYs!
> 
>       [SM] I happily believe you, but the same idea of "time slot"
> needs to be shared by all nodes, no? So the clocks need to be
> reasonably similar rate, aka synchronized (see below).
> 
>>  All the technologies you mentioned above are essentially P2P, not
> intended for broadcast.  Point is, there is a point controller (aka
> PoC) often called a base station (eNodeB, gNodeB, …) that actually
> "controls everything that is necessary to control" at the UE including
> time, frequency and sampling time offsets, and these are critical to
> get right if you want to communicate, and they are ALL subject to the
> laws of physics (cf. the speed of light)! Turns out that what is
> necessary for the system to function anywhere near capacity, is for
> all the clocks governing transmissions from the UEs to be
> "unsynchronized" such that all the UE transmissions arrive at the PoC
> at the same (prescribed) time!
> 
>       [SM] Fair enough. I would call clocks that are "in sync" albeit
> with individual offsets as synchronized, but I am a layman and that
> might sound offensively wrong to experts in the field. But even
> without the naming my point is that all systems that depend on some
> idea of shared time-base are halfway there to exposing that time to
> end users, by "translating" it into an NTP time source at the modem.
> 
>> For some technologies, in particular 5G!, these considerations are
> ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you
> don't believe me! :-)
> 
>       [SM] Far be it from me not to believe you, so thanks for the
> pointers. Yet, I still think that unless different nodes of a shared
> segment move at significantly different speeds, that there should be a
> common "tick-duration" for all clocks even if each clock runs at an
> offset... (I naively would try to implement something like that by
> trying to fully synchronize clocks and maintain a local offset value
> to convert from "absolute" time to "network" time, but likely because
> coming from the outside I am blissfully unaware of the detail
> challenges that need to be solved).
> 
> Regards & Thanks
> 
>       Sebastian
> 
>> 
> 
>> 
> 
>> >
> 
>> >>
> 
>> >>
> 
>> >>>
> 
>> >>> --trip-times
> 
>> >>> enable the measurement of end to end write to read latencies
> (client and server clocks must be synchronized)
> 
>> > [RWG] --clock-skew
> 
>> >     enable the measurement of the wall clock difference between
> sender and receiver
> 
>> >
> 
>> >>
> 
>> >>    [SM] Sweet!
> 
>> >>
> 
>> >> Regards
> 
>> >>    Sebastian
> 
>> >>
> 
>> >>>
> 
>> >>> Bob
> 
>> >>>> I have many kvetches about the new latency under load tests
> being
> 
>> >>>> designed and distributed over the past year. I am delighted!
> that they
> 
>> >>>> are happening, but most really need third party evaluation, and
> 
> 
>> >>>> calibration, and a solid explanation of what network
> pathologies they
> 
>> >>>> do and don't cover. Also a RED team attitude towards them, as
> well as
> 
>> >>>> thinking hard about what you are not measuring (operations
> research).
> 
>> >>>> I actually rather love the new cloudflare speedtest, because it
> tests
> 
>> >>>> a single TCP connection, rather than dozens, and at the same
> time folk
> 
>> >>>> are complaining that it doesn't find the actual "speed!".
> yet... the
> 
>> >>>> test itself more closely emulates a user experience than
> speedtest.net
> 
>> >>>> does. I am personally pretty convinced that the fewer numbers
> of flows
> 
>> >>>> that a web page opens improves the likelihood of a good user
> 
>> >>>> experience, but lack data on it.
> 
>> >>>> To try to tackle the evaluation and calibration part, I've
> reached out
> 
>> >>>> to all the new test designers in the hope that we could get
> together
> 
>> >>>> and produce a report of what each new test is actually doing.
> I've
> 
>> >>>> tweeted, linked in, emailed, and spammed every measurement list
> I know
> 
>> >>>> of, and only to some response, please reach out to other test
> designer
> 
>> >>>> folks and have them join the rpm email list?
> 
>> >>>> My principal kvetches in the new tests so far are:
> 
>> >>>> 0) None of the tests last long enough.
> 
>> >>>> Ideally there should be a mode where they at least run to "time
> of
> 
>> >>>> first loss", or periodically, just run longer than the
> 
>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be
> dragons
> 
>> >>>> there! It's really bad science to optimize the internet for 20
> 
>> >>>> seconds. It's like optimizing a car, to handle well, for just
> 20
> 
>> >>>> seconds.
> 
>> >>>> 1) Not testing up + down + ping at the same time
> 
>> >>>> None of the new tests actually test the same thing that the
> infamous
> 
>> >>>> rrul test does - all the others still test up, then down, and
> ping. It
> 
>> >>>> was/remains my hope that the simpler parts of the flent test
> suite -
> 
>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the
> rtt_fair
> 
>> >>>> tests would provide calibration to the test designers.
> 
>> >>>> we've got zillions of flent results in the archive published
> here:
> 
>> >>>> https://blog.cerowrt.org/post/found_in_flent/
> 
>> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.
> 
> 
>> >>>
> 
>> >>>> The new tests have all added up + ping and down + ping, but not
> up +
> 
>> >>>> down + ping. Why??
> 
>> >>>> The behaviors of what happens in that case are really
> non-intuitive, I
> 
>> >>>> know, but... it's just one more phase to add to any one of
> those new
> 
>> >>>> tests. I'd be deliriously happy if someone(s) new to the field
> 
>> >>>> started doing that, even optionally, and boggled at how it
> defeated
> 
>> >>>> their assumptions.
> 
>> >>>> Among other things that would show...
> 
>> >>>> It's the home router industry's dirty secret that darn few
> "gigabit"
> 
>> >>>> home routers can actually forward in both directions at a
> gigabit. I'd
> 
>> >>>> like to smash that perception thoroughly, but given our
> starting point
> 
>> >>>> is a gigabit router was a "gigabit switch" - and historically
> been
> 
>> >>>> something that couldn't even forward at 200Mbit - we have a
> long way
> 
>> >>>> to go there.
> 
>> >>>> Only in the past year have non-x86 home routers appeared that
> could
> 
>> >>>> actually do a gbit in both directions.
> 
>> >>>> 2) Few are actually testing within-stream latency
> 
>> >>>> Apple's rpm project is making a stab in that direction. It
> looks
> 
>> >>>> highly likely, that with a little more work, crusader and
> 
>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss
> and
> 
>> >>>> markings, more directly. As for the rest... sampling TCP_INFO
> on
> 
>> >>>> windows, and Linux, at least, always appeared simple to me, but
> I'm
> 
>> >>>> discovering how hard it is by delving deep into the rust behind
> 
> 
>> >>>> crusader.
> 
>> >>>> the goresponsiveness thing is also IMHO running WAY too many
> streams
> 
>> >>>> at the same time, I guess motivated by an attempt to have the
> test
> 
>> >>>> complete quickly?
> 
>> >>>> B) To try and tackle the validation problem:
> 
>> >>>
> 
>> >>>> In the libreqos.io project we've established a testbed where
> tests can
> 
>> >>>> be plunked through various ISP plan network emulations. It's
> here:
> 
>> >>>> https://payne.taht.net (run bandwidth test for what's currently
> hooked
> 
>> >>>> up)
> 
>> >>>> We could rather use an AS number and at least a ipv4/24 and
> ipv6/48 to
> 
>> >>>> leverage with that, so I don't have to nat the various
> emulations.
> 
>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2
> licensed,
> 
>> >>>> to see more test designers setup a testbed like this to
> calibrate
> 
>> >>>> their own stuff.
> 
>> >>>> Presently we're able to test:
> 
>> >>>> flent
> 
>> >>>> netperf
> 
>> >>>> iperf2
> 
>> >>>> iperf3
> 
>> >>>> speedtest-cli
> 
>> >>>> crusader
> 
>> >>>> the broadband forum udp based test:
> 
>> >>>> https://github.com/BroadbandForum/obudpst
> 
>> >>>> trexx
> 
>> >>>> There's also a virtual machine setup that we can remotely drive
> a web
> 
>> >>>> browser from (but I didn't want to nat the results to the
> world) to
>> >>>> test other web services.
> 
>> >>>> _______________________________________________
> 
>> >>>> Rpm mailing list
> 
>> >>>> Rpm@lists.bufferbloat.net
> 
>> >>>> https://lists.bufferbloat.net/listinfo/rpm
> 
>> >>> _______________________________________________
> 
>> >>> Starlink mailing list
> 
>> >>> Starlink@lists.bufferbloat.net
> 
>> >>> https://lists.bufferbloat.net/listinfo/starlink
> 
>> >>
> 
>> >> _______________________________________________
> 
>> >> Starlink mailing list
> 
>> >> Starlink@lists.bufferbloat.net
> 
>> >> https://lists.bufferbloat.net/listinfo/starlink
> 
>> 
> 
>> _______________________________________________
> 
>> Starlink mailing list
> 
>> Starlink@lists.bufferbloat.net
> 
>> https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-12 21:57                   ` Dick Roy
@ 2023-01-13  7:44                     ` Sebastian Moeller
  2023-01-13  8:01                       ` Dick Roy
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-01-13  7:44 UTC (permalink / raw)
  To: dickroy, Dick Roy, 'Robert McMahon'
  Cc: mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

Hi RR

On 12 January 2023 22:57:32 CET, Dick Roy <dickroy@alum.mit.edu> wrote:
>FYI .
>
> 
>
>https://www.fiercewireless.com/tech/cbrs-based-fwa-beats-starlink-performanc
>e-madden
>

[SM] He is so close:
'Speed tests don’t tell us much about the capacity of the network, or the reliability of the network, or the true latency with larger packet sizes. Packet loss testing can help to fill in key missing information to give the end customer the smooth experience they’re looking for.'
and
'Packets received over 250 ms latency are considered too late to be useful for video conferencing.'

He actually reports both loss numbers and delay > 250ms, so in spite of arguing that loss is the relevant metric he already dips his toes into the latency issue... I wonder whether his view will refine over time now that he apparently moved from a link with 8% packet loss to one with a more sane 0.1% loss rate (no idea how he measured the loss rate though, or latency). I guess this shows that there is no single solution for all links; it really matters where one starts, and which of throughput, delay, and loss is the most painful and hence the dimension in need of a fix first.

Regards
        Sebastian



> 
>
>Nothing earth-shaking :-)
>
>
>RR
>
> 
>
>  _____  
>
>From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
>Robert McMahon via Starlink
>Sent: Thursday, January 12, 2023 9:50 AM
>To: Sebastian Moeller
>Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
>P. Reed; Rpm; bloat
>Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
>
> 
>
>Hi Sebastien,
>
>You make a good point. What I did was issue a warning if the tool found it
>was being CPU limited vs i/o limited. This indicates the i/o test likely is
>inaccurate from an i/o perspective, and the results are suspect. It does
>this crudely by comparing the cpu thread doing stats against the traffic
>threads doing i/o, i.e. checking which thread is waiting on the others. There is no
>attempt to assess the cpu load itself. So it's designed with a singular
>purpose of making sure i/o threads only block on syscalls of write and read.
>
>I probably should revisit this both in design and implementation. Thanks for
>bringing it up and all input is truly appreciated. 
>
>Bob
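The crude check Bob describes (decide whether the run was CPU-bound rather than i/o-bound) can be sketched as follows. This is an illustrative Python sketch, not iperf 2's actual implementation; the function name and threshold are invented for the sketch, and it looks at whole-process CPU time rather than comparing the stats thread against the traffic threads as described above.

```python
import time

def looks_cpu_limited(io_func, threshold=0.9):
    """Run the i/o loop and compare CPU time consumed to wall time elapsed.
    If the process burned nearly a full core for the whole run, the test
    was likely CPU-bound and its i/o numbers are suspect."""
    wall0 = time.monotonic()
    cpu0 = time.process_time()
    io_func()
    wall = time.monotonic() - wall0
    cpu = time.process_time() - cpu0
    return wall > 0 and (cpu / wall) > threshold
```

A thread-level version, as in the description above, would instead observe which side blocks on the other, so that i/o threads only ever block on read/write syscalls.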
>
>On Jan 12, 2023, at 12:14 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>Hi Bob,
>
>
>
>
>
>
> On Jan 11, 2023, at 21:09, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
>
> 
>
>
> Iperf 2 is designed to measure network i/o. Note: It doesn't have to move
>large amounts of data. It can support data profiles that don't drive TCP's
>CCA as an example.
>
>
> 
>
>
> Two things I've been asked for and avoided:
>
>
> 
>
>
> 1) Integrate clock sync into iperf's test traffic
>
>
>
> [SM] This I understand, measurement conditions can be unsuited for tight
>time synchronization...
>
>
>
>
>
>
> 2) Measure and output CPU usages
>
>
>
> [SM] This one puzzles me, as far as I understand the only way to properly
>diagnose network issues is to rule out other things like CPU overload that
>can have symptoms similar to network issues. As an example, the cake qdisc
>will, if CPU cycles become tight, first increase its internal queueing and
>jitter (not consciously, it is just an observation that once cake does not
>get access to the CPU as timely as it wants, queuing latency and variability
>increases) and then later also shows reduced throughput, so similar things
>that can happen along an e2e network path for completely different reasons,
>e.g. lower level retransmissions or a variable rate link. So I would think
>that checking the CPU load at least coarsely would be within the scope of
>network testing tools, no?
>
>
>
>
>
>Regards
>
>
> Sebastian
>
>
>
>
>
>
>
>
>
>
>
>
> I think both of these are outside the scope of a tool designed to test
>network i/o over sockets, rather these should be developed & validated
>independently of a network i/o tool.
>
>
> 
>
>
> Clock error really isn't about amount/frequency of traffic but rather
>getting a periodic high-quality reference. I tend to use GPS pulse per
>second to lock the local system oscillator to. As David says, most every
>modern handheld computer has the GPS chips to do this already. So to me it
>seems more of a policy choice between data center operators and device mfgs
>and less of a technical issue.
>
>
> 
>
>
> Bob
> Hello,
>
>
>  Yall can call me crazy if you want.. but... see below [RWG]
> Hi Bob,
> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
><starlink@lists.bufferbloat.net> wrote:
>
>
>
>
>
> My biggest barrier is the lack of clock sync by the devices, i.e. very
>limited support for PTP in data centers and in end devices. This limits the
>ability to measure one way delays (OWD) and most assume that OWD is 1/2 and
>RTT which typically is a mistake. We know this intuitively with airplane
>flight times or even car commute times where the one way time is not 1/2 a
>round trip time. Google maps & directions provide a time estimate for the
>one way link. It doesn't compute a round trip and divide by two.
>
>
>
>
>
> For those that can get clock sync working, the iperf 2 --trip-times options
>is useful.
>  [SM] +1; and yet even with unsynchronized clocks one can try to measure
>how latency changes under load and that can be done per direction. Sure this
>is far inferior to real reliably measured OWDs, but if life/the internet
>deals you lemons....
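[SM]'s point that per-direction latency *changes* survive unsynchronized clocks can be made concrete: the unknown clock offset between the two hosts is (approximately) constant over the test, so it cancels when a loaded measurement is compared against an idle baseline. A minimal sketch with a hypothetical helper, not taken from any of the tools discussed here:

```python
def owd_change_under_load(baseline_samples, loaded_samples):
    """Each sample is (send_ts, receive_ts) taken on different, possibly
    unsynchronized clocks. (receive_ts - send_ts) contains an unknown
    constant clock offset, but the offset cancels in the difference, so
    the *increase* in one-way delay under load is still measurable."""
    base = min(r - s for s, r in baseline_samples)
    loaded = min(r - s for s, r in loaded_samples)
    return loaded - base
```

This assumes negligible clock drift over the measurement window; real tools would also have to account for drift between the baseline and loaded phases.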
> [RWG] iperf2/iperf3, etc are already moving large amounts of data
>
>
> back and forth, for that matter any rate test, why not abuse some of
>
>
> that data and add the fundamental NTP clock sync data and
>
>
> bidirectionally pass each others concept of "current time".  IIRC (its
>
>
> been 25 years since I worked on NTP at this level) you *should* be
>
>
> able to get a fairly accurate clock delta between each end, and then
>
>
> use that info and time stamps in the data stream to compute OWD's.
>
>
> You need to put 4 time stamps in the packet, and with that you can
>
>
> compute "offset".
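The four-timestamp "offset" computation [RWG] refers to is the classic NTP on-wire calculation; a minimal sketch (not actual ntpd or iperf code):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP four-timestamp exchange:
    t1 = client transmit, t2 = server receive,
    t3 = server transmit, t4 = client receive (each in its local clock).
    Returns (estimated server-minus-client clock offset, round-trip delay
    excluding server processing time). Assumes a symmetric path delay."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

With the offset estimate in hand, per-packet OWDs can be derived from send/receive timestamps, subject to the symmetric-path assumption that the rest of this thread cautions about.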
>
>
>
>
> --trip-times
>
>
>  enable the measurement of end to end write to read latencies (client and
>server clocks must be synchronized)
>
> [RWG] --clock-skew
>
>
>  enable the measurement of the wall clock difference between sender and
>receiver
>  [SM] Sweet!
>
>
> Regards
>
>
>  Sebastian
>
>
>
> Bob
> I have many kvetches about the new latency under load tests being
>
>
> designed and distributed over the past year. I am delighted! that they
>
>
> are happening, but most really need third party evaluation, and
>
>
> calibration, and a solid explanation of what network pathologies they
>
>
> do and don't cover. Also a RED team attitude towards them, as well as
>
>
> thinking hard about what you are not measuring (operations research).
>
>
> I actually rather love the new cloudflare speedtest, because it tests
>
>
> a single TCP connection, rather than dozens, and at the same time folk
>
>
> are complaining that it doesn't find the actual "speed!". yet... the
>
>
> test itself more closely emulates a user experience than speedtest.net
>
>
> does. I am personally pretty convinced that the fewer numbers of flows
>
>
> that a web page opens improves the likelihood of a good user
>
>
> experience, but lack data on it.
>
>
> To try to tackle the evaluation and calibration part, I've reached out
>
>
> to all the new test designers in the hope that we could get together
>
>
> and produce a report of what each new test is actually doing. I've
>
>
> tweeted, linked in, emailed, and spammed every measurement list I know
>
>
> of, and only to some response, please reach out to other test designer
>
>
> folks and have them join the rpm email list?
>
>
> My principal kvetches in the new tests so far are:
>
>
> 0) None of the tests last long enough.
>
>
> Ideally there should be a mode where they at least run to "time of
>
>
> first loss", or periodically, just run longer than the
>
>
> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>
>
> there! It's really bad science to optimize the internet for 20
>
>
> seconds. It's like optimizing a car, to handle well, for just 20
>
>
> seconds.
>
>
> 1) Not testing up + down + ping at the same time
>
>
> None of the new tests actually test the same thing that the infamous
>
>
> rrul test does - all the others still test up, then down, and ping. It
>
>
> was/remains my hope that the simpler parts of the flent test suite -
>
>
> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>
>
> tests would provide calibration to the test designers.
>
>
> we've got zillions of flent results in the archive published here:
>
>
> https://blog.cerowrt.org/post/found_in_flent/
>
>
> ps. Misinformation about iperf 2 impacts my ability to do this.
>
> 
> The new tests have all added up + ping and down + ping, but not up +
>
>
> down + ping. Why??
>
>
> The behaviors of what happens in that case are really non-intuitive, I
>
>
> know, but... it's just one more phase to add to any one of those new
>
>
> tests. I'd be deliriously happy if someone(s) new to the field
>
>
> started doing that, even optionally, and boggled at how it defeated
>
>
> their assumptions.
>
>
> Among other things that would show...
>
>
> It's the home router industry's dirty secret that darn few "gigabit"
>
>
> home routers can actually forward in both directions at a gigabit. I'd
>
>
> like to smash that perception thoroughly, but given our starting point
>
>
> is a gigabit router was a "gigabit switch" - and historically been
>
>
> something that couldn't even forward at 200Mbit - we have a long way
>
>
> to go there.
>
>
> Only in the past year have non-x86 home routers appeared that could
>
>
> actually do a gbit in both directions.
>
>
> 2) Few are actually testing within-stream latency
>
>
> Apple's rpm project is making a stab in that direction. It looks
>
>
> highly likely, that with a little more work, crusader and
>
>
> go-responsiveness can finally start sampling the tcp RTT, loss and
>
>
> markings, more directly. As for the rest... sampling TCP_INFO on
>
>
> windows, and Linux, at least, always appeared simple to me, but I'm
>
>
> discovering how hard it is by delving deep into the rust behind
>
>
> crusader.
>
>
> the goresponsiveness thing is also IMHO running WAY too many streams
>
>
> at the same time, I guess motivated by an attempt to have the test
>
>
> complete quickly?
>
>
> B) To try and tackle the validation problem:
>
> 
> In the libreqos.io project we've established a testbed where tests can
>
>
> be plunked through various ISP plan network emulations. It's here:
>
>
> https://payne.taht.net (run bandwidth test for what's currently hooked
>
>
> up)
>
>
> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
>
>
> leverage with that, so I don't have to nat the various emulations.
>
>
> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
>
>
> to see more test designers setup a testbed like this to calibrate
>
>
> their own stuff.
>
>
> Presently we're able to test:
>
>
> flent
>
>
> netperf
>
>
> iperf2
>
>
> iperf3
>
>
> speedtest-cli
>
>
> crusader
>
>
> the broadband forum udp based test:
>
>
> https://github.com/BroadbandForum/obudpst
>
>
> trexx
>
>
> There's also a virtual machine setup that we can remotely drive a web
>
>
> browser from (but I didn't want to nat the results to the world) to
>
>
> test other web services.
>
>
>
>
>
>  _____  
>
>
>
>
>
>
> Rpm mailing list
>
>
> Rpm@lists.bufferbloat.net
>
>
> https://lists.bufferbloat.net/listinfo/rpm
>
>
>
>
>
>
>  _____  
>
>
>
>
>
>
> Starlink mailing list
>
>
> Starlink@lists.bufferbloat.net
>
>
> https://lists.bufferbloat.net/listinfo/starlink
> 

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-13  7:44                     ` Sebastian Moeller
@ 2023-01-13  8:01                       ` Dick Roy
  0 siblings, 0 replies; 183+ messages in thread
From: Dick Roy @ 2023-01-13  8:01 UTC (permalink / raw)
  To: 'Sebastian Moeller', 'Robert McMahon'
  Cc: mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 13323 bytes --]

 

 

-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de] 
Sent: Thursday, January 12, 2023 11:45 PM
To: dickroy@alum.mit.edu; Dick Roy; 'Robert McMahon'
Cc: mike.reynolds@netforecast.com; 'libreqos'; 'David P. Reed'; 'Rpm';
'bloat'
Subject: RE: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi RR

 

On 12 January 2023 22:57:32 CET, Dick Roy <dickroy@alum.mit.edu> wrote:

>FYI .

> 

> 

> 

>https://www.fiercewireless.com/tech/cbrs-based-fwa-beats-starlink-performanc

>e-madden

> 

 

[SM] He is so close:

[RR] Which is why I posted the link :-)  I knew you'd latch on to his
thread! 

  

'Speed tests don't tell us much about the capacity of the network, or the
reliability of the network, or the true latency with larger packet sizes.
Packet loss testing can help to fill in key missing information to give the
end customer the smooth experience they're looking for.'

and

'Packets received over 250 ms latency are considered too late to be useful
for video conferencing.'

 

He actually reports both loss numbers and delay > 250ms, so in spite arguing
that loss is the relevant metric he already dips his toes into the latency
issue... I wonder whether his view will refine over time now that he
apparently moved from a link with 8% packet loss to one with a more sane
0.1% loss rate (no idea how he measured lossrate though, or latency). I
guess this shows that there is no single solution for all links, it really
matters where one starts which of throughput, delay, loss is the most
painful and hence the dimension in need of a fix first.

 

Regards

        Sebastian

 

 

 

> 

> 

>Nothing earth-shaking :-)

> 

> 

>RR

> 

> 

> 

>  _____  

> 

>From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of

>Robert McMahon via Starlink

>Sent: Thursday, January 12, 2023 9:50 AM

>To: Sebastian Moeller

>Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David

>P. Reed; Rpm; bloat

>Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

> 

> 

> 

>Hi Sebastien,

> 

>You make a good point. What I did was issue a warning if the tool found it

>was being CPU limited vs i/o limited. This indicates the i/o test likely is

>inaccurate from an i/o perspective, and the results are suspect. It does

>this crudely by comparing the cpu thread doing stats against the traffic

>threads doing i/o, which thread is waiting on the others. There is no

>attempt to assess the cpu load itself. So it's designed with a singular

>purpose of making sure i/o threads only block on syscalls of write and read.

> 

>I probably should revisit this both in design and implementation. Thanks for

>bringing it up and all input is truly appreciated. 

> 

>Bob

> 

>On Jan 12, 2023, at 12:14 AM, Sebastian Moeller <moeller0@gmx.de> wrote:

> 

>Hi Bob,

> 

> 

> 

> 

> 

> 

> On Jan 11, 2023, at 21:09, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> 

> 

> 

> 

> 

> Iperf 2 is designed to measure network i/o. Note: It doesn't have to move

>large amounts of data. It can support data profiles that don't drive TCP's

>CCA as an example.

> 

> 

> 

> 

> 

> Two things I've been asked for and avoided:

> 

> 

> 

> 

> 

> 1) Integrate clock sync into iperf's test traffic

> 

> 

> 

> [SM] This I understand, measurement conditions can be unsuited for tight

>time synchronization...

> 

> 

> 

> 

> 

> 

> 2) Measure and output CPU usages

> 

> 

> 

> [SM] This one puzzles me, as far as I understand the only way to properly

>diagnose network issues is to rule out other things like CPU overload that

>can have symptoms similar to network issues. As an example, the cake qdisc

>will, if CPU cycles become tight, first increase its internal queueing and

>jitter (not consciously, it is just an observation that once cake does not

>get access to the CPU as timely as it wants, queuing latency and variability

>increases) and then later also shows reduced throughput, so similar things

>that can happen along an e2e network path for completely different reasons,

>e.g. lower level retransmissions or a variable rate link. So I would think

>that checking the CPU load at least coarsely would be within the scope of

>network testing tools, no?

> 

> 

> 

> 

> 

>Regards

> 

> 

> Sebastian

> 

> 

> 

> 

> 

> 

> 

> 

> 

> 

> 

> 

> I think both of these are outside the scope of a tool designed to test

>network i/o over sockets, rather these should be developed & validated

>independently of a network i/o tool.

> 

> 

> 

> 

> 

> Clock error really isn't about amount/frequency of traffic but rather

>getting a periodic high-quality reference. I tend to use GPS pulse per

>second to lock the local system oscillator to. As David says, most every

>modern handheld computer has the GPS chips to do this already. So to me it

>seems more of a policy choice between data center operators and device mfgs

>and less of a technical issue.

> 

> 

> 

> 

> 

> Bob

> Hello,

> 

> 

>  Yall can call me crazy if you want.. but... see below [RWG]

> Hi Bib,

> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink

><starlink@lists.bufferbloat.net> wrote:

> 

> 

> 

> 

> 

> My biggest barrier is the lack of clock sync by the devices, i.e. very

>limited support for PTP in data centers and in end devices. This limits the

>ability to measure one way delays (OWD) and most assume that OWD is 1/2 and

>RTT which typically is a mistake. We know this intuitively with airplane

>flight times or even car commute times where the one way time is not 1/2 a

>round trip time. Google maps & directions provide a time estimate for the

>one way link. It doesn't compute a round trip and divide by two.

> 

> 

> 

> 

> 

> For those that can get clock sync working, the iperf 2 --trip-times options

>is useful.

>  [SM] +1; and yet even with unsynchronized clocks one can try to measure

>how latency changes under load and that can be done per direction. Sure this

>is far inferior to real reliably measured OWDs, but if life/the internet

>deals you lemons....

> [RWG] iperf2/iperf3, etc are already moving large amounts of data

> 

> 

> back and forth, for that matter any rate test, why not abuse some of

> 

> 

> that data and add the fundamental NTP clock sync data and

> 

> 

> bidirectionally pass each others concept of "current time".  IIRC (its

> 

> 

> been 25 years since I worked on NTP at this level) you *should* be

> 

> 

> able to get a fairly accurate clock delta between each end, and then

> 

> 

> use that info and time stamps in the data stream to compute OWD's.

> 

> 

> You need to put 4 time stamps in the packet, and with that you can

> 

> 

> compute "offset".

> 

> 

> 

> 

> --trip-times

> 

> 

>  enable the measurement of end to end write to read latencies (client and

>server clocks must be synchronized)

> 

> [RWG] --clock-skew

> 

> 

>  enable the measurement of the wall clock difference between sender and

>receiver

>  [SM] Sweet!

> 

> 

> Regards

> 

> 

>  Sebastian

> 

> 

> 

> Bob

> I have many kvetches about the new latency under load tests being

> 

> 

> designed and distributed over the past year. I am delighted! that they

> 

> 

> are happening, but most really need third party evaluation, and

> 

> 

> calibration, and a solid explanation of what network pathologies they

> 

> 

> do and don't cover. Also a RED team attitude towards them, as well as

> 

> 

> thinking hard about what you are not measuring (operations research).

> 

> 

> I actually rather love the new cloudflare speedtest, because it tests

> 

> 

> a single TCP connection, rather than dozens, and at the same time folk

> 

> 

> are complaining that it doesn't find the actual "speed!". yet... the

> 

> 

> test itself more closely emulates a user experience than speedtest.net

> 

> 

> does. I am personally pretty convinced that the fewer numbers of flows

> 

> 

> that a web page opens improves the likelihood of a good user

> 

> 

> experience, but lack data on it.

> 

> 

> To try to tackle the evaluation and calibration part, I've reached out

> 

> 

> to all the new test designers in the hope that we could get together

> 

> 

> and produce a report of what each new test is actually doing. I've

> 

> 

> tweeted, linked in, emailed, and spammed every measurement list I know

> 

> 

> of, and only to some response, please reach out to other test designer

> 

> 

> folks and have them join the rpm email list?

> 

> 

> My principal kvetches in the new tests so far are:

> 

> 

> 0) None of the tests last long enough.

> 

> 

> Ideally there should be a mode where they at least run to "time of

> 

> 

> first loss", or periodically, just run longer than the

> 

> 

> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons

> 

> 

> there! It's really bad science to optimize the internet for 20

> 

> 

> seconds. It's like optimizing a car, to handle well, for just 20

> 

> 

> seconds.

> 

> 

> 1) Not testing up + down + ping at the same time

> 

> 

> None of the new tests actually test the same thing that the infamous

> 

> 

> rrul test does - all the others still test up, then down, and ping. It

> 

> 

> was/remains my hope that the simpler parts of the flent test suite -

> 

> 

> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair

> 

> 

> tests would provide calibration to the test designers.

> 

> 

> we've got zillions of flent results in the archive published here:

> 

> 

> https://blog.cerowrt.org/post/found_in_flent/

> 

> 

> ps. Misinformation about iperf 2 impacts my ability to do this.

> 

> 

> The new tests have all added up + ping and down + ping, but not up +

> 

> 

> down + ping. Why??

> 

> 

> The behaviors of what happens in that case are really non-intuitive, I

> 

> 

> know, but... it's just one more phase to add to any one of those new

> 

> 

> tests. I'd be deliriously happy if someone(s) new to the field

> 

> 

> started doing that, even optionally, and boggled at how it defeated

> 

> 

> their assumptions.

> 

> 

> Among other things that would show...

> 

> 

> It's the home router industry's dirty secret that darn few "gigabit"

> 

> 

> home routers can actually forward in both directions at a gigabit. I'd

> 

> 

> like to smash that perception thoroughly, but given our starting point

> 

> 

> is a gigabit router was a "gigabit switch" - and historically been

> 

> 

> something that couldn't even forward at 200Mbit - we have a long way

> 

> 

> to go there.

> 

> 

> Only in the past year have non-x86 home routers appeared that could

> 

> 

> actually do a gbit in both directions.

> 

> 

> 2) Few are actually testing within-stream latency

> 

> 

> Apple's rpm project is making a stab in that direction. It looks

> 

> 

> highly likely, that with a little more work, crusader and

> 

> 

> go-responsiveness can finally start sampling the tcp RTT, loss and

> 

> 

> markings, more directly. As for the rest... sampling TCP_INFO on

> 

> 

> windows, and Linux, at least, always appeared simple to me, but I'm

> 

> 

> discovering how hard it is by delving deep into the rust behind

> 

> 

> crusader.

> 

> 

> the goresponsiveness thing is also IMHO running WAY too many streams

> 

> 

> at the same time, I guess motivated by an attempt to have the test

> 

> 

> complete quickly?

> 

> 

> B) To try and tackle the validation problem:

> 

> 

> In the libreqos.io project we've established a testbed where tests can

> 

> 

> be plunked through various ISP plan network emulations. It's here:

> 

> 

> https://payne.taht.net (run bandwidth test for what's currently hooked

> 

> 

> up)

> 

> 

> We could rather use an AS number and at least a ipv4/24 and ipv6/48 to

> 

> 

> leverage with that, so I don't have to nat the various emulations.

> 

> 

> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,

> 

> 

> to see more test designers setup a testbed like this to calibrate

> 

> 

> their own stuff.

> 

> 

> Presently we're able to test:

> 

> 

> flent

> 

> 

> netperf

> 

> 

> iperf2

> 

> 

> iperf3

> 

> 

> speedtest-cli

> 

> 

> crusader

> 

> 

> the broadband forum udp based test:

> 

> 

> https://github.com/BroadbandForum/obudpst

> 

> 

> trexx

> 

> 

> There's also a virtual machine setup that we can remotely drive a web

> 

> 

> browser from (but I didn't want to nat the results to the world) to

> 

> 

> test other web services.

> 

> 

> 

> 

> 

>  _____  

> 

> 

> 

> 

> 

> 

> Rpm mailing list

> 

> 

> Rpm@lists.bufferbloat.net

> 

> 

> https://lists.bufferbloat.net/listinfo/rpm

> 

> 

> 

> 

> 

> 

>  _____  

> 

> 

> 

> 

> 

> 

> Starlink mailing list

> 

> 

> Starlink@lists.bufferbloat.net

> 

> 

> https://lists.bufferbloat.net/listinfo/starlink


> 

 

-- 

Sent from my Android device with K-9 Mail. Please excuse my brevity.


[-- Attachment #2: Type: text/html, Size: 85508 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-13  7:40                     ` rjmcmahon
@ 2023-01-13  8:10                       ` Dick Roy
  2023-01-15 23:09                         ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Dick Roy @ 2023-01-13  8:10 UTC (permalink / raw)
  To: 'rjmcmahon'
  Cc: 'Sebastian Moeller', 'Rodney W. Grimes',
	mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 18900 bytes --]

 

 

-----Original Message-----
From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com] 
Sent: Thursday, January 12, 2023 11:40 PM
To: dickroy@alum.mit.edu
Cc: 'Sebastian Moeller'; 'Rodney W. Grimes'; mike.reynolds@netforecast.com;
'libreqos'; 'David P. Reed'; 'Rpm'; 'bloat'
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi RR,

 

I believe quality GPS chips compensate for relativity in pulse per 

second which is needed to get position accuracy.

[RR] Of course they do.  That 38usec/day really matters! They assume they
know what the gravitational potential is where they are, and they can
estimate the potential at the satellites so they can compensate, and they
do.  Point is, a GPS unit at Lake Tahoe (6250') runs faster than the one in
San Francisco (sea level).  How do you think these two "should be
synchronized"!   How do you define "synchronization" in this case?  You
synchronize those two clocks, then what about all the other clocks at Lake
Tahoe (or SF or anywhere in between for that matter :-))??? These are not
trivial questions. However if all one cares about is seconds or
milliseconds, then you can argue that we (earthlings on planet earth) can
"sweep such facts under the proverbial rug" for the purposes of latency in
communication networks and that's certainly doable.  Don't tell that to the
guys whose protocols require "synchronization of all units to nanoseconds"
though!  They will be very, very unhappy :-) :-) And you know who you are
:-) :-) 

 

:-)

 

Bob

> Hi Sebastian (et. al.),

> 

> [I'll comment up here instead of inline.]

> 

> Let me start by saying that I have not been intimately involved with

> the IEEE 1588 effort (PTP), however I was involved in the 802.11

> efforts along a similar vein, just adding the wireless first hop

> component and it's effects on PTP.

> 

> What was apparent from the outset was that there was a lack of

> understanding what the terms "to synchronize" or "to be synchronized"

> actually mean.  It's not trivial ... because we live in a

> (approximately, that's another story!) 4-D space-time continuum where

> the Lorentz metric plays a critical role.  Therein, simultaneity (aka

> "things happening at the same time") means the "distance" between two

> such events is zero and that distance is given by sqrt(x^2 + y^2 + z^2

> - (ct)^2) and the "thing happening" can be the tick of a clock

> somewhere. Now since everything is relative (time with respect to

> what? / location with respect to where?) it's pretty easy to see that

> "if you don't know where you are, you can't know what time it is!"

> (English sailors of the 18th century knew this well!) Add to this the

> fact that if everything were stationary, nothing would happen (as

> Einstein said "Nothing happens until something moves!"), special

> relativity also pays a role.  Clocks on GPS satellites run approx.

> 7usecs/day slower than those on earth due to their "speed" (8700 mph

> roughly)! Then add the consequence that without mass we wouldn't exist

> (in these forms at least :-) ), and gravitational effects (aka General

> Relativity) come into play. Those turn out to make clocks on GPS

> satellites run 45usec/day faster than those on earth!  The net effect

> is that GPS clocks run about 38usec/day faster than clocks on earth.

> So what does it mean to "synchronize to GPS"?  Point is: it's a

> non-trivial question with a very complicated answer.  The reason it is

> important to get all this right is that the "what" that ties time and

> space together" is the speed of light and that turns out to be a

> "foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means if

> I am uncertain about my location to say 300 meters, then I also am not

> sure what time it is to a usec AND vice-versa!

> 

> All that said, the simplest explanation of synchronization is

> probably: Two clocks are synchronized if, when they are brought

> (slowly) into physical proximity ("sat next to each other") in the

> same (quasi-)inertial frame and the same gravitational potential (not

> so obvious BTW ... see the FYI below!), an observer of both would say

> "they are keeping time identically". Since this experiment is rarely

> possible, one can never be "sure" that his clock is synchronized to

> any other clock elsewhere. And what does it mean to say they "were

> synchronized" when brought together, but now they are not because they

> are now in different gravitational potentials! (FYI, there are land

> mine detectors being developed on this very principle! I know someone

> who actually worked on such a project!)

> 

> This all gets even more complicated when dealing with large networks

> of networks in which the "speed of information transmission" can vary

> depending on the medium (cf. coaxial cables versus fiber versus

> microwave links!) In fact, the atmosphere is one of those media and

> variations therein result in the need for "GPS corrections" (cf. RTCM

> GPS correction messages, RTK, etc.) in order to get to sub-nsec/cm

> accuracy.  Point is if you have a set of nodes distributed across the

> country all with GPS and all "synchronized to GPS time", and a second

> identical set of nodes (with no GPS) instead connected with a network

> of cables and fiber links, all of different lengths and composition

> using different carrier frequencies (dielectric constants vary with

> frequency!) "synchronized" to some clock somewhere using NTP or PTP),

> the synchronization of the two sets will be different unless a common

> reference clock is used AND all the above effects are taken into

> account, and good luck with that! :-)

> 

> In conclusion, if anyone tells you that clock synchronization in

> communication networks is simple ("Just use GPS!"), you should feel

> free to chuckle (under your breath if necessary :-))

> 

> Cheers,

> 

> RR

> 

> -----Original Message-----

> From: Sebastian Moeller [mailto:moeller0@gmx.de]

> Sent: Thursday, January 12, 2023 12:23 AM

> To: Dick Roy

> Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David

> P. Reed; Rpm; rjmcmahon; bloat

> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in

> USA

> 

> Hi RR,

> 

>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:

> 

>> 

> 

>> 

> 

>> 

> 

>> -----Original Message-----

> 

>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On

> Behalf Of Sebastian Moeller via Starlink

> 

>> Sent: Wednesday, January 11, 2023 12:01 PM

> 

>> To: Rodney W. Grimes

> 

>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos;

> David P. Reed; Rpm; rjmcmahon; bloat

> 

>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers

> in USA

> 

>> 

> 

>> Hi Rodney,

> 

>> 

> 

>> 

> 

>> 

> 

>> 

> 

>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes

> <starlink@gndrsh.dnsmgr.net> wrote:

> 

>> >

> 

>> > Hello,

> 

>> >

> 

>> >     Yall can call me crazy if you want.. but... see below [RWG]

> 

>> >> Hi Bib,

> 

>> >>

> 

>> >>

> 

>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink

> <starlink@lists.bufferbloat.net> wrote:

> 

>> >>>

> 

>> >>> My biggest barrier is the lack of clock sync by the devices,

> i.e. very limited support for PTP in data centers and in end devices.

> This limits the ability to measure one way delays (OWD) and most

> assume that OWD is 1/2 the RTT, which typically is a mistake. We know

> this intuitively with airplane flight times or even car commute times

> where the one way time is not 1/2 a round trip time. Google maps &

> directions provide a time estimate for the one way link. It doesn't

> compute a round trip and divide by two.

> 

>> >>>

> 

>> >>> For those that can get clock sync working, the iperf 2

> --trip-times options is useful.

> 

>> >>

> 

>> >>    [SM] +1; and yet even with unsynchronized clocks one can try

> to measure how latency changes under load and that can be done per

> direction. Sure this is far inferior to real reliably measured OWDs,

> but if life/the internet deals you lemons....

> 

>> >

> 

>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data

> back and forth, for that matter any rate test, why not abuse some of

> that data and add the fundamental NTP clock sync data and

> bidirectionally pass each others concept of "current time".  IIRC (its

> been 25 years since I worked on NTP at this level) you *should* be

> able to get a fairly accurate clock delta between each end, and then

> use that info and time stamps in the data stream to compute OWD's.

> You need to put 4 time stamps in the packet, and with that you can

> compute "offset".

> 

>> [RR] For this to work at a reasonable level of accuracy, the

> timestamping circuits on both ends need to be deterministic and

> repeatable as I recall. Any uncertainty in that process adds to

> synchronization errors/uncertainties.

> 

>> 

> 

>>       [SM] Nice idea. I would guess that all timeslot based access

> technologies (so starlink, docsis, GPON, LTE?) all distribute "high

> quality time" carefully to the "modems", so maybe all that would be

> needed is to expose that high quality time to the LAN side of those

> modems, dressed up as NTP server?

> 

>> [RR] It's not that simple!  Distributing "high-quality time", i.e.

> "synchronizing all clocks" does not solve the communication problem in

> synchronous slotted MAC/PHYs!

> 

>       [SM] I happily believe you, but the same idea of "time slot"

> needs to be shared by all nodes, no? So the clocks need to be

> reasonably similar rate, aka synchronized (see below).

> 

>>  All the technologies you mentioned above are essentially P2P, not

> intended for broadcast.  Point is, there is a point controller (aka

> PoC) often called a base station (eNodeB, gNodeB, ...) that actually

> "controls everything that is necessary to control" at the UE including

> time, frequency and sampling time offsets, and these are critical to

> get right if you want to communicate, and they are ALL subject to the

> laws of physics (cf. the speed of light)! Turns out that what is

> necessary for the system to function anywhere near capacity, is for

> all the clocks governing transmissions from the UEs to be

> "unsynchronized" such that all the UE transmissions arrive at the PoC

> at the same (prescribed) time!

> 

>       [SM] Fair enough. I would call clocks that are "in sync" albeit

> with individual offsets as synchronized, but I am a layman and that

> might sound offensively wrong to experts in the field. But even

> without the naming my point is that all systems that depend on some

> idea of shared time-base are halfway toward exposing that time to

> end users, by "translating" it into an NTP time source at the modem.

> 

>> For some technologies, in particular 5G!, these considerations are

> ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you

> don't believe me! :-)

> 

>       [SM] Far be it from me not to believe you, so thanks for the

> pointers. Yet, I still think that unless different nodes of a shared

> segment move at significantly different speeds, that there should be a

> common "tick-duration" for all clocks even if each clock runs at an

> offset... (I naively would try to implement something like that by

> trying to fully synchronize clocks and maintain a local offset value

> to convert from "absolute" time to "network" time, but likely because

> coming from the outside I am blissfully unaware of the detail

> challenges that need to be solved).

> 

> Regards & Thanks

> 

>       Sebastian

> 

>> 

> 

>> 

> 

>> >

> 

>> >>

> 

>> >>

> 

>> >>>

> 

>> >>> --trip-times

> 

>> >>> enable the measurement of end to end write to read latencies

> (client and server clocks must be synchronized)

> 

>> > [RWG] --clock-skew

> 

>> >     enable the measurement of the wall clock difference between

> sender and receiver

> 

>> >

> 

>> >>

> 

>> >>    [SM] Sweet!

> 

>> >>

> 

>> >> Regards

> 

>> >>    Sebastian

> 

>> >>

> 

>> >>>

> 

>> >>> Bob

> 

>> >>>> I have many kvetches about the new latency under load tests

> being

> 

>> >>>> designed and distributed over the past year. I am delighted!

> that they

> 

>> >>>> are happening, but most really need third party evaluation, and

> 

> 

>> >>>> calibration, and a solid explanation of what network

> pathologies they

> 

>> >>>> do and don't cover. Also a RED team attitude towards them, as

> well as

> 

>> >>>> thinking hard about what you are not measuring (operations

> research).

> 

>> >>>> I actually rather love the new cloudflare speedtest, because it

> tests

> 

>> >>>> a single TCP connection, rather than dozens, and at the same

> time folk

> 

>> >>>> are complaining that it doesn't find the actual "speed!".

> yet... the

> 

>> >>>> test itself more closely emulates a user experience than

> speedtest.net

> 

>> >>>> does. I am personally pretty convinced that the fewer numbers

> of flows

> 

>> >>>> that a web page opens improves the likelihood of a good user

> 

>> >>>> experience, but lack data on it.

> 

>> >>>> To try to tackle the evaluation and calibration part, I've

> reached out

> 

>> >>>> to all the new test designers in the hope that we could get

> together

> 

>> >>>> and produce a report of what each new test is actually doing.

> I've

> 

>> >>>> tweeted, linked in, emailed, and spammed every measurement list

> I know

> 

>> >>>> of, and only to some response, please reach out to other test

> designer

> 

>> >>>> folks and have them join the rpm email list?

> 

>> >>>> My principal kvetches in the new tests so far are:

> 

>> >>>> 0) None of the tests last long enough.

> 

>> >>>> Ideally there should be a mode where they at least run to "time

> of

> 

>> >>>> first loss", or periodically, just run longer than the

> 

>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be

> dragons

> 

>> >>>> there! It's really bad science to optimize the internet for 20

> 

>> >>>> seconds. It's like optimizing a car, to handle well, for just

> 20

> 

>> >>>> seconds.

> 

>> >>>> 1) Not testing up + down + ping at the same time

> 

>> >>>> None of the new tests actually test the same thing that the

> infamous

> 

>> >>>> rrul test does - all the others still test up, then down, and

> ping. It

> 

>> >>>> was/remains my hope that the simpler parts of the flent test

> suite -

> 

>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the

> rtt_fair

> 

>> >>>> tests would provide calibration to the test designers.

> 

>> >>>> we've got zillions of flent results in the archive published

> here:

> 

>> >>>> https://blog.cerowrt.org/post/found_in_flent/

> 

>> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.

> 

> 

>> >>>

> 

>> >>>> The new tests have all added up + ping and down + ping, but not

> up +

> 

>> >>>> down + ping. Why??

> 

>> >>>> The behaviors of what happens in that case are really

> non-intuitive, I

> 

>> >>>> know, but... it's just one more phase to add to any one of

> those new

> 

>> >>>> tests. I'd be deliriously happy if someone(s) new to the field

> 

>> >>>> started doing that, even optionally, and boggled at how it

> defeated

> 

>> >>>> their assumptions.

> 

>> >>>> Among other things that would show...

> 

>> >>>> It's the home router industry's dirty secret that darn few

> "gigabit"

> 

>> >>>> home routers can actually forward in both directions at a

> gigabit. I'd

> 

>> >>>> like to smash that perception thoroughly, but given our

> starting point

> 

>> >>>> is a gigabit router was a "gigabit switch" - and historically

> been

> 

>> >>>> something that couldn't even forward at 200Mbit - we have a

> long way

> 

>> >>>> to go there.

> 

>> >>>> Only in the past year have non-x86 home routers appeared that

> could

> 

>> >>>> actually do a gbit in both directions.

> 

>> >>>> 2) Few are actually testing within-stream latency

> 

>> >>>> Apple's rpm project is making a stab in that direction. It

> looks

> 

>> >>>> highly likely, that with a little more work, crusader and

> 

>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss

> and

> 

>> >>>> markings, more directly. As for the rest... sampling TCP_INFO

> on

> 

>> >>>> windows, and Linux, at least, always appeared simple to me, but

> I'm

> 

>> >>>> discovering how hard it is by delving deep into the rust behind

> 

> 

>> >>>> crusader.

> 

>> >>>> the goresponsiveness thing is also IMHO running WAY too many

> streams

> 

>> >>>> at the same time, I guess motivated by an attempt to have the

> test

> 

>> >>>> complete quickly?

> 

>> >>>> B) To try and tackle the validation problem:

> 

>> >>>

> 

>> >>>> In the libreqos.io project we've established a testbed where

> tests can

> 

>> >>>> be plunked through various ISP plan network emulations. It's

> here:

> 

>> >>>> https://payne.taht.net (run bandwidth test for what's currently

> hooked

> 

>> >>>> up)

> 

>> >>>> We could rather use an AS number and at least a ipv4/24 and

> ipv6/48 to

> 

>> >>>> leverage with that, so I don't have to nat the various

> emulations.

> 

>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2

> licensed,

> 

>> >>>> to see more test designers setup a testbed like this to

> calibrate

> 

>> >>>> their own stuff.

> 

>> >>>> Presently we're able to test:

> 

>> >>>> flent

> 

>> >>>> netperf

> 

>> >>>> iperf2

> 

>> >>>> iperf3

> 

>> >>>> speedtest-cli

> 

>> >>>> crusader

> 

>> >>>> the broadband forum udp based test:

> 

>> >>>> https://github.com/BroadbandForum/obudpst

> 

>> >>>> trexx

> 

>> >>>> There's also a virtual machine setup that we can remotely drive

> a web

> 

>> >>>> browser from (but I didn't want to nat the results to the

> world) to

> awhile

>> >>>> test other web services.

> 

>> >>>> _______________________________________________

> 

>> >>>> Rpm mailing list

> 

>> >>>> Rpm@lists.bufferbloat.net

> 

>> >>>> https://lists.bufferbloat.net/listinfo/rpm

> 

>> >>> _______________________________________________

> 

>> >>> Starlink mailing list

> 

>> >>> Starlink@lists.bufferbloat.net

> 

>> >>> https://lists.bufferbloat.net/listinfo/starlink

> 

>> >>

> 

>> >> _______________________________________________

> 

>> >> Starlink mailing list

> 

>> >> Starlink@lists.bufferbloat.net

> 

>> >> https://lists.bufferbloat.net/listinfo/starlink

> 

>> 

> 

>> _______________________________________________

> 

>> Starlink mailing list

> 

>> Starlink@lists.bufferbloat.net

> 

>> https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 91806 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-13  7:33                     ` Sebastian Moeller
@ 2023-01-13  8:26                       ` Dick Roy
  0 siblings, 0 replies; 183+ messages in thread
From: Dick Roy @ 2023-01-13  8:26 UTC (permalink / raw)
  To: 'Sebastian Moeller'
  Cc: 'Rodney W. Grimes', mike.reynolds, 'libreqos',
	'David P. Reed', 'Rpm', 'rjmcmahon',
	'bloat'

[-- Attachment #1: Type: text/plain, Size: 17792 bytes --]

 

 

  _____  

From: Sebastian Moeller [mailto:moeller0@gmx.de] 
Sent: Thursday, January 12, 2023 11:33 PM
To: dickroy@alum.mit.edu; Dick Roy
Cc: 'Rodney W. Grimes'; mike.reynolds@netforecast.com; 'libreqos'; 'David P.
Reed'; 'Rpm'; 'rjmcmahon'; 'bloat'
Subject: RE: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi RR,

Thanks for the detailed response below, since my point is somewhat
orthogonal I opted for top-posting.
Let me take a step back here and rephrase: synchronising clocks to within an
acceptable range to be useful is neither rocket science nor witchcraft. For
measuring internet traffic 'millisecond' range seems acceptable, local
networks can probably profit from finer time resolution. So I am not after
e.g. clock synchronisation to participate in SDH/SONET. Heck in the toy
project I am active in, we operate on load dependent delay deltas so we even
ignore different time offsets and are tolerant to (mildly) different
tickrates and clock skew, but it would certainly be nice to have some
acceptable measure of UTC from endpoints to be able to interpret timestamps
as 'absolute'. Mind you I am fine with them not being veridical absolute,
but just good enough for my measurement purpose and I guess that should be
within the range of the achievable. Heck, if all servers we query timestamps
of would be NTP-'synchronized' and would follow the RFC recommendation to
report timestamps in milliseconds past midnight UTC I would be happy.
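The offset-tolerant, per-direction measurement described above can be put in a toy sketch (illustrative code only, not from any of the projects mentioned; all names are invented):

```python
# Toy sketch: latency-under-load deltas from UNsynchronized clocks.
# A constant clock offset between sender and receiver appears in every
# apparent one-way delay, so it cancels when a loaded sample is compared
# against an idle baseline; only clock drift/skew corrupts the result.

def apparent_owd(send_ts, recv_ts):
    """Apparent one-way delays; each includes the unknown clock offset."""
    return [r - s for s, r in zip(send_ts, recv_ts)]

def delay_delta(idle_send, idle_recv, load_send, load_recv):
    """Per-direction latency increase under load (same units as inputs)."""
    baseline = min(apparent_owd(idle_send, idle_recv))
    loaded = min(apparent_owd(load_send, load_recv))
    return loaded - baseline  # the shared offset cancels here

# Receiver clock runs 5000 ms ahead; true idle delay 10 ms, loaded 40 ms:
delta = delay_delta([0, 100], [5010, 5110], [200, 300], [5245, 5340])
# -> 30 (the 5000 ms offset never shows up in the delta)
```

The delta survives arbitrarily large constant offsets, which is exactly why unsynchronized endpoints can still measure latency change under load per direction, even though the absolute OWDs remain unknowable.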

[RR] Yup!  All true. Hence my post that obviously passed this one in the
ether! :-) :-) 



Regards
        Sebastian

On 12 January 2023 21:39:21 CET, Dick Roy <dickroy@alum.mit.edu> wrote:

Hi Sebastian (et. al.),

 

[I'll comment up here instead of inline.]  

 

Let me start by saying that I have not been intimately involved with the
IEEE 1588 effort (PTP), however I was involved in the 802.11 efforts along a
similar vein, just adding the wireless first hop component and it's effects
on PTP.  

 

What was apparent from the outset was that there was a lack of understanding of
what the terms "to synchronize" or "to be synchronized" actually mean.  It's
not trivial ... because we live in a (approximately, that's another story!)
4-D space-time continuum where the Lorentz metric plays a critical role.
Therein, simultaneity (aka "things happening at the same time") means the
"distance" between two such events is zero and that distance is given by
sqrt(x^2 + y^2 + z^2 - (ct)^2) and the "thing happening" can be the tick of
a clock somewhere. Now since everything is relative (time with respect to
what? / location with respect to where?) it's pretty easy to see that "if
you don't know where you are, you can't know what time it is!" (English
sailors of the 18th century knew this well!) Add to this the fact that if
everything were stationary, nothing would happen (as Einstein said "Nothing
happens until something moves!"), special relativity also plays a role.
Clocks on GPS satellites run approx. 7usecs/day slower than those on earth
due to their "speed" (8700 mph roughly)! Then add the consequence that
without mass we wouldn't exist (in these forms at least:-)), and
gravitational effects (aka General Relativity) come into play. Those turn
out to make clocks on GPS satellites run 45usec/day faster than those on
earth!  The net effect is that GPS clocks run about 38usec/day faster than
clocks on earth.  So what does it mean to "synchronize to GPS"?  Point is:
it's a non-trivial question with a very complicated answer.  The reason it
is important to get all this right is that the "what that ties time and
space together" is the speed of light and that turns out to be a
"foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means if I am
uncertain about my location to say 300 meters, then I also am not sure what
time it is to a usec AND vice-versa! 
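The arithmetic behind that "foot-per-nanosecond" rule of thumb is easy to check (illustrative only):

```python
# Light travels roughly a foot per nanosecond in vacuum, so a 300 m
# position uncertainty corresponds to ~1 us of timing uncertainty,
# and vice versa.
C = 299_792_458.0  # speed of light in vacuum, m/s

ns_per_foot = 0.3048 / C * 1e9   # ~1.017 ns to cross one foot
us_per_300m = 300.0 / C * 1e6    # ~1.001 us to cross 300 m
```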

 

All that said, the simplest explanation of synchronization is probably: Two
clocks are synchronized if, when they are brought (slowly) into physical
proximity ("sat next to each other") in the same (quasi-)inertial frame and
the same gravitational potential (not so obvious BTW ... see the FYI below!),
an observer of both would say "they are keeping time identically". Since
this experiment is rarely possible, one can never be "sure" that his clock
is synchronized to any other clock elsewhere. And what does it mean to say
they "were synchronized" when brought together, but now they are not because
they are now in different gravitational potentials! (FYI, there are land
mine detectors being developed on this very principle! I know someone who
actually worked on such a project!) 

 

This all gets even more complicated when dealing with large networks of
networks in which the "speed of information transmission" can vary depending
on the medium (cf. coaxial cables versus fiber versus microwave links!) In
fact, the atmosphere is one of those media and variations therein result in
the need for "GPS corrections" (cf. RTCM GPS correction messages, RTK, etc.)
in order to get to sub-nsec/cm accuracy.  Point is if you have a set of
nodes distributed across the country all with GPS and all "synchronized to
GPS time", and a second identical set of nodes (with no GPS) instead
connected with a network of cables and fiber links, all of different lengths
and composition using different carrier frequencies (dielectric constants
vary with frequency!) "synchronized" to some clock somewhere using NTP or
PTP, the synchronization of the two sets will be different unless a common
reference clock is used AND all the above effects are taken into account,
and good luck with that! :-) 

 

In conclusion, if anyone tells you that clock synchronization in
communication networks is simple ("Just use GPS!"), you should feel free to
chuckle (under your breath if necessary:-)) 

 

Cheers,

 

RR

 

 

  

 

 

 

-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de] 
Sent: Thursday, January 12, 2023 12:23 AM
To: Dick Roy
Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David P.
Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

 

Hi RR,

 

 

> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:

> 

>  

>  

> -----Original Message-----

> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
Of Sebastian Moeller via Starlink

> Sent: Wednesday, January 11, 2023 12:01 PM

> To: Rodney W. Grimes

> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com; libreqos; David
P. Reed; Rpm; rjmcmahon; bloat

> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

>  

> Hi Rodney,

>  

>  

>  

>  

> > On Jan 11, 2023, at 19:32, Rodney W. Grimes <starlink@gndrsh.dnsmgr.net>
wrote:

> > 

> > Hello,

> > 

> >     Yall can call me crazy if you want.. but... see below [RWG]

> >> Hi Bib,

> >> 

> >> 

> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:

> >>> 

> >>> My biggest barrier is the lack of clock sync by the devices, i.e. very
limited support for PTP in data centers and in end devices. This limits the
ability to measure one way delays (OWD) and most assume that OWD is 1/2 the
RTT, which typically is a mistake. We know this intuitively with airplane
flight times or even car commute times where the one way time is not 1/2 a
round trip time. Google maps & directions provide a time estimate for the
one way link. It doesn't compute a round trip and divide by two.

> >>> 

> >>> For those that can get clock sync working, the iperf 2 --trip-times
options is useful.

> >> 

> >>    [SM] +1; and yet even with unsynchronized clocks one can try to
measure how latency changes under load and that can be done per direction.
Sure this is far inferior to real reliably measured OWDs, but if life/the
internet deals you lemons....

> > 

> > [RWG] iperf2/iperf3, etc are already moving large amounts of data back
and forth, for that matter any rate test, why not abuse some of that data
and add the fundamental NTP clock sync data and bidirectionally pass each
others concept of "current time".  IIRC (its been 25 years since I worked on
NTP at this level) you *should* be able to get a fairly accurate clock delta
between each end, and then use that info and time stamps in the data stream
to compute OWD's.  You need to put 4 time stamps in the packet, and with
that you can compute "offset".
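The four-timestamp exchange [RWG] describes is the classic NTP on-wire calculation (RFC 5905). As a toy illustration (not iperf or ntpd code):

```python
# t1: client send, t2: server receive, t3: server send, t4: client receive,
# each stamped by its own (unsynchronized) local clock.

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # RTT with the server hold removed
    return offset, delay

# Server clock 100 units ahead, 5 units of wire delay each way, 1 unit hold:
offset, delay = ntp_offset_delay(0, 105, 106, 11)
# -> offset 100.0, delay 10
```

Note the hidden assumption: the offset is only exact when the two one-way delays are equal, so path asymmetry leaks directly into the offset estimate; that is the same trap as assuming OWD = RTT/2.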

> [RR] For this to work at a reasonable level of accuracy, the timestamping
circuits on both ends need to be deterministic and repeatable as I recall.
Any uncertainty in that process adds to synchronization
errors/uncertainties.

>  

>       [SM] Nice idea. I would guess that all timeslot based access
technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality
time" carefully to the "modems", so maybe all that would be needed is to
expose that high quality time to the LAN side of those modems, dressed up as
NTP server?

> [RR] It's not that simple!  Distributing "high-quality time", i.e.
"synchronizing all clocks" does not solve the communication problem in
synchronous slotted MAC/PHYs!

 

      [SM] I happily believe you, but the same idea of "time slot" needs to
be shared by all nodes, no? So the clocks need to be reasonably similar
rate, aka synchronized (see below).

 

 

>  All the technologies you mentioned above are essentially P2P, not
intended for broadcast.  Point is, there is a point controller (aka PoC)
often called a base station (eNodeB, gNodeB, ...) that actually "controls
everything that is necessary to control" at the UE including time, frequency
and sampling time offsets, and these are critical to get right if you want
to communicate, and they are ALL subject to the laws of physics (cf. the
speed of light)! Turns out that what is necessary for the system to function
anywhere near capacity, is for all the clocks governing transmissions from
the UEs to be "unsynchronized" such that all the UE transmissions arrive at
the PoC at the same (prescribed) time!

 

      [SM] Fair enough. I would call clocks that are "in sync" albeit with
individual offsets as synchronized, but I am a layman and that might sound
offensively wrong to experts in the field. But even without the naming my
point is that all systems that depend on some idea of shared time-base are
halfway toward exposing that time to end users, by "translating" it into an
NTP time source at the modem.

 

 

> For some technologies, in particular 5G!, these considerations are
ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you don't
believe me! :-)

 

      [SM] Far be it from me not to believe you, so thanks for the pointers.
Yet, I still think that unless different nodes of a shared segment move at
significantly different speeds, that there should be a common
"tick-duration" for all clocks even if each clock runs at an offset... (I
naively would try to implement something like that by trying to fully
synchronize clocks and maintain a local offset value to convert from
"absolute" time to "network" time, but likely because coming from the
outside I am blissfully unaware of the detail challenges that need to be
solved).

 

Regards & Thanks

      Sebastian

 

 

>  

>  

> > 

> >> 

> >> 

> >>> 

> >>> --trip-times

> >>> enable the measurement of end to end write to read latencies (client
and server clocks must be synchronized)

> > [RWG] --clock-skew

> >     enable the measurement of the wall clock difference between sender
and receiver

> > 

> >> 

> >>    [SM] Sweet!

> >> 

> >> Regards

> >>    Sebastian

> >> 

> >>> 

> >>> Bob

> >>>> I have many kvetches about the new latency under load tests being

> >>>> designed and distributed over the past year. I am delighted! that
they

> >>>> are happening, but most really need third party evaluation, and

> >>>> calibration, and a solid explanation of what network pathologies they

> >>>> do and don't cover. Also a RED team attitude towards them, as well as

> >>>> thinking hard about what you are not measuring (operations research).

> >>>> I actually rather love the new cloudflare speedtest, because it tests

> >>>> a single TCP connection, rather than dozens, and at the same time
folk

> >>>> are complaining that it doesn't find the actual "speed!". yet... the

> >>>> test itself more closely emulates a user experience than
speedtest.net

> >>>> does. I am personally pretty convinced that the fewer numbers of
flows

> >>>> that a web page opens improves the likelihood of a good user

> >>>> experience, but lack data on it.

> >>>> To try to tackle the evaluation and calibration part, I've reached
out

> >>>> to all the new test designers in the hope that we could get together

> >>>> and produce a report of what each new test is actually doing. I've

> >>>> tweeted, linked in, emailed, and spammed every measurement list I
know

> >>>> of, and only to some response, please reach out to other test
designer

> >>>> folks and have them join the rpm email list?

> >>>> My principal kvetches in the new tests so far are:

> >>>> 0) None of the tests last long enough.

> >>>> Ideally there should be a mode where they at least run to "time of

> >>>> first loss", or periodically, just run longer than the

> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons

> >>>> there! It's really bad science to optimize the internet for 20

> >>>> seconds. It's like optimizing a car, to handle well, for just 20

> >>>> seconds.

> >>>> 1) Not testing up + down + ping at the same time

> >>>> None of the new tests actually test the same thing that the infamous

> >>>> rrul test does - all the others still test up, then down, and ping.
It

> >>>> was/remains my hope that the simpler parts of the flent test suite -

> >>>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair

> >>>> tests would provide calibration to the test designers.

> >>>> we've got zillions of flent results in the archive published here:

> >>>> https://blog.cerowrt.org/post/found_in_flent/

> >>>> ps. Misinformation about iperf 2 impacts my ability to do this.

> >>> 

> >>>> The new tests have all added up + ping and down + ping, but not up +

> >>>> down + ping. Why??

> >>>> The behaviors of what happens in that case are really non-intuitive,
I

> >>>> know, but... it's just one more phase to add to any one of those new

> >>>> tests. I'd be deliriously happy if someone(s) new to the field

> >>>> started doing that, even optionally, and boggled at how it defeated

> >>>> their assumptions.

> >>>> Among other things that would show...

> >>>> It's the home router industry's dirty secret that darn few "gigabit"

> >>>> home routers can actually forward in both directions at a gigabit.
I'd

> >>>> like to smash that perception thoroughly, but given our starting
point

> >>>> is a gigabit router was a "gigabit switch" - and historically been

> >>>> something that couldn't even forward at 200Mbit - we have a long way

> >>>> to go there.

> >>>> Only in the past year have non-x86 home routers appeared that could

> >>>> actually do a gbit in both directions.

> >>>> 2) Few are actually testing within-stream latency

> >>>> Apple's rpm project is making a stab in that direction. It looks

> >>>> highly likely, that with a little more work, crusader and

> >>>> go-responsiveness can finally start sampling the tcp RTT, loss and

> >>>> markings, more directly. As for the rest... sampling TCP_INFO on

> >>>> windows, and Linux, at least, always appeared simple to me, but I'm

> >>>> discovering how hard it is by delving deep into the rust behind

> >>>> crusader.

> >>>> the goresponsiveness thing is also IMHO running WAY too many streams

> >>>> at the same time, I guess motivated by an attempt to have the test

> >>>> complete quickly?

> >>>> B) To try and tackle the validation problem:

> >>> 

> >>>> In the libreqos.io project we've established a testbed where tests
can

> >>>> be plunked through various ISP plan network emulations. It's here:

> >>>> https://payne.taht.net (run bandwidth test for what's currently
hooked

> >>>> up)

> >>>> We could rather use an AS number and at least a ipv4/24 and ipv6/48
to

> >>>> leverage with that, so I don't have to nat the various emulations.

> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,

> >>>> to see more test designers setup a testbed like this to calibrate

> >>>> their own stuff.

> >>>> Presently we're able to test:

> >>>> flent

> >>>> netperf

> >>>> iperf2

> >>>> iperf3

> >>>> speedtest-cli

> >>>> crusader

> >>>> the broadband forum udp based test:

> >>>> https://github.com/BroadbandForum/obudpst

> >>>> trexx

> >>>> There's also a virtual machine setup that we can remotely drive a web

> >>>> browser from (but I didn't want to nat the results to the world) to

> >>>> test other web services.

> >>>> _______________________________________________

> >>>> Rpm mailing list

> >>>> Rpm@lists.bufferbloat.net

> >>>> https://lists.bufferbloat.net/listinfo/rpm

> >>> _______________________________________________

> >>> Starlink mailing list

> >>> Starlink@lists.bufferbloat.net

> >>> https://lists.bufferbloat.net/listinfo/starlink

> >> 

> >> _______________________________________________

> >> Starlink mailing list

> >> Starlink@lists.bufferbloat.net

> >> https://lists.bufferbloat.net/listinfo/starlink

>  

> _______________________________________________

> Starlink mailing list

> Starlink@lists.bufferbloat.net

> https://lists.bufferbloat.net/listinfo/starlink

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


[-- Attachment #2: Type: text/html, Size: 50849 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA
  2023-01-13  8:10                       ` Dick Roy
@ 2023-01-15 23:09                         ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-01-15 23:09 UTC (permalink / raw)
  To: dickroy
  Cc: 'Sebastian Moeller', 'Rodney W. Grimes',
	mike.reynolds, 'libreqos', 'David P. Reed',
	'Rpm', 'bloat'

hmm, interesting. I'm thinking that GPS PPS is sufficient from an iperf 2 &
classical mechanics perspective.

Have you looked at white rabbit per CERN?

https://kt.cern/article/white-rabbit-cern-born-open-source-technology-sets-new-global-standard-empowering-world#:~:text=White%20Rabbit%20(WR)%20is%20a,the%20field%20of%20particle%20physics.

This discussion does make me question whether there is a better metric
than one-way delay, i.e. "speed of causality as limited by network i/o"
taken at each end of the e2e path. My expertise is quite limited with
respect to relativity, so I don't know if the below makes any sense or
not. I also think a core issue is the simultaneity of the start, which
isn't obvious how to discern.

Does comparing the write blocking-time (or frequency) histograms to the
read blocking-time (or frequency) histograms, which are coupled by TCP's
control loop, do anything useful? The blocking occurs because of a
coupling & awaiting per the remote. Then compare those against a
write-to-read thread on the same chip (which I think should be the same
in each reference frame, and the fastest i/o possible for an end). The
frequency differences might be due to what you call "interruptions" &
one-way delays (& error), assuming all else equal??
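This histogram comparison could be prototyped with very little code; a minimal sketch (hypothetical helper names, and simulated durations rather than real instrumented socket calls):

```python
import time
from collections import Counter

def blocking_histogram(durations_s, bucket_us=100):
    """Bucket observed blocking times (seconds) into a histogram keyed
    by microsecond-sized buckets."""
    h = Counter()
    for d in durations_s:
        h[int(d * 1e6) // bucket_us * bucket_us] += 1
    return h

def timed_call(fn, *args):
    """Time a single (potentially blocking) write or read call."""
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

def histogram_shift_us(write_durations_s, read_durations_s):
    """Crude summary statistic: difference of mean blocking times in
    microseconds. Over a write-to-read thread on the same chip this
    should be near zero; over a real path it folds in one-way-delay
    asymmetry plus scheduling "interruptions"."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(read_durations_s) - mean(write_durations_s)) * 1e6
```

Comparing the full histograms rather than just their means would also expose the multi-modal structure that TCP's control loop can produce.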

Thanks in advance for any thoughts on this.

Bob
> -----Original Message-----
> From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com]
> Sent: Thursday, January 12, 2023 11:40 PM
> To: dickroy@alum.mit.edu
> Cc: 'Sebastian Moeller'; 'Rodney W. Grimes';
> mike.reynolds@netforecast.com; 'libreqos'; 'David P. Reed'; 'Rpm';
> 'bloat'
> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in
> USA
> 
> Hi RR,
> 
> I believe quality GPS chips compensate for relativity in pulse per
> 
> second which is needed to get position accuracy.
> 
> [RR] Of course they do.  That 38usec/day really matters! They assume
> they know what the gravitational potential is where they are, and they
> can estimate the potential at the satellites so they can compensate,
> and they do.  Point is, a GPS unit at Lake Tahoe (6250') runs faster
> than the one in San Francisco (sea level).  How do you think these two
> "should be synchronized"?  How do you define "synchronization" in
> this case?  You synchronize those two clocks, then what about all the
> other clocks at Lake Tahoe (or SF, or anywhere in between for that
> matter :-))? These are not trivial questions. However, if all one
> cares about is seconds or milliseconds, then you can argue that we
> (earthlings on planet earth) can "sweep such facts under the
> proverbial rug" for the purposes of latency in communication networks,
> and that's certainly doable.  Don't tell that to the guys whose
> protocols require "synchronization of all units to nanoseconds" though!
> They will be very, very unhappy :-) And you know who you are :-)
> 
> Bob
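The rates quoted above (clocks roughly 7 usec/day slow from orbital speed, roughly 45 usec/day fast from the gravitational potential, net roughly 38 usec/day fast) check out on the back of an envelope; a sketch assuming a circular GPS orbit and textbook constants:

```python
# Sanity check of the GPS relativistic clock offsets quoted above.
G_M = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
C   = 2.99792458e8     # speed of light, m/s
R_E = 6.371e6          # mean Earth radius, m
R_S = 2.656e7          # GPS orbital radius (~20,200 km altitude), m
DAY = 86400.0          # seconds per day

v = (G_M / R_S) ** 0.5                       # orbital speed, ~3.9 km/s (~8700 mph)
sr_us = (v * v / (2 * C * C)) * DAY * 1e6    # special relativity: satellite clock slower
gr_us = (G_M / (C * C)) * (1 / R_E - 1 / R_S) * DAY * 1e6  # general relativity: faster
net_us = gr_us - sr_us                       # net gain of the satellite clock, usec/day
```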
> 
>> Hi Sebastian (et. al.),
> 
>> 
> 
>> [I'll comment up here instead of inline.]
> 
>> 
> 
>> Let me start by saying that I have not been intimately involved with
> 
> 
>> the IEEE 1588 effort (PTP), however I was involved in the 802.11
> 
>> efforts along a similar vein, just adding the wireless first hop
> 
>> component and it's effects on PTP.
> 
>> 
> 
>> What was apparent from the outset was that there was a lack of
> 
>> understanding what the terms "to synchronize" or "to be
> synchronized"
> 
>> actually mean.  It's not trivial … because we live in a
> 
>> (approximately, that's another story!) 4-D space-time continuum
> where
> 
>> the Lorentz metric plays a critical role.  Therein, simultaneity
> (aka
> 
>> "things happening at the same time") means the "distance" between
> two
> 
>> such events is zero and that distance is given by sqrt(x^2 + y^2 +
> z^2
> 
>> - (ct)^2) and the "thing happening" can be the tick of a clock
> 
>> somewhere. Now since everything is relative (time with respect to
> 
>> what? / location with respect to where?) it's pretty easy to see
> that
> 
>> "if you don't know where you are, you can't know what time it is!"
> 
>> (English sailors of the 18th century knew this well!) Add to this
> the
> 
>> fact that if everything were stationary, nothing would happen (as
> 
>> Einstein said "Nothing happens until something moves!"), special
> 
>> relativity also pays a role.  Clocks on GPS satellites run approx.
> 
>> 7usecs/day slower than those on earth due to their "speed" (8700 mph
> 
> 
>> roughly)! Then add the consequence that without mass we wouldn't
> exist
> 
>> (in these forms at least :-)), and gravitational effects (aka General
> 
>> Relativity) come into play. Those turn out to make clocks on GPS
> 
>> satellites run 45usec/day faster than those on earth!  The net
> effect
> 
>> is that GPS clocks run about 38usec/day faster than clocks on earth.
> 
> 
>> So what does it mean to "synchronize to GPS"?  Point is: it's a
> 
>> non-trivial question with a very complicated answer.  The reason it
> is
> 
>> important to get all this right is that the "what that ties time and
> 
> 
>> space together" is the speed of light and that turns out to be a
> 
>> "foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means
> if
> 
>> I am uncertain about my location to say 300 meters, then I also am
> not
> 
>> sure what time it is to a usec AND vice-versa!
> 
>> 
> 
>> All that said, the simplest explanation of synchronization is
> 
>> probably: Two clocks are synchronized if, when they are brought
> 
>> (slowly) into physical proximity ("sat next to each other") in the
> 
>> same (quasi-)inertial frame and the same gravitational potential
> (not
> 
>> so obvious BTW … see the FYI below!), an observer of both would
> say
> 
>> "they are keeping time identically". Since this experiment is rarely
> 
> 
>> possible, one can never be "sure" that his clock is synchronized to
> 
>> any other clock elsewhere. And what does it mean to say they "were
> 
>> synchronized" when brought together, but now they are not because
> they
> 
>> are now in different gravitational potentials! (FYI, there are land
> 
>> mine detectors being developed on this very principle! I know
> someone
> 
>> who actually worked on such a project!)
> 
>> 
> 
>> This all gets even more complicated when dealing with large networks
> 
> 
>> of networks in which the "speed of information transmission" can
> vary
> 
>> depending on the medium (cf. coaxial cables versus fiber versus
> 
>> microwave links!) In fact, the atmosphere is one of those media and
> 
>> variations therein result in the need for "GPS corrections" (cf.
> RTCM
> 
>> GPS correction messages, RTK, etc.) in order to get to sub-nsec/cm
> 
>> accuracy.  Point is if you have a set of nodes distributed across
> the
> 
>> country all with GPS and all "synchronized to GPS time", and a
> second
> 
>> identical set of nodes (with no GPS) instead connected with a
> network
> 
>> of cables and fiber links, all of different lengths and composition
> 
>> using different carrier frequencies (dielectric constants vary with
> 
>> frequency!) "synchronized" to some clock somewhere using NTP or
> PTP),
> 
>> the synchronization of the two sets will be different unless a
> common
> 
>> reference clock is used AND all the above effects are taken into
> 
>> account, and good luck with that! :-)
> 
>> 
> 
>> In conclusion, if anyone tells you that clock synchronization in
> 
>> communication networks is simple ("Just use GPS!"), you should feel
> 
>> free to chuckle (under your breath if necessary :-))
> 
>> 
> 
>> Cheers,
> 
>> 
> 
>> RR
> 
>> 
> 
>> -----Original Message-----
> 
>> From: Sebastian Moeller [mailto:moeller0@gmx.de]
> 
>> Sent: Thursday, January 12, 2023 12:23 AM
> 
>> To: Dick Roy
> 
>> Cc: Rodney W. Grimes; mike.reynolds@netforecast.com; libreqos; David
> 
> 
>> P. Reed; Rpm; rjmcmahon; bloat
> 
>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers
> in
> 
>> USA
> 
>> 
> 
>> Hi RR,
> 
>> 
> 
>>> On Jan 11, 2023, at 22:46, Dick Roy <dickroy@alum.mit.edu> wrote:
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> -----Original Message-----
> 
>> 
> 
>>> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On
> 
>> Behalf Of Sebastian Moeller via Starlink
> 
>> 
> 
>>> Sent: Wednesday, January 11, 2023 12:01 PM
> 
>> 
> 
>>> To: Rodney W. Grimes
> 
>> 
> 
>>> Cc: Dave Taht via Starlink; mike.reynolds@netforecast.com;
> libreqos;
> 
>> David P. Reed; Rpm; rjmcmahon; bloat
> 
>> 
> 
>>> Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers
> 
>> in USA
> 
>> 
> 
>>> 
> 
>> 
> 
>>> Hi Rodney,
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> > On Jan 11, 2023, at 19:32, Rodney W. Grimes
> 
>> <starlink@gndrsh.dnsmgr.net> wrote:
> 
>> 
> 
>>> >
> 
>> 
> 
>>> > Hello,
> 
>> 
> 
>>> >
> 
>> 
> 
>>> >     Yall can call me crazy if you want.. but... see below [RWG]
> 
>> 
> 
>>> >> Hi Bob,
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink
> 
>> <starlink@lists.bufferbloat.net> wrote:
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>> My biggest barrier is the lack of clock sync by the devices,
> 
>> i.e. very limited support for PTP in data centers and in end
> devices.
> 
>> This limits the ability to measure one way delays (OWD) and most
> 
>> assume that OWD is 1/2 and RTT which typically is a mistake. We know
> 
> 
>> this intuitively with airplane flight times or even car commute
> times
> 
>> where the one way time is not 1/2 a round trip time. Google maps &
> 
>> directions provide a time estimate for the one way link. It doesn't
> 
>> compute a round trip and divide by two.
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>> For those that can get clock sync working, the iperf 2
> 
>> --trip-times options is useful.
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>    [SM] +1; and yet even with unsynchronized clocks one can try
> 
>> to measure how latency changes under load and that can be done per
> 
>> direction. Sure this is far inferior to real reliably measured OWDs,
> 
> 
>> but if life/the internet deals you lemons....
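The trick [SM] describes works because an unknown but constant clock offset cancels when you look at *changes* in one-way delay rather than absolute values; a minimal sketch (hypothetical function name):

```python
def owd_delta_ms(send_ts_ms, recv_ts_ms, idle_baseline_ms):
    """Per-direction latency change with UNsynchronized clocks.

    recv_ts_ms - send_ts_ms = true one-way delay + unknown clock offset.
    Subtracting the same quantity measured during an idle period cancels
    the offset (assuming negligible clock drift over the test), leaving
    the latency increase under load for that direction."""
    return (recv_ts_ms - send_ts_ms) - idle_baseline_ms
```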
> 
>> 
> 
>>> >
> 
>> 
> 
>>> > [RWG] iperf2/iperf3, etc are already moving large amounts of data
> 
> 
>> back and forth, for that matter any rate test, why not abuse some of
> 
> 
>> that data and add the fundemental NTP clock sync data and
> 
>> bidirectionally pass each others concept of "current time".  IIRC
> (its
> 
>> been 25 years since I worked on NTP at this level) you *should* be
> 
>> able to get a fairly accurate clock delta between each end, and then
> 
> 
>> use that info and time stamps in the data stream to compute OWD's.
> 
>> You need to put 4 time stamps in the packet, and with that you can
> 
>> compute "offset".
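The four-timestamp bookkeeping RWG recalls is the classic NTP on-wire calculation (RFC 5905); a minimal sketch:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP on-wire calculation (RFC 5905).

    t1 = client transmit, t2 = server receive,
    t3 = server transmit, t4 = client receive,
    each stamped by its own (unsynchronized) clock.
    Returns (offset, delay): the estimated server clock offset relative
    to the client, and the round-trip path delay. The offset estimate is
    exact only when the path is symmetric -- the asymmetry ends up as
    hidden error, which is exactly why OWD != RTT/2 in general."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

For example, with 10 ms symmetric one-way delays and a server clock 5 s ahead, t1=0.00, t2=5.01, t3=5.02, t4=0.03 yields an offset of 5.0 s and a delay of 20 ms.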
> 
>> 
> 
>>> [RR] For this to work at a reasonable level of accuracy, the
> 
>> timestamping circuits on both ends need to be deterministic and
> 
>> repeatable as I recall. Any uncertainty in that process adds to
> 
>> synchronization errors/uncertainties.
> 
>> 
> 
>>> 
> 
>> 
> 
>>>       [SM] Nice idea. I would guess that all timeslot based access
> 
>> technologies (so starlink, docsis, GPON, LTE?) all distribute "high
> 
>> quality time" carefully to the "modems", so maybe all that would be
> 
>> needed is to expose that high quality time to the LAN side of those
> 
>> modems, dressed up as NTP server?
> 
>> 
> 
>>> [RR] It's not that simple!  Distributing "high-quality time", i.e.
> 
>> "synchronizing all clocks" does not solve the communication problem
> in
> 
>> synchronous slotted MAC/PHYs!
> 
>> 
> 
>>       [SM] I happily believe you, but the same idea of "time slot"
> 
>> needs to be shared by all nodes, no? So the clocks need to be
> 
>> reasonably similar rate, aka synchronized (see below).
> 
>> 
> 
>>>  All the technologies you mentioned above are essentially P2P, not
> 
>> intended for broadcast.  Point is, there is a point controller (aka
> 
>> PoC) often called a base station (eNodeB, gNodeB, …) that actually
> 
> 
>> "controls everything that is necessary to control" at the UE
> including
> 
>> time, frequency and sampling time offsets, and these are critical to
> 
> 
>> get right if you want to communicate, and they are ALL subject to
> the
> 
>> laws of physics (cf. the speed of light)! Turns out that what is
> 
>> necessary for the system to function anywhere near capacity, is for
> 
>> all the clocks governing transmissions from the UEs to be
> 
>> "unsynchronized" such that all the UE transmissions arrive at the
> PoC
> 
>> at the same (prescribed) time!
> 
>> 
> 
>>       [SM] Fair enough. I would call clocks that are "in sync"
> albeit
> 
>> with individual offsets as synchronized, but I am a layman and that
> 
>> might sound offensively wrong to experts in the field. But even
> 
>> without the naming my point is that all systems that depend on some
> 
>> idea of shared time-base are halfway there of exposing that time to
> 
>> end users, by translating it into an NTP time source at the modem.
> 
>> 
> 
>>> For some technologies, in particular 5G!, these considerations are
> 
>> ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if
> you
> 
>> don't believe me! :-)
> 
>> 
> 
>>       [SM] Far be it from me not to believe you, so thanks for the
> 
>> pointers. Yet, I still think that unless different nodes of a shared
> 
> 
>> segment move at significantly different speeds, that there should be
> a
> 
>> common "tick-duration" for all clocks even if each clock runs at an
> 
>> offset... (I naively would try to implement something like that by
> 
>> trying to fully synchronize clocks and maintain a local offset value
> 
> 
>> to convert from "absolute" time to "network" time, but likely
> because
> 
>> coming from the outside I am blissfully unaware of the detail
> 
>> challenges that need to be solved).
> 
>> 
> 
>> Regards & Thanks
> 
>> 
> 
>>       Sebastian
> 
>> 
> 
>>> 
> 
>> 
> 
>>> 
> 
>> 
> 
>>> >
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>> --trip-times
> 
>> 
> 
>>> >>> enable the measurement of end to end write to read latencies
> 
>> (client and server clocks must be synchronized)
> 
>> 
> 
>>> > [RWG] --clock-skew
> 
>> 
> 
>>> >     enable the measurement of the wall clock difference between
> 
>> sender and receiver
> 
>> 
> 
>>> >
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>    [SM] Sweet!
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >> Regards
> 
>> 
> 
>>> >>    Sebastian
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>> Bob
> 
>> 
> 
>>> >>>> I have many kvetches about the new latency under load tests
> 
>> being
> 
>> 
> 
>>> >>>> designed and distributed over the past year. I am delighted!
> 
>> that they
> 
>> 
> 
>>> >>>> are happening, but most really need third party evaluation,
> and
> 
>> 
> 
>> 
> 
>>> >>>> calibration, and a solid explanation of what network
> 
>> pathologies they
> 
>> 
> 
>>> >>>> do and don't cover. Also a RED team attitude towards them, as
> 
>> well as
> 
>> 
> 
>>> >>>> thinking hard about what you are not measuring (operations
> 
>> research).
> 
>> 
> 
>>> >>>> I actually rather love the new cloudflare speedtest, because
> it
> 
>> tests
> 
>> 
> 
>>> >>>> a single TCP connection, rather than dozens, and at the same
> 
>> time folk
> 
>> 
> 
>>> >>>> are complaining that it doesn't find the actual "speed!".
> 
>> yet... the
> 
>> 
> 
>>> >>>> test itself more closely emulates a user experience than
> 
>> speedtest.net
> 
>> 
> 
>>> >>>> does. I am personally pretty convinced that the fewer numbers
> 
>> of flows
> 
>> 
> 
>>> >>>> that a web page opens improves the likelihood of a good user
> 
>> 
> 
>>> >>>> experience, but lack data on it.
> 
>> 
> 
>>> >>>> To try to tackle the evaluation and calibration part, I've
> 
>> reached out
> 
>> 
> 
>>> >>>> to all the new test designers in the hope that we could get
> 
>> together
> 
>> 
> 
>>> >>>> and produce a report of what each new test is actually doing.
> 
>> I've
> 
>> 
> 
>>> >>>> tweeted, linked in, emailed, and spammed every measurement
> list
> 
>> I know
> 
>> 
> 
>>> >>>> of, and only to some response, please reach out to other test
> 
>> designer
> 
>> 
> 
>>> >>>> folks and have them join the rpm email list?
> 
>> 
> 
>>> >>>> My principal kvetches in the new tests so far are:
> 
>> 
> 
>>> >>>> 0) None of the tests last long enough.
> 
>> 
> 
>>> >>>> Ideally there should be a mode where they at least run to
> "time
> 
>> of
> 
>> 
> 
>>> >>>> first loss", or periodically, just run longer than the
> 
>> 
> 
>>> >>>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be
> 
>> dragons
> 
>> 
> 
>>> >>>> there! It's really bad science to optimize the internet for 20
> 
> 
>> 
> 
>>> >>>> seconds. It's like optimizing a car, to handle well, for just
> 
>> 20
> 
>> 
> 
>>> >>>> seconds.
> 
>> 
> 
>>> >>>> 1) Not testing up + down + ping at the same time
> 
>> 
> 
>>> >>>> None of the new tests actually test the same thing that the
> 
>> infamous
> 
>> 
> 
>>> >>>> rrul test does - all the others still test up, then down, and
> 
>> ping. It
> 
>> 
> 
>>> >>>> was/remains my hope that the simpler parts of the flent test
> 
>> suite -
> 
>> 
> 
>>> >>>> such as the tcp_up_squarewave tests, the rrul test, and the
> 
>> rtt_fair
> 
>> 
> 
>>> >>>> tests would provide calibration to the test designers.
> 
>> 
> 
>>> >>>> we've got zillions of flent results in the archive published
> 
>> here:
> 
>> 
> 
>>> >>>> https://blog.cerowrt.org/post/found_in_flent/
> 
>> 
> 
>>> >>>> ps. Misinformation about iperf 2 impacts my ability to do
> this.
> 
>> 
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>>> The new tests have all added up + ping and down + ping, but
> not
> 
>> up +
> 
>> 
> 
>>> >>>> down + ping. Why??
> 
>> 
> 
>>> >>>> The behaviors of what happens in that case are really
> 
>> non-intuitive, I
> 
>> 
> 
>>> >>>> know, but... it's just one more phase to add to any one of
> 
>> those new
> 
>> 
> 
>>> >>>> tests. I'd be deliriously happy if someone(s) new to the field
> 
> 
>> 
> 
>>> >>>> started doing that, even optionally, and boggled at how it
> 
>> defeated
> 
>> 
> 
>>> >>>> their assumptions.
> 
>> 
> 
>>> >>>> Among other things that would show...
> 
>> 
> 
>>> >>>> It's the home router industry's dirty secret than darn few
> 
>> "gigabit"
> 
>> 
> 
>>> >>>> home routers can actually forward in both directions at a
> 
>> gigabit. I'd
> 
>> 
> 
>>> >>>> like to smash that perception thoroughly, but given our
> 
>> starting point
> 
>> 
> 
>>> >>>> is a gigabit router was a "gigabit switch" - and historically
> 
>> been
> 
>> 
> 
>>> >>>> something that couldn't even forward at 200Mbit - we have a
> 
>> long way
> 
>> 
> 
>>> >>>> to go there.
> 
>> 
> 
>>> >>>> Only in the past year have non-x86 home routers appeared that
> 
>> could
> 
>> 
> 
>>> >>>> actually do a gbit in both directions.
> 
>> 
> 
>>> >>>> 2) Few are actually testing within-stream latency
> 
>> 
> 
>>> >>>> Apple's rpm project is making a stab in that direction. It
> 
>> looks
> 
>> 
> 
>>> >>>> highly likely, that with a little more work, crusader and
> 
>> 
> 
>>> >>>> go-responsiveness can finally start sampling the tcp RTT, loss
> 
> 
>> and
> 
>> 
> 
>>> >>>> markings, more directly. As for the rest... sampling TCP_INFO
> 
>> on
> 
>> 
> 
>>> >>>> windows, and Linux, at least, always appeared simple to me,
> but
> 
>> I'm
> 
>> 
> 
>>> >>>> discovering how hard it is by delving deep into the rust
> behind
> 
>> 
> 
>> 
> 
>>> >>>> crusader.
> 
>> 
> 
>>> >>>> the goresponsiveness thing is also IMHO running WAY too many
> 
>> streams
> 
>> 
> 
>>> >>>> at the same time, I guess motivated by an attempt to have the
> 
>> test
> 
>> 
> 
>>> >>>> complete quickly?
> 
>> 
> 
>>> >>>> B) To try and tackle the validation problem:
> 
>> 
> 
>>> >>>
> 
>> 
> 
>>> >>>> In the libreqos.io project we've established a testbed where
> 
>> tests can
> 
>> 
> 
>>> >>>> be plunked through various ISP plan network emulations. It's
> 
>> here:
> 
>> 
> 
>>> >>>> https://payne.taht.net (run bandwidth test for what's
> currently
> 
>> hooked
> 
>> 
> 
>>> >>>> up)
> 
>> 
> 
>>> >>>> We could rather use an AS number and at least a ipv4/24 and
> 
>> ipv6/48 to
> 
>> 
> 
>>> >>>> leverage with that, so I don't have to nat the various
> 
>> emulations.
> 
>> 
> 
>>> >>>> (and funding, anyone got funding?) Or, as the code is GPLv2
> 
>> licensed,
> 
>> 
> 
>>> >>>> to see more test designers setup a testbed like this to
> 
>> calibrate
> 
>> 
> 
>>> >>>> their own stuff.
> 
>> 
> 
>>> >>>> Presently we're able to test:
> 
>> 
> 
>>> >>>> flent
> 
>> 
> 
>>> >>>> netperf
> 
>> 
> 
>>> >>>> iperf2
> 
>> 
> 
>>> >>>> iperf3
> 
>> 
> 
>>> >>>> speedtest-cli
> 
>> 
> 
>>> >>>> crusader
> 
>> 
> 
>>> >>>> the broadband forum udp based test:
> 
>> 
> 
>>> >>>> https://github.com/BroadbandForum/obudpst
> 
>> 
> 
>>> >>>> trexx
> 
>> 
> 
>>> >>>> There's also a virtual machine setup that we can remotely
> drive
> 
>> a web
> 
>> 
> 
>>> >>>> browser from (but I didn't want to nat the results to the
> 
>> world) to
> 
>> 
> 
>>> >>>> test other web services.
> 
>> 
> 
>>> >>>> _______________________________________________
> 
>> 
> 
>>> >>>> Rpm mailing list
> 
>> 
> 
>>> >>>> Rpm@lists.bufferbloat.net
> 
>> 
> 
>>> >>>> https://lists.bufferbloat.net/listinfo/rpm
> 
>> 
> 
>>> >>> _______________________________________________
> 
>> 
> 
>>> >>> Starlink mailing list
> 
>> 
> 
>>> >>> Starlink@lists.bufferbloat.net
> 
>> 
> 
>>> >>> https://lists.bufferbloat.net/listinfo/starlink
> 
>> 
> 
>>> >>
> 
>> 
> 
>>> >> _______________________________________________
> 
>> 
> 
>>> >> Starlink mailing list
> 
>> 
> 
>>> >> Starlink@lists.bufferbloat.net
> 
>> 
> 
>>> >> https://lists.bufferbloat.net/listinfo/starlink
> 
>> 
> 
>>> 
> 
>> 
> 
>>> _______________________________________________
> 
>> 
> 
>>> Starlink mailing list
> 
>> 
> 
>>> Starlink@lists.bufferbloat.net
> 
>> 
> 
>>> https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA
  2023-01-09 19:56           ` dan
  2023-01-09 21:00             ` rjmcmahon
@ 2023-03-13 10:02             ` Sebastian Moeller
  2023-03-13 15:08               ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] " Jeremy Austin
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 10:02 UTC (permalink / raw)
  To: dan
  Cc: rjmcmahon, Dave Taht via Starlink, Rpm, Livingood, Jason,
	libreqos, bloat

Hi Dan,


> On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
> 
> I'm not offering a complete solution here....  I'm not so keen on
> speed tests.  It's akin to testing your car's performance by flooring
> it 'til you hit the governor and hard braking 'til you stop *while in
> traffic*.   That doesn't demonstrate the utility of the car.
> 
> Data is already being transferred, let's measure that.  

	[SM] For a home link that means you need to measure on the router, as end-hosts will only ever see the fraction of traffic they sink/source themselves...

>  Doing some
> routine simple tests intentionally during low, mid, high congestion
> periods to see how the service is actually performing for the end
> user.

	[SM] No ISP I know of publishes which periods are low-, mid-, or high-congestion, so end-users will need to make some assumptions here (e.g. by looking at per-day load graphs of big traffic exchanges like DE-CIX: https://www.de-cix.net/en/locations/frankfurt/statistics )


>  You don't need to generate the traffic on a link to measure how
> much traffic a link can handle.

	[SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.


>  And determining congestion on a
> service in a fairly rudimentary way would be frequent latency tests to
> 'known good' service ie high capacity services that are unlikely to
> experience congestion.

	[SM] Yes, that sort of works, see e.g. https://github.com/lynxthecat/cake-autorate for a home-made approach by non-networking people to estimate whether the immediate load is at capacity or not, and to use that information to control a traffic shaper to "bound" latency under load.
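The control idea can be caricatured in a few lines (hypothetical names and thresholds; the real cake-autorate is considerably more careful about load classification, reflector health, and hysteresis):

```python
def adjust_shaper_rate(rate_kbps, load_fraction, delta_rtt_ms,
                       floor_kbps=5000, ceiling_kbps=50000,
                       bloat_thresh_ms=30.0):
    """One control step for a cake shaper on a variable-rate link:
    back off multiplicatively when latency spikes under load, probe
    upward gently while the link is busy but still responsive."""
    if delta_rtt_ms > bloat_thresh_ms:
        rate_kbps *= 0.9           # bufferbloat detected: back off
    elif load_fraction > 0.8:
        rate_kbps *= 1.04          # busy and responsive: probe upward
    return max(floor_kbps, min(ceiling_kbps, rate_kbps))
```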

> 
> There are few use cases that match a 2-minute speed test outside of
> 'wonder what my internet connection can do'.

	[SM] I would have agreed some months ago, but ever since the kids started to play more modern games than Tetris/Minecraft, long-duration multi-flow downloads have become a staple in our networking. OK, no one really cares about the intra-flow latency of these download flows, but we do care that the rest of our traffic stays responsive.


>  And in those few use
> cases such as a big file download, a routine latency test is a really
> great measure of the quality of a service.  Sure, troubleshooting by
> the ISP might include a full bore multi-minute speed test but that's
> really not useful for the consumer.

	[SM] I mildly disagree: if it is informative for the ISP's technicians, it is also informative for the end-customers; not all ISPs are so enlightened that they pro-actively solve issues for their customers (but some are!), so occasionally it helps to be able to do such diagnostic measurements oneself.


> 
> Further, exposing this data to the end users, IMO, is likely better as
> a chart of congestion and flow durations and some scoring.  ie, slice
> out 7-8pm, during this segment you were able to pull 427Mbps without
> congestion, netflix or streaming service use approximately 6% of
> capacity.  Your service was busy for 100% of this time ( likely
> measuring buffer bloat ).    Expressed as a pretty chart with consumer
> friendly language.

	[SM] Sounds nice.


> When you guys are talking about per segment latency testing, you're
> really talking about metrics for operators to be concerned with, not
> end users.  It's useless information for them.

	[SM] Well, is it really useless? If I know the to-be-expected latency-under-load increase, I can eyeball e.g. how far away a server can be while I still interact with it in a "snappy" way.


>  I had a woman about 2
> months ago complain about her frame rates because her internet
> connection was 15 emm ess's and that was terrible and I needed to fix
> it.  (slow computer was the problem, obviously) but that data from
> speedtest.net didn't actually help her at all, it just confused her.

	[SM] The solution to a lack of knowledge, IMHO, should be to teach people what they need to know, not to hide information that could be mis-interpreted (because that applies to all information).


> 
> Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
> pointless.  

	[SM] I would argue that this is likely a decent period to establish baseline values for uncongested conditions (that is uncongested by other traffic sources than the measuring network).

> Running speed tests during busy hours is a little bit
> harmful overall considering it's pushing into oversells on every ISP.

	[SM] Oversell, or under-provisioning, IMHO is a viable technique to reduce costs, but it is not an excuse for short-changing one's customers; if an ISP advertises and sells X Mbps, it needs to be willing to actually deliver, independent of how "active" a given shared segment is. By this I do NOT mean that the contracted speed needs to be available 100% of the time, but that there is a reasonably high chance of getting close to the contracted rates. If that means increasing prices to match cost targets, reducing the maximally advertised contracted rates, or going to a completely different kind of contract (say, 1/Nth of a gigabit link with equitable sharing among all N users on the link), then so be it. Under-provisioning is fine as an optimization method to increase profitability, but IMHO it is no excuse for not delivering on one's contract.

> I could talk endlessly about how useless speed tests are to end user experience.

	[SM] My take on this is that a satisfied customer is unlikely to make a big fuss. And delivering great responsiveness is a great way for an ISP to make end-customers care less about achievable throughput. Yes, some will still care, e.g. gamers that insist on loading multi-gigabyte updates just before playing instead of overnight (a strategy I have some sympathy for: shutting down power consumers fully overnight instead of wasting watts on "stand-by" of some sort is a more reliable way to save power/cost).

Regards
	Sebastian


> 
> 
> On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
> <libreqos@lists.bufferbloat.net> wrote:
>> 
>> User based, long duration tests seem fundamentally flawed. QoE for users
>> is driven by user expectations. And if a user won't wait on a long test
>> they for sure aren't going to wait minutes for a web page download. If
>> it's a long duration use case, e.g. a file download, then latency isn't
>> typically driving QoE.
>> 
>> Note: Even for internal tests, we try to keep our automated tests down to
>> 2 seconds. There are reasons to test for minutes (things like phy cals
>> in our chips) but it's more of the exception than the rule.
>> 
>> Bob
>>>> 0) None of the tests last long enough.
>>> 
>>> The user-initiated ones tend to be shorter - likely because the
>>> average user does not want to wait several minutes for a test to
>>> complete. But IMO this is where a test platform like SamKnows, Ookla's
>>> embedded client, NetMicroscope, and others can come in - since they
>>> run in the background on some randomized schedule w/o user
>>> intervention. Thus, the user's time-sensitivity is no longer a factor
>>> and a longer duration test can be performed.
>>> 
>>>> 1) Not testing up + down + ping at the same time
>>> 
>>> You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
>>> IPPM...
>>> 
>>> JL
>>> 
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> LibreQoS mailing list
>> LibreQoS@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/libreqos
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 10:02             ` Sebastian Moeller
@ 2023-03-13 15:08               ` Jeremy Austin
  2023-03-13 15:50                 ` Sebastian Moeller
  2023-03-13 16:04                 ` [LibreQoS] UnderBloat on fiber and wisps Dave Taht
  0 siblings, 2 replies; 183+ messages in thread
From: Jeremy Austin @ 2023-03-13 15:08 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

[-- Attachment #1: Type: text/plain, Size: 1275 bytes --]

On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <
starlink@lists.bufferbloat.net> wrote:

> Hi Dan,
>
>
> > On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
> >
> >  You don't need to generate the traffic on a link to measure how
> > much traffic a link can handle.
>
>         [SM] OK, I will bite, how do you measure achievable throughput
> without actually generating it? Packet-pair techniques are notoriously
> imprecise and have funny failure modes.
>

I am also looking forward to the full answer to this question. While one
can infer when a link is saturated by mapping network topology onto latency
sampling, it can have on the order of 30% error, given that there are
multiple causes of increased latency beyond proximal congestion.

A question I commonly ask network engineers or academics is "How can I
accurately distinguish a constraint in supply from a reduction in demand?"

-- 
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More:
*https://preseem.com/stay-connected/* <https://preseem.com/stay-connected/>

[-- Attachment #2: Type: text/html, Size: 2685 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 15:08               ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] " Jeremy Austin
@ 2023-03-13 15:50                 ` Sebastian Moeller
  2023-03-13 16:06                   ` [LibreQoS] [Bloat] " Dave Taht
  2023-03-13 16:12                   ` [LibreQoS] " dan
  2023-03-13 16:04                 ` [LibreQoS] UnderBloat on fiber and wisps Dave Taht
  1 sibling, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 15:50 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: dan, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Jeremy,

> On Mar 13, 2023, at 16:08, Jeremy Austin <jeremy@aterlo.com> wrote:
> 
> 
> 
> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
> Hi Dan,
> 
> 
> > On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
> >
> >  You don't need to generate the traffic on a link to measure how
> > much traffic a link can handle.
> 
>         [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
> 
> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.

	So in the "autorates", a family of automatic tracking/setting methods for a cake shaper (developed in friendly competition with each other), we use active measurements of RTT/OWD increases; there we try to vary our set of reflectors and then take a vote over that set to decide "is it cake^W congestion". That helps to weed out a few alternative causes of apparent congestion (like distal congestion to individual reflectors). But that does not answer the tricky question of how to estimate capacity without actually creating a sufficient load (and doubly so on variable rate links).


> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"

	Good question. The autorates cannot, but then they do not need to, as they basically work by upping the shaper limit in correlation with the offered load until sufficiently increased delay is detected, at which point they reduce the shaper rates. A reduction in demand will lead to a reduction in load and bufferbloat... so the shaper is adapted based on the demand, aka "give the user as much throughput as can be done within the user's configured delay threshold, but not more"...

If we had a reliable method to "measure how much traffic a link can handle." without having to track load and delay that would save us a ton of work ;)
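A toy sketch of that load-and-delay tracking loop (names, thresholds, and step sizes here are all invented for illustration; the real autorate scripts such as cake-autorate differ in many details):

```python
# Hedged sketch of the autorate idea: probe the cake shaper rate upward
# while the link is loaded and delay stays low, back off when a majority
# of ping reflectors report an RTT increase. Purely illustrative.

def vote_congested(rtt_deltas_ms, threshold_ms=15.0):
    """Majority vote over reflectors: congestion on *our* link should show
    up towards most reflectors at once; distal congestion only at a few."""
    votes = sum(1 for d in rtt_deltas_ms if d > threshold_ms)
    return votes > len(rtt_deltas_ms) / 2

def adjust_shaper(rate_mbps, load_fraction, rtt_deltas_ms,
                  floor=5.0, ceiling=1000.0):
    if vote_congested(rtt_deltas_ms):
        rate_mbps *= 0.9      # bufferbloat detected: reduce the shaper rate
    elif load_fraction > 0.8:
        rate_mbps *= 1.05     # heavily loaded but still snappy: probe upward
    return max(floor, min(ceiling, rate_mbps))
```

Note the loop never learns the true capacity directly; it only tracks where delay starts to build, which is exactly the limitation discussed above.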

Regards
	Sebastian


> 
> -- 
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
> 
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy@preseem.com
> 
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/


^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] UnderBloat on fiber and wisps
  2023-03-13 15:08               ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] " Jeremy Austin
  2023-03-13 15:50                 ` Sebastian Moeller
@ 2023-03-13 16:04                 ` Dave Taht
  2023-03-13 16:09                   ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-13 16:04 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, libreqos, Rpm, bloat

On Mon, Mar 13, 2023 at 8:08 AM Jeremy Austin via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
>
>
> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> Hi Dan,
>>
>>
>> > On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
>> >
>> >  You don't need to generate the traffic on a link to measure how
>> > much traffic a link can handle.
>>
>>         [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
>
>
> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>
> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"

This is an insanely good point. In looking over the wisp
configurations I have to date, many are using SFQ, which has a default
packet limit of 128 packets. Many are using SFQ with an *even shorter*
packet limit, which looks good on speedtests that open many flows
(McKeown's BDP/sqrt(flows) buffer-sizing result), but is *lousy* for
allowing a single flow to achieve full rate (the more common case for
end-user QoE).

I have in general tried to get mikrotik folk at least, to switch away
from fifos, red, and sfq towards fq_codel or cake at the defaults
(good to 10Gbit), in part due to this.

I think SFQ at 128 packets really starts tapping out on most networks
at around the 200Mbit level, and above 400 really, really does not have
enough queue, so the net result is that wisps attempting to provide
higher levels of service are not actually providing them in the real
world: an accidental constraint in supply.

I have a blog piece, long in draft, called "underbloat", talking to
this. Also I have now seen multiple fiber installs that had a
reasonable 50ms FIFO buffer at 100Mbit, but when upgraded to 1Gbit
left it at 5ms, which has bad side effects for all traffic.

To me it also looks like at least some ubnt radios are FQ'd but underbuffered.

> --
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
>
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy@preseem.com
>
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 15:50                 ` Sebastian Moeller
@ 2023-03-13 16:06                   ` Dave Taht
  2023-03-13 16:19                     ` Sebastian Moeller
  2023-03-13 16:12                   ` [LibreQoS] " dan
  1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-13 16:06 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Jeremy Austin, Dave Taht via Starlink, dan, libreqos, Rpm,
	rjmcmahon, bloat

On Mon, Mar 13, 2023 at 8:50 AM Sebastian Moeller via Bloat
<bloat@lists.bufferbloat.net> wrote:
>
> Hi Jeremy,
>
> > On Mar 13, 2023, at 16:08, Jeremy Austin <jeremy@aterlo.com> wrote:
> >
> >
> >
> > On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
> > Hi Dan,
> >
> >
> > > On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
> > >
> > >  You don't need to generate the traffic on a link to measure how
> > > much traffic a link can handle.
> >
> >         [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
> >
> > I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>
>         So in the "autorates", a family of automatic tracking/setting methods for a cake shaper (developed in friendly competition with each other), we use active measurements of RTT/OWD increases; there we try to vary our set of reflectors and then take a vote over that set to decide "is it cake^W congestion". That helps to weed out a few alternative causes of apparent congestion (like distal congestion to individual reflectors). But that does not answer the tricky question of how to estimate capacity without actually creating a sufficient load (and doubly so on variable rate links).
>
>
> > A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"
>
>         Good question. The autorates cannot, but then they do not need to, as they basically work by upping the shaper limit in correlation with the offered load until sufficiently increased delay is detected, at which point they reduce the shaper rates. A reduction in demand will lead to a reduction in load and bufferbloat... so the shaper is adapted based on the demand, aka "give the user as much throughput as can be done within the user's configured delay threshold, but not more"...
>
> If we had a reliable method to "measure how much traffic a link can handle." without having to track load and delay that would save us a ton of work ;)

My hope has generally been that a public API for how much bandwidth the
ISP can reliably provide at that moment would arise. There is one for
at least one PPPoE server, and I thought about trying to define one for
DHCP and DHCPv6, but a mere GET request to some kind of JSON that gave
up/down/link type would be nice.
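Nothing like this is standardized; purely as an illustration, such an endpoint might return something of this (invented) shape:

```python
import json

# Invented response shape for the wished-for capacity API; none of
# these field names exist in any standard or shipping product.
response = json.loads("""
{
  "down_mbps": 940,
  "up_mbps": 40,
  "link_type": "pppoe-vdsl2",
  "measured_at": "2023-03-13T16:06:00Z"
}
""")

# an autorate-style shaper could seed its ceiling from such an answer
ceiling_mbps = response["down_mbps"]
print(ceiling_mbps)
```

A DHCP option or PPPoE tag carrying the same five-or-so numbers would serve equally well; the JSON endpoint just has the lowest barrier to entry.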


>
> Regards
>         Sebastian
>
>
> >
> > --
> > --
> > Jeremy Austin
> > Sr. Product Manager
> > Preseem | Aterlo Networks
> > preseem.com
> >
> > Book a Call: https://app.hubspot.com/meetings/jeremy548
> > Phone: 1-833-733-7336 x718
> > Email: jeremy@preseem.com
> >
> > Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] UnderBloat on fiber and wisps
  2023-03-13 16:04                 ` [LibreQoS] UnderBloat on fiber and wisps Dave Taht
@ 2023-03-13 16:09                   ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 16:09 UTC (permalink / raw)
  To: Dave Täht
  Cc: Jeremy Austin, Dave Taht via Starlink, dan, libreqos, Rpm, bloat



> On Mar 13, 2023, at 17:04, Dave Taht <dave.taht@gmail.com> wrote:
> 
> On Mon, Mar 13, 2023 at 8:08 AM Jeremy Austin via Rpm
> <rpm@lists.bufferbloat.net> wrote:
>> 
>> 
>> 
>> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> 
>>> Hi Dan,
>>> 
>>> 
>>>> On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
>>>> 
>>>> You don't need to generate the traffic on a link to measure how
>>>> much traffic a link can handle.
>>> 
>>>        [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
>> 
>> 
>> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>> 
>> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"
> 
> This is an insanely good point. In looking over the wisp
> configurations I have to date, many are using SFQ which has a default
> packet limit of 128 packets. Many are using SFQ with a *even shorter*
> packet limit, which looks good on speedtests which open many flows
> (keown's sqrt(flows) for bdp), but is *lousy* for allowing a single
> flow to achieve full rate (the more common case for end-user QoE).
> 
> I have in general tried to get mikrotik folk at least, to switch away
> from fifos, red, and sfq to wards fq_codel or cake at the defaults
> (good to 10Gbit) in part, due to this.
> 
> I think SFQ 128 really starts tapping out on most networks at around
> the 200Mbit level, and above 400, really, really does not have enough
> queue, so the net result is that wisps attempting to provide higher
> levels of service are not actually providing it in the real world, an
> accidental constraint in supply.
> 
> I have a blog piece, long in draft, called  "underbloat", talking to
> this. Also I have no seen multiple fiber installs that had had a
> reasonable 50ms FIFO buffer for 100Mbit, but when upgraded to 1gbit,
> left it at 5ms, which has bad sideffects for all traffic.
> 
> To me it looks also that at least some ubnt radios are FQd and underbuffered.

	This is why I tend to describe bufferbloat as a problem of over-sized and under-managed buffers, hoping to imply that reducing the buffer size is not the only or even the best remedy here. Once properly managed, large buffers do no harm (except wasting memory most of the time, but since that buys some resilience it is not that bad).

Regards
	Sebastian

P.S.: This is a bit of a pendulum thing, where one simplistic "solution" (too-large buffers) gets replaced with another simplistic solution (too-small buffers) ;)



> 
>> --
>> --
>> Jeremy Austin
>> Sr. Product Manager
>> Preseem | Aterlo Networks
>> preseem.com
>> 
>> Book a Call: https://app.hubspot.com/meetings/jeremy548
>> Phone: 1-833-733-7336 x718
>> Email: jeremy@preseem.com
>> 
>> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> 
> 
> 
> -- 
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 15:50                 ` Sebastian Moeller
  2023-03-13 16:06                   ` [LibreQoS] [Bloat] " Dave Taht
@ 2023-03-13 16:12                   ` dan
  2023-03-13 16:36                     ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: dan @ 2023-03-13 16:12 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

" [SM] For a home link that means you need to measure on the router,
as end-hosts will only ever see the fraction of traffic they
sink/source themselves..."
&
 [SM] OK, I will bite, how do you measure achievable throughput
without actually generating it? Packet-pair techniques are notoriously
imprecise and have funny failure modes.

High-water mark on their router.  High-water mark on our CPE, on our
shaper, etc.  Modern services are very happy to burst traffic.  Nearly
every customer we have will hit the top of their service plan each
day, even if only briefly and even if their average usage is quite
low.  Customers on 600Mbps mmwave services have a usage chart that is
flat lines and ~600Mbps blips.
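That peak-tracking idea can be sketched from periodic byte-counter samples (the counter values below are invented; on Linux one might read e.g. /sys/class/net/<if>/statistics/tx_bytes):

```python
# Sketch of the "high water mark" approach: sample a cumulative
# interface octet counter at a fixed interval and keep the peak rate.

def peak_rate_mbps(samples, interval_s):
    """samples: cumulative byte counters taken interval_s apart."""
    peak = 0.0
    for prev, cur in zip(samples, samples[1:]):
        rate = (cur - prev) * 8 / interval_s / 1e6
        peak = max(peak, rate)
    return peak

# a mostly-idle day with one brief burst up to the plan rate
counters = [0, 1_000_000, 2_000_000, 77_000_000, 78_000_000]
print(peak_rate_mbps(counters, 1.0))  # the burst dominates: 600.0 Mbit/s
```

Of course this only shows that the link *delivered* that rate at some point, not what it could deliver right now, which is Jeremy's supply-versus-demand question again.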

"  [SM] No ISP I know of publishes which periods are low, mid, high
congestion so end-users will need to make some assumptions here (e.g.
by looking at per day load graphs of big traffic exchanges like DE-CIX
here https://www.de-cix.net/en/locations/frankfurt/statistics )"

You read this wrong.  Consumer routers run their daily speed tests in
the middle of the night.  Eero at 3am, for example; Netgear 2:30-4:30am.
THAT is a bad measurement of the experience the consumer will have.
It's essentially useless data for the consumer unless they are
scheduling their downloads at 3am.  Only a speed test during use hours
is useful, and that's also basically destructive unless a shaper makes
sure it isn't.

re per-segment latency tests: "[SM] Well is it really useless? If I
know the to-be-expected latency-under-load increase I can eyeball
e.g. how far away a server I can still interact with in a "snappy"
way."

Yes, it's completely useless to the customer.  Only their service
latency matters.  My (ISP) latency from hop 2 to 3 on the network has
zero value to them, only the aggregate.  Per-segment latency testing
is ONLY valuable to the service providers, for us to troubleshoot,
repair, and upgrade.  Even if a consumer does a traceroute and gets
that 'one way' testing, it's irrelevant, as they can't do anything
about latency at hop 8 etc., and often they don't actually know which
hops are which because they're hidden in a tunnel/MPLS/etc.



On Mon, Mar 13, 2023 at 9:50 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Jeremy,
>
> > On Mar 13, 2023, at 16:08, Jeremy Austin <jeremy@aterlo.com> wrote:
> >
> >
> >
> > On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
> > Hi Dan,
> >
> >
> > > On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
> > >
> > >  You don't need to generate the traffic on a link to measure how
> > > much traffic a link can handle.
> >
> >         [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
> >
> > I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>
>         So in the "autorates", a family of automatic tracking/setting methods for a cake shaper (developed in friendly competition with each other), we use active measurements of RTT/OWD increases; there we try to vary our set of reflectors and then take a vote over that set to decide "is it cake^W congestion". That helps to weed out a few alternative causes of apparent congestion (like distal congestion to individual reflectors). But that does not answer the tricky question of how to estimate capacity without actually creating a sufficient load (and doubly so on variable rate links).
>
>
> > A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"
>
>         Good question. The autorates cannot, but then they do not need to, as they basically work by upping the shaper limit in correlation with the offered load until sufficiently increased delay is detected, at which point they reduce the shaper rates. A reduction in demand will lead to a reduction in load and bufferbloat... so the shaper is adapted based on the demand, aka "give the user as much throughput as can be done within the user's configured delay threshold, but not more"...
>
> If we had a reliable method to "measure how much traffic a link can handle." without having to track load and delay that would save us a ton of work ;)
>
> Regards
>         Sebastian
>
>
> >
> > --
> > --
> > Jeremy Austin
> > Sr. Product Manager
> > Preseem | Aterlo Networks
> > preseem.com
> >
> > Book a Call: https://app.hubspot.com/meetings/jeremy548
> > Phone: 1-833-733-7336 x718
> > Email: jeremy@preseem.com
> >
> > Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 16:06                   ` [LibreQoS] [Bloat] " Dave Taht
@ 2023-03-13 16:19                     ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 16:19 UTC (permalink / raw)
  To: Dave Täht
  Cc: Jeremy Austin, Dave Taht via Starlink, dan, libreqos, Rpm,
	rjmcmahon, bloat

Hi Dave,

> On Mar 13, 2023, at 17:06, Dave Taht <dave.taht@gmail.com> wrote:
> 
> On Mon, Mar 13, 2023 at 8:50 AM Sebastian Moeller via Bloat
> <bloat@lists.bufferbloat.net> wrote:
>> 
>> Hi Jeremy,
>> 
>>> On Mar 13, 2023, at 16:08, Jeremy Austin <jeremy@aterlo.com> wrote:
>>> 
>>> 
>>> 
>>> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> Hi Dan,
>>> 
>>> 
>>>> On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
>>>> 
>>>> You don't need to generate the traffic on a link to measure how
>>>> much traffic a link can handle.
>>> 
>>>        [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
>>> 
>>> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>> 
>>        So in the "autorates", a family of automatic tracking/setting methods for a cake shaper (developed in friendly competition with each other), we use active measurements of RTT/OWD increases; there we try to vary our set of reflectors and then take a vote over that set to decide "is it cake^W congestion". That helps to weed out a few alternative causes of apparent congestion (like distal congestion to individual reflectors). But that does not answer the tricky question of how to estimate capacity without actually creating a sufficient load (and doubly so on variable rate links).
>> 
>> 
>>> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"
>> 
>>        Good question. The autorates cannot, but then they do not need to, as they basically work by upping the shaper limit in correlation with the offered load until sufficiently increased delay is detected, at which point they reduce the shaper rates. A reduction in demand will lead to a reduction in load and bufferbloat... so the shaper is adapted based on the demand, aka "give the user as much throughput as can be done within the user's configured delay threshold, but not more"...
>> 
>> If we had a reliable method to "measure how much traffic a link can handle." without having to track load and delay that would save us a ton of work ;)
> 
> My hope has generally been that a public API to how much bandwidth the
> ISP can reliabily provide at that moment would arise. There is one for
> at least one PPOe server, and I thought about trying to define one for
> dhcp and dhcpv6, but a mere get request to some kind of json that did
> up/down/link type would be nice.

	[SM] The incumbent telco over here (and one of its competitors) indeed encodes the rate the traffic shaper for an individual link is set to via the PPPoE AuthACK message field (both ISPs use a slightly different format, with one giving net rates and the other gross rates, but that can be dealt with). However this is not adjusted during load periods. So this still describes the most likely bottleneck link well (albeit only once per PPPoE session, so either every 24 hours or in the limit every 180 days with the incumbent) but does little for transient congestion of up- or downstream elements (in their defense, the incumbent mostly removed its old aggregation network between DSLAMs and PPPoE termination points, so there is little on that side that can get congested).
If this were a pony ranch, I would ask for:
downstream gross shaper rate
downstream per packet overhead
downstream MTU
downstream mpu

upstream gross shaper rate
upstream per packet overhead
upstream MTU
upstream mpu

the last three likely are identical for both directions, so we are essentially talking about 5 numbers, of which only two can be expected to fluctuate under load.
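For what it is worth, cake can already consume most of those numbers today on the egress side (ingress shaping additionally needs an IFB redirect, not shown); a sketch with example values and a hypothetical interface name:

```shell
# upstream: gross shaper rate, per-packet overhead (34 bytes, roughly
# PPPoE over VDSL2), and minimum packet unit fed straight into cake.
# "pppoe-wan" and all values here are examples, not a recommendation.
tc qdisc replace dev pppoe-wan root cake bandwidth 36Mbit overhead 34 mpu 64
```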

Now, a competent ISP would ask why its users want to know that, and then simply implement sufficiently competent AQM/traffic shaping on the download side ;) in addition to publishing these 5 numbers.

Regards
Sebastian


> 
> 
>> 
>> Regards
>>        Sebastian
>> 
>> 
>>> 
>>> --
>>> --
>>> Jeremy Austin
>>> Sr. Product Manager
>>> Preseem | Aterlo Networks
>>> preseem.com
>>> 
>>> Book a Call: https://app.hubspot.com/meetings/jeremy548
>>> Phone: 1-833-733-7336 x718
>>> Email: jeremy@preseem.com
>>> 
>>> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> 
> -- 
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 16:12                   ` [LibreQoS] " dan
@ 2023-03-13 16:36                     ` Sebastian Moeller
  2023-03-13 17:26                       ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 16:36 UTC (permalink / raw)
  To: dan
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Dan,


> On Mar 13, 2023, at 17:12, dan <dandenson@gmail.com> wrote:
> 
> " [SM] For a home link that means you need to measure on the router,
> as end-hosts will only ever see the fraction of traffic they
> sink/source themselves..."
> &
> [SM] OK, I will bite, how do you measure achievable throughput
> without actually generating it? Packet-pair techniques are notoriously
> imprecise and have funny failure modes.
> 
> High water mark on their router.  

	[SM] Nope, my router is connected to my (bridged) modem via gigabit ethernet; without a traffic shaper there is never going to be any noticeable water mark on the router side... sure, the modem will build up a queue, but alas it does not expose the length of that DSL queue to me... A high-water mark on my traffic-shaped router informs me about my shaper setting (which I already know, after all I set it) but little about the capacity of the bottleneck link. And we are still talking about the easy egress direction; in the download direction Jeremy's question applies: is the achieved throughput I measure limited by the link's capacity, or are there simply not enough packets available/sent to fill the pipe?

> Highwater mark on our CPE, on our
> shaper, etc.  Modern services are very happy to burst traffic.

	[SM] Yes, this is also one of the reasons why too little buffering is problematic; I like the Nichols/Jacobson analogy of buffers as shock (burst) absorbers.

>  Nearly
> every customer we have will hit the top of their service place each
> day, even if only briefly and even if their average usage is quite
> low.  Customers on 600Mbps mmwave services have a usage charge that is
> flat lines and ~600Mbps blips.

	[SM] Fully agree, most links are essentially idle most of the time, but that does not answer what instantaneous capacity is actually available, no?

> 
> "  [SM] No ISP I know of publishes which periods are low, mid, high
> congestion so end-users will need to make some assumptions here (e.g.
> by looking at per day load graphs of big traffic exchanges like DE-CIX
> here https://www.de-cix.net/en/locations/frankfurt/statistics )"
> 
> You read this wrong.  Consumer routers run their daily speeds tests in
> the middle of the night.

	[SM] So on my turris omnia I run a speedtest roughly every 2 hours, exactly so I get coverage through low- and high-demand epochs. The only consumer router I know of that does repeated tests is the IQrouter, which as far as I know schedules them regularly so it can adjust the traffic shaper to still deliver acceptable responsiveness even during peak hours.


>  Eero at 3am for example.  Netgear 230-430am.

	[SM] That sounds "special": not a useless data point per se, but of limited utility during normal usage times.

> THAT is a bad measurement of the experience the consumer will have.

	[SM] Sure, but it still gives a usable reference for "what is the best my ISP actually delivers", even if the odds are stacked in the ISP's favor.

> It's essentially useless data for the consumer unless they are
> scheduling their downloads at 3am.  Only a speed test during use hours
> is useful and that's also basically destructive unless a shaper makes
> sure it isn't.
> 
> re per segment latency tests " [SM] Well is it really useless? If I
> know the to be expected latency-under-load increase I can eye-ball
> e.h. how far away a server I can still interact with in a "snappy"
> way."
> 
> Yes it's completely useless to the customer.  only their service
> latency matters.

	[SM] There is no single "service latency"; it really depends on the specific network paths to the remote end and back. Unless you are talking about the latency over the access link only; there we have a single number, but one of limited utility.


>  My (ISP) latency from hop 2 to 3 on the network has
> zero value to them.  only the aggregate.  per segment latency testing
> is ONLY valuable to the service providers for us to troubleshoot,
> repair, and upgrade.  Even if a consumer does a traceroute and get's
> that 'one way' testing, it's irrelevant as they can't do anything
> about latency at hop 8 etc, and often they actually don't know which
> hops are which because they'll hidden in a tunnel/MPLS/etc.

	[SM] Yes, end-users can do little, but not nothing; e.g. one can often work around shitty peering by using a VPN to route one's packets into an AS that is well connected both with one's ISP and with one's remote ASs. And I accept your point about one-way testing: getting a remote site at the right location to do e.g. reverse traceroutes/mtrs is tricky (sometimes RIPE Atlas can help) to impossible (like my ISP, which does not offer even simple looking-glass servers at all).


> 
> 
> 
> On Mon, Mar 13, 2023 at 9:50 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>> Hi Jeremy,
>> 
>>> On Mar 13, 2023, at 16:08, Jeremy Austin <jeremy@aterlo.com> wrote:
>>> 
>>> 
>>> 
>>> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> Hi Dan,
>>> 
>>> 
>>>> On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:
>>>> 
>>>> You don't need to generate the traffic on a link to measure how
>>>> much traffic a link can handle.
>>> 
>>>        [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
>>> 
>>> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>> 
>>        So in the "autorates", a family of automatic tracking/setting methods for a cake shaper (developed in friendly competition with each other), we use active measurements of RTT/OWD increases; there we try to vary our set of reflectors and then take a vote over that set to decide "is it cake^W congestion". That helps to weed out a few alternative causes of apparent congestion (like distal congestion to individual reflectors). But that does not answer the tricky question of how to estimate capacity without actually creating a sufficient load (and doubly so on variable rate links).
>> 
>> 
>>> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"
>> 
>>        Good question. The autorates cannot, but then they do not need to, as they basically work by upping the shaper limit in correlation with the offered load until they detect sufficiently increased delay, and then reducing the shaper rates. A reduction in demand will lead to a reduction in load and bufferbloat... so the shaper is adapted based on the demand, aka "give the user as much throughput as can be done within the user's configured delay threshold, but not more"...
>> 
>> If we had a reliable method to "measure how much traffic a link can handle." without having to track load and delay that would save us a ton of work ;)
>> 
>> Regards
>>        Sebastian
>> 
>> 
>>> 
>>> --
>>> --
>>> Jeremy Austin
>>> Sr. Product Manager
>>> Preseem | Aterlo Networks
>>> preseem.com
>>> 
>>> Book a Call: https://app.hubspot.com/meetings/jeremy548
>>> Phone: 1-833-733-7336 x718
>>> Email: jeremy@preseem.com
>>> 
>>> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>> 


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 16:36                     ` Sebastian Moeller
@ 2023-03-13 17:26                       ` dan
  2023-03-13 17:37                         ` Jeremy Austin
  2023-03-13 18:14                         ` Sebastian Moeller
  0 siblings, 2 replies; 183+ messages in thread
From: dan @ 2023-03-13 17:26 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

On Mon, Mar 13, 2023 at 10:36 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Dan,
>
>
> > On Mar 13, 2023, at 17:12, dan <dandenson@gmail.com> wrote:
> >...
> >
> > High water mark on their router.
>
>         [SM] Nope, my router is connected to my (bridged) modem via gigabit ethernet; without a traffic shaper there is never going to be any noticeable water mark on the router side... sure, the modem will build up a queue, but alas it does not expose the length of that DSL queue to me... A high water mark on my traffic-shaped router informs me about my shaper setting (which I already know, after all I set it) but little about the capacity over the bottleneck link. And we are still talking about the easy egress direction; in the download direction Jeremy's question applies: is the achieved throughput I measure limited by the link's capacity, or are there simply not enough packets available/sent to fill the pipe?
>

And yet it can still see the flow of data on its ports.  The queue is
irrelevant to the measurement of data across a port.  Turn off the
shaper and run anything.  Run your speed test.  Don't look at the
speed test results, just use it to generate some traffic.  You'll find
your peak, and where you hit the buffers on the DSL modem, by measuring
on the interface and measuring latency.  That speed test isn't giving
you this data any more than Disney+ is, other than you get to pick when
it runs.

> > Highwater mark on our CPE, on our
> > shaper, etc.  Modern services are very happy to burst traffic.
>
>         [SM] Yes, this is also one of the reasons why too-little-buffering is problematic; I like the Nichols/Jacobson analogy of buffers as shock (burst) absorbers.
>
> >  Nearly
> > every customer we have will hit the top of their service plan each
> > day, even if only briefly and even if their average usage is quite
> > low.  Customers on 600Mbps mmwave services have a usage chart that is
> > flat lines and ~600Mbps blips.
>
>         [SM] Fully agree; most links are essentially idle most of the time, but that does not answer what instantaneous capacity is actually available, no?

Yes, because most services burst.  That Roku Ultra or Apple TV is
going to run a 'speed test' every time it goes to fill its
buffer.  Windows and Apple updates are asking for everything.  Again,
I'm measuring even the lowly grandma's house as consuming the entire
connection for a few seconds before it sits idle for a minute.  That
instantaneous capacity is getting used up so long as there is a
device/connection on the network capable of using it up.

>
> >
> > "  [SM] No ISP I know of publishes which periods are low, mid, high
> > congestion so end-users will need to make some assumptions here (e.g.
> > by looking at per day load graphs of big traffic exchanges like DE-CIX
> > here https://www.de-cix.net/en/locations/frankfurt/statistics )"
> >
> > You read this wrong.  Consumer routers run their daily speeds tests in
> > the middle of the night.
>
>         [SM] So on my turris omnia I run a speedtest roughly every 2 hours, exactly so I get coverage through low and high demand epochs. The only consumer router I know of that does repeated tests is the IQrouter, which as far as I know schedules them regularly so it can adjust the traffic shaper to still deliver acceptable responsiveness even during peak hour.

Consider this.   Customer under load, using their plan to the maximum;
a speed test fires up, adding more constraint.  A speed test is a stress
test, not a capacity test.  A speed test cannot return actual capacity
because the link is being used by other services, AND the rest of the
internet is in the way of accuracy as well, unless of course you
prioritize the speed test and then you cause an effective outage, or
you run a speed test on-net, which isn't an 'internet' test, it's a
network test.
Guess what the only way to get an actual measure of the capacity is?
My way: measure what's passing the interface and measure what happens
to a reliable latency test during that time.
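That interface-plus-latency approach can be sketched in a few lines. This is only an illustration of the idea, not anything dan runs: the interface name, sample count, and Linux `/sys` counter path are assumptions, and a concurrent latency probe (e.g. a ping loop) would run alongside it.

```python
import time

def throughput_mbps(bytes_before, bytes_after, interval_s):
    """Convert a byte-counter delta over an interval into Mbit/s."""
    return (bytes_after - bytes_before) * 8 / (interval_s * 1e6)

def read_rx_bytes(iface):
    """Read the kernel's cumulative receive counter for an interface (Linux)."""
    with open(f"/sys/class/net/{iface}/statistics/rx_bytes") as f:
        return int(f.read())

def sample_peak(iface, samples=10, interval_s=1.0):
    """Poll the counter and keep the highest observed rate; pairing this
    with a concurrent latency probe shows when buffers start to fill."""
    peak = 0.0
    prev = read_rx_bytes(iface)
    for _ in range(samples):
        time.sleep(interval_s)
        cur = read_rx_bytes(iface)
        peak = max(peak, throughput_mbps(prev, cur, interval_s))
        prev = cur
    return peak
```

Run against whatever organic burst happens to be passing (a stream buffering, an OS update), this reports the observed peak without injecting any test traffic itself.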

>
>
> >  Eero at 3am for example.  Netgear 230-430am.
>
>         [SM] That sounds "special"; not a useless data point per se, but of limited utility during normal usage times.

In practical terms, useless.  Like measuring how freeway congestion
affects commutes at 3am.

>
> > THAT is a bad measurement of the experience the consumer will have.
>
>         [SM] Sure, but it still gives a usable reference for "what is the best my ISP actually delivers", even if the odds are stacked in its direction.

Ehh...... what the ISP could deliver if all other considerations are
removed.  I mean, it'd be a synthetic test in any other scenario, and
the only reason it's not is because it's on real hardware.  I don't
have a single subscriber on the network that can't get 2-3x their plan
speed at 3am if I open up their shaper.  Very narrow use case here
from a consumer point of view.   Eero runs speed tests at 3am every
single day on a few hundred subs on my network and they look AMAZING
every time.  No surprise.

>
> > It's essentially useless data for ...
>
>         [SM] There is no single "service latency"; it really depends on the specific network paths to the remote end and back. Unless you are talking about the latency over the access link only; there we have a single number, but one of limited utility.

The intermediate hops are still useless to the consumer.  Only the
latency to their door, so to speak, matters.  Again, hop 2 to hop 3 on
my network gives them absolutely nothing.

>
>
> >  My (ISP) latency from hop 2 to 3 on the network has
> ...> > hops are which because they'll hidden in a tunnel/MPLS/etc.
>
>         [SM] Yes, end-users can do little, but not nothing, e.g. one can often work-around shitty peering by using a VPN to route one's packets into an AS that is both well connected with one's ISP as well as with one's remote ASs. And I accept your point of one-way testing, getting a remote site at the ight location to do e.g. reverse tracerputes mtrs is tricky (sometimes RIPE ATLAS can help) to impossible (like my ISP that does not offer even simple lookingglas servers at all)).

This is a REALLY narrow use case. Also, irrelevant.  A consumer can test
to their target, to the datacenter, and datacenter to target and
compare, and do that in reverse to get bi-directional latency.  Per-hop
latency is still of zero benefit to them because they can't use it
in any practical way.  Maybe 1 in a few thousand consumers might be
able to use this data to identify the slow hop and find a datacenter
before that hop to route around it, and they get about 75% of the way
there with a traditional traceroute.  And then of course they've added
VPN overheads, so are they really getting an improvement?


I'm not saying that testing is bad in any way, I'm saying that 'speed
tests' as they are generally understood in this industry are a bad
method.  Run 10 speed tests, get 10 results.  Run a speed test while
Netflix buffers, get a bad result.  Run a speed test from a weak wifi
connection, get a bad result.  A tool that is inherently flawed
because its methodology is flawed is of no use to find the truth.

If you troubleshoot your ISP based on speed tests you will be chasing
your tail.  Meanwhile, that internet-facing interface can see the true
numbers the entire time.  The consumer is pulling their full capacity
on almost all links routinely, even if briefly, and can be nudged into
pushing more a dozen ways (including a speed test).  The only thing
lacking is a latency measurement of some sort.  Preseem's and LibreQoS's
TCP measurements on the head end are awesome, but that's not available
on the subscriber's side; if it were, there's the full testing
suite: how much peak data, and what happened to latency.  If you could
get data from the ISP's head end to diff, you'd have internet vs isp
latencies.    A 'speed test' is a stress test or a burn-in test in
effect.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 17:26                       ` dan
@ 2023-03-13 17:37                         ` Jeremy Austin
  2023-03-13 18:34                           ` Sebastian Moeller
  2023-03-13 18:14                         ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Jeremy Austin @ 2023-03-13 17:37 UTC (permalink / raw)
  To: dan
  Cc: Sebastian Moeller, Rpm, libreqos, Dave Taht via Starlink,
	rjmcmahon, bloat


On Mon, Mar 13, 2023 at 10:26 AM dan <dandenson@gmail.com> wrote:


If you troubleshoot your ISP based on speed tests you will be chasing
your tail.  Meanwhile, that internet facing interface can see the true
numbers the entire time.  The consumer is pulling their full capacity
on almost all links routinely even if briefly and can be nudged into
pushing more a dozen ways (including a speed test).  The only thing
lacking is a latency measurement of some sort.  Preseem and Libreqos's
TCP measurements on the head end are awesome, but that's not available
on the subscriber's side but if it were, there's the full testing
suite.  how much peak data, what happened to latency.  If you could
get data from the ISP's head end to diff you'd have internet vs isp
latencies.    'speed test' is a stress test or a burn in test in
effect.


I cannot upvote this enough. I call speed tests — and in fact any packet
injection doing more than a bare minimum probe — destructive testing, and
said as much to NTIA recently.

The *big problem* (emphasis mine) is that the recent BEAD NOFO, pumping
tens of billions of dollars into broadband, has *speedtests* as the "proof"
that an ISP is delivering.

It's one thing to solve this problem at the ISP and consumer level. It's
another to solve it at the political level. In this case, I think it's
incumbent on ISPs to atone for former sins — now that we know that speed
tests are not just bad but actively misleading, we need to provide real
tools and education.

Going back to my previous comment, and no disrespect meant to the CAKE
autorate detection: "How do we distinguish between constrained supply and
reduced demand *without injecting packets or layer violations*?"

-- 
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More:
*https://preseem.com/stay-connected/* <https://preseem.com/stay-connected/>


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 17:26                       ` dan
  2023-03-13 17:37                         ` Jeremy Austin
@ 2023-03-13 18:14                         ` Sebastian Moeller
  2023-03-13 18:42                           ` rjmcmahon
  2023-03-13 19:33                           ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA dan
  1 sibling, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 18:14 UTC (permalink / raw)
  To: dan
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Dan,


> On Mar 13, 2023, at 18:26, dan <dandenson@gmail.com> wrote:
> 
> On Mon, Mar 13, 2023 at 10:36 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>> Hi Dan,
>> 
>> 
>>> On Mar 13, 2023, at 17:12, dan <dandenson@gmail.com> wrote:
>>> ...
>>> 
>>> High water mark on their router.
>> 
>>        [SM] Nope, my router is connected to my (bridged) modem via gigabit ethernet; without a traffic shaper there is never going to be any noticeable water mark on the router side... sure, the modem will build up a queue, but alas it does not expose the length of that DSL queue to me... A high water mark on my traffic-shaped router informs me about my shaper setting (which I already know, after all I set it) but little about the capacity over the bottleneck link. And we are still talking about the easy egress direction; in the download direction Jeremy's question applies: is the achieved throughput I measure limited by the link's capacity, or are there simply not enough packets available/sent to fill the pipe?
>> 
> 
> And yet it can still see the flow of data on its ports.  The queue is
> irrelevant to the measurement of data across a port.

	I respectfully disagree; if, say, my modem had a 4 GB queue I could theoretically burst ~4GB worth of data at line rate into that buffer without learning anything about the modem-link capacity.
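To put a rough number on that objection, here is a back-of-the-envelope sketch; the 4 GB buffer, 1 Gbit/s LAN ingress, and 100 Mbit/s DSL egress figures are purely illustrative:

```python
def buffer_fill_seconds(buffer_bytes, ingress_bps, egress_bps):
    """Seconds until an oversized buffer fills when data arrives faster
    than the link drains it: fill time = size_bits / (in rate - out rate)."""
    assert ingress_bps > egress_bps
    return buffer_bytes * 8 / (ingress_bps - egress_bps)

# Illustrative numbers: 4 GB modem buffer, 1 Gbit/s ethernet in,
# 100 Mbit/s DSL out. For about 35 seconds the router-side counters
# would show near line rate, while the DSL link itself never carries
# more than 100 Mbit/s.
fill_time = buffer_fill_seconds(4 * 10**9, 1e9, 100e6)
```

So for half a minute the "high water mark" on the router's port would read ten times the real bottleneck capacity.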


>  turn off the
> shaper and run anything.  run your speed test.  don't look at the
> speed test results, just use it to generate some traffic.  you'll find
> your peak and where you hit the buffers on the DSL modem by measuring
> on the interface and measuring latency.  

	Peak of what, exactly? The peak sending rate of my router is well known, it's 1 Gbps gross ethernet rate...


> That speed test isn't giving
> you this data and more than Disney+, other than you get to pick when
> it runs.

	Hrm, no, we are back at actually saturating the link...


> 
>>> Highwater mark on our CPE, on our
>>> shaper, etc.  Modern services are very happy to burst traffic.
>> 
>>        [SM] Yes, this is also one of the reasons why too-little-buffering is problematic; I like the Nichols/Jacobson analogy of buffers as shock (burst) absorbers.
>> 
>>> Nearly
>>> every customer we have will hit the top of their service plan each
>>> day, even if only briefly and even if their average usage is quite
>>> low.  Customers on 600Mbps mmwave services have a usage chart that is
>>> flat lines and ~600Mbps blips.
>> 
>>        [SM] Fully agree. most links are essentially idle most of the time, but that does not answer what instantaneous capacity is actually available, no?
> 
> Yes, because most services burst.  That Roku Ultra or Apple TV is
> going to run a 'speed test' every time it goes to fill its
> buffer.

	[SM] Not really; given enough capacity, typical streaming protocols will actually not hit the ceiling, at least the ones I look at every now and then tend to stay well below the actual capacity of the link.

>  Windows and Apple updates are asking for everything.  Again,
> I'm measuring even the lowly grandma's house as consuming the entire
> connection for a few seconds before it sits idle for a minute.  That
> instantaneous capacity is getting used up so long as there is a
> device/connection on the network capable of using it up.

	[SM] But my problem is that on variable-rate links I want to measure the instantaneous capacity so that I can do adaptive admission control and avoid overfilling my modem's DSL buffers (I wish they would do something like BQL, but alas they don't).


> 
>> 
>>> 
>>> "  [SM] No ISP I know of publishes which periods are low, mid, high
>>> congestion so end-users will need to make some assumptions here (e.g.
>>> by looking at per day load graphs of big traffic exchanges like DE-CIX
>>> here https://www.de-cix.net/en/locations/frankfurt/statistics )"
>>> 
>>> You read this wrong.  Consumer routers run their daily speeds tests in
>>> the middle of the night.
>> 
>>        [SM] So on my turris omnia I run a speedtest roughly every 2 hours, exactly so I get coverage through low and high demand epochs. The only consumer router I know of that does repeated tests is the IQrouter, which as far as I know schedules them regularly so it can adjust the traffic shaper to still deliver acceptable responsiveness even during peak hour.
> 
> Consider this.   Customer under load, using their plan to the maximum,
> speed test fires up adding more constraint.  Speed test is a stress
> test, not a capacity test.

	[SM] With competent AQM (like cake on ingress and egress configured for per-internal-IP isolation) I do not even notice whether a speedtest runs or not, and from the reported capacity I can estimate the concurrent load from other endhosts in my network.


>  Speed test cannot return actual capacity
> because it's being used by other services AND the rest of the internet
> is in the way of accuracy as well, unless of course you prioritize the
> speed test and then you cause an effective outage or you're running a
> speed test on-net which isn't an 'internet' test, it's a network test.

	[SM] Conventional capacity tests give a decent enough estimate of current capacity to be useful; I could not care less that they are potentially not perfect, sorry. The question still is how to estimate capacity without loading the link...


> Guess what the only way to get an actual measure of the capacity is?
> my way.  measure what's passing the interface and measure what happens
> to a reliable latency test during that time.

	[SM] This is, respectfully, what we do in cake-autorate, but that requires an actual load and only accidentally detects the capacity, if a high enough load is sustained long enough to evoke a latency increase. But I knew that already, what you initially wrote sounded to me like you had a method to detect instantaneous capacity without needing to generate load. (BTW, in cake-autorate we do not generate an artificial load (only artificial/active latency probes) but use the organic user generated traffic as load generator*).

*) If all endhosts are idle we do not care much about the capacity, only if there is traffic; however, the quicker we can estimate the capacity the tighter our controller can operate.
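The control loop described above can be sketched roughly like this. This is a simplified illustration of the cake-autorate idea, not its actual code; the threshold, rate bounds, and step factors are made-up defaults:

```python
def autorate_step(rate_mbps, load_mbps, delay_delta_ms,
                  delay_threshold_ms=15.0,
                  min_rate=20.0, max_rate=100.0,
                  up_factor=1.05, down_factor=0.9):
    """One iteration of an autorate-style controller (simplified sketch):
    cut the shaper rate whenever the measured delay rises past the
    threshold, and only probe upward when the organic load is already
    near the current shaper rate (otherwise there is nothing to learn)."""
    if delay_delta_ms > delay_threshold_ms:
        rate_mbps *= down_factor      # bufferbloat detected: back off
    elif load_mbps > 0.8 * rate_mbps:
        rate_mbps *= up_factor        # high load, low delay: probe upward
    return max(min_rate, min(max_rate, rate_mbps))
```

Note how the controller never generates load itself; it rides the organic traffic, which is exactly why a sustained high load is needed before it can find the true ceiling.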


> 
>> 
>> 
>>> Eero at 3am for example.  Netgear 230-430am.
>> 
>>        [SM] That sounds "special"; not a useless data point per se, but of limited utility during normal usage times.
> 
> In practical terms, useless.  Like measuring how freeway congestion
> affects commutes at 3am.

	[SM] That is not "useless", sorry; it gives me a bound to compute with (e.g. it allows me to estimate a lower bound on the duration of a transfer of a given size). But I agree it does little to inform me what to expect during peak hour.


> 
>> 
>>> THAT is a bad measurement of the experience the consumer will have.
>> 
>>        [SM] Sure, but it still gives a usable reference for "what is the best my ISP actually delivers", even if the odds are stacked in its direction.
> 
> ehh...... what the ISP could deliver if all other considerations are
> removed.

	[SM] No, this is still a test of the real existing network...

>  I mean, it'd be a synthetic test in any other scenario and
> the only reason it's not is because it's on real hardware.  I don't
> have a single subscriber on network that can't get 2-3x their plan
> speed at 3am if I opened up their shaper.  Very narrow use case here
> from a consumer point of view.   Eero runs speed tests at 3am every
> single day on a few hundred subs on my network and they look AMAZING
> every time.  no surprise.

	[SM] While I defend some utility for such a test on principle, I agree that if eero only runs a single test, 3 AM is not the best time to do that, except for night owls.

> 
>> 
>>> It's essentially useless data for ...
>> 
>>        [SM] There is no single "service latency"; it really depends on the specific network paths to the remote end and back. Unless you are talking about the latency over the access link only; there we have a single number, but one of limited utility.
> 
> The intermediate hops are still useless to the consumer.  Only the
> latency to their door so to speak.  again, hop 2 to hop 3 on my
> network gives them absolutely nothing.

	[SM] I agree if these are mandatory hops I need to traverse every time, but if these are hosts I can potentially avoid then this changes, even though I am now trying to game my ISP to some degree, which in the long run is a losing proposition.


> 
>> 
>> 
>>> My (ISP) latency from hop 2 to 3 on the network has
>> ...> > hops are which because they'll hidden in a tunnel/MPLS/etc.
>> 
>>        [SM] Yes, end-users can do little, but not nothing, e.g. one can often work around shitty peering by using a VPN to route one's packets into an AS that is well connected both with one's ISP and with one's remote ASes. And I accept your point about one-way testing: getting a remote site at the right location to run e.g. reverse traceroutes/mtrs is tricky (sometimes RIPE ATLAS can help) to impossible (like my ISP that does not offer even simple looking-glass servers at all).
> 
> This is a REALLY narrow use case. Also, irrelevant.

	[SM] You would think so, would you ;). However, over here the T1 incumbent telco plays "peering games" and purposefully runs its transit links too hot, so in primetime traffic coming from content providers via transit suffers. The telco's idea here is to incentivize these content providers to buy "transit" from that telco, which happens to cost integer multiples of regular transit and hence will only ever be used to access this telco's customers if a content provider actually buys in.
As an end-user of that telco, I have three options:
a) switch content providers
b) switch ISP
c) route around my ISP's unfriendly peering

(Personally, even though not directly affected by this, I opted for b) and found a better-connected yet still cheaper ISP).

I am not alone in this; actually a lot of gamers do something similar using gaming-oriented VPN services. But then gamers are a bit like audiophiles to me; some of the things they do look like cargo-cult, but I digress.


>  Consumer can test
> to their target, to the datacenter, and datacenter to target and
> compare, and do that in reverse to get bi-directional latency.  

	[SM] I have been taught that this does not actually work, as the true return path is essentially invisible without a probe for a reverse traceroute at the site of the remote server, no?



> per
> hop latency is still of zero benefit to them because they can't use
> that in any practical way.  

	[SM] And again I disagree; I can within reason diagnose congested path elements from an mtr... say, on a path across three ASes that at best takes 10 milliseconds, if I see at primetime that from the link between AS1 and AS2 onward all hops including the endpoint show an RTT of say 100ms, I can form the hypothesis that somewhere between AS1 and AS2 there is an undue queue build-up. Practically this can mean I need to route my own traffic differently, either by VPN or by switching the used application/content provider, hoping to avoid the apparently congested link. What I cannot do is fix the problem, that is true ;)
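That reasoning (the RTT jumps at one inter-AS link and stays elevated through the endpoint) can be mechanized over mtr-style per-hop averages. A hedged sketch: the 50 ms jump threshold is an arbitrary example, and real traces need care because routers often deprioritize ICMP replies, producing one-hop spikes that are not real queueing:

```python
def suspect_congested_hop(hop_rtts_ms, jump_ms=50.0):
    """Given per-hop RTTs from an mtr-style trace, return the index of the
    first hop whose RTT jumps by more than `jump_ms` over the previous hop
    AND stays elevated through the final hop. A transient spike at a single
    middle hop (slow ICMP generation on that router) is ignored because the
    hops after it fall back down."""
    for i in range(1, len(hop_rtts_ms)):
        if hop_rtts_ms[i] - hop_rtts_ms[i - 1] > jump_ms:
            if all(r >= hop_rtts_ms[i] - jump_ms for r in hop_rtts_ms[i:]):
                return i
    return None
```

E.g. a trace like [2, 5, 8, 100, 102, 105] points at the link before hop index 3, while [2, 5, 200, 8, 10] yields no suspect, since the spike does not persist downstream.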



> Like 1 in maybe a few thousand consumers
> might be able to use this data to identify the slow hop and find a
> datacenter before that hop to route around it and they get about 75%
> of the way with a traditional trace router. and then of course they've
> added VPN overheads so are they really getting an improvement?

	[SM] In the German telco case peak rate to some datacenters/VPSes (single-homed at Cogent) dropped into the low Kbps range, while a VPN route-around returned that to high double-digit Mbps, so yes, the improvement can be tangible. Again, my solution to that possibility was to "vote with my feet" and change ISPs (a pity, because outside of that unpleasant peering/transit behaviour that telco is a pretty competent ISP; case in point, the transit links run too hot but are competently managed to actually stay at the selected "temperature" and do not progress into total-overload territory).


> I'm not saying that testing is bad in any way, I'm saying that 'speed
> tests' as they are generally understood in this industry are a bad
> method.  

	[SM] +1, with that I can agree. But I see some mild improvements, with e.g. Ookla reporting latency numbers from during the load phases. Sure, the chosen measure, inter-quartile mean, is sub-optimal, but infinitely better than what they had before: no latency-under-load numbers at all.


> Run 10 speed tests, get 10 results.  Run a speed test while
> netflix buffers, get a bad result.  Run a speed test from a weak wifi
> connection, get a bad result.  A tool that is inherently flawed
> because it's methodology is flawed is of no use to find the truth.

	[SM] For most end users speedtests are the one convenient way of generating saturating loads. But saturating loads by themselves are not that useful.

> 
> If you troubleshoot your ISP based on speed tests you will be chasing
> your tail.  

	My most recent attempt was with mtr traces to document packet loss only when "packed" into one specific IP range, and that packet loss happens even without load at night, so no speedtest required (I did run a few speed.cloudflare.com tests, but mainly because they contain a very simple and short packet loss test that finishes a tad earlier than my go-to 'mtr -ezbw -c 1000 IP_HERE' packet loss test ;) )


> Meanwhile, that internet facing interface can see the true
> numbers the entire time.

	[SM] Only averaged over time...


>  The consumer is pulling their full capacity
> on almost all links routinely even if briefly and can be nudged into
> pushing more a dozen ways (including a speed test).  The only thing
> lacking is a latency measurement of some sort.  Preseem and Libreqos's
> TCP measurements on the head end are awesome, but that's not available
> on the subscriber's side but if it were, there's the full testing
> suite.  how much peak data, what happened to latency.  If you could
> get data from the ISP's head end to diff you'd have internet vs isp
> latencies.    'speed test' is a stress test or a burn in test in
> effect.

	[SM] I still agree "speedtests" are misnamed capacity tests, and they do have their value (e.g. over here the main determinant of internet access price is the headline capacity number); we even have an "official" capacity test blessed by the national regulatory agency that can be used to defend consumer rights against those ISPs that over-promise but under-deliver (few people do, though; as it happens, if your ISP generally delivers acceptable throughput and generally is not a d*ck, people are fine with not caring all too much). On the last point, I believe the more responsiveness an ISP link maintains under load, the fewer people will get unhappy about their internet experience, and without unhappiness most users, I presume, have better things to do than fight with their ISP. ;)


Regards
	Sebastian



^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 17:37                         ` Jeremy Austin
@ 2023-03-13 18:34                           ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 18:34 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: dan, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Jeremy,


> On Mar 13, 2023, at 18:37, Jeremy Austin <jeremy@aterlo.com> wrote:
> 
> 
> 
> On Mon, Mar 13, 2023 at 10:26 AM dan <dandenson@gmail.com> wrote:
> 
> 
> If you troubleshoot your ISP based on speed tests you will be chasing
> your tail.  Meanwhile, that internet facing interface can see the true
> numbers the entire time.  The consumer is pulling their full capacity
> on almost all links routinely even if briefly and can be nudged into
> pushing more a dozen ways (including a speed test).  The only thing
> lacking is a latency measurement of some sort.  Preseem and Libreqos's
> TCP measurements on the head end are awesome, but that's not available
> on the subscriber's side but if it were, there's the full testing
> suite.  how much peak data, what happened to latency.  If you could
> get data from the ISP's head end to diff you'd have internet vs isp
> latencies.    'speed test' is a stress test or a burn in test in
> effect.
> 
> I cannot upvote this enough. I call speed tests — and in fact any packet injection doing more than a bare minimum probe — destructive testing, and said as much to NTIA recently.

	[SM] Why? With competent traffic shaping, scheduling and AQM even a capacity test running at full throttle (like e.g. a three-minute bidirectional flent RRUL test) is not destructive to network responsiveness. I am not saying that the network behaves as if there was no load, but the old "stop your downloads, I have a VoIP call/video conference coming" scenario really only needs to happen on networks with way too little capacity assigned.


> The *big problem* (emphasis mine) is that the recent BEAD NOFO, pumping tens of billions of dollars into broadband, has *speedtests* as the "proof" that an ISP is delivering.

	[SM] I respectfully disagree; as long as ISPs market and price on capacity it is not unreasonable to actually have end-users measure capacity. I do agree the way we do this currently is sub-optimal though. And it is a bit unfair to ISPs, as other business fields are not held to such standards. However, my solution would be to hold other businesses equally to account for their promises, not to let ISPs off the hook ;) (but easy for me to say, I do not operate/work for an ISP and likely misunderstand all the subtleties involved).


> It's one thing to solve this problem at the ISP and consumer level. It's another to solve it at the political level. In this case, I think it's incumbent on ISPs to atone for former sins — now that we know that speed tests are not just bad but actively misleading, we need to provide real tools and education.

	[SM] +1; however, as long as ISPs essentially sell capacity, capacity tests will stay relevant.

> 
> Going back to my previous comment, and no disrespect meant to the CAKE autorate detection: "How do we distinguish between constrained supply and reduced demand *without injecting packets or layer violations*?"

	[SM] Oh, I share this question, especially with my cake-autorate junior-partner hat on... in theory one could use not only the organic load but also the organic TCP timestamp increases as shown by pping to estimate times when the load exceeds/meets capacity (I think Preseem have an iron in the fire there as well), but that is also not without its challenges. E.g. my router sits behind a bridged modem, but accesses the modem over the same link as the internet and routinely collects data from the modem; will pping now give the delay to the modem as its minimal estimate? If it does, it is pretty much useless, as that RTT is going to essentially stay flat since it is not affected by the bottleneck queue to/from the internet... (that is why the autorates opted for active probes, as that allows selecting spatially separate reflectors).
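The reflector-voting idea mentioned at the end can be sketched as follows. This is illustrative only: the threshold and quorum values are invented, and the real autorates work on baselined per-reflector delay deltas rather than raw numbers:

```python
def congestion_vote(deltas_ms, threshold_ms=15.0, quorum=0.5):
    """Vote across spatially separate reflectors: declare bottleneck-link
    congestion only when more than `quorum` of the reflectors see their
    delay rise past `threshold_ms` over baseline. A single elevated
    reflector is more likely distal congestion on that reflector's own
    path and so is outvoted."""
    votes = sum(1 for d in deltas_ms if d > threshold_ms)
    return votes > quorum * len(deltas_ms)
```

With four reflectors, three reporting +20 to +30 ms would trip the detector, while one lone +20 ms outlier would not.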

Regards
	Sebastian


> 
> -- 
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
> 
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy@preseem.com
> 
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 18:14                         ` Sebastian Moeller
@ 2023-03-13 18:42                           ` rjmcmahon
  2023-03-13 18:51                             ` Sebastian Moeller
  2023-03-13 19:33                           ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA dan
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-13 18:42 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

> 	[SM] not really, given enough capacity, typical streaming protocols
> will actually not hit the ceiling, at least the one's I look at every
> now and then tend to stay well below actual capacity of the link.
> 
I think a DASH-type protocol will hit link peaks. An example with iperf
2's burst option on a controlled WiFi test rig, server side first.


[root@ctrl1fc35 ~]# iperf -s -i 1 -e --histograms
------------------------------------------------------------
Server listening on TCP port 5001 with pid 23764
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
Enabled receive histograms bin-width=0.100 ms, bins=10000 (clients 
should use --trip-times)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.15%enp2s0 port 5001 connected with 192.168.1.234 
port 34894 (burst-period=1.00s) (trip-times) (sock=4) (peer 2.1.9-rc2) 
(icwnd/mss/irtt=14/1448/5170) on 2023-03-13 11:37:24.500 (PDT)
[ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)    
  Reads=Dist          NetPwr
[  1] 0.00-0.13 sec  10.0 MBytes   633 Mbits/sec  132.541 ms (13%)    
209=29:31:31:88:11:2:1:16  597
[  1] 1.00-1.11 sec  10.0 MBytes   755 Mbits/sec  111.109 ms (11%)    
205=34:30:22:83:11:2:6:17  849
[  1] 2.00-2.12 sec  10.0 MBytes   716 Mbits/sec  117.196 ms (12%)    
208=33:39:20:81:13:1:5:16  763
[  1] 3.00-3.11 sec  10.0 MBytes   745 Mbits/sec  112.564 ms (11%)    
203=27:36:30:76:6:3:6:19  828
[  1] 4.00-4.11 sec  10.0 MBytes   787 Mbits/sec  106.621 ms (11%)    
193=29:26:19:80:10:4:6:19  922
[  1] 5.00-5.11 sec  10.0 MBytes   769 Mbits/sec  109.148 ms (11%)    
208=36:25:32:86:6:1:5:17  880
[  1] 6.00-6.11 sec  10.0 MBytes   760 Mbits/sec  110.403 ms (11%)    
206=42:30:22:73:8:3:5:23  860
[  1] 7.00-7.11 sec  10.0 MBytes   775 Mbits/sec  108.261 ms (11%)    
171=20:21:21:58:12:1:11:27  895
[  1] 8.00-8.11 sec  10.0 MBytes   746 Mbits/sec  112.405 ms (11%)    
203=36:31:28:70:9:3:2:24  830
[  1] 9.00-9.11 sec  10.0 MBytes   748 Mbits/sec  112.133 ms (11%)    
228=41:56:27:73:7:2:3:19  834
[  1] 0.00-10.00 sec   100 MBytes  83.9 Mbits/sec  
113.238/106.621/132.541/7.367 ms  2034=327:325:252:768:93:22:50:197
[  1] 0.00-10.00 sec F8(f)-PDF: 
bin(w=100us):cnt(10)=1067:1,1083:1,1092:1,1105:1,1112:1,1122:1,1125:1,1126:1,1172:1,1326:1 
(5.00/95.00/99.7%=1067/1326/1326,Outliers=0,obl/obu=0/0) (132.541 
ms/1678732644.500333)


[root@fedora ~]# iperf -c 192.168.1.15 -i 1 -t 10 --burst-size 10M 
--burst-period 1 --trip-times
------------------------------------------------------------
Client connecting to 192.168.1.15, TCP port 5001 with pid 132332 (1 
flows)
Write buffer size: 131072 Byte
Bursting: 10.0 MByte every 1.00 second(s)
TOS set to 0x0 (Nagle on)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[  1] local 192.168.1.234%eth1 port 34894 connected with 192.168.1.15 
port 5001 (prefetch=16384) (trip-times) (sock=3) 
(icwnd/mss/irtt=14/1448/5489) (ct=5.58 ms) on 2023-03-13 11:37:24.494 
(PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     
Cwnd/RTT(var)        NetPwr
[  1] 0.00-1.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5517K/18027(1151) us  582
[  1] 1.00-2.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5584K/13003(2383) us  806
[  1] 2.00-3.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5613K/16462(962) us  637
[  1] 3.00-4.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5635K/19523(671) us  537
[  1] 4.00-5.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5594K/10013(1685) us  1047
[  1] 5.00-6.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5479K/14008(654) us  749
[  1] 6.00-7.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5613K/17752(283) us  591
[  1] 7.00-8.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5599K/17743(436) us  591
[  1] 8.00-9.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
5577K/11214(2538) us  935
[  1] 9.00-10.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     
4178K/7251(993) us  1446
[  1] 0.00-10.01 sec   100 MBytes  83.8 Mbits/sec  800/0         0     
4178K/7725(1694) us  1356
[root@fedora ~]#

Note: Client-side output is being updated to support outputs based upon
the bursts. This allows one to see that a DASH-type protocol can drive
the bandwidth bottleneck queue.

Bob


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 18:42                           ` rjmcmahon
@ 2023-03-13 18:51                             ` Sebastian Moeller
  2023-03-13 19:32                               ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 18:51 UTC (permalink / raw)
  To: rjmcmahon
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

Hi Bob,


> On Mar 13, 2023, at 19:42, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
>> 	[SM] not really, given enough capacity, typical streaming protocols
>> will actually not hit the ceiling, at least the one's I look at every
>> now and then tend to stay well below actual capacity of the link.
> I think DASH type protocol will hit link peaks. An example with iperf 2's burst option a controlled WiFi test rig, server side first.

	[SM] I think that depends; each segment has only a finite length, and if it can be delivered before slow start ends, that burst might never hit the capacity?
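This can be put into numbers. The back-of-envelope sketch below checks whether a finite segment finishes before slow start's doubling cwnd covers the bandwidth-delay product; the parameters (IW=10, MSS=1448, 10 MB segment, 700 Mbit/s, 15 ms RTT, matching the iperf run above) are illustrative, and delayed ACKs, pacing, and ssthresh are ignored:

```python
# Does a finite DASH segment complete before slow start reaches capacity?

def rtts_to_deliver(segment_bytes, mss=1448, iw=10):
    """RTT rounds of pure slow start needed to deliver segment_bytes."""
    cwnd, sent, rtts = iw * mss, 0, 0
    while sent < segment_bytes:
        sent += cwnd
        cwnd *= 2      # slow start: cwnd doubles every RTT
        rtts += 1
    return rtts

def rtts_to_fill_pipe(rate_bps, rtt_s, mss=1448, iw=10):
    """RTT rounds until cwnd first covers the bandwidth-delay product."""
    bdp = rate_bps / 8 * rtt_s
    cwnd, rtts = iw * mss, 0
    while cwnd < bdp:
        cwnd *= 2
        rtts += 1
    return rtts

seg = rtts_to_deliver(10 * 1024 * 1024)   # 10 MB burst: 10 RTT rounds
pipe = rtts_to_fill_pipe(700e6, 0.015)    # ~1.3 MB BDP: 7 RTT rounds
# seg > pipe: a 10 MB burst outlives slow start and does hit the
# bottleneck, consistent with Bob's measurement; a segment of only a
# few hundred KB could finish in fewer rounds and never get there.
```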

Regards
	Sebastian


> 
> 
> [root@ctrl1fc35 ~]# iperf -s -i 1 -e --histograms
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 23764
> Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
> Enabled receive histograms bin-width=0.100 ms, bins=10000 (clients should use --trip-times)
> TCP window size:  128 KByte (default)
> ------------------------------------------------------------
> [  1] local 192.168.1.15%enp2s0 port 5001 connected with 192.168.1.234 port 34894 (burst-period=1.00s) (trip-times) (sock=4) (peer 2.1.9-rc2) (icwnd/mss/irtt=14/1448/5170) on 2023-03-13 11:37:24.500 (PDT)
> [ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)     Reads=Dist          NetPwr
> [  1] 0.00-0.13 sec  10.0 MBytes   633 Mbits/sec  132.541 ms (13%)    209=29:31:31:88:11:2:1:16  597
> [  1] 1.00-1.11 sec  10.0 MBytes   755 Mbits/sec  111.109 ms (11%)    205=34:30:22:83:11:2:6:17  849
> [  1] 2.00-2.12 sec  10.0 MBytes   716 Mbits/sec  117.196 ms (12%)    208=33:39:20:81:13:1:5:16  763
> [  1] 3.00-3.11 sec  10.0 MBytes   745 Mbits/sec  112.564 ms (11%)    203=27:36:30:76:6:3:6:19  828
> [  1] 4.00-4.11 sec  10.0 MBytes   787 Mbits/sec  106.621 ms (11%)    193=29:26:19:80:10:4:6:19  922
> [  1] 5.00-5.11 sec  10.0 MBytes   769 Mbits/sec  109.148 ms (11%)    208=36:25:32:86:6:1:5:17  880
> [  1] 6.00-6.11 sec  10.0 MBytes   760 Mbits/sec  110.403 ms (11%)    206=42:30:22:73:8:3:5:23  860
> [  1] 7.00-7.11 sec  10.0 MBytes   775 Mbits/sec  108.261 ms (11%)    171=20:21:21:58:12:1:11:27  895
> [  1] 8.00-8.11 sec  10.0 MBytes   746 Mbits/sec  112.405 ms (11%)    203=36:31:28:70:9:3:2:24  830
> [  1] 9.00-9.11 sec  10.0 MBytes   748 Mbits/sec  112.133 ms (11%)    228=41:56:27:73:7:2:3:19  834
> [  1] 0.00-10.00 sec   100 MBytes  83.9 Mbits/sec  113.238/106.621/132.541/7.367 ms  2034=327:325:252:768:93:22:50:197
> [  1] 0.00-10.00 sec F8(f)-PDF: bin(w=100us):cnt(10)=1067:1,1083:1,1092:1,1105:1,1112:1,1122:1,1125:1,1126:1,1172:1,1326:1 (5.00/95.00/99.7%=1067/1326/1326,Outliers=0,obl/obu=0/0) (132.541 ms/1678732644.500333)
> 
> 
> [root@fedora ~]# iperf -c 192.168.1.15 -i 1 -t 10 --burst-size 10M --burst-period 1 --trip-times
> ------------------------------------------------------------
> Client connecting to 192.168.1.15, TCP port 5001 with pid 132332 (1 flows)
> Write buffer size: 131072 Byte
> Bursting: 10.0 MByte every 1.00 second(s)
> TOS set to 0x0 (Nagle on)
> TCP window size: 16.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [  1] local 192.168.1.234%eth1 port 34894 connected with 192.168.1.15 port 5001 (prefetch=16384) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/5489) (ct=5.58 ms) on 2023-03-13 11:37:24.494 (PDT)
> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT(var)        NetPwr
> [  1] 0.00-1.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5517K/18027(1151) us  582
> [  1] 1.00-2.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5584K/13003(2383) us  806
> [  1] 2.00-3.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5613K/16462(962) us  637
> [  1] 3.00-4.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5635K/19523(671) us  537
> [  1] 4.00-5.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5594K/10013(1685) us  1047
> [  1] 5.00-6.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5479K/14008(654) us  749
> [  1] 6.00-7.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5613K/17752(283) us  591
> [  1] 7.00-8.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5599K/17743(436) us  591
> [  1] 8.00-9.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     5577K/11214(2538) us  935
> [  1] 9.00-10.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0         0     4178K/7251(993) us  1446
> [  1] 0.00-10.01 sec   100 MBytes  83.8 Mbits/sec  800/0         0     4178K/7725(1694) us  1356
> [root@fedora ~]#
> 
> Note: Client side output is being updated to support outputs based upon the bursts. This allows one to see that a DASH type protocol can drive the bw bottleneck queue.
> 
> Bob
> 


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 18:51                             ` Sebastian Moeller
@ 2023-03-13 19:32                               ` rjmcmahon
  2023-03-13 20:00                                 ` Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-13 19:32 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

On 2023-03-13 11:51, Sebastian Moeller wrote:
> Hi Bob,
> 
> 
>> On Mar 13, 2023, at 19:42, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>> 
>>> 	[SM] not really, given enough capacity, typical streaming protocols
>>> will actually not hit the ceiling, at least the one's I look at every
>>> now and then tend to stay well below actual capacity of the link.
>> I think DASH type protocol will hit link peaks. An example with iperf 
>> 2's burst option a controlled WiFi test rig, server side first.
> 
> 	[SM] I think that depends, each segment has only a finite length and
> if this can delivered before slow start ends that burst might never
> hit the capacity?
> 
> Regards

I believe most CDNs are setting the initial CWND so TCP can bypass slow 
start. Slow start seems an engineering flaw from the perspective of low 
latency. It's done for "fairness" whatever that means.

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 18:14                         ` Sebastian Moeller
  2023-03-13 18:42                           ` rjmcmahon
@ 2023-03-13 19:33                           ` dan
  2023-03-13 19:52                             ` Jeremy Austin
  2023-03-13 20:45                             ` Sebastian Moeller
  1 sibling, 2 replies; 183+ messages in thread
From: dan @ 2023-03-13 19:33 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

>
>         I respectfully disagree, if say, my modem had a 4 GB queue I could theoretically burst ~4GB worth of data at line rate into that buffer without learning anything about the modem-link capacity.

so this is where we're getting into straw man arguments.  Find me a
single device or shaper with a 4GB buffer and we'll talk.
>
>
> >  turn off the
> > shaper and run anything.  run your speed test.  don't look at the
> > speed test results, just use it to generate some traffic.  you'll find
> > your peak and where you hit the buffers on the DSL modem by measuring
> > on the interface and measuring latency.
>
>         Peak of what? Exactly? The peak sending rate of my router is well known, it's 1 Gbps gross ethernet rate...

I don't know how I can say it any clearer.  There is a port, of any
speed.  Data flows across that port.  The peak data flowing is the
measure.  Simultaneously measuring latency will give the 'best' rate,
so-called 'goodput', which is a stupid name and I hate it but there it
is.
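The method above can be sketched in a few lines: sample the WAN interface's byte counters on a schedule, convert deltas to throughput, and keep the highest rate seen while latency stayed below a bloat threshold. The 60 ms threshold and the data layout are illustrative; on Linux the counters would come from somewhere like /sys/class/net/<wan>/statistics/rx_bytes:

```python
# Estimate 'best' goodput from interface counters plus a latency gate.

def goodput_mbps(bytes_t0, bytes_t1, interval_s):
    """Throughput over one sampling interval, in Mbit/s."""
    return (bytes_t1 - bytes_t0) * 8 / interval_s / 1e6

def best_unbloated_rate(samples, latency_thresh_ms=60.0):
    """samples: list of (byte_counter, rtt_ms) tuples taken 1 s apart.
    Returns the peak rate observed while the link was not bloated."""
    best = 0.0
    for (b0, _), (b1, rtt) in zip(samples, samples[1:]):
        rate = goodput_mbps(b0, b1, 1.0)
        if rtt < latency_thresh_ms:   # discard intervals where buffers were filling
            best = max(best, rate)
    return best
```

The point of the latency gate is exactly the argument being made here: the interval with the highest raw counter delta may be the one where the buffers were filling, so the 'best' rate is the fastest interval that stayed below the bloat threshold.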

>
>
> > That speed test isn't giving
> > you this data any more than Disney+, other than you get to pick when
> > it runs.
>
>         Hrm, no we are back at actually saturating the link,

which we're doing all the time.  It's the entire premise of QoE:
links get saturated, manage them.

>

>
>         [SM] not really, given enough capacity, typical streaming protocols will actually not hit the ceiling, at least the one's I look at every now and then tend to stay well below actual capacity of the link.

Not sure where you're getting this info, I'm looking right at
customers on everything from 25Mbps to 800Mbps plans.  And again, I'm
not saying you can't saturate the link intentionally, I'm saying that
the tool doing the saturation isn't actually giving you accurate
results.  You have to look at the interface and the latency for the
results.  The speed test is a traffic generator, not a measuring tool.
It fundamentally cannot do the measuring; it doesn't have the
ability to see other flows on the interface.

>
>
>         [SM] But my problem is that on variable rate links I want to measure the instantaneous capacity such that I can do adaptive admission control and avoid overfilling my modem's DSL buffers (I wish they would do something like BQL, but alas they don't).

Literally measure the interface on a schedule, or constantly, and you're
getting this measurement every time you use the link.  And if you
measure the latency you're constantly finding the spot right below the
buffers.


>
>         [SM] With competent AQM (like cake on ingress and egress configured for per-internal-IP isolation) I do not even notice whether a speedtest runs or not, and from the reported capacity I can estimate the concurrent load from other endhosts in my network.

Exactly.  EXACTLY.  You might just be coming around.  That speed test
was held back by the shaper for your benefit, NOT the speed test's.
Its results are necessarily false.  YOU can estimate the capacity by
adding up the speedtest results and your other uses.  Measuring the
outside interface gives you exactly that.  The speed test does not.
It's just a traffic generator for when you aren't generating it on
your own.


>
>
> >  Speed test cannot return actual capacity
>
>         [SM] Conventional capacity tests give a decent enough estimate of current capacity to be useful; I could not care less that they are potentially not perfect, sorry. The question still is how to estimate capacity without loading the link...

You have to load the link to know this.  Again, the speed test is a
traffic generator, it's not a measuring tool.  You have to measure at
the wan interface to know this, you can never get it from the speed
test.  And no, the speed test isn't a decent enough estimate.  The
more important the data is to you, the more likely the test is bad and
going to lie.  Internet feeling slow? Run a speed test and you put more
pressure on the service, and the speed test has less available to
return results on, all the other services getting their slice of the
pie.

>
>
> > Guess what the only way to get an actual measure of the capacity is?
> > my way.  measure what's passing the interface and measure what happens
> > to a reliable latency test during that time.
>
>         [SM] This is, respectfully, what we do in cake-autorate, but that requires an actual load and only incidentally detects the capacity, if a high enough load is sustained long enough to evoke a latency increase. But I knew that already; what you initially wrote sounded to me like you had a method to detect instantaneous capacity without needing to generate load. (BTW, in cake-autorate we do not generate an artificial load (only artificial/active latency probes) but use the organic user-generated traffic as load generator*).
>
> *) If all endhosts are idle we do not care much about the capacity, only if there is traffic; however, the quicker we can estimate the capacity the tighter our controller can operate.
>

See, you're coming around.  Cake is autorating (or very close, 'on
device') at the WAN port, not the speed test device or software.  And
the accurate data is collected by cake, not the speed test tool.  That
tool is reporting false information because it must; it doesn't know
the other consumers on the network.  It's 'truest' when the network is
quiet, but the more talkers, the more the tool lies.

cake, the kernel, and the WAN port all have real info; the speed test
tool does not.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 19:33                           ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA dan
@ 2023-03-13 19:52                             ` Jeremy Austin
  2023-03-13 21:00                               ` Sebastian Moeller
  2023-03-13 20:45                             ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Jeremy Austin @ 2023-03-13 19:52 UTC (permalink / raw)
  To: dan
  Cc: Sebastian Moeller, Rpm, libreqos, Dave Taht via Starlink,
	rjmcmahon, bloat

[-- Attachment #1: Type: text/plain, Size: 1747 bytes --]

On Mon, Mar 13, 2023 at 12:34 PM dan <dandenson@gmail.com> wrote:

>
> See, you're coming around.  Cake is autorating (or very close, 'on
> device') at the wan port.  not the speed test device or software.  And
> the accurate data is collected by cake, not the speed test tool.  That
> tool is reporting false information because it must, it doesn't know
> the other consumers on the network.  It's 'truest' when the network is
> quiet but the more talkers the more the tool lies.
>
> cake, the kernel, and the wan port all have real info, the speed test
> tool does not.
>

I'm running a bit behind on commenting on the thread (apologies, more
later) but I point you back at my statement about NTIA (and, to a certain
extent, the FCC):

Consumers use speed tests to qualify their connection.

Whether AQM is applied or not, a speed test does not reflect in all
circumstances the capacity of the pipe. One might argue that it seldom
reflects it.

Unfortunately, those who have "real info", to use Dan's term, are currently
nearly powerless to use it. I am, if possible, on both the ISP and consumer
side here.

And yes, Preseem does have an iron in this fire, or at least a dog in this
fight.

Ironically, the FCC testing for CAF/RDOF actually *does* take interface
load into account, only tests during peak busy hours, and /then/ does a
speed test. But NTIA largely ignores that for BEAD.

-- 
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More:
*https://preseem.com/stay-connected/* <https://preseem.com/stay-connected/>

[-- Attachment #2: Type: text/html, Size: 3191 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 19:32                               ` rjmcmahon
@ 2023-03-13 20:00                                 ` Sebastian Moeller
  2023-03-13 20:28                                   ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 20:00 UTC (permalink / raw)
  To: rjmcmahon
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

Hi Bob,


> On Mar 13, 2023, at 20:32, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
> On 2023-03-13 11:51, Sebastian Moeller wrote:
>> Hi Bob,
>>> On Mar 13, 2023, at 19:42, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>>>> 	[SM] not really, given enough capacity, typical streaming protocols
>>>> will actually not hit the ceiling, at least the one's I look at every
>>>> now and then tend to stay well below actual capacity of the link.
>>> I think DASH type protocol will hit link peaks. An example with iperf 2's burst option a controlled WiFi test rig, server side first.
>> 	[SM] I think that depends, each segment has only a finite length and
>> if this can delivered before slow start ends that burst might never
>> hit the capacity?
>> Regards
> 
> I believe most CDNs are setting the initial CWND so TCP can bypass slow start.

	[SM] My take is that the goal is not necessarily to bypass slow start, but to kick it off with a higher starting point... which is the conservative way to expedite slow-start. Real men actually increase the multiplication factor instead, but there are few real men around (luckily)... So I see mostly the desire to finish many smaller transfers within the initial window (i.e. within the first RTT after the handshake, IIUC).

> Slow start seems an engineering flaw from the perspective of low latency.

	[SM] Yes, exponential search, doubling every RTT is pretty aggressive.


> It's done for "fairness" whatever that means.

	[SM] It is done because:
a) TCP needs some capacity estimate
b) preferably quickly
c) in a way gentler than what was used before the congestion collapse.
	We are calling it slow-start not because it is slow in absolute terms (it is pretty aggressive already)
	but IIUC because before slow start people were even more aggressive (immediately sending at line rate?)

I think we need immediate queue build-up feedback so each flow can look at its own growth projection and the queue-space shrinkage projection and then determine where these two will meet. Essentially we need a gentle way of ending slow-start instead of the old chestnut, dump twice as much data in flight into the network before we notice... it is this part that is at odds with low latency. L4S, true to form with its essential bit-banging of queue-filling status over all flows in the LL queue, is essentially giving too little information too late.


If I had a better proposal for a slow-start alternative I would make it, but for me slow-start is similar to what Churchill is supposed to have said about democracy: "democracy is the worst form of government – except for all the others that have been tried."...


> 
> Bob


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 20:00                                 ` Sebastian Moeller
@ 2023-03-13 20:28                                   ` rjmcmahon
  2023-03-14  4:27                                     ` [LibreQoS] On FiWi rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-13 20:28 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

> 	[SM] It is done because:
> a) TCP needs some capacity estimate
> b) preferably quickly
> c) in a way gentler than what was used before the congestion collapse.

Right, but we're moving away from capacity shortages to a focus on 
better latencies. The speed of distributed compute (or speed of 
causality) is now mostly latency constrained.

Also, it's impossible per Jaffe & others for a TCP link to figure out 
the on-demand capacity so trying to get one via a "broken control loop" 
seems futile. I believe control theory states control loops need to be 
an order greater than what they're trying to control. I don't think an 
app or transport layer can do more than make educated guesses for its
control loop. Using a rating might help with that, but for sure it's not
accurate in space-time samples. (Note: many APs are rated 60+ Watts. 
What's the actual? Has to be sampled and that's only a sample. This 
leads to poor PoE designs - but I digress.)

Let's assume the transport layer should be designed to optimize the 
speed of causality. This also seems impossible because the e2e jitter is 
worse with respect to end host discovery so there seems no way to adapt 
from end host only.

If it's true that the end can only guess, maybe the solution domain 
comes from incorporating network measurements via telemetry with the ECN 
or equivalent? And an app can signal to the network elements to capture 
the e2e telemetry. I think this all has to happen within a few RTTs if 
the transport or host app is going to adjust.

Bob


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 19:33                           ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA dan
  2023-03-13 19:52                             ` Jeremy Austin
@ 2023-03-13 20:45                             ` Sebastian Moeller
  2023-03-13 21:02                               ` [LibreQoS] When do you drop? Always! Dave Taht
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 20:45 UTC (permalink / raw)
  To: dan
  Cc: Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Dan,


> On Mar 13, 2023, at 20:33, dan <dandenson@gmail.com> wrote:
> 
>> 
>>        I respectfully disagree, if say, my modem had a 4 GB queue I could theoretically burst ~4GB worth of data at line rate into that buffer without learning anything about the the modem-link capacity.
> 
> so this is where we're getting into staw man arguments.  Find me a
> single device or shaper with a 4GB buffer and we'll talk.

	[SM] Sure, the moment you tell me how to measure true capacity without load ;) but my point stands: initial bursts from my router to the modem will be absorbed by the modem's queues and will not be indicative of the link capacity.


>> 
>> 
>>> turn off the
>>> shaper and run anything.  run your speed test.  don't look at the
>>> speed test results, just use it to generate some traffic.  you'll find
>>> your peak and where you hit the buffers on the DSL modem by measuring
>>> on the interface and measuring latency.
>> 
>>        Peak of what? Exactly? The peak sending rate of my router is well known, its 1 Gbps gross ethernet rate...
> 
> I don't know how I can say it any clearer.  there is a port, any
> speed.  data flows across that port.  The peak data flowing is the
> measure.  simultaneously measuring latency will give the 'best' rate.
> so called 'goodput' which is a stupid name and I hate it but there it
> is.

	[SM] Sorry, the peak gross rate on my gigabit interface to the modem, in spite of my shaper, is always going to be 1 Gbps; what changes is the duty cycle... so without averaging, this method of yours looks only partially useful.


> 
>> 
>> 
>>> That speed test isn't giving
>>> you this data and more than Disney+, other than you get to pick when
>>> it runs.
>> 
>>        Hrm, no we sre back at actually saturating the link,
> 
> which we're doing all the time.  it's the entire premise of QoE.
> Links get saturated, manage them.

	[SM] Mmmh, my goal (and your promise) was to be able to estimate the saturation capacity before/without actually hitting it. Which is the whole reason why cake-autorate exists; if we knew that value, we would track (a low-passed version of) it with our shaper and be done with it...


> 
>> 
> 
>> 
>>        [SM] not really, given enough capacity, typical streaming protocols will actually not hit the ceiling, at least the one's I look at every now and then tend to stay well below actual capacity of the link.
> 
> Not sure where you're getting this info, I'm looking right at
> customers on everything from 25Mbps to 800Mbps plans.

	[SM] I guess I need to take a packet capture; I have a hunch that I might see ECN in action, and a lack of drops is not indicative of slow-start not ending. Ho-hum, something for my todo list...


>  And again, I'm
> not saying you can saturate the link intentionally, I'm saying that
> the tool doing the saturation isn't actually giving you accurate
> results.  You have to look at the interface and the latency for the
> results.  The speed test is a traffic generator, not a measuring tool.
> It's fundamentally cannot do the measuring, it's doesn't have the
> ability to see other flows on the interface.

	[SM] Ah, now I get your point, but I also ignore that point immediately, as that is a) not the capacity resolution I typically need, and b) in cake-autorate we actually extract interface counters exactly because we want to see all traffic. But comparing cake-autorate traces with speedtest curves (e.g. flent) looks pretty well correlated, so for my use cases the typical speedtests give actionable and useful (albeit not perfect) capacity estimates, the longer running the better. This is a strike against all of these 10-20 second tests, but e.g. fast.com can easily be configured to measure each direction for a full minute, which sidesteps our buffer-filling versus link-capacity discussion nicely, as my modem's buffers are not nearly large enough to absorb a noticeable portion of this 60-second load.


> 
>> 
>> 
>>        [SM] But my problem is that on variable rate links I want to measure the instantaneous capacity such that I can do adaptive admission control and avpid over filling my modem's DSL buffers (I wish they would do something like BQL, but alas they don't).
> 
> Literally measure the interface on a schedule or constantly and you're
> getting this measurement every time you use the link.  and if you
> measure the latency you're constantly finding the spot right below the
> buffers.

	[SM] Well, except I cannot measure the relevant interface veridically; the DSL SoC's internal buffering for G.INP retransmissions is not exposed to the OS but handled inside the "modem".


> 
> 
>> 
>>        [SM] With competent AQM (like cake on ingress and egress configured for per-internal-IP isolation) I do not even notice whether a speedtes runs or not, and from the reported capacity I can estimate the concurrent load from other endhosts in my network.
> 
> exactly.  EXACTLY.  You might just be coming around.

	[SM] One way of looking at it; I would say I have already been here for a decade and longer ;)

>  That speed test
> was held back by the shaper for your benefit NOT the speed test's.
> It's results are necessarily false.

	[SM] The question is not about false or wrong but about useful or useless. And I maintain that even a speedtest from an end-user system with all potential cross traffic is still useful.

>  YOU can estimate the capacity by
> adding up the speedtest results and your other uses.

	[SM] But here is the rub: this being a VDSL2 link, and me having had a look at the standards, I can calculate the maximum goodput over that link, and in routine speedtests I come close enough to it that I consider most speedtests useful estimates of capacity. If I see a test noticeably smaller than expected I tend to repeat that test with tighter control... No, I am typically not scraping numbers from kernel counters; I simply run iftop on the router, which quickly lets me see whether there are other noticeable data transfers ongoing.


>  Measuring the
> outside interface gives you exactly that.  the speed test does not.
> it's just a traffic generator for when you aren't generating it on
> your own.

	[SM] The perfect is the enemy of the good; I am deep in the "good enough is good enough" camp. Even though I am happy to obsess about details of per-packet overhead if I do not remind myself of the "good enough" mantra ;)


> 
> 
>> 
>> 
>>> Speed test cannot return actual capacity
>> 
>>        [SM] Conventional capcaity tests give a decent enough estimate of current capacity to be useful, I could not care less that they are potential not perfect, sorry. The question still is how to estimate capacity without loading the link...
> 
> you have to load the link to know this.  Again, the speed test is a
> traffic generator, it's not a measuring tool.  You have to measure at
> the wan interface to know this, you can never get it from the speed
> test.

	[SM] Again, I understand your point and where you are coming from, even though I do not fully agree. Yes, speedtests will not be 100% accurate and precise, but no, that is not a show stopper, as I am less concerned about individual bps and more about whether I hit the capacity I pay for +/- 10-15%. I am a stickler for details, but I am not going to harass my ISP (which I am satisfied with) just because it slightly under-delivers the contracted capacity ;)


>  And no, the speed test isn't a decent enough estimate.  

	[SM] Welcome to the EU; over here speedtests are essentially the officially blessed tool to check contract compliance by ISPs... and in the process of getting there the same arguments we currently exchange were exchanged as well... The EU made a good point: ISPs are free to put those numbers into contracts as they see fit, so they need to deliver these numbers in a user-confirmable way. That means ISPs have to eat the slack, e.g. my ISP gives me a 116.7 Mbps sync for a nominal 100 Mbps contract (the speedtest defaults to IPv6 if possible):
116.7 * (64/65) * ((1500-8-40-20-12)/(1500+34)) = 106.37 Mbps
which allows them to fulfill the contracted maximal rate of 100 Mbps easily (to allow for some measurement slack it is sufficient if the end user measures 90% of the contracted maximal rate).
Tl;dr: the methods that you argue strongly to be useless are actually mandated in the EU, and in Germany they even have legal standing in disputes between ISPs and their customers. If a strong point could be made that these measurements were so wrong as to be useless, I guess one of the sleazier ISPs would already have made it ;)
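For the curious, the arithmetic above can be checked with a short script; a minimal sketch, with the PTM 64/65 line encoding and the per-packet header bytes (8 PPPoE + 40 IPv6 + 20 TCP + 12 TCP timestamps) plus 34 bytes of framing taken from the figures in this thread (the helper name is mine, not from any tool discussed here):

```python
# Sketch of the contract-compliance arithmetic: gross VDSL2 sync rate to
# expected TCP/IPv6 goodput.  Constants are the ones quoted in the email.

def vdsl2_goodput_mbps(sync_mbps: float, mtu: int = 1500,
                       headers: int = 8 + 40 + 20 + 12,
                       framing: int = 34) -> float:
    """Estimate achievable TCP goodput from the gross DSL sync rate."""
    ptm = 64 / 65                                 # PTM 64b/65b line encoding
    payload_fraction = (mtu - headers) / (mtu + framing)
    return sync_mbps * ptm * payload_fraction

goodput = vdsl2_goodput_mbps(116.7)
print(f"{goodput:.2f} Mbps")                      # ~106.37 Mbps
assert goodput > 100                              # ISP can safely contract 100 Mbps
```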


> The
> more important the data is to you the more likely the test is bad and
> going to lie.  Internet feeling slow? run a speed test and put more
> pressure on the service and the speed test has less available to
> return results on, all the other services getting their slice of the
> pie.

	[SM] Bad excuse. All ISPs oversubscribe, which I consider an acceptable and economic way of operation, AS LONG as they expand "capacity" (or cancel contracts) once a "segment" shows signs of repeated and/or sustained overload. If you sell X Mbps to me, be prepared to actually deliver it at any time...
Personally I wonder why ISPs do not simply offer something like "fair share of an X Mbps aggregate link" shared with my neighbours, but as long as they charge me for X Mbps I expect that they actually deliver it. Again, occasional overload is just fine; sustained and predictable overload however is not... (which by the way is not my stance alone, it is essentially encoded in EU regulation 2015/2120)


> 
>> 
>> 
>>> Guess what the only way to get an actual measure of the capacity is?
>>> my way.  measure what's passing the interface and measure what happens
>>> to a reliable latency test during that time.
>> 
>>        [SM] This is, respectfully, what we do in cake-autorate, but that requires an actual load and only accidentally detects the capacity, if a high enough load is sustained long enough to evoke a latency increase. But I knew that already, what you initially wrote sounded to me like you had a method to detect instantaneous capacity without needing to generate load. (BTW, in cake-autorate we do not generate an artificial load (only artificial/active latency probes) but use the organic user generated traffic as load generator*).
>> 
>> *) If all endhosts are idle we do not care much about the capacity, only if there is traffic, however the quicker we can estimate the capacity the tigher our controller can operate.
>> 
> 
> See, you're coming around.  Cake is autorating (or very close, 'on
> device') at the wan port.  not the speed test device or software.

	[SM] We are purposefully not running speedtests to estimate capacity, as that would not be helpful; what we do is measure the existing load and the induced delay, and if the induced delay is larger than a threshold we reduce the shaper rate to reduce the load gently...


>  And
> the accurate data is collected by cake, not the speed test tool.

	[SM] Nah, we are not using cake's counters here but the kernel's traffic counters for the interface that cake is instantiated on. And no, these counters are not perfect either, as the kernel IIRC does not update them immediately but in a delayed fashion that is computationally cheaper. But for our purpose that is plenty "good enough"...
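For anyone wanting to try the counter-based approach themselves, a minimal sketch: poll the kernel's per-interface byte counters (on Linux exposed via /proc/net/dev) and difference two samples to get the achieved rate. The sample text, interface name, and helper names below are illustrative, not from cake-autorate itself:

```python
# Difference two /proc/net/dev samples to estimate the achieved link rate.

def parse_proc_net_dev(text: str) -> dict:
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev content."""
    counters = {}
    for line in text.splitlines()[2:]:            # skip the two header lines
        iface, _, rest = line.partition(":")
        fields = rest.split()
        counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def rate_mbps(bytes_before: int, bytes_after: int, interval_s: float) -> float:
    return (bytes_after - bytes_before) * 8 / interval_s / 1e6

# In practice read open("/proc/net/dev").read() on a schedule; hardcoded here.
sample_t0 = """Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth0: 1000000  800 0 0 0 0 0 0  500000  400 0 0 0 0 0 0"""
sample_t1 = sample_t0.replace("1000000", "2250000")   # +1.25 MB received in 1 s

rx0, _ = parse_proc_net_dev(sample_t0)["eth0"]
rx1, _ = parse_proc_net_dev(sample_t1)["eth0"]
print(rate_mbps(rx0, rx1, 1.0))                       # 10.0 Mbit/s ingress load
```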


>  That
> tool is reporting false information because it must, it doesn't know
> the other consumers on the network.  It's 'truest' when the network is
> quiet but the more talkers the more the tool lies.

	[SM] A tool does not lie; the interpretation of the reading becomes trickier, but that is a problem of the user, not the tool. (If I use a hammer to hammer in a screw, blame me, not the hammer or the screw.) ;)

> 
> cake, the kernel, and the wan port all have real info, the speed test
> tool does not.

	[SM] This is not where we started... and not what cake-autorate does; it also does not convince me that capacity tests are useless. I happily concede that they are neither 100% accurate, precise, nor reliable. There is a continuum between 100% correct and useless ;)

Regards
	Sebastian




^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 19:52                             ` Jeremy Austin
@ 2023-03-13 21:00                               ` Sebastian Moeller
  2023-03-13 21:27                                 ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-13 21:00 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: dan, Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat

Hi Jeremy,

> On Mar 13, 2023, at 20:52, Jeremy Austin <jeremy@aterlo.com> wrote:
> 
> 
> 
> On Mon, Mar 13, 2023 at 12:34 PM dan <dandenson@gmail.com> wrote:
> 
> See, you're coming around.  Cake is autorating (or very close, 'on
> device') at the wan port.  not the speed test device or software.  And
> the accurate data is collected by cake, not the speed test tool.  That
> tool is reporting false information because it must, it doesn't know
> the other consumers on the network.  It's 'truest' when the network is
> quiet but the more talkers the more the tool lies.
> 
> cake, the kernel, and the wan port all have real info, the speed test
> tool does not.
> 
> I'm running a bit behind on commenting on the thread (apologies, more later) but I point you back at my statement about NTIA (and, to a certain extent, the FCC): 
> 
> Consumers use speed tests to qualify their connection.

	[SM] And rightly so... this put a nice stop to the perverse practice of selling contracts stating "(up to) 100 Mbps" for links that could never reach that capacity; now an ISP is careful in what they promise... Speedtests (especially using the official speedtest app, which tries to make users pay attention to a number of important points, e.g. not over WiFi, but over an ethernet port that has a capacity above the contracted speed) seem to be good enough for that purpose. Really, over here that is the law, and ISPs still are doing fine; we are talking low single-digit thousands of complaints in a market with ~40 million households.

> 
> Whether AQM is applied or not, a speed test does not reflect in all circumstances the capacity of the pipe. One might argue that it seldom reflects it.

	[SM] But one would be wrong; at least the official speedtests over here are pretty reliable, and they seem to be competently managed. E.g. users need to put in the contracted speed (drop-down boxes to select the ISP and contract name) and the test infrastructure will only start the test if it managed to reserve sufficient capacity on the test servers to reliably saturate the contracted rate. This is a bit of engineering and not witchcraft, really. ;)

> Unfortunately, those who have "real info", to use Dan's term, are currently nearly powerless to use it. I am, if possible, on both the ISP and consumer side here.

	[SM] If you are talking about speedtests being systemically wrong in getting usable capacity estimates, I disagree; if your point is that a sole focus on this measure misses the way more important point of keeping latency under load limited, I fully agree. That point currently is lost on the national regulator over here as well.

> And yes, Preseem does have an iron in this fire, or at least a dog in this fight.

	[SM] Go team!

> Ironically, the FCC testing for CAF/RDOF actually *does* take interface load into account, only tests during peak busy hours, and /then/ does a speed test. But NTIA largely ignores that for BEAD.

	[SM] I admit that I have not looked deeply into these different test methods, and will shut up about this topic until I have, to avoid wasting your time.

Regards
	Sebastian


> 
> -- 
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
> 
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy@preseem.com
> 
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/


^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] When do you drop? Always!
  2023-03-13 20:45                             ` Sebastian Moeller
@ 2023-03-13 21:02                               ` Dave Taht
  0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2023-03-13 21:02 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Dave Taht via Starlink, libreqos, Rpm, rjmcmahon, bloat

[-- Attachment #1: Type: text/plain, Size: 1268 bytes --]

Attached is a picture of what slow start looks like on a 100Mbit plan
(acquired via the libreqos testbed, our tests vary, but if you would
like to see many ISP plans tested against (presently) cake, feel free
to click on https://payne.taht.net - it is not up all the time, nor
are the tests the same all the time, for details as to what is
running, please join us in the #libreqos:matrix.org chatroom)

An overall point I have been trying to make is that *at some point*,
any sufficiently long flow will exceed the available fifo queue
length, and drop packets, sometimes quite a lot. That is a point, the
high water mark, worth capturing the bandwidth in, say, the prior
100ms.  To me packet behaviors look a lot like musical waveforms,
especially when sampled at the appropriate nyquist rate for the
bandwidth and rtt. Out of any waveform, these days, I can usually pick
out what AQM (if any) is in action. I hope one day soon, more people
see patterns like these, and glean a deeper understanding.

I also keep hoping for someone to lean in, verify, and plot some
results I got recently against McKeown's theories of buffer sizing,
here:

https://blog.cerowrt.org/post/juniper/

I don't trust my results, especially when they are this good.
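For anyone who wants to sanity-check against those theories, a minimal sketch of the buffer-sizing rule being referenced (Appenzeller, Keslassy & McKeown, "Sizing Router Buffers": the classic bandwidth-delay product divided by the square root of the number of desynchronized long-lived flows). The link rate and RTT below are illustrative, not testbed measurements:

```python
# BDP / sqrt(n) buffer-sizing rule for n desynchronized long-lived TCP flows.

from math import sqrt

def buffer_bytes(link_bps: float, rtt_s: float, n_flows: int = 1) -> float:
    """Buffer needed to keep the link busy: bandwidth-delay product / sqrt(n)."""
    return link_bps * rtt_s / 8 / sqrt(n_flows)

bdp = buffer_bytes(100e6, 0.08)            # 100 Mbit/s, 80 ms RTT, 1 flow
print(f"1 flow:    {bdp/1e6:.2f} MB")      # 1.00 MB
print(f"100 flows: {buffer_bytes(100e6, 0.08, 100)/1e3:.0f} kB")  # 100 kB
```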

[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 26515 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 21:00                               ` Sebastian Moeller
@ 2023-03-13 21:27                                 ` dan
  2023-03-14  9:11                                   ` Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-13 21:27 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat, Jeremy Austin

[-- Attachment #1: Type: text/plain, Size: 5707 bytes --]

 I’m sticking to my guns on this, but am prepared to let this particular
argument rest.  The thread is approaching unreadable.

Let me throw something else out there.  It would be very nice to have some
standard packet type that was designed to be mangled by a traffic shaper.
So you could initiate a speed test specifically to stress-test the link and
then exchange a packet that the shaper would update both ways with all the
stats you might want.  Ie, speed test is getting 80Mbps but there’s an
additional 20Mbps on-link so it should report to the user that 100M
aggregate with the details broken out usably.  Could also report to that
speed test client and server things like latency over the last x minutes
along with throughput so again, could be charted out to show the ‘good put’
and similar numbers.  Basically, provide the end user with decently
accurate data that includes what the speed test app wasn’t able to see
itself.  It could also insert useful data around how many packets arrived
that the speed test app(s) could use to determine if there are issues on
wan or lan.

I say mangle here because many traffic shapers are transparent so the speed
test app itself doesn’t really have a way to ask the shaper directly.

My point in all of this is that if you’re giving the end user information,
it should be right.  No information is better than false information.  End
users will call their ISP or worse get on social media and trash them
because they bought a $29 netgear at Walmart that is terrible.

After all the entire point of all of this is end-user experience.  The only
benefit to ISPs is that happy users are good for business.  A lot of the
data that can be collected at various points along the path are better for
ISPs to use to update their networks to improve user experience, but aren’t
so useful to the 99% of users that just want low ‘lag’ on their games and
no buffering.




On Mar 13, 2023 at 3:00:23 PM, Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi Jeremy,
>
> On Mar 13, 2023, at 20:52, Jeremy Austin <jeremy@aterlo.com> wrote:
>
>
>
>
> On Mon, Mar 13, 2023 at 12:34 PM dan <dandenson@gmail.com> wrote:
>
>
> See, you're coming around.  Cake is autorating (or very close, 'on
>
> device') at the wan port.  not the speed test device or software.  And
>
> the accurate data is collected by cake, not the speed test tool.  That
>
> tool is reporting false information because it must, it doesn't know
>
> the other consumers on the network.  It's 'truest' when the network is
>
> quiet but the more talkers the more the tool lies.
>
>
> cake, the kernel, and the wan port all have real info, the speed test
>
> tool does not.
>
>
> I'm running a bit behind on commenting on the thread (apologies, more
> later) but I point you back at my statement about NTIA (and, to a certain
> extent, the FCC):
>
>
> Consumers use speed tests to qualify their connection.
>
>
> [SM] And rightly so... this put a nice stop to the perverse practice of
> selling contracts stating (up to) 100 Mbps for links that never could reach
> that capacity ever, now an ISP is careful in what they promise... Speedtest
> (especially using the official speedtest app that tries to make users pay
> attention to a number of important points, e.g. not over WiFi, but over an
> ethernet port that has a capacity above the contracted speed) seem to be
> good enough for that purpose. Really over here that is the law and ISP
> still are doing fine and we are taking low single digit thousands of
> complaints in a market with ~40 million households.
>
>
> Whether AQM is applied or not, a speed test does not reflect in all
> circumstances the capacity of the pipe. One might argue that it seldom
> reflects it.
>
>
> [SM] But one would be wrong, at least the official speedtests over here
> are pretty reliable, but they seem to be competenyly managed. E.g. users
> need to put in the contracted speed (drop down boxes to the select ISP and
> contract name) and the test infrastructure will only start the test if it
> managed to reserver sufficient capacity of the test servers to reliably
> saturate the contracted rate. This is a bit of engineering and not
> witchcraft, really. ;)
>
> Unfortunately, those who have "real info", to use Dan's term, are
> currently nearly powerless to use it. I am, if possible, on both the ISP
> and consumer side here.
>
>
> [SM] If you are talking about speedtests being systemicly wrong in getting
> usabe capacity estimates I disagree, if your point is that a sole focus on
> this measure is missing the way more important point od keeping latency
> under load limited, I fully agree. That point currently is lost on the
> national regulator over here as well.
>
> And yes, Preseem does have an iron in this fire, or at least a dog in this
> fight.
>
>
> [SM] Go team!
>
> Ironically, the FCC testing for CAF/RDOF actually *does* take interface
> load into account, only tests during peak busy hours, and /then/ does a
> speed test. But NTIA largely ignores that for BEAD.
>
>
> [SM] I admit that I have not looked deeply into these different test
> methods, and will shut up about this topic until I did to avoid wasting
> your time.
>
> Regards
> Sebastian
>
>
>
> --
>
> --
>
> Jeremy Austin
>
> Sr. Product Manager
>
> Preseem | Aterlo Networks
>
> preseem.com
>
>
> Book a Call: https://app.hubspot.com/meetings/jeremy548
>
> Phone: 1-833-733-7336 x718
>
> Email: jeremy@preseem.com
>
>
> Stay Connected with Newsletters & More:
> https://preseem.com/stay-connected/
>
>
>

[-- Attachment #2: Type: text/html, Size: 7951 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] On FiWi
  2023-03-13 20:28                                   ` rjmcmahon
@ 2023-03-14  4:27                                     ` rjmcmahon
  2023-03-14 11:10                                       ` [LibreQoS] [Starlink] " Mike Puchol
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-14  4:27 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: dan, Jeremy Austin, Rpm, libreqos, Dave Taht via Starlink, bloat

To change the topic - curious to thoughts on FiWi.

Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS 
Radios, Antennas) and which is point to point inside a building 
connected to virtualized APs fiber hops away. Each remote radio head 
(RRH) would consume 5W or less and only when active. No need for things 
like zigbee, or meshes, or threads as each radio has a fiber connection 
via Corning's actifi or equivalent. Eliminate the AP/Client power 
imbalance. Plastics also can house smoke or other sensors.

Some reminders from Paul Baran in 1994 (and from David Reed)

o) Shorter range rf transceivers connected to fiber could produce a 
significant improvement - - tremendous improvement, really.
o) a mixture of terrestrial links plus shorter range radio links has the 
effect of increasing by orders and orders of magnitude the amount of 
frequency spectrum that can be made available.
o) By authorizing high power to support a few users to reach slightly 
longer distances we deprive ourselves of the opportunity to serve the 
many.
o) Communications systems can be built with 10dB ratio
o) Digital transmission when properly done allows a small signal to 
noise ratio to be used successfully to retrieve an error free signal.
o) And, never forget, any transmission capacity not used is wasted 
forever, like water over the dam. Not using such techniques represent 
lost opportunity.

And on waveguides:

o) "Fiber transmission loss is ~0.5dB/km for single mode fiber, 
independent of modulation"
o) “Copper cables and PCB traces are very frequency dependent.  At 
100Gb/s, the loss is in dB/inch."
o) "Free space: the power density of the radio waves decreases with the 
square of distance from the transmitting antenna due to spreading of the 
electromagnetic energy in space according to the inverse square law"
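To make that contrast concrete, a minimal sketch comparing the standard free-space path loss formula against the ~0.5 dB/km single-mode fiber figure quoted above; the 5 GHz carrier and the distances are my own illustrative choices:

```python
# Radio loss grows with the square of distance (free-space path loss);
# fiber loses a flat ~0.5 dB/km regardless of modulation.

from math import log10

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return 20 * log10(distance_m) + 20 * log10(freq_hz) - 147.55

def fiber_loss_db(distance_m: float, db_per_km: float = 0.5) -> float:
    return db_per_km * distance_m / 1000

for d in (10, 100, 1000):
    print(f"{d:>5} m: radio {fspl_db(d, 5e9):6.1f} dB   fiber {fiber_loss_db(d):.4f} dB")
```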

The sunk costs & long-lived parts of FiWi are the fiber and the CPE 
plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be 
pluggable, allowing for field upgrades. Just like swapping out SFP in a 
data center.

This approach basically drives out WiFi latency by eliminating shared 
queues and increases capacity by orders of magnitude by leveraging 10dB 
in the spatial dimension, all of which is achieved by a physical design. 
Just place enough RRHs as needed (similar to a pop up sprinkler in an 
irrigation system.)

Start and build this for an MDU and the value of the building improves. 
Sadly, there seems no way to capture that value other than over long 
term use. It doesn't matter whether the leader of the HOA tries to 
capture the value or if a last mile provider tries. The value remains 
sunk or hidden with nothing on the asset side of the balance sheet. 
We've got a CAPEX spend that has to be made up via "OPEX returns" over 
years.

But the asset is there.

How do we do this?

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
  2023-03-13 21:27                                 ` dan
@ 2023-03-14  9:11                                   ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-14  9:11 UTC (permalink / raw)
  To: dan
  Cc: Rpm, libreqos, Dave Taht via Starlink, rjmcmahon, bloat, Jeremy Austin

Hi Dan,


> On Mar 13, 2023, at 22:27, dan <dandenson@gmail.com> wrote:
> 
> I’m sticking to my guns on this, but am prepared to let this particular argument rest.  The threads is approaching unreadable.

	[SM] Sorry, I have a tendency of simultaneously pushing multiple discussion threads instead of focussing on the most important...

> 
> Let me throw something else out there.  It would be very nice to have some standard packet type that was designed to be mangled by a traffic shaper.  So you could initiate a speed test specifically to stress-test the link and then exchange a packet that the shaper would update both ways with all the stats you might want.  Ie, speed test is getting 80Mbps but there’s an additional 20Mbps on-link so it should report to the user that 100M aggregate with the details broken out usably. 

	[SM] Yeah, that does not really work; the traffic shaper does not know that elusive capacity either... on some link technologies like ethernet something reportable might exist, but on variable-rate links not so much. However it would be nice if traffic shapers could be tickled to reveal their own primary configuration. As I answered to Dave in a different post, we are talking about 4 numbers per shaper instance here:
gross shaper rate, per-packet overhead, MPU (minimum packet unit), link-layer-specific accounting (e.g. 48/53 encapsulation for ATM/AAL5).
The first two are clearly shaper-specific and I expect all competent shapers to use these; MPU is more a protocol issue (e.g. all link layers sending partial ethernet frames with frame check sequence inherit ethernet's minimal packet size of 64 bytes plus overhead).
	For my own shaper I know these already, but my ISP's shapers are pretty opaque to me, so being able to query these would be great. (BTW, for speedtests in disputes with my ISP, I obviously disable my traffic shaper, so that the capacity loss is not their responsibility.)
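A minimal sketch of how those 4 numbers turn an IP packet length into the byte count a shaper actually charges against its gross rate (this mirrors what cake's overhead/mpu/atm keywords do; the overhead values below are illustrative, not anyone's actual provisioning):

```python
# Per-packet on-wire accounting: add overhead, honor MPU, optionally round
# up into 53-byte ATM cells carrying 48 bytes of payload each (48/53).

from math import ceil

def wire_bytes(ip_len: int, overhead: int, mpu: int = 0, atm: bool = False) -> int:
    size = max(ip_len + overhead, mpu)   # add per-packet overhead, honor MPU
    if atm:                              # ATM/AAL5 cell-quantized accounting
        size = ceil(size / 48) * 53
    return size

# Ethernet-style accounting: 34 B overhead, 64 B minimum frame
print(wire_bytes(1460, overhead=34, mpu=64))      # 1494
print(wire_bytes(20, overhead=34, mpu=64))        # 64 (MPU dominates)
# Same packet over an ATM/AAL5 link (e.g. ADSL with PPPoA)
print(wire_bytes(1460, overhead=10, atm=True))    # 1643 (31 cells * 53 B)
```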


>  Could also report to that speed test client and server things like latency over the last x minutes along

	[SM] A normal shaper does not know this... and even cake/fq_codel, which measure sojourn time per packet and have a decent idea about a packet's flow identity (not perfect, as there is a limited number of hash buckets), do not report anything useful regarding "average" sojourn time for the packets in the measurement flows... (they would need to know when to start and when to stop, at the very least). Honestly this looks more like a post-hoc job to be performed on packet captures than an on-line thing to expect from a traffic shaper/AQM.

> with throughput so again, could be charted out to show the ‘good put’ and similar numbers.

	[SM] Sorry to sound contrarian, but goodput is IMHO a number quite relevant to end users, so that speed tests report an estimate of that number is A-OK with me; but I also understand that speedtests cannot report the veridical gross bottleneck capacity in all cases anyway, due to lack of important information.

>  Basically, provide the end user with decently accurate data that includes what the speed test app wasn’t able to see itself. 

	[SM] Red herring IMHO; speedtests have clear issues and problems, but the fact that they do not measure 100% of the packets traversing a link is not one of them. They mostly come close enough to the theoretical numbers that the differences can simply be ignored... As I said, my ISP provisions a gross DSL sync of 116.7 Mbps but contractually only asserts a maximum of 100 Mbps goodput (over IPv6); the math for this works well, and actually my ISP tends to over-fulfil the contractual rate in that I get ~105 Mbps of the 100 my contract promises...
	Sure, personally I am more interested in the actual gross rate my ISP sets its traffic shapers for my link to, but generally hitting the contract numbers is not rocket science if one is careful about which rate to promise... ;)


>  It could also insert useful data around how many packets arrived that the speed test app(s) could use to determine if there are issues on wan or lan.  

	[SM] I think speedtests should report: number of parallel flows, total number of packets, total number of bytes transferred, number of retransmits, and finally MTU and, more importantly for TCP, MSS (or average packet size, but that is much harder to get at with TCP). AND latency under load ;)
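A minimal sketch of what such a report could look like as a single record; all field names and example values here are hypothetical, not from any existing speedtest:

```python
# Hypothetical speedtest report carrying the fields wished for above.

from dataclasses import dataclass

@dataclass
class SpeedtestReport:
    parallel_flows: int
    total_packets: int
    total_bytes: int
    retransmits: int
    mtu: int
    tcp_mss: int
    idle_rtt_ms: float
    loaded_rtt_ms: float          # the "latency under load" number

    @property
    def retransmit_rate(self) -> float:
        return self.retransmits / self.total_packets

    @property
    def latency_increase_ms(self) -> float:
        return self.loaded_rtt_ms - self.idle_rtt_ms

r = SpeedtestReport(parallel_flows=4, total_packets=90000,
                    total_bytes=130_000_000, retransmits=90,
                    mtu=1500, tcp_mss=1448,
                    idle_rtt_ms=12.0, loaded_rtt_ms=45.0)
print(f"{r.retransmit_rate:.3%} retransmits, +{r.latency_increase_ms:.0f} ms under load")
```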

> 
> I say mangle here because many traffic shapers are transparent so the speed test app itself doesn’t really have a way to ask the shaper directly. 

	[SM] I am pretty sure that is not going to come... as this smells like a gross layering violation, unless one comes up with some IP extension header that carries that information. Having intermediary nodes write into the payload area of packets is frowned upon in the IETF IIRC...

> 
> My point in all of this is that if you’re giving the end user information, it should be right.  No information is better than false information.

	[SM] A normal speedtest is not actually wrong just because it is not 100% precise and accurate. At the current time users operating a traffic shaper can be expected to turn it off during an official speedtest. If a user wanted to cheat and artificially lower their achieved rates, there is way more bang for your buck in either forcing IP fragmentation or using MSS clamping to cause the speedtest to use smaller packets. This is not only due to the higher overhead fraction for smaller packets, but simply because, in my admittedly limited experience, few CPE seem prepared for the PPS processing required to deal with a saturating flow of small packets.
	However, cheating in official tests is neither permitted nor to be expected (most humans act honestly). Between business partners like ISP and customer there should be an initial assumption of good will in either direction, no?
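The MSS-clamping effect is easy to quantify; a minimal sketch, assuming 40 bytes of TCP/IPv4 headers and 34 bytes of framing per packet (illustrative values):

```python
# At a fixed gross rate, smaller packets mean a larger overhead fraction
# and far more packets per second for the CPE to process.

def goodput_fraction(mss: int, headers: int = 40, framing: int = 34) -> float:
    return mss / (mss + headers + framing)

def pps(gross_bps: float, mss: int, headers: int = 40, framing: int = 34) -> float:
    return gross_bps / 8 / (mss + headers + framing)

for mss in (1460, 536):
    print(f"MSS {mss}: {goodput_fraction(mss):.1%} goodput, "
          f"{pps(100e6, mss):,.0f} packets/s at 100 Mbit/s")
```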

>  End users will call their ISP or worse get on social media and trash them because they bought a $29 netgear at Walmart that is terrible.

	[SM] Maybe, but that is unlikely to affect the reputation of an ISP unless it is not a rare exception but the rule... Think about reading e.g. Amazon 1-star reviews: some read like a genuinely faulty product and some clearly show the writer had no clue; the same is true for social media posts. Unless you happen to be in the center of a veritable shit storm, a decent ISP should be able to shrug off a few negative comments, no?
	Over here, looking at ISPs' forums, the issue is often reversed: genuine problem reports are rejected because end users did not use the ISP-supplied router/modem... (and admittedly that can cause problems, but these problems are not guaranteed).


> 
> After all the entire point if all of this is end-user experience.  The only benefit to ISPs is that happy users are good for business.

	[SM] A customer who can see and confirm that what their ISP promised is what the ISP actually delivers is likely to feel validated in selecting that ISP. As a rule, happy customers tend to stick...


>  A lot of the data that can be collected at various points along the path are better for ISPs to use to update their networks to improve user experience, but aren’t so useful to the 99% of users that just want low ‘lag’ on their games and no buffering.
> 
> 
> 
> 
> On Mar 13, 2023 at 3:00:23 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> Hi Jeremy,
>> 
>>> On Mar 13, 2023, at 20:52, Jeremy Austin <jeremy@aterlo.com> wrote:
>>> 
>>> 
>>> 
>>> On Mon, Mar 13, 2023 at 12:34 PM dan <dandenson@gmail.com> wrote:
>>> 
>>> See, you're coming around.  Cake is autorating (or very close, 'on
>>> device') at the wan port.  not the speed test device or software.  And
>>> the accurate data is collected by cake, not the speed test tool.  That
>>> tool is reporting false information because it must, it doesn't know
>>> the other consumers on the network.  It's 'truest' when the network is
>>> quiet but the more talkers the more the tool lies.
>>> 
>>> cake, the kernel, and the wan port all have real info, the speed test
>>> tool does not.
>>> 
>>> I'm running a bit behind on commenting on the thread (apologies, more later) but I point you back at my statement about NTIA (and, to a certain extent, the FCC): 
>>> 
>>> Consumers use speed tests to qualify their connection.
>> 
>> [SM] And rightly so... this put a nice stop to the perverse practice of selling contracts stating (up to) 100 Mbps for links that never could reach that capacity ever, now an ISP is careful in what they promise... Speedtest (especially using the official speedtest app that tries to make users pay attention to a number of important points, e.g. not over WiFi, but over an ethernet port that has a capacity above the contracted speed) seem to be good enough for that purpose. Really over here that is the law and ISP still are doing fine and we are taking low single digit thousands of complaints in a market with ~40 million households.
>> 
>>> 
>>> Whether AQM is applied or not, a speed test does not reflect in all circumstances the capacity of the pipe. One might argue that it seldom reflects it.
>> 
>> [SM] But one would be wrong, at least the official speedtests over here are pretty reliable, but they seem to be competenyly managed. E.g. users need to put in the contracted speed (drop down boxes to the select ISP and contract name) and the test infrastructure will only start the test if it managed to reserver sufficient capacity of the test servers to reliably saturate the contracted rate. This is a bit of engineering and not witchcraft, really. ;)
>> 
>>> Unfortunately, those who have "real info", to use Dan's term, are currently nearly powerless to use it. I am, if possible, on both the ISP and consumer side here.
>> 
>> [SM] If you are talking about speedtests being systemically wrong in producing usable capacity estimates, I disagree; if your point is that a sole focus on this measure misses the far more important point of keeping latency under load limited, I fully agree. That point is currently lost on the national regulator over here as well.
>> 
>>> And yes, Preseem does have an iron in this fire, or at least a dog in this fight.
>> 
>> [SM] Go team!
>> 
>>> Ironically, the FCC testing for CAF/RDOF actually *does* take interface load into account, only tests during peak busy hours, and /then/ does a speed test. But NTIA largely ignores that for BEAD.
>> 
>> [SM] I admit that I have not looked deeply into these different test methods, and will shut up about this topic until I did to avoid wasting your time.
>> 
>> Regards
>> Sebastian
>> 
>> 
>>> 
>>> -- 
>>> --
>>> Jeremy Austin
>>> Sr. Product Manager
>>> Preseem | Aterlo Networks
>>> preseem.com
>>> 
>>> Book a Call: https://app.hubspot.com/meetings/jeremy548
>>> Phone: 1-833-733-7336 x718
>>> Email: jeremy@preseem.com
>>> 
>>> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>> 


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] On FiWi
  2023-03-14  4:27                                     ` [LibreQoS] On FiWi rjmcmahon
@ 2023-03-14 11:10                                       ` Mike Puchol
  2023-03-14 16:54                                         ` [LibreQoS] [Rpm] " Robert McMahon
  2023-03-17 16:38                                         ` [LibreQoS] [Rpm] " Dave Taht
  0 siblings, 2 replies; 183+ messages in thread
From: Mike Puchol @ 2023-03-14 11:10 UTC (permalink / raw)
  To: Dave Taht via Starlink; +Cc: libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 7399 bytes --]

Hi Bob,

You hit on a set of very valid points, which I'll complement with my views on where the industry (the bit of it that affects WISPs) is heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)

I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and Mimosa, but also newer entrants such as Tarana, increase the performance and on-paper specs of their equipment. My examples below are centered on the African market; if you operate in Europe or the US, where you can charge customers a higher install fee, or even charge them a break-up fee if they don't return equipment, the economics work.

Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR per endpoint.

If every customer a WISP installs represents, say, $100 of CAPEX at install time ($50 for the antenna plus $50 for cabling, router, etc.), and you charge a $30 install fee, you have $70 to recover, which you recover from the monthly contribution the customer makes. If the contribution after OPEX is, say, $10, it takes you 7 months to recover the full install cost. Not bad, and doable even in low-income markets.

Fast-forward to the next-generation version. Now, the CAPEX at install is $250, you need to recover $220, and it will take you 22 months, which is above the usual 18 months that investors look for.

The focus, therefore, has to be the lever that has the largest effect on the unit economics, which is the per-customer cost. I have drawn what my ideal FiWi network would look like:



Taking you through this - we start with a 1-port, low-cost EPON OLT (or you could go for 2, 4, or 8 ports as you add capacity). This OLT has capacity for 64 ONUs on its single port. Instead of connecting the typical fiber infrastructure, with kilometers of cables that break, require maintenance, etc., we insert an EPON-to-Ethernet converter (I added "magic" because these don't exist AFAIK).

This converter allows us to connect our $2k sector radio, and serve the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away. Each ODU then has a reverse converter, which gives us EPON again.

Once we are back on EPON, we can insert splitters, for example, pre-connectorized outdoor 1:16 boxes. Every customer install now involves a 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU.

Using this deployment method, we could connect up to 16 customers to a single $200 endpoint, so the endpoint CAPEX per customer is now $12.50. Add the ONU, cable, etc. and we have a per-install CAPEX of $82.50 (assuming the same $50 of extras we had before), and an even shorter break-even. In addition, as the endpoints support higher capacity, we can provision at least the same, if not more, capacity per customer.
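The unit economics running through this message boil down to one payback formula. A minimal sketch in Python, using only the dollar figures quoted here (the helper function name is mine):

```python
# Payback arithmetic for the three install scenarios in this message.
# Figures are the ones quoted; the helper name is mine.

def months_to_recover(capex: float, install_fee: float,
                      monthly_margin: float) -> float:
    """Months until the unrecovered install CAPEX reaches zero."""
    return (capex - install_fee) / monthly_margin

# Legacy gear: $50 endpoint + $50 extras, $30 install fee, $10/month after OPEX
print(months_to_recover(100, 30, 10))    # 7.0

# Next-gen gear: $200 endpoint + $50 extras
print(months_to_recover(250, 30, 10))    # 22.0

# Shared endpoint via a 1:16 EPON split: $200/16 + $20 ONU + $50 extras = $82.50
print(months_to_recover(200 / 16 + 20 + 50, 30, 10))   # 5.25
```

The split-endpoint scenario beats even the legacy payback, which is the whole argument for moving the expensive radio from the customer side to the network side.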

Other advantages: the $200 ODU is no longer customer equipment and CAPEX, but network equipment, and as such, can operate under a longer break-even timeline, and be financed by infrastructure PE funds, for example. As a result, churn has a much lower financial impact on the operator.

The main reason why this wouldn't work today is that EPON, as we know, is synchronous, and requires the OLT to orchestrate how long, and when, each ONU can transmit. Having wireless hops and media conversions will introduce latencies which can break down the communications (e.g. one ONU may transmit, get delayed on the radio link, and end up overlapping another ONU that transmitted in the next slot). Thus, either the "magic" box needs to account for this, or a new hybrid EPON-wireless protocol must be developed.
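To put rough numbers on the timing problem, here is a back-of-the-envelope sketch. The propagation delay follows from the 10 km link length in this message; the converter latency and the guard-gap comparison are illustrative assumptions, not spec values:

```python
# Back-of-the-envelope delay budget for the hybrid EPON-wireless idea above.
# EPON upstream is time-slotted: the OLT ranges each ONU and schedules bursts
# so they arrive back-to-back. An uncompensated wireless hop shifts a burst by
# its extra delay, which is what causes the slot-overlap problem described.

C_MPS = 299_792_458  # speed of light in vacuum, m/s

def extra_one_way_delay_us(link_km: float, conversions: int,
                           conv_latency_us: float) -> float:
    """Added one-way delay: free-space propagation plus media conversions."""
    propagation_us = link_km * 1_000 / C_MPS * 1e6
    return propagation_us + conversions * conv_latency_us

# 10 km radio hop, two converters (EPON->Ethernet and back) at an assumed
# ~5 us each. The result is tens of microseconds - far larger than the
# sub-microsecond-scale gaps an OLT leaves between bursts - so an unranged
# delay of this size would overlap neighbouring slots.
delay = extra_one_way_delay_us(10, 2, 5.0)
print(f"added one-way delay: {delay:.1f} us")  # added one-way delay: 43.4 us
```

This is why the "magic" box would have to participate in ranging (present itself to the OLT with the wireless delay already compensated) rather than blindly convert frames.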

My main point here: the industry is moving away from the unconnected. All the claims I heard and saw at MWC about "connecting the unconnected" had zero resonance with the financial drivers that the unconnected really operate under, on top of IT literacy, digital skills, devices, power...

Best,

Mike
On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <starlink@lists.bufferbloat.net>, wrote:
> To change the topic - curious to thoughts on FiWi.
>
> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
> Radios, Antennas) and which is point to point inside a building
> connected to virtualized APs fiber hops away. Each remote radio head
> (RRH) would consume 5W or less and only when active. No need for things
> like zigbee, or meshes, or threads as each radio has a fiber connection
> via Corning's actifi or equivalent. Eliminate the AP/Client power
> imbalance. Plastics also can house smoke or other sensors.
>
> Some reminders from Paul Baran in 1994 (and from David Reed)
>
> o) Shorter range rf transceivers connected to fiber could produce a
> significant improvement - - tremendous improvement, really.
> o) a mixture of terrestrial links plus shorter range radio links has the
> effect of increasing by orders and orders of magnitude the amount of
> frequency spectrum that can be made available.
> o) By authorizing high power to support a few users to reach slightly
> longer distances we deprive ourselves of the opportunity to serve the
> many.
> o) Communications systems can be built with 10dB ratio
> o) Digital transmission when properly done allows a small signal to
> noise ratio to be used successfully to retrieve an error free signal.
> o) And, never forget, any transmission capacity not used is wasted
> forever, like water over the dam. Not using such techniques represent
> lost opportunity.
>
> And on waveguides:
>
> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
> independent of modulation"
> o) “Copper cables and PCB traces are very frequency dependent. At
> 100Gb/s, the loss is in dB/inch."
> o) "Free space: the power density of the radio waves decreases with the
> square of distance from the transmitting antenna due to spreading of the
> electromagnetic energy in space according to the inverse square law"
>
> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be
> pluggable, allowing for field upgrades. Just like swapping out SFP in a
> data center.
>
> This approach basically drives out WiFi latency by eliminating shared
> queues and increases capacity by orders of magnitude by leveraging 10dB
> in the spatial dimension, all of which is achieved by a physical design.
> Just place enough RRHs as needed (similar to a pop up sprinkler in an
> irrigation system.)
>
> Start and build this for an MDU and the value of the building improves.
> Sadly, there seems no way to capture that value other than over long
> term use. It doesn't matter whether the leader of the HOA tries to
> capture the value or if a last mile provider tries. The value remains
> sunk or hidden with nothing on the asset side of the balance sheet.
> We've got a CAPEX spend that has to be made up via "OPEX returns" over
> years.
>
> But the asset is there.
>
> How do we do this?
>
> Bob
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2.1: Type: text/html, Size: 8415 bytes --]

[-- Attachment #2.2: Hybrid EPON-Wireless network.png --]
[-- Type: image/png, Size: 149871 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] On FiWi
  2023-03-14 11:10                                       ` [LibreQoS] [Starlink] " Mike Puchol
@ 2023-03-14 16:54                                         ` Robert McMahon
  2023-03-14 17:06                                           ` Robert McMahon
  2023-03-17 16:38                                         ` [LibreQoS] [Rpm] " Dave Taht
  1 sibling, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-14 16:54 UTC (permalink / raw)
  To: Mike Puchol; +Cc: Dave Taht via Starlink, Rpm, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 8913 bytes --]

Hi Mike,

I'm thinking more of fiber to the room. The last few meters are WiFi; everything else is fiber. Those radios would be a max of 20' from the associated STA, at PHY rates of 2.8 Gb/s per spatial stream. The common MIMO is 2x2, so each radio head or WiFi transceiver supports 5.6G with no queueing delay. Wholesale is $5 and retail $19.95 per pluggable transceiver, sold at Home Depot next to the irrigation aisle. 10 per house is $199, and each room gets a dedicated 5.6G PHY rate. Need more devices in a space? Pick an RRH with more CMOS radios. Also, the antennas would be patch antennas that fill the room properly. Then plug in an optional sensor for fire alerting.
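A quick sanity check of the per-room numbers above (the 2.8 Gb/s per-stream rate, 2x2 MIMO, and $19.95 retail figures are from this message; cost is tracked in cents to avoid float artifacts):

```python
# Fiber-to-the-room arithmetic: per-room PHY capacity and whole-house cost.

stream_rate_gbps = 2.8
mimo_streams = 2                                  # 2x2 MIMO -> 2 spatial streams
room_phy_gbps = stream_rate_gbps * mimo_streams   # 5.6 Gb/s per radio head

retail_cents = 1995                               # $19.95 per pluggable transceiver
rooms = 10
house_cost_cents = retail_cents * rooms           # $199.50 for ten rooms

print(room_phy_gbps, house_cost_cents / 100)      # 5.6 199.5
```

So a whole house of dedicated per-room radio heads costs about what a single mid-range AP does today, which is the economic point of the pluggable-transceiver model.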


A digression: a lot of signal-processing engineers have been working on TX beamforming. The best beam is fiber; just do that. It can even turn corners, and goes exactly where it's needed at very low energies. This is similar to PVC pipes in irrigation systems, which are designed to take water to spray heads.

The cost is the cable plant, and that's labor more than materials. Similarly for irrigation: PVC is inexpensive and lasts decades. Avoiding return labor means using future-proof materials, e.g. fiber.

Bob



On Mar 14, 2023, 4:10 AM, at 4:10 AM, Mike Puchol via Rpm <rpm@lists.bufferbloat.net> wrote:
>Hi Bob,
>
>You hit on a set of very valid points, which I'll complement with my
>views on where the industry (the bit of it that affects WISPs) is
>heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)
>
>I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium,
>and Mimosa, but also newer entrants such as Tarana, increase the
>performance and on-paper specs of their equipment. My examples below
>are centered on the African market, if you operate in Europe or the US,
>where you can charge customers a higher install fee, or even charge
>them a break-up fee if they don't return equipment, the economics work.
>
>Where currently a ~$500 sector radio could serve ~60 endpoints, at a
>cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the
>antenna that you mount on the roof), and supply ~2.5 Mbps CIR per
>endpoint, the evolution is now a ~$2,000+ sector radio, a $200
>endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR
>per endpoint.
>
>If every customer a WISP installs represents, say, $100 CAPEX at
>install time ($50 for the antenna + cabling, router, etc), and you
>charge a $30 install fee, you have $70 to recover, and you recover from
>the monthly contribution the customer makes. If the contribution after
>OPEX is, say, $10, it takes you 7 months to recover the full install
>cost. Not bad, doable even in low-income markets.
>
>Fast-forward to the next-generation version. Now, the CAPEX at install
>is $250, you need to recover $220, and it will take you 22 months,
>which is above the usual 18 months that investors look for.
>
>The focus, thereby, has to be the lever that has the largest effect on
>the unit economics - which is the per-customer cost. I have drawn what
>my ideal FiWi network would look like:
>
>
>
>Taking you through this - we start with a 1-port, low-cost EPON OLT (or
>you could go for 2, 4, 8 ports as you add capacity). This OLT has
>capacity for 64 ONUs on its single port. Instead of connecting the
>typical fiber infrastructure with kilometers of cables which break,
>require maintenance, etc. we insert an EPON to Ethernet converter (I
>added "magic" because these don't exist AFAIK).
>
>This converter allows us to connect our $2k sector radio, and serve the
>$200 endpoints (ODUs) over wireless point-to-multipoint up to 10km
>away. Each ODU then has a reverse converter, which gives us EPON again.
>
>Once we are back on EPON, we can insert splitters, for example,
>pre-connectorized outdoor 1:16 boxes. Every customer install now
>involves a 100 meter roll of pre-connectorized 2-core drop cable, and a
>$20 EPON ONU.
>
>Using this deployment method, we could connect up to 16 customers to a
>single $200 endpoint, so the enpoint CAPEX per customer is now $12.5.
>Add the ONU, cable, etc. and we have a per-install CAPEX of $82.5
>(assuming the same $50 of extras we had before), and an even shorter
>break-even. In addition, as the endpoints support higher capacity, we
>can provision at least the same, if not more, capacity per customer.
>
>Other advantages: the $200 ODU is no longer customer equipment and
>CAPEX, but network equipment, and as such, can operate under a longer
>break-even timeline, and be financed by infrastructure PE funds, for
>example. As a result, churn has a much lower financial impact on the
>operator.
>
>The main reason why this wouldn't work today is that EPON, as we know,
>is synchronous, and requires the OLT to orchestrate the amount of time
>each ONU can transmit, and when. Having wireless hops and media
>conversions will introduce latencies which can break down the
>communications (e.g. one ONU may transmit, get delayed on the radio
>link, and end up overlapping another ONU that transmitted on the next
>slot). Thus, either the "magic" box needs to account for this, or an
>new hybrid EPON-wireless protocol developed.
>
>My main point here: the industry is moving away from the unconnected.
>All the claims I heard and saw at MWC about "connecting the
>unconnected" had zero resonance with the financial drivers that the
>unconnected really operate under, on top of IT literacy, digital
>skills, devices, power...
>
>Best,
>
>Mike
>On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink
><starlink@lists.bufferbloat.net>, wrote:
>> To change the topic - curious to thoughts on FiWi.
>>
>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>> Radios, Antennas) and which is point to point inside a building
>> connected to virtualized APs fiber hops away. Each remote radio head
>> (RRH) would consume 5W or less and only when active. No need for
>things
>> like zigbee, or meshes, or threads as each radio has a fiber
>connection
>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>> imbalance. Plastics also can house smoke or other sensors.
>>
>> Some reminders from Paul Baran in 1994 (and from David Reed)
>>
>> o) Shorter range rf transceivers connected to fiber could produce a
>> significant improvement - - tremendous improvement, really.
>> o) a mixture of terrestrial links plus shorter range radio links has
>the
>> effect of increasing by orders and orders of magnitude the amount of
>> frequency spectrum that can be made available.
>> o) By authorizing high power to support a few users to reach slightly
>> longer distances we deprive ourselves of the opportunity to serve the
>> many.
>> o) Communications systems can be built with 10dB ratio
>> o) Digital transmission when properly done allows a small signal to
>> noise ratio to be used successfully to retrieve an error free signal.
>> o) And, never forget, any transmission capacity not used is wasted
>> forever, like water over the dam. Not using such techniques represent
>> lost opportunity.
>>
>> And on waveguides:
>>
>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>> independent of modulation"
>> o) “Copper cables and PCB traces are very frequency dependent. At
>> 100Gb/s, the loss is in dB/inch."
>> o) "Free space: the power density of the radio waves decreases with
>the
>> square of distance from the transmitting antenna due to spreading of
>the
>> electromagnetic energy in space according to the inverse square law"
>>
>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could
>be
>> pluggable, allowing for field upgrades. Just like swapping out SFP in
>a
>> data center.
>>
>> This approach basically drives out WiFi latency by eliminating shared
>> queues and increases capacity by orders of magnitude by leveraging
>10dB
>> in the spatial dimension, all of which is achieved by a physical
>design.
>> Just place enough RRHs as needed (similar to a pop up sprinkler in an
>> irrigation system.)
>>
>> Start and build this for an MDU and the value of the building
>improves.
>> Sadly, there seems no way to capture that value other than over long
>> term use. It doesn't matter whether the leader of the HOA tries to
>> capture the value or if a last mile provider tries. The value remains
>> sunk or hidden with nothing on the asset side of the balance sheet.
>> We've got a CAPEX spend that has to be made up via "OPEX returns"
>over
>> years.
>>
>> But the asset is there.
>>
>> How do we do this?
>>
>> Bob
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 10121 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] On FiWi
  2023-03-14 16:54                                         ` [LibreQoS] [Rpm] " Robert McMahon
@ 2023-03-14 17:06                                           ` Robert McMahon
  2023-03-14 17:11                                             ` [LibreQoS] [Bloat] " Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-14 17:06 UTC (permalink / raw)
  To: Mike Puchol; +Cc: Dave Taht via Starlink, Rpm, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 9523 bytes --]

The ISP could charge per radio head and manage the system from a FiWi head end which they own. Virtualize the APs. Get rid of SoC complexity and costly O&M via simplicity. Eliminate all the incremental engineering that has gone astray, e.g. bloat and overpowered APs.

Bob



On Mar 14, 2023, 9:49 AM, at 9:49 AM, Robert McMahon <rjmcmahon@rjmcmahon.com> wrote:
>Hi Mike,
>
>I'm thinking more of fiber to the room. The last few meters are wifi
>everything else is fiber.. Those radios would be a max of 20' from the
>associated STA. Then at phy rates of 2.8Gb/s per spatial stream. The
>common MIMO is 2x2 so each radio head or wifi transceiver supports
>5.6G, no queueing delay. Wholesale is $5 and retail $19.95 per
>pluggable transceiver. Sold at Home Depot next to the irrigation aisle.
>10 per house is $199 and each room gets a dedicated 5.8G phy rate. Need
>more devices in a space? Pick an RRH with more cmos radios. Also, the
>antennas would be patch antenna and fill the room properly. Then plug
>in an optional sensor for fire alerting.
>
>
>A digression. A lot of signal processing engineers have been working on
>TX beam forming. The best beam is fiber. Just do that. It even can turn
>corners and goes exactly to where it's needed at very low energies.
>This is similar to pvc pipes in irrigation systems. They're designed to
>take water to spray heads.
>
>The cost is the cable plant. That's labor more than materials. Similar
>for irrigation, pvc is inexpensive and lasts decades. A return labor
>means use future proof materials, e.g. fiber.
>
>Bob
>
>
>
>On Mar 14, 2023, 4:10 AM, at 4:10 AM, Mike Puchol via Rpm
><rpm@lists.bufferbloat.net> wrote:
>>Hi Bob,
>>
>>You hit on a set of very valid points, which I'll complement with my
>>views on where the industry (the bit of it that affects WISPs) is
>>heading, and what I saw at the MWC in Barcelona. Love the FiWi term
>:-)
>>
>>I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium,
>>and Mimosa, but also newer entrants such as Tarana, increase the
>>performance and on-paper specs of their equipment. My examples below
>>are centered on the African market, if you operate in Europe or the
>US,
>>where you can charge customers a higher install fee, or even charge
>>them a break-up fee if they don't return equipment, the economics
>work.
>>
>>Where currently a ~$500 sector radio could serve ~60 endpoints, at a
>>cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the
>>antenna that you mount on the roof), and supply ~2.5 Mbps CIR per
>>endpoint, the evolution is now a ~$2,000+ sector radio, a $200
>>endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR
>>per endpoint.
>>
>>If every customer a WISP installs represents, say, $100 CAPEX at
>>install time ($50 for the antenna + cabling, router, etc), and you
>>charge a $30 install fee, you have $70 to recover, and you recover
>from
>>the monthly contribution the customer makes. If the contribution after
>>OPEX is, say, $10, it takes you 7 months to recover the full install
>>cost. Not bad, doable even in low-income markets.
>>
>>Fast-forward to the next-generation version. Now, the CAPEX at install
>>is $250, you need to recover $220, and it will take you 22 months,
>>which is above the usual 18 months that investors look for.
>>
>>The focus, thereby, has to be the lever that has the largest effect on
>>the unit economics - which is the per-customer cost. I have drawn what
>>my ideal FiWi network would look like:
>>
>>
>>
>>Taking you through this - we start with a 1-port, low-cost EPON OLT
>(or
>>you could go for 2, 4, 8 ports as you add capacity). This OLT has
>>capacity for 64 ONUs on its single port. Instead of connecting the
>>typical fiber infrastructure with kilometers of cables which break,
>>require maintenance, etc. we insert an EPON to Ethernet converter (I
>>added "magic" because these don't exist AFAIK).
>>
>>This converter allows us to connect our $2k sector radio, and serve
>the
>>$200 endpoints (ODUs) over wireless point-to-multipoint up to 10km
>>away. Each ODU then has a reverse converter, which gives us EPON
>again.
>>
>>Once we are back on EPON, we can insert splitters, for example,
>>pre-connectorized outdoor 1:16 boxes. Every customer install now
>>involves a 100 meter roll of pre-connectorized 2-core drop cable, and
>a
>>$20 EPON ONU.
>>
>>Using this deployment method, we could connect up to 16 customers to a
>>single $200 endpoint, so the enpoint CAPEX per customer is now $12.5.
>>Add the ONU, cable, etc. and we have a per-install CAPEX of $82.5
>>(assuming the same $50 of extras we had before), and an even shorter
>>break-even. In addition, as the endpoints support higher capacity, we
>>can provision at least the same, if not more, capacity per customer.
>>
>>Other advantages: the $200 ODU is no longer customer equipment and
>>CAPEX, but network equipment, and as such, can operate under a longer
>>break-even timeline, and be financed by infrastructure PE funds, for
>>example. As a result, churn has a much lower financial impact on the
>>operator.
>>
>>The main reason why this wouldn't work today is that EPON, as we know,
>>is synchronous, and requires the OLT to orchestrate the amount of time
>>each ONU can transmit, and when. Having wireless hops and media
>>conversions will introduce latencies which can break down the
>>communications (e.g. one ONU may transmit, get delayed on the radio
>>link, and end up overlapping another ONU that transmitted on the next
>>slot). Thus, either the "magic" box needs to account for this, or an
>>new hybrid EPON-wireless protocol developed.
>>
>>My main point here: the industry is moving away from the unconnected.
>>All the claims I heard and saw at MWC about "connecting the
>>unconnected" had zero resonance with the financial drivers that the
>>unconnected really operate under, on top of IT literacy, digital
>>skills, devices, power...
>>
>>Best,
>>
>>Mike
>>On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink
>><starlink@lists.bufferbloat.net>, wrote:
>>> To change the topic - curious to thoughts on FiWi.
>>>
>>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>>> Radios, Antennas) and which is point to point inside a building
>>> connected to virtualized APs fiber hops away. Each remote radio head
>>> (RRH) would consume 5W or less and only when active. No need for
>>things
>>> like zigbee, or meshes, or threads as each radio has a fiber
>>connection
>>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>>> imbalance. Plastics also can house smoke or other sensors.
>>>
>>> Some reminders from Paul Baran in 1994 (and from David Reed)
>>>
>>> o) Shorter range rf transceivers connected to fiber could produce a
>>> significant improvement - - tremendous improvement, really.
>>> o) a mixture of terrestrial links plus shorter range radio links has
>>the
>>> effect of increasing by orders and orders of magnitude the amount of
>>> frequency spectrum that can be made available.
>>> o) By authorizing high power to support a few users to reach
>slightly
>>> longer distances we deprive ourselves of the opportunity to serve
>the
>>> many.
>>> o) Communications systems can be built with 10dB ratio
>>> o) Digital transmission when properly done allows a small signal to
>>> noise ratio to be used successfully to retrieve an error free
>signal.
>>> o) And, never forget, any transmission capacity not used is wasted
>>> forever, like water over the dam. Not using such techniques
>represent
>>> lost opportunity.
>>>
>>> And on waveguides:
>>>
>>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>>> independent of modulation"
>>> o) “Copper cables and PCB traces are very frequency dependent. At
>>> 100Gb/s, the loss is in dB/inch."
>>> o) "Free space: the power density of the radio waves decreases with
>>the
>>> square of distance from the transmitting antenna due to spreading of
>>the
>>> electromagnetic energy in space according to the inverse square law"
>>>
>>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could
>>be
>>> pluggable, allowing for field upgrades. Just like swapping out SFP
>in
>>a
>>> data center.
>>>
>>> This approach basically drives out WiFi latency by eliminating
>shared
>>> queues and increases capacity by orders of magnitude by leveraging
>>10dB
>>> in the spatial dimension, all of which is achieved by a physical
>>design.
>>> Just place enough RRHs as needed (similar to a pop up sprinkler in
>an
>>> irrigation system.)
>>>
>>> Start and build this for an MDU and the value of the building
>>improves.
>>> Sadly, there seems no way to capture that value other than over long
>>> term use. It doesn't matter whether the leader of the HOA tries to
>>> capture the value or if a last mile provider tries. The value
>remains
>>> sunk or hidden with nothing on the asset side of the balance sheet.
>>> We've got a CAPEX spend that has to be made up via "OPEX returns"
>>over
>>> years.
>>>
>>> But the asset is there.
>>>
>>> How do we do this?
>>>
>>> Bob
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 10771 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 17:06                                           ` Robert McMahon
@ 2023-03-14 17:11                                             ` Sebastian Moeller
  2023-03-14 17:35                                               ` Robert McMahon
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-14 17:11 UTC (permalink / raw)
  To: Robert McMahon; +Cc: Mike Puchol, Dave Taht via Starlink, Rpm, libreqos, bloat

Hi Bob,

technically attractive, but the "charge per radio head" and "virtualize the AP" are show stoppers for me... I like my ISP, but I have a clear understanding that my ISP's goals and my goals are not perfectly aligned, so I would never give them control of my in-house network, and even less so if they start moving things into the clown^W cloud. That would mean running important functions on someone else's computers, giving that someone else effectively too much power.

Regards
	Sebastian

P.S.: The technical side you propose will also work just as well with me in control, even though that lacks a business model to make it attractive for ISPs ;)


> On Mar 14, 2023, at 18:06, Robert McMahon via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> The ISP could charge per radio head and manage the system from a FiWi head end which they own. Virtualize the APs. Get rid of SoC complexity and costly O&M via simplicity. Eliminate all the incremental engineering that has gone astray, e.g. bloat and over powered APs. 
> 
> Bob
> On Mar 14, 2023, at 9:49 AM, Robert McMahon <rjmcmahon@rjmcmahon.com> wrote:
> Hi Mike,
> 
> I'm thinking more of fiber to the room. The last few meters are wifi everything else is fiber.. Those radios would be a max of 20' from the associated STA. Then at phy rates of 2.8Gb/s per spatial stream. The common MIMO is 2x2 so each radio head or wifi transceiver supports 5.6G, no queueing delay. Wholesale is $5 and retail $19.95 per pluggable transceiver. Sold at Home Depot next to the irrigation aisle. 10 per house is $199 and each room gets a dedicated 5.8G phy rate. Need more devices in a space? Pick an RRH with more cmos radios. Also, the antennas would be patch antenna and fill the room properly. Then plug in an optional sensor for fire alerting.
> 
> 
> A digression. A lot of signal processing engineers have been working on TX beam forming. The best beam is fiber. Just do that. It even can turn corners and goes exactly to where it's needed at very low energies. This is similar to pvc pipes in irrigation systems. They're designed to take water to spray heads.
> 
> The cost is the cable plant. That's labor more than materials. Similar for irrigation, pvc is inexpensive and lasts decades. A return labor means use future proof materials, e.g. fiber.
> 
> Bob
> On Mar 14, 2023, at 4:10 AM, Mike Puchol via Rpm <rpm@lists.bufferbloat.net> wrote:
> Hi Bob, 
> 
> You hit on a set of very valid points, which I'll complement with my views on where the industry (the bit of it that affects WISPs) is heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-) 
> 
> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and Mimosa, but also newer entrants such as Tarana, increase the performance and on-paper specs of their equipment. My examples below are centered on the African market, if you operate in Europe or the US, where you can charge customers a higher install fee, or even charge them a break-up fee if they don't return equipment, the economics work. 
> 
> Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR per endpoint. 
> 
> If every customer a WISP installs represents, say, $100 CAPEX at install time ($50 for the antenna + cabling, router, etc), and you charge a $30 install fee, you have $70 to recover, and you recover from the monthly contribution the customer makes. If the contribution after OPEX is, say, $10, it takes you 7 months to recover the full install cost. Not bad, doable even in low-income markets. 
> 
> Fast-forward to the next-generation version. Now, the CAPEX at install is $250, you need to recover $220, and it will take you 22 months, which is above the usual 18 months that investors look for. 
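> The break-even arithmetic in the two scenarios above can be sketched as a minimal model (assuming a constant $10/month contribution after OPEX, as quoted in the text):

```python
import math

def payback_months(install_capex, install_fee, monthly_margin):
    """Months to recover the subsidized install cost from the monthly margin."""
    return math.ceil((install_capex - install_fee) / monthly_margin)

# Current generation: $100 CAPEX at install, $30 install fee, $10/month after OPEX
print(payback_months(100, 30, 10))   # 7 months

# Next generation: $250 CAPEX at install, same fee and margin
print(payback_months(250, 30, 10))   # 22 months, above the usual 18-month target
```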
> 
> The focus, therefore, has to be the lever that has the largest effect on the unit economics - which is the per-customer cost. I have drawn what my ideal FiWi network would look like: 
> 
> 
>  
> Taking you through this - we start with a 1-port, low-cost EPON OLT (or you could go for 2, 4, 8 ports as you add capacity). This OLT has capacity for 64 ONUs on its single port. Instead of connecting the typical fiber infrastructure with kilometers of cables which break, require maintenance, etc. we insert an EPON to Ethernet converter (I added "magic" because these don't exist AFAIK). 
> 
> This converter allows us to connect our $2k sector radio, and serve the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away. Each ODU then has a reverse converter, which gives us EPON again. 
> 
> Once we are back on EPON, we can insert splitters, for example, pre-connectorized outdoor 1:16 boxes. Every customer install now involves a 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU.  
> 
> Using this deployment method, we could connect up to 16 customers to a single $200 endpoint, so the endpoint CAPEX per customer is now $12.50. Add the ONU, cable, etc. and we have a per-install CAPEX of $82.50 (assuming the same $50 of extras we had before), and an even shorter break-even. In addition, as the endpoints support higher capacity, we can provision at least the same, if not more, capacity per customer. 
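> The per-customer figure above follows from amortizing the shared endpoint over the split ratio (a sketch of the arithmetic, using the figures quoted in this thread):

```python
def per_customer_capex(endpoint_cost, split_ratio, onu_cost, extras):
    # The shared endpoint cost is amortized over every customer behind the 1:N splitter.
    return endpoint_cost / split_ratio + onu_cost + extras

# 16 customers behind one $200 endpoint, $20 EPON ONU, $50 of cabling/router extras
print(per_customer_capex(200, 16, 20, 50))  # 82.5
```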
> 
> Other advantages: the $200 ODU is no longer customer equipment and CAPEX, but network equipment, and as such, can operate under a longer break-even timeline, and be financed by infrastructure PE funds, for example. As a result, churn has a much lower financial impact on the operator. 
> 
> The main reason why this wouldn't work today is that EPON, as we know, is synchronous, and requires the OLT to orchestrate how much time each ONU can transmit, and when. Wireless hops and media conversions will introduce latencies which can break down the communications (e.g. one ONU may transmit, get delayed on the radio link, and end up overlapping another ONU that transmitted in the next slot). Thus, either the "magic" box needs to account for this, or a new hybrid EPON-wireless protocol must be developed. 
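> The slot-overlap failure mode described above can be illustrated with a toy model (hypothetical grant sizes and delays; real EPON uses MPCP GATE/REPORT messaging and ranging, which this sketch ignores):

```python
def bursts_overlap(grants, radio_delays):
    """grants: list of (start_us, length_us) upstream windows scheduled by the
    OLT, in slot order. radio_delays: extra one-way delay (us) each ONU's burst
    picks up on the wireless hop. Returns True if any delayed burst spills into
    the window of the ONU scheduled after it."""
    arrivals = [(start + d, start + d + length)
                for (start, length), d in zip(grants, radio_delays)]
    return any(prev_end > nxt_start
               for (_, prev_end), (nxt_start, _) in zip(arrivals, arrivals[1:]))

# Two back-to-back 100 us grants. A 150 us radio delay on the first ONU
# pushes its burst into the second ONU's window - a collision at the OLT.
print(bursts_overlap([(0, 100), (100, 100)], [150, 0]))  # True
print(bursts_overlap([(0, 100), (100, 100)], [0, 0]))    # False
```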
> 
> My main point here: the industry is moving away from the unconnected. All the claims I heard and saw at MWC about "connecting the unconnected" had zero resonance with the financial drivers that the unconnected really operate under, on top of IT literacy, digital skills, devices, power... 
> 
> Best, 
> 
> Mike
> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <starlink@lists.bufferbloat.net>, wrote: 
>> To change the topic - curious to thoughts on FiWi. 
>> 
>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS 
>> Radios, Antennas) and which is point to point inside a building 
>> connected to virtualized APs fiber hops away. Each remote radio head 
>> (RRH) would consume 5W or less and only when active. No need for things 
>> like zigbee, or meshes, or threads as each radio has a fiber connection 
>> via Corning's actifi or equivalent. Eliminate the AP/Client power 
>> imbalance. Plastics also can house smoke or other sensors. 
>> 
>> Some reminders from Paul Baran in 1994 (and from David Reed) 
>> 
>> o) Shorter range rf transceivers connected to fiber could produce a 
>> significant improvement - - tremendous improvement, really. 
>> o) a mixture of terrestrial links plus shorter range radio links has the 
>> effect of increasing by orders and orders of magnitude the amount of 
>> frequency spectrum that can be made available. 
>> o) By authorizing high power to support a few users to reach slightly 
>> longer distances we deprive ourselves of the opportunity to serve the 
>> many. 
>> o) Communications systems can be built with 10dB ratio 
>> o) Digital transmission when properly done allows a small signal to 
>> noise ratio to be used successfully to retrieve an error free signal. 
>> o) And, never forget, any transmission capacity not used is wasted 
>> forever, like water over the dam. Not using such techniques represent 
>> lost opportunity. 
>> 
>> And on waveguides: 
>> 
>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber, 
>> independent of modulation" 
>> o) “Copper cables and PCB traces are very frequency dependent. At 
>> 100Gb/s, the loss is in dB/inch." 
>> o) "Free space: the power density of the radio waves decreases with the 
>> square of distance from the transmitting antenna due to spreading of the 
>> electromagnetic energy in space according to the inverse square law" 
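>> 
>> The two loss regimes quoted above can be compared numerically (a rough sketch; the 5.8 GHz carrier and 100 m distance are illustrative assumptions, and the free-space figure ignores antenna gains):

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: inverse-square spreading of the wavefront."""
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def fiber_loss_db(distance_m, db_per_km=0.5):
    """Single-mode fiber attenuation, ~0.5 dB/km, independent of modulation."""
    return db_per_km * distance_m / 1000

# 100 m at 5.8 GHz: roughly 88 dB over the air vs 0.05 dB in fiber
print(round(fspl_db(100, 5.8e9), 1))  # ~87.7
print(fiber_loss_db(100))             # 0.05
```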
>> 
>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE 
>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be 
>> pluggable, allowing for field upgrades. Just like swapping out SFP in a 
>> data center. 
>> 
>> This approach basically drives out WiFi latency by eliminating shared 
>> queues and increases capacity by orders of magnitude by leveraging 10dB 
>> in the spatial dimension, all of which is achieved by a physical design. 
>> Just place enough RRHs as needed (similar to a pop up sprinkler in an 
>> irrigation system.) 
>> 
>> Start and build this for an MDU and the value of the building improves. 
>> Sadly, there seems no way to capture that value other than over long 
>> term use. It doesn't matter whether the leader of the HOA tries to 
>> capture the value or if a last mile provider tries. The value remains 
>> sunk or hidden with nothing on the asset side of the balance sheet. 
>> We've got a CAPEX spend that has to be made up via "OPEX returns" over 
>> years. 
>> 
>> But the asset is there. 
>> 
>> How do we do this? 
>> 
>> Bob 
>> _______________________________________________ 
>> Starlink mailing list 
>> Starlink@lists.bufferbloat.net 
>> https://lists.bufferbloat.net/listinfo/starlink 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 17:11                                             ` [LibreQoS] [Bloat] " Sebastian Moeller
@ 2023-03-14 17:35                                               ` Robert McMahon
  2023-03-14 17:54                                                 ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-14 17:35 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Mike Puchol, Dave Taht via Starlink, Rpm, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 11046 bytes --]

You could always do it yourself. 

Most people need high skilled network engineers to provide them IT services. This need is only going to grow and grow. We can help by producing better and simpler offerings, be they DIY or by service providers.

Steve Jobs almost didn't support the iPhone development because he hated "the orifices." Probably time for many of us to revisit our belief set. Does it move the needle, even if imperfectly?

FiWi blows the needle off the gauge by my judgment. Who does it is secondary.

Bob



On Mar 14, 2023, 10:11 AM, at 10:11 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>Hi Bob,
>
>technically attractive, but the "charge per radio head" and :virtualize
>the AP" are show stoppers for me... I like my ISP, but I have a clear
>understanding that my ISPs goals and my goals are not perfectly aligned
>so I would never give them control of my in house network and even less
>if they start moving things into the clown^W cloud. That means running
>important functions on some one else's computers, giving that some one
>else effectively too much power.
>
>Regards
>	Sebastian
>
>P.S.: The technical side you propose will also work just as well with
>me in control, even though that lacks a business to make it attractive
>for ISPs ;)
>
>
>> On Mar 14, 2023, at 18:06, Robert McMahon via Bloat
><bloat@lists.bufferbloat.net> wrote:
>>
>> The ISP could charge per radio head and manage the system from a FiWi
>head end which they own. Virtualize the APs. Get rid of SoC complexity
>and costly O&M via simplicity. Eliminate all the incremental
>engineering that has gone astray, e.g. bloat and over powered APs.
>>
>> Bob
>> On Mar 14, 2023, at 9:49 AM, Robert McMahon <rjmcmahon@rjmcmahon.com>
>wrote:
>> Hi Mike,
>>
>> I'm thinking more of fiber to the room. The last few meters are wifi
>everything else is fiber.. Those radios would be a max of 20' from the
>associated STA. Then at phy rates of 2.8Gb/s per spatial stream. The
>common MIMO is 2x2 so each radio head or wifi transceiver supports
>5.6G, no queueing delay. Wholesale is $5 and retail $19.95 per
>pluggable transceiver. Sold at Home Depot next to the irrigation aisle.
>10 per house is $199 and each room gets a dedicated 5.8G phy rate. Need
>more devices in a space? Pick an RRH with more cmos radios. Also, the
>antennas would be patch antenna and fill the room properly. Then plug
>in an optional sensor for fire alerting.
>>
>>
>> A digression. A lot of signal processing engineers have been working
>on TX beam forming. The best beam is fiber. Just do that. It even can
>turn corners and goes exactly to where it's needed at very low
>energies. This is similar to pvc pipes in irrigation systems. They're
>designed to take water to spray heads.
>>
>> The cost is the cable plant. That's labor more than materials.
>Similar for irrigation, pvc is inexpensive and lasts decades. A return
>labor means use future proof materials, e.g. fiber.
>>
>> Bob
>> On Mar 14, 2023, at 4:10 AM, Mike Puchol via Rpm
><rpm@lists.bufferbloat.net> wrote:
>> Hi Bob,
>>
>> You hit on a set of very valid points, which I'll complement with my
>views on where the industry (the bit of it that affects WISPs) is
>heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)
>
>>
>> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium,
>and Mimosa, but also newer entrants such as Tarana, increase the
>performance and on-paper specs of their equipment. My examples below
>are centered on the African market, if you operate in Europe or the US,
>where you can charge customers a higher install fee, or even charge
>them a break-up fee if they don't return equipment, the economics work.
>
>>
>> Where currently a ~$500 sector radio could serve ~60 endpoints, at a
>cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the
>antenna that you mount on the roof), and supply ~2.5 Mbps CIR per
>endpoint, the evolution is now a ~$2,000+ sector radio, a $200
>endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR
>per endpoint.
>>
>> If every customer a WISP installs represents, say, $100 CAPEX at
>install time ($50 for the antenna + cabling, router, etc), and you
>charge a $30 install fee, you have $70 to recover, and you recover from
>the monthly contribution the customer makes. If the contribution after
>OPEX is, say, $10, it takes you 7 months to recover the full install
>cost. Not bad, doable even in low-income markets.
>>
>> Fast-forward to the next-generation version. Now, the CAPEX at
>install is $250, you need to recover $220, and it will take you 22
>months, which is above the usual 18 months that investors look for.
>>
>> The focus, thereby, has to be the lever that has the largest effect
>on the unit economics - which is the per-customer cost. I have drawn
>what my ideal FiWi network would look like:
>>
>> 
>>
>> Taking you through this - we start with a 1-port, low-cost EPON OLT
>(or you could go for 2, 4, 8 ports as you add capacity). This OLT has
>capacity for 64 ONUs on its single port. Instead of connecting the
>typical fiber infrastructure with kilometers of cables which break,
>require maintenance, etc. we insert an EPON to Ethernet converter (I
>added "magic" because these don't exist AFAIK).
>>
>> This converter allows us to connect our $2k sector radio, and serve
>the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km
>away. Each ODU then has a reverse converter, which gives us EPON again.
>
>>
>> Once we are back on EPON, we can insert splitters, for example,
>pre-connectorized outdoor 1:16 boxes. Every customer install now
>involves a 100 meter roll of pre-connectorized 2-core drop cable, and a
>$20 EPON ONU.
>>
>> Using this deployment method, we could connect up to 16 customers to
>a single $200 endpoint, so the enpoint CAPEX per customer is now $12.5.
>Add the ONU, cable, etc. and we have a per-install CAPEX of $82.5
>(assuming the same $50 of extras we had before), and an even shorter
>break-even. In addition, as the endpoints support higher capacity, we
>can provision at least the same, if not more, capacity per customer.
>>
>> Other advantages: the $200 ODU is no longer customer equipment and
>CAPEX, but network equipment, and as such, can operate under a longer
>break-even timeline, and be financed by infrastructure PE funds, for
>example. As a result, churn has a much lower financial impact on the
>operator.
>>
>> The main reason why this wouldn't work today is that EPON, as we
>know, is synchronous, and requires the OLT to orchestrate the amount of
>time each ONU can transmit, and when. Having wireless hops and media
>conversions will introduce latencies which can break down the
>communications (e.g. one ONU may transmit, get delayed on the radio
>link, and end up overlapping another ONU that transmitted on the next
>slot). Thus, either the "magic" box needs to account for this, or an
>new hybrid EPON-wireless protocol developed.
>>
>> My main point here: the industry is moving away from the unconnected.
>All the claims I heard and saw at MWC about "connecting the
>unconnected" had zero resonance with the financial drivers that the
>unconnected really operate under, on top of IT literacy, digital
>skills, devices, power...
>>
>> Best,
>>
>> Mike
>> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink
><starlink@lists.bufferbloat.net>, wrote:
>>> To change the topic - curious to thoughts on FiWi.
>>>
>>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>>> Radios, Antennas) and which is point to point inside a building
>>> connected to virtualized APs fiber hops away. Each remote radio head
>
>>> (RRH) would consume 5W or less and only when active. No need for
>things
>>> like zigbee, or meshes, or threads as each radio has a fiber
>connection
>>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>>> imbalance. Plastics also can house smoke or other sensors.
>>>
>>> Some reminders from Paul Baran in 1994 (and from David Reed)
>>>
>>> o) Shorter range rf transceivers connected to fiber could produce a
>>> significant improvement - - tremendous improvement, really.
>>> o) a mixture of terrestrial links plus shorter range radio links has
>the
>>> effect of increasing by orders and orders of magnitude the amount of
>
>>> frequency spectrum that can be made available.
>>> o) By authorizing high power to support a few users to reach
>slightly
>>> longer distances we deprive ourselves of the opportunity to serve
>the
>>> many.
>>> o) Communications systems can be built with 10dB ratio
>>> o) Digital transmission when properly done allows a small signal to
>>> noise ratio to be used successfully to retrieve an error free
>signal.
>>> o) And, never forget, any transmission capacity not used is wasted
>>> forever, like water over the dam. Not using such techniques
>represent
>>> lost opportunity.
>>>
>>> And on waveguides:
>>>
>>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>>> independent of modulation"
>>> o) “Copper cables and PCB traces are very frequency dependent. At
>>> 100Gb/s, the loss is in dB/inch."
>>> o) "Free space: the power density of the radio waves decreases with
>the
>>> square of distance from the transmitting antenna due to spreading of
>the
>>> electromagnetic energy in space according to the inverse square law"
>
>>> 
>>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could
>be
>>> pluggable, allowing for field upgrades. Just like swapping out SFP
>in a
>>> data center.
>>>
>>> This approach basically drives out WiFi latency by eliminating
>shared
>>> queues and increases capacity by orders of magnitude by leveraging
>10dB
>>> in the spatial dimension, all of which is achieved by a physical
>design.
>>> Just place enough RRHs as needed (similar to a pop up sprinkler in
>an
>>> irrigation system.)
>>>
>>> Start and build this for an MDU and the value of the building
>improves.
>>> Sadly, there seems no way to capture that value other than over long
>
>>> term use. It doesn't matter whether the leader of the HOA tries to
>>> capture the value or if a last mile provider tries. The value
>remains
>>> sunk or hidden with nothing on the asset side of the balance sheet.
>>> We've got a CAPEX spend that has to be made up via "OPEX returns"
>over
>>> years.
>>>
>>> But the asset is there.
>>>
>>> How do we do this?
>>>
>>> Bob
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net 
>>> https://lists.bufferbloat.net/listinfo/starlink
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat

[-- Attachment #2: Type: text/html, Size: 11652 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 17:35                                               ` Robert McMahon
@ 2023-03-14 17:54                                                 ` dan
  2023-03-14 18:14                                                   ` Robert McMahon
  0 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-14 17:54 UTC (permalink / raw)
  To: Robert McMahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, Mike Puchol, bloat,
	Rpm, libreqos

> You could always do it yourself.
>
> Most people need high skilled network engineers to provide them IT services. This need is only going to grow and grow. We can help by producing better and simpler offerings, be they DIY or by service providers.
>
> Steve Job's almost didn't support the iPhone development because he hated "the orifices." Probably time for many of us to revisit our belief set. Does it move the needle, even if imperfectly?
>
> FiWi blows the needle off the gauge by my judgment. Who does it is secondary.
>
> Bob

most people are unwilling to pay for those services also lol.

I don't see the paradigm of discrete routers/NAT per prem going away
anytime soon.  If you subtract that piece of it then we're basically
just talking XGSPON or similar.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 17:54                                                 ` dan
@ 2023-03-14 18:14                                                   ` Robert McMahon
  2023-03-14 19:18                                                     ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-14 18:14 UTC (permalink / raw)
  To: dan
  Cc: Sebastian Moeller, Dave Taht via Starlink, Mike Puchol, bloat,
	Rpm, libreqos

[-- Attachment #1: Type: text/plain, Size: 1352 bytes --]

It's not discrete routers; it's more like a transceiver. WiFi is already splitting at the MAC for MLO. I see two choices for the split: one, at the PHY DAC; or two, a minimalist 802.3 tunneling of 802.11 back to the FiWi head end. Use 802.3 to leverage merchant silicon supporting up to 200 or so RRHs, or even move the baseband DSP there. I think a split PHY may not work well, but a thorough engineering analysis is still warranted.

Bob



Get BlueMail for Android



On Mar 14, 2023, 10:54 AM, at 10:54 AM, dan <dandenson@gmail.com> wrote:
>> You could always do it yourself.
>>
>> Most people need high skilled network engineers to provide them IT
>services. This need is only going to grow and grow. We can help by
>producing better and simpler offerings, be they DIY or by service
>providers.
>>
>> Steve Job's almost didn't support the iPhone development because he
>hated "the orifices." Probably time for many of us to revisit our
>belief set. Does it move the needle, even if imperfectly?
>>
>> FiWi blows the needle off the gauge by my judgment. Who does it is
>secondary.
>>
>> Bob
>
>most people are unwilling to pay for those services also lol.
>
>I don't see the paradigm of discreet routers/nat per prem anytime
>soon.  If you subtract that piece of it then we're basically just
>talking XGSPON or similar.

[-- Attachment #2: Type: text/html, Size: 1899 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 18:14                                                   ` Robert McMahon
@ 2023-03-14 19:18                                                     ` dan
  2023-03-14 19:30                                                       ` Dave Taht
  2023-03-14 19:30                                                       ` rjmcmahon
  0 siblings, 2 replies; 183+ messages in thread
From: dan @ 2023-03-14 19:18 UTC (permalink / raw)
  To: Robert McMahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, Mike Puchol, bloat,
	Rpm, libreqos

End users are still going to want their own router/firewall. That's
my point: I don't see how you can have that on-prem firewall while
having a remote radio that's useful.

I would adamantly oppose anyone I know passing their firewall off to
the upstream vendor.   I run an MSP and I would offer a customer to
drop my services if they were to buy into something like this on the
business side.

So I really only see this sort of concept for campus networks where
the end users are 'part' of the entity.

On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon <rjmcmahon@rjmcmahon.com> wrote:
>
> It's not  discrete routers. It's more like a transceiver. WiFi is already splitting at the MAC for MLO. I perceive two choices for the split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of 802.11 back to the FiWi head end. Use 802.3 to leverage merchant silicon supporting up to 200 or so RRHs or even move the baseband DSP there. I think a split PHY may not work well but a thorough eng analysis is still warranted.
>
> Bob
>
>
>
> Get BlueMail for Android
> On Mar 14, 2023, at 10:54 AM, dan <dandenson@gmail.com> wrote:
>>>
>>>  You could always do it yourself.
>>>
>>>  Most people need high skilled network engineers to provide them IT services. This need is only going to grow and grow. We can help by producing better and simpler offerings, be they DIY or by service providers.
>>>
>>>  Steve Job's almost didn't support the iPhone development because he hated "the orifices." Probably time for many of us to revisit our belief set. Does it move the needle, even if imperfectly?
>>>
>>>  FiWi blows the needle off the gauge by my judgment. Who does it is secondary.
>>>
>>>  Bob
>>
>>
>> most people are unwilling to pay for those services also lol.
>>
>> I don't see the paradigm of discreet routers/nat per prem anytime
>> soon.  If you subtract that piece of it then we're basically just
>> talking XGSPON or similar.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat]  [Rpm] [Starlink] On FiWi
  2023-03-14 19:18                                                     ` dan
@ 2023-03-14 19:30                                                       ` Dave Taht
  2023-03-14 20:06                                                         ` rjmcmahon
  2023-03-14 19:30                                                       ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-14 19:30 UTC (permalink / raw)
  To: dan
  Cc: Robert McMahon, Mike Puchol, libreqos, Dave Taht via Starlink,
	Rpm, bloat

On Tue, Mar 14, 2023 at 12:18 PM dan via Bloat
<bloat@lists.bufferbloat.net> wrote:
>
> end users are still going to want their own router/firewall.

I am old-fashioned this way also, but I think most modern users would
no longer care about this. They are used to having pretty much all
their data exposed to the internet, available via cellphone, and to
having their security cameras and other personal information out
there.

They just want internet.

> That's
> my point, I don't see how you can have that on-prem firewall while
> having a remote radio that's useful.
>
> I would adamantly oppose anyone I know passing their firewall off to
> the upstream vendor.   I run an MSP and I would offer a customer to
> drop my services if they were to buy into something like this on the
> business side.
>
> So I really only see this sort of concept for campus networks where
> the end users are 'part' of the entity.
>
> On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon <rjmcmahon@rjmcmahon.com> wrote:
> >
> > It's not  discrete routers. It's more like a transceiver. WiFi is already splitting at the MAC for MLO. I perceive two choices for the split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of 802.11 back to the FiWi head end. Use 802.3 to leverage merchant silicon supporting up to 200 or so RRHs or even move the baseband DSP there. I think a split PHY may not work well but a thorough eng analysis is still warranted.
> >
> > Bob
> >
> >
> >
> > Get BlueMail for Android
> > On Mar 14, 2023, at 10:54 AM, dan <dandenson@gmail.com> wrote:
> >>>
> >>>  You could always do it yourself.
> >>>
> >>>  Most people need high skilled network engineers to provide them IT services. This need is only going to grow and grow. We can help by producing better and simpler offerings, be they DIY or by service providers.
> >>>
> >>>  Steve Job's almost didn't support the iPhone development because he hated "the orifices." Probably time for many of us to revisit our belief set. Does it move the needle, even if imperfectly?
> >>>
> >>>  FiWi blows the needle off the gauge by my judgment. Who does it is secondary.
> >>>
> >>>  Bob
> >>
> >>
> >> most people are unwilling to pay for those services also lol.
> >>
> >> I don't see the paradigm of discreet routers/nat per prem anytime
> >> soon.  If you subtract that piece of it then we're basically just
> >> talking XGSPON or similar.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink] On FiWi
  2023-03-14 19:18                                                     ` dan
  2023-03-14 19:30                                                       ` Dave Taht
@ 2023-03-14 19:30                                                       ` rjmcmahon
  2023-03-14 23:30                                                         ` [LibreQoS] [Starlink] [Bloat] [Rpm] " Bruce Perens
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-14 19:30 UTC (permalink / raw)
  To: dan
  Cc: Sebastian Moeller, Dave Taht via Starlink, Mike Puchol, bloat,
	Rpm, libreqos

The design has to be flexible so DIY w/local firewall is fine.

I'll disagree, though, that the early & late majority care about
firewalls. They want high-quality access that is secure & private. Both
of these require high-skill network engineers on staff. DIY is hard
here. Intrusion detection systems, etc. are non-trivial. The days of
broadcast NFL networks are over.

I also disagree that nobody wants to pay for quality access to
knowledge-based networks. Not that many years ago, nobody wanted to pay
to teach women to read either. Then, nobody wanted to pay for
university. I grew up in the latter era and figured out that I needed
to come up with payment somehow to develop my brain. Otherwise, I was
screwed.

So, if it's a ChatGPT-style advertising system - sure, wrong market.
Free shit, even provided by Google, is mostly shit.

Connect to something real, without the privacy invasions, queueing,
etc. - I think it's worth it in spades, just as it was worth investing
so that people, regardless of gender, etc., could learn to read.

Bob

> end users are still going to want their own router/firewall.  That's
> my point, I don't see how you can have that on-prem firewall while
> having a remote radio that's useful.
> 
> I would adamantly oppose anyone I know passing their firewall off to
> the upstream vendor.   I run an MSP and I would offer a customer to
> drop my services if they were to buy into something like this on the
> business side.
> 
> So I really only see this sort of concept for campus networks where
> the end users are 'part' of the entity.
> 
> On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon 
> <rjmcmahon@rjmcmahon.com> wrote:
>> 
>> It's not  discrete routers. It's more like a transceiver. WiFi is 
>> already splitting at the MAC for MLO. I perceive two choices for the 
>> split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of 
>> 802.11 back to the FiWi head end. Use 802.3 to leverage merchant 
>> silicon supporting up to 200 or so RRHs or even move the baseband DSP 
>> there. I think a split PHY may not work well but a thorough eng 
>> analysis is still warranted.
>> 
>> Bob
>> 
>> 
>> 
>> Get BlueMail for Android
>> On Mar 14, 2023, at 10:54 AM, dan <dandenson@gmail.com> wrote:
>>>> 
>>>>  You could always do it yourself.
>>>> 
>>>>  Most people need high skilled network engineers to provide them IT 
>>>> services. This need is only going to grow and grow. We can help by 
>>>> producing better and simpler offerings, be they DIY or by service 
>>>> providers.
>>>> 
>>>>  Steve Job's almost didn't support the iPhone development because he 
>>>> hated "the orifices." Probably time for many of us to revisit our 
>>>> belief set. Does it move the needle, even if imperfectly?
>>>> 
>>>>  FiWi blows the needle off the gauge by my judgment. Who does it is 
>>>> secondary.
>>>> 
>>>>  Bob
>>> 
>>> 
>>> most people are unwilling to pay for those services also lol.
>>> 
>>> I don't see the paradigm of discreet routers/nat per prem anytime
>>> soon.  If you subtract that piece of it then we're basically just
>>> talking XGSPON or similar.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat]  [Rpm] [Starlink] On FiWi
  2023-03-14 19:30                                                       ` Dave Taht
@ 2023-03-14 20:06                                                         ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-14 20:06 UTC (permalink / raw)
  To: Dave Taht; +Cc: dan, Mike Puchol, libreqos, Dave Taht via Starlink, Rpm, bloat

> I am old fashioned this way, also, but I think most modern users would
> not care, any more about this. They are used to pretty much having all
> their data exposed to the internet, available via cellphone, and used
> to having their security cameras and other personal information, gone,
> out there.
> 
> They just want internet.

I think people want privacy; it's just that those in leadership roles, 
e.g. Eric Schmidt, rationalized their behavior with comments like, 
"Privacy is over. Get used to it." At the same time, Google's algorithms 
were advertising breast implants to women who had just learned from 
their doctors that they had breast cancer. Google gleaned this from 
their searches on the newly diagnosed condition.

Life-support use cases and privacy have to be added back in as base 
features. It's past time we as a society stopped tolerating this 
behavior from billionaires who see us as nothing more than subjects for 
their targeted ads.

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Bloat] [Rpm] On FiWi
  2023-03-14 19:30                                                       ` rjmcmahon
@ 2023-03-14 23:30                                                         ` Bruce Perens
  2023-03-15  0:11                                                           ` Robert McMahon
  0 siblings, 1 reply; 183+ messages in thread
From: Bruce Perens @ 2023-03-14 23:30 UTC (permalink / raw)
  To: rjmcmahon; +Cc: dan, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 4526 bytes --]

Let's remember some of the reasons why a lot of wireless-last-mile and mesh
networking plans have failed to date.

Most people who hand-wave about wireless _still_ don't understand Fresnel
zones.
Most don't account for the possibility of multipath.
Or the hidden transmitter problem.
Or absorption.
Or noise.

Spread spectrum does not cure all ills. You are *trading* bandwidth for
processing gain.
You also trade digital modulations that reach incredibly low s/n for
bandwidth.
You can only extract so much of your link budget from processing or
efficient modulation. Many modern systems already operate at that point.
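
Bruce's link-budget point has a clean Shannon-capacity illustration: at fixed received power, widening the bandwidth runs into a hard power-limited ceiling. A rough sketch (the power and noise figures below are illustrative, not from any real link):

```python
import math

# Illustrative link: fixed received power over a thermal noise floor.
P = 1e-12    # 1 pW received signal power (made-up figure)
N0 = 4e-21   # ~kT noise spectral density at 290 K, in W/Hz

def capacity_bps(bw_hz: float) -> float:
    """Shannon capacity C = B * log2(1 + S/N), with noise N = N0 * B."""
    return bw_hz * math.log2(1.0 + P / (N0 * bw_hz))

# Power-limited bound as bandwidth goes to infinity: P / (N0 * ln 2).
limit = P / (N0 * math.log(2))

for bw in (1e6, 10e6, 100e6, 1e9, 10e9):
    print(f"{bw/1e6:8.0f} MHz -> {capacity_bps(bw)/1e6:7.1f} Mbit/s")
print(f"   limit -> {limit/1e6:7.1f} Mbit/s")
```

Past a point, more spectrum buys almost nothing; only more received power (shorter range, better antennas) moves the ceiling, which is exactly the "you can only extract so much of your link budget" limit.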

All usable spectrum has been allocated at any particular time. At least 50%
is spent on supporting legacy systems.

Your greatest spectrum availability will be at the highest possible
frequency, just because of 1/f. There your largest consideration will be
absorption.

    Thanks

    Bruce



On Tue, Mar 14, 2023 at 12:30 PM rjmcmahon via Starlink <
starlink@lists.bufferbloat.net> wrote:

> The design has to be flexible so DIY w/local firewall is fine.
>
> I'll disagree though that early & late majority care about firewalls.
> They want high-quality access that is secure & private. Both of these
> require high skill network engineers on staff. DIY is hard here.
> Intrusion detection systems, etc. are non-trivial. The days of broadcast
> NFL networks are over.
>
> I disagree to with nobody wanting to pay for quality access to knowledge
> based networks. Not that many years ago, nobody wanted to pay to teach
> women to read either. Then, nobody wanted to pay for university. I grew
> up in the latter and figured out that I needed come up with payment
> somehow to develop my brain. Otherwise, I was screwed.
>
> So, if it's a chatGPT, advertising system - sure wrong market. Free
> shit, even provided by Google, is mostly shit.
>
> Connect to something real without the privacy invasions, no queueing,
> etc. I think it's worth it in spades despite the idea that we shouldn't
> invest so people, regardless of gender, etc. can learn to read.
>
> Bob
>
> > end users are still going to want their own router/firewall.  That's
> > my point, I don't see how you can have that on-prem firewall while
> > having a remote radio that's useful.
> >
> > I would adamantly oppose anyone I know passing their firewall off to
> > the upstream vendor.   I run an MSP and I would offer a customer to
> > drop my services if they were to buy into something like this on the
> > business side.
> >
> > So I really only see this sort of concept for campus networks where
> > the end users are 'part' of the entity.
> >
> > On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon
> > <rjmcmahon@rjmcmahon.com> wrote:
> >>
> >> It's not  discrete routers. It's more like a transceiver. WiFi is
> >> already splitting at the MAC for MLO. I perceive two choices for the
> >> split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of
> >> 802.11 back to the FiWi head end. Use 802.3 to leverage merchant
> >> silicon supporting up to 200 or so RRHs or even move the baseband DSP
> >> there. I think a split PHY may not work well but a thorough eng
> >> analysis is still warranted.
> >>
> >> Bob
> >>
> >>
> >>
> >> Get BlueMail for Android
> >> On Mar 14, 2023, at 10:54 AM, dan <dandenson@gmail.com> wrote:
> >>>>
> >>>>  You could always do it yourself.
> >>>>
> >>>>  Most people need high skilled network engineers to provide them IT
> >>>> services. This need is only going to grow and grow. We can help by
> >>>> producing better and simpler offerings, be they DIY or by service
> >>>> providers.
> >>>>
> >>>>  Steve Job's almost didn't support the iPhone development because he
> >>>> hated "the orifices." Probably time for many of us to revisit our
> >>>> belief set. Does it move the needle, even if imperfectly?
> >>>>
> >>>>  FiWi blows the needle off the gauge by my judgment. Who does it is
> >>>> secondary.
> >>>>
> >>>>  Bob
> >>>
> >>>
> >>> most people are unwilling to pay for those services also lol.
> >>>
> >>> I don't see the paradigm of discreet routers/nat per prem anytime
> >>> soon.  If you subtract that piece of it then we're basically just
> >>> talking XGSPON or similar.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>


-- 
Bruce Perens K6BP

[-- Attachment #2: Type: text/html, Size: 6185 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Bloat] [Rpm] On FiWi
  2023-03-14 23:30                                                         ` [LibreQoS] [Starlink] [Bloat] [Rpm] " Bruce Perens
@ 2023-03-15  0:11                                                           ` Robert McMahon
  2023-03-15  5:20                                                             ` Bruce Perens
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-15  0:11 UTC (permalink / raw)
  To: Bruce Perens; +Cc: dan, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 5825 bytes --]

This isn't last mile, nor mesh. It's last meters. The AP/STA power asymmetry shows that a low-power STA can reach an AP, but the AP needs to blast a CTS so every other possible conversation has to halt. It's like a large conference room where one person with a megaphone is yelling to someone distant (and that person doesn't have a megaphone to respond). Better if everyone reduced their energy to just enough and got rid of the megaphones altogether. Reduce AP/STA density, which is what drives excessive queueing delays.

Free-space loss works to this model's advantage. The trick is structured fiber. But to what, exactly?
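
The "last meters" argument can be put in numbers with the Friis free-space path-loss formula; the distances and band below are just illustrative:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB:
    20*log10(d) + 20*log10(f) + 20*log10(4*pi/c) = ... - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

F = 6e9  # 6 GHz band
for d in (3, 10, 30):
    print(f"{d:3d} m @ 6 GHz: {fspl_db(d, F):5.1f} dB")

# Every 10x reduction in range recovers 20 dB of link budget; that
# headroom is what lets a radio-per-room design run at very low power.
print(f"30 m -> 3 m gain: {fspl_db(30, F) - fspl_db(3, F):.1f} dB")
```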

The problem with structured wiring before was that nobody wanted to plug into a wall jack; nobody wants to be on a leash.

Look up. How many LED lights are over our heads in every space? Electric-to-photon conversion is happening everywhere. We don't blast illumination anymore, either. Semiconductor manufacturing is changing everything. Best to embrace it rather than bolt another part onto Frankenstein.

Bob



Get BlueMail for Android



On Mar 14, 2023, 4:30 PM, at 4:30 PM, Bruce Perens <bruce@perens.com> wrote:
>Let's remember some of the reasons why a lot of wireless-last-mile and
>mesh
>networking plans have failed to date.
>
>Most people who hand-wave about wireless _still_ don't understand
>Fresnel
>zones.
>Most don't account for the possibility of multipath.
>Or the hidden transmitter problem.
>Or absorption.
>Or noise.
>
>Spread spectrum does not cure all ills. You are *trading* bandwidth for
>processing gain.
>You also trade digital modulations that reach incredibly low s/n for
>bandwidth.
>You can only extract so much of your link budget from processing or
>efficient modulation. Many modern systems already operate at that
>point.
>
>All usable spectrum has been allocated at any particular time. At least
>50%
>is spent on supporting legacy systems.
>
>Your greatest spectrum availability will be at the highest possible
>frequency, just because of 1/f. There your largest consideration will
>be
>absorption.
>
>    Thanks
>
>    Bruce
>
>
>
>On Tue, Mar 14, 2023 at 12:30 PM rjmcmahon via Starlink <
>starlink@lists.bufferbloat.net> wrote:
>
>> The design has to be flexible so DIY w/local firewall is fine.
>>
>> I'll disagree though that early & late majority care about firewalls.
>> They want high-quality access that is secure & private. Both of these
>> require high skill network engineers on staff. DIY is hard here.
>> Intrusion detection systems, etc. are non-trivial. The days of
>broadcast
>> NFL networks are over.
>>
>> I disagree to with nobody wanting to pay for quality access to
>knowledge
>> based networks. Not that many years ago, nobody wanted to pay to
>teach
>> women to read either. Then, nobody wanted to pay for university. I
>grew
>> up in the latter and figured out that I needed come up with payment
>> somehow to develop my brain. Otherwise, I was screwed.
>>
>> So, if it's a chatGPT, advertising system - sure wrong market. Free
>> shit, even provided by Google, is mostly shit.
>>
>> Connect to something real without the privacy invasions, no queueing,
>> etc. I think it's worth it in spades despite the idea that we
>shouldn't
>> invest so people, regardless of gender, etc. can learn to read.
>>
>> Bob
>>
>> > end users are still going to want their own router/firewall.
>That's
>> > my point, I don't see how you can have that on-prem firewall while
>> > having a remote radio that's useful.
>> >
>> > I would adamantly oppose anyone I know passing their firewall off
>to
>> > the upstream vendor.   I run an MSP and I would offer a customer to
>> > drop my services if they were to buy into something like this on
>the
>> > business side.
>> >
>> > So I really only see this sort of concept for campus networks where
>> > the end users are 'part' of the entity.
>> >
>> > On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon
>> > <rjmcmahon@rjmcmahon.com> wrote:
>> >>
>> >> It's not  discrete routers. It's more like a transceiver. WiFi is
>> >> already splitting at the MAC for MLO. I perceive two choices for
>the
>> >> split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of
>> >> 802.11 back to the FiWi head end. Use 802.3 to leverage merchant
>> >> silicon supporting up to 200 or so RRHs or even move the baseband
>DSP
>> >> there. I think a split PHY may not work well but a thorough eng
>> >> analysis is still warranted.
>> >>
>> >> Bob
>> >>
>> >>
>> >>
>> >> Get BlueMail for Android
>> >> On Mar 14, 2023, at 10:54 AM, dan <dandenson@gmail.com> wrote:
>> >>>>
>> >>>>  You could always do it yourself.
>> >>>>
>> >>>>  Most people need high skilled network engineers to provide them
>IT
>> >>>> services. This need is only going to grow and grow. We can help
>by
>> >>>> producing better and simpler offerings, be they DIY or by
>service
>> >>>> providers.
>> >>>>
>> >>>>  Steve Job's almost didn't support the iPhone development
>because he
>> >>>> hated "the orifices." Probably time for many of us to revisit
>our
>> >>>> belief set. Does it move the needle, even if imperfectly?
>> >>>>
>> >>>>  FiWi blows the needle off the gauge by my judgment. Who does it
>is
>> >>>> secondary.
>> >>>>
>> >>>>  Bob
>> >>>
>> >>>
>> >>> most people are unwilling to pay for those services also lol.
>> >>>
>> >>> I don't see the paradigm of discreet routers/nat per prem anytime
>> >>> soon.  If you subtract that piece of it then we're basically just
>> >>> talking XGSPON or similar.
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
>--
>Bruce Perens K6BP

[-- Attachment #2: Type: text/html, Size: 8283 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Bloat] [Rpm] On FiWi
  2023-03-15  0:11                                                           ` Robert McMahon
@ 2023-03-15  5:20                                                             ` Bruce Perens
  2023-03-15 16:17                                                               ` [LibreQoS] [Rpm] [Starlink] [Bloat] " Aaron Wood
  2023-03-15 17:32                                                               ` [LibreQoS] [Starlink] [Bloat] [Rpm] " rjmcmahon
  0 siblings, 2 replies; 183+ messages in thread
From: Bruce Perens @ 2023-03-15  5:20 UTC (permalink / raw)
  To: Robert McMahon; +Cc: dan, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 313 bytes --]

On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
wrote:

> the AP needs to blast a CTS so every other possible conversation has to
> halt.
>
The wireless network is not a bus. This still ignores the hidden
transmitter problem because there is a similar network in the next room.

[-- Attachment #2: Type: text/html, Size: 672 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  [Bloat] On FiWi
  2023-03-15  5:20                                                             ` Bruce Perens
@ 2023-03-15 16:17                                                               ` Aaron Wood
  2023-03-15 17:05                                                                 ` Bruce Perens
  2023-03-15 17:32                                                               ` [LibreQoS] [Starlink] [Bloat] [Rpm] " rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Aaron Wood @ 2023-03-15 16:17 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Dave Taht via Starlink, Robert McMahon, Rpm, bloat, dan, libreqos

[-- Attachment #1: Type: text/plain, Size: 1815 bytes --]

I like the general idea, especially if there was a site-wide controller
module that can do the sort of frequency allocation that network engineers
do in dense AP deployments today:  adjacent APs run on different frequency
bands so that they reduce the likelihood of stepping on each others
transmissions.

One of the biggest knowledge gaps that I see people have around wireless is
that it IS a shared medium.  It both is, and isn't, a bus: shared like a
bus, but with hidden transmitters that remove the CSMA abilities you get
with a bus.
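
The hidden-transmitter point can be sketched with a toy slotted simulation (purely illustrative; real 802.11 with RTS/CTS and backoff is far more involved):

```python
import random

random.seed(1)

def collision_rate(p: float, hidden: bool, slots: int = 100_000) -> float:
    """Fraction of slots where an AP sees two stations collide.

    hidden=True: the stations cannot hear each other, so both may
    transmit in the same slot. hidden=False: idealized carrier sense,
    where the second station defers whenever the first is transmitting.
    """
    collisions = 0
    for _ in range(slots):
        a = random.random() < p
        b = random.random() < p
        if a and b and hidden:
            collisions += 1  # both transmit blindly -> collision at AP
        # with working CSMA, b senses a's transmission and defers
    return collisions / slots

print(collision_rate(0.2, hidden=True))   # ~ p*p = 0.04
print(collision_rate(0.2, hidden=False))  # carrier sense prevents them
```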

But the main issue will be deployment.  This would be great for commercial
buildings that get retrofitted every decade or so with new gear.

This will be near-impossible in the US except for new construction or big
remodels of existing structures.  The cost of opening the walls to run the
fiber will make the cost of the hardware itself insignificant.

OTOH, because the STAs aren’t specialized, the existing ones “just work”,
and so you don’t have the usual bootstrap issue that plagues tech like
zigbee and Zwave, where there isn’t enough infra to justify the devices, or
not enough devices to justify the infra.

-Aaron

On Tue, Mar 14, 2023 at 10:21 PM Bruce Perens via Rpm <
rpm@lists.bufferbloat.net> wrote:

>
>
> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
> wrote:
>
>> the AP needs to blast a CTS so every other possible conversation has to
>> halt.
>>
> The wireless network is not a bus. This still ignores the hidden
> transmitter problem because there is a similar network in the next room.
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
>
-- 
- Sent from my iPhone.

[-- Attachment #2: Type: text/html, Size: 2893 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  [Bloat] On FiWi
  2023-03-15 16:17                                                               ` [LibreQoS] [Rpm] [Starlink] [Bloat] " Aaron Wood
@ 2023-03-15 17:05                                                                 ` Bruce Perens
  2023-03-15 17:44                                                                   ` rjmcmahon
  2023-03-15 19:22                                                                   ` [LibreQoS] [Bloat] [Rpm] [Starlink] " David Lang
  0 siblings, 2 replies; 183+ messages in thread
From: Bruce Perens @ 2023-03-15 17:05 UTC (permalink / raw)
  To: Aaron Wood
  Cc: Dave Taht via Starlink, Robert McMahon, Rpm, bloat, dan, libreqos

[-- Attachment #1: Type: text/plain, Size: 2625 bytes --]

I think the big problem with this is users per domicile. It's easy enough
to support one floor of a residence with a single AP. There is an upper
limit on the bandwidth that one user can ever require. It is probably what
is needed for full-sphere VR at the perceptual limit. We have long achieved
the perceptual limit of ears, on top of that we have a lot of tweaking and
self-deception. We will get to the limit of eyes. Multiply this by eight
users per domicile for a limit that most would fit in. We can probably do
that with one AP. The additional equipment and maintenance outlay for
structural fiber and an AP per room doesn't really seem worth it.



On Wed, Mar 15, 2023, 09:17 Aaron Wood <woody77@gmail.com> wrote:

> I like the general idea, especially if there was a site-wide controller
> module that can do the sort of frequency allocation that network engineers
> do in dense AP deployments today:  adjacent APs run on different frequency
> bands so that they reduce the likelihood of stepping on each others
> transmissions.
>
> One of the biggest knowledge gaps that I see people have around wireless
> is that it IS a shared medium.  It both is, and isn’t a bus.  Shared like a
> bus, but with the hidden transmissions that remove the csma abilities that
> get with a bus.
>
> But the main issue will be deployment.  This would be great for commercial
> buildings that get retrofitted every decade or so with new gear.
>
> This will be near-impossible in the US except for new construction or big
> remodels of existing structures.  The cost of opening the walls to run the
> fiber will make the cost of the hardware itself insignificant.
>
> OTOH, because the STAs aren’t specialized, the existing ones “just work”,
> and so you don’t have the usual bootstrap issue that plagues tech like
> zigbee and Zwave, where there isn’t enough infra to justify the devices, or
> not enough devices to justify the infra.
>
> -Aaron
>
> On Tue, Mar 14, 2023 at 10:21 PM Bruce Perens via Rpm <
> rpm@lists.bufferbloat.net> wrote:
>
>>
>>
>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
>> wrote:
>>
>>> the AP needs to blast a CTS so every other possible conversation has to
>>> halt.
>>>
>> The wireless network is not a bus. This still ignores the hidden
>> transmitter problem because there is a similar network in the next room.
>>
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>>
> --
> - Sent from my iPhone.
>

[-- Attachment #2: Type: text/html, Size: 3989 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Bloat] [Rpm] On FiWi
  2023-03-15  5:20                                                             ` Bruce Perens
  2023-03-15 16:17                                                               ` [LibreQoS] [Rpm] [Starlink] [Bloat] " Aaron Wood
@ 2023-03-15 17:32                                                               ` rjmcmahon
  2023-03-15 17:42                                                                 ` dan
  2023-03-15 17:43                                                                 ` [LibreQoS] [Bloat] [Starlink] [Rpm] " Sebastian Moeller
  1 sibling, 2 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-15 17:32 UTC (permalink / raw)
  To: Bruce Perens; +Cc: dan, libreqos, Dave Taht via Starlink, Rpm, bloat

The 6 GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) 
and very low power (VLP) modes. The pluggable transceiver could be 
color-coded to a chanspec; installers could then apply the four-color 
map theorem per those chanspecs. 
https://en.wikipedia.org/wiki/Four_color_theorem
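
The four-color idea might be sketched as a toy channel planner: treat rooms that can hear each other as a graph and greedily assign chanspecs. Room names and channel labels are hypothetical, and greedy coloring is only a heuristic (the four-color theorem guarantees a 4-coloring exists for a planar map, not that greedy finds it):

```python
# Rooms whose radios can hear each other are graph edges; assign each
# room one of a small set of channel blocks ("chanspecs") so that no
# two neighbors share one.
adjacency = {
    "kitchen":  ["living", "hall"],
    "living":   ["kitchen", "hall", "bedroom1"],
    "hall":     ["kitchen", "living", "bedroom1", "bedroom2"],
    "bedroom1": ["living", "hall"],
    "bedroom2": ["hall"],
}
chanspecs = ["6g-ch1", "6g-ch2", "6g-ch3", "6g-ch4"]  # the "four colors"

assignment = {}
for room in adjacency:  # greedy: first channel no neighbor is using
    used = {assignment[n] for n in adjacency[room] if n in assignment}
    assignment[room] = next(c for c in chanspecs if c not in used)

for room, chan in assignment.items():
    print(room, "->", chan)
```

An installer following color-coded transceivers is doing this assignment by hand; a site controller could do the same automatically.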

There is no CTS with microwave "interference". The high-speed PHY rates 
combined with low-density AP/STA ratios, ideally 1/1, decrease the 
probability of time-signal superpositions. The goal with wireless isn't 
high density but to unleash humans. A bunch of humans stuck in a dog 
park isn't really unleashed; the point is the ability to move from 
block to block, so to speak. FiWi is cheaper than sidewalks, sanitation 
systems, etc.

The goal now is very low latency. Higher PHY rates can achieve that, 
leave the medium free the vast majority of the time, and let the RRH 
shut down too. Engineering extra capacity by orders of magnitude is 
better than AQM; this has been the case in data centers for decades. 
Congestion? Add a zero (multiply by 10).
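
"Add a zero" has textbook queueing support: in an M/M/1 model the mean sojourn time is 1/(mu - lambda), so 10x capacity at the same offered load collapses the delay. A sketch with made-up rates:

```python
def mm1_delay_ms(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time of an M/M/1 queue, W = 1/(mu - lambda), in ms."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1000.0 / (service_rate - arrival_rate)

load = 900.0  # offered load, packets/s (illustrative)
for mu in (1_000.0, 10_000.0):
    print(f"capacity {mu:>8.0f} pkt/s (rho={load/mu:.2f}): "
          f"mean delay {mm1_delay_ms(load, mu):.2f} ms")
```

The same load at 10x the capacity cuts the mean delay by roughly 90x, which is why overprovisioning makes the AQM nearly idle.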

Note: None of this is done. This is a 5-10 year project with zero 
engineering resources assigned.

Bob
> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
> <rjmcmahon@rjmcmahon.com> wrote:
> 
>> the AP needs to blast a CTS so every other possible conversation has
>> to halt.
> 
> The wireless network is not a bus. This still ignores the hidden
> transmitter problem because there is a similar network in the next
> room.

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Bloat] [Rpm] On FiWi
  2023-03-15 17:32                                                               ` [LibreQoS] [Starlink] [Bloat] [Rpm] " rjmcmahon
@ 2023-03-15 17:42                                                                 ` dan
  2023-03-15 19:33                                                                   ` [LibreQoS] [Bloat] [Starlink] " David Lang
  2023-03-15 17:43                                                                 ` [LibreQoS] [Bloat] [Starlink] [Rpm] " Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: dan @ 2023-03-15 17:42 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Bruce Perens, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 3874 bytes --]

Trying to do all of what is currently wanted with one AP in a house is a
huge part of the current problems with WiFi networks: MOAR power to try
to overcome attenuation and reflections from walls, so more power bleeds
into the next home/suite/apartment, etc.

In the MSP space it's been rapidly moving to an AP per room with output
turned down to minimum.  Doing this we can reuse 5 GHz channels 50 ft
away (through 2 walls, etc.) without interference.

One issue with the RRH model is that this 'light bulb' approach, i.e.
you put a light bulb in the room you want lit, requires infrastructure
cabling.  1 RRH AP in a house is already a failure today and accounts
for most access complaints.

Mesh radios have provided a bit of a gap fill, getting the access SSID
closer to the device and backhauling on a separate channel with better
(and likely fixed-position) antennas.

Regardless of my opinion on the outright failure of moving the firewall
off-prem and the associated security risks and liabilities, a single AP
in a home is already a proven failure that has given rise to the mesh
systems that are top sellers and top performers today.

IMO, there was a scheme that gained a moment of fame and then died out:
powerline networking with an AP per room off that powerline network.  I
have some of these deployed with MikroTik PLA adapters and the model
works fantastically, but powerline networking has evolved slowly, so I'm
seeing ~200 Mbps practical speeds, and the MikroTik units have 802.11n
radios in them, so also a bit of a struggle for modern speeds.   This
model, with some development to get ~2.5 Gbps practical speeds, and
WiFi 6 or WiFi 7 per room at very low output power, is a very practical
setup that consumers can deploy themselves.

WiFi 7 also solves some pieces of this with AP coordination and
co-transmission, sort of like MU-MIMO with multiple APs, and that's in
early devices already (TP-Link just launched an AP).

IMO, there are too many hurdles for RRH models: massive amounts of
infrastructure to build, homes and apartment buildings that need to be
rewired, and the security and liability concerns of homes and businesses
not being firewall-isolated by the stakeholders of those networks.

On Wed, Mar 15, 2023 at 11:32 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> The 6G is a contiguous 1200MhZ. It has low power indoor (LPI) and very
> low power (VLP) modes. The pluggable transceiver could be color coded to
> a chanspec, then the four color map problem can be used by installers
> per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem
>
> There is no CTS with microwave "interference" The high-speed PHY rates
> combined with low-density AP/STA ratios, ideally 1/1, decrease the
> probability of time signal superpositions. The goal with wireless isn't
> high densities but to unleash humans. A bunch of humans stuck in a dog
> park isn't really being unleashed. It's the ability to move from block
> to block so-to-speak. FiWi is cheaper than sidewalks, sanitation
> systems, etc.
>
> The goal now is very low latency. Higher phy rates can achieve that and
> leave the medium free the vast most of the time and shut down the RRH
> too. Engineering extra capacity by orders of magnitude is better than
> AQM. This has been the case in data centers for decades. Congestion? Add
> a zero (or multiple by 10)
>
> Note: None of this is done. This is a 5-10 year project with zero
> engineering resources assigned.
>
> Bob
> > On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
> > <rjmcmahon@rjmcmahon.com> wrote:
> >
> >> the AP needs to blast a CTS so every other possible conversation has
> >> to halt.
> >
> > The wireless network is not a bus. This still ignores the hidden
> > transmitter problem because there is a similar network in the next
> > room.
>

[-- Attachment #2: Type: text/html, Size: 4737 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink]   [Rpm] On FiWi
  2023-03-15 17:32                                                               ` [LibreQoS] [Starlink] [Bloat] [Rpm] " rjmcmahon
  2023-03-15 17:42                                                                 ` dan
@ 2023-03-15 17:43                                                                 ` Sebastian Moeller
  2023-03-15 17:49                                                                   ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-15 17:43 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Bruce Perens, Dave Taht via Starlink, Rpm, dan, libreqos, bloat

Hi Bob,

I like your design sketch and the ideas behind it.


> On Mar 15, 2023, at 18:32, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> The 6G is a contiguous 1200MhZ. It has low power indoor (LPI) and very low power (VLP) modes. The pluggable transceiver could be color coded to a chanspec, then the four color map problem can be used by installers per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem

	Maybe design this to be dual band from the start to avoid the up/down "TDM" approach we currently use? Better yet, go full duplex, which might be an option if we get enough radios that not much beamforming/MIMO is necessary? I obviously lack deep enough understanding of whether this makes any sense or is just buzzword bingo from my side :)


> 
> There is no CTS with microwave "interference" The high-speed PHY rates combined with low-density AP/STA ratios, ideally 1/1, decrease the probability of time signal superpositions. The goal with wireless isn't high densities but to unleash humans. A bunch of humans stuck in a dog park isn't really being unleashed. It's the ability to move from block to block so-to-speak. FiWi is cheaper than sidewalks, sanitation systems, etc.
> 
> The goal now is very low latency. Higher phy rates can achieve that and leave the medium free the vast most of the time and shut down the RRH too. Engineering extra capacity by orders of magnitude is better than AQM. This has been the case in data centers for decades. Congestion? Add a zero (or multiple by 10)

	I am wary of this kind of trust in continuous exponential growth... at some point we reach a limit and will need to figure out how to deal with congestion again, so why drop this capability on the way? The nice thing about AQMs is that if there is no queue build-up they basically do nothing... (might need some design changes to optimize an AQM to be as cheap as possible for the uncontended case)...
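
The "do nothing when there is no queue" property can be shown with a deliberately tiny CoDel-flavored sketch (this is not real CoDel, which tracks an interval and drop-count state; it only illustrates that an AQM is inert for an uncontended queue):

```python
import time
from collections import deque

class TinyAQM:
    """Toy AQM sketch: drop at the head only when a packet has sat in
    the queue longer than a target AND more traffic is waiting behind
    it. With an empty or fast-draining queue it never intervenes."""

    def __init__(self, target_s: float = 0.005):
        self.q = deque()
        self.target = target_s

    def enqueue(self, pkt):
        self.q.append((pkt, time.monotonic()))

    def dequeue(self):
        if not self.q:
            return None
        pkt, t_in = self.q.popleft()
        sojourn = time.monotonic() - t_in
        if sojourn > self.target and self.q:
            return None  # head drop: persistent standing queue detected
        return pkt       # uncontended path: pure pass-through

q = TinyAQM()
q.enqueue("p1")
print(q.dequeue())  # "p1": no queue build-up, the AQM does nothing
```

Note it never drops the last queued packet, mirroring CoDel's refusal to starve a link that is merely busy rather than bloated.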

> Note: None of this is done. This is a 5-10 year project with zero engineering resources assigned.
> 
> Bob
>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
>> <rjmcmahon@rjmcmahon.com> wrote:
>>> the AP needs to blast a CTS so every other possible conversation has
>>> to halt.
>> The wireless network is not a bus. This still ignores the hidden
>> transmitter problem because there is a similar network in the next
>> room.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  [Bloat] On FiWi
  2023-03-15 17:05                                                                 ` Bruce Perens
@ 2023-03-15 17:44                                                                   ` rjmcmahon
  2023-03-15 19:22                                                                   ` [LibreQoS] [Bloat] [Rpm] [Starlink] " David Lang
  1 sibling, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-15 17:44 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Aaron Wood, Dave Taht via Starlink, Rpm, bloat, dan, libreqos

My brother and I installed irrigation systems in Texas, where it rains a 
lot. No problem getting business. Digging trenches, laying and gluing 
PVC pipe, installing controller wires, etc. is good, respectable work.

I wonder if too many white-collar workers avoided blue-collar work and 
don't understand that blue-collar workers actually are very interested 
in installing fiber (or Actifi) and being part of improving things.

Bob
> I think the big problem with this is users per domicile. It's easy
> enough to support one floor of a residence with a single AP. There is
> an upper limit on the bandwidth that one user can ever require. It is
> probably what is needed for full-sphere VR at the perceptual limit. We
> have long achieved the perceptual limit of ears, on top of that we
> have a lot of tweaking and self-deception. We will get to the limit of
> eyes. Multiply this by eight users per domicile for a limit that most
> would fit in. We can probably do that with one AP. The additional
> equipment and maintenance outlay for structural fiber and an AP per
> room doesn't really seem worth it.
> 
> On Wed, Mar 15, 2023, 09:17 Aaron Wood <woody77@gmail.com> wrote:
> 
>> I like the general idea, especially if there was a site-wide
>> controller module that can do the sort of frequency allocation that
>> network engineers do in dense AP deployments today:  adjacent APs
>> run on different frequency bands so that they reduce the likelihood
>> of stepping on each others transmissions.
>> 
>> One of the biggest knowledge gaps that I see people have around
>> wireless is that it IS a shared medium.  It both is, and isn’t a
>> bus.  Shared like a bus, but with hidden transmitters that
>> remove the CSMA abilities you get with a bus.
>> 
>> But the main issue will be deployment.  This would be great for
>> commercial buildings that get retrofitted every decade or so with
>> new gear.
>> 
>> This will be near-impossible in the US except for new construction
>> or big remodels of existing structures.  The cost of opening the
>> walls to run the fiber will make the cost of the hardware itself
>> insignificant.
>> 
>> OTOH, because the STAs aren’t specialized, the existing ones
>> “just work”, and so you don’t have the usual bootstrap issue
>> that plagues tech like zigbee and Zwave, where there isn’t enough
>> infra to justify the devices, or not enough devices to justify the
>> infra.
>> 
>> -Aaron
>> 
>> On Tue, Mar 14, 2023 at 10:21 PM Bruce Perens via Rpm
>> <rpm@lists.bufferbloat.net> wrote:
>> 
>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
>> <rjmcmahon@rjmcmahon.com> wrote:
>> 
>> the AP needs to blast a CTS so every other possible conversation has
>> to halt.
>> The wireless network is not a bus. This still ignores the hidden
>> transmitter problem because there is a similar network in the next
>> room.
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>  --
> - Sent from my iPhone.
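Bruce's "perceptual limit" ceiling can be put in rough numbers. A back-of-envelope sketch, where every constant is an assumption chosen to bound the claim, not a measured spec:

```python
# All constants below are assumptions for a bounding estimate.
pixels_per_degree = 60        # ~20/20 foveal acuity, applied everywhere
sphere_deg2 = 360 * 180       # full-sphere field of view, in square degrees
eyes = 2
bits_per_pixel = 30           # 10-bit HDR x 3 color channels
fps = 90
compression = 200             # optimistic codec ratio on video-like content

# Uncompressed pixel rate for the full sphere at acuity limit:
raw_gbps = (sphere_deg2 * pixels_per_degree**2 * eyes
            * bits_per_pixel * fps) / 1e9
delivered_gbps = raw_gbps / compression   # what the link must actually carry
```

Even with these generous assumptions the delivered rate lands in single-digit Gb/s per user, so eight users per domicile stays within what one multi-gigabit AP could plausibly serve.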


* Re: [LibreQoS] [Bloat] [Starlink]   [Rpm] On FiWi
  2023-03-15 17:43                                                                 ` [LibreQoS] [Bloat] [Starlink] [Rpm] " Sebastian Moeller
@ 2023-03-15 17:49                                                                   ` rjmcmahon
  2023-03-15 17:53                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-15 17:49 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Bruce Perens, Dave Taht via Starlink, Rpm, dan, libreqos, bloat

Agreed, AQM is like an emergency brake. Go ahead and keep it but hope to 
never need to use it.

Bob
> Hi Bob,
> 
> I like your design sketch and the ideas behind it.
> 
> 
>> On Mar 15, 2023, at 18:32, rjmcmahon via Bloat 
>> <bloat@lists.bufferbloat.net> wrote:
>> 
>> The 6GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) and very
>> low power (VLP) modes. The pluggable transceiver could be color coded 
>> to a chanspec, then the four color map problem can be used by 
>> installers per those chanspecs. 
>> https://en.wikipedia.org/wiki/Four_color_theorem
> 
> 	Maybe design this to be dual band from the start to avoid the up/down
> "tdm" approach we currently use? Better yet go full duplex, which
> might be an option if we get enough radios that not much
> beamforming/MIMO is necessary? I obviously lack deep enough
> understanding whether this makes any sense or is just buzzword bingo
> from my side :)
> 
> 
>> 
>> There is no CTS with microwave "interference". The high-speed PHY rates
>> combined with low-density AP/STA ratios, ideally 1/1, decrease the 
>> probability of time signal superpositions. The goal with wireless 
>> isn't high densities but to unleash humans. A bunch of humans stuck in 
>> a dog park isn't really being unleashed. It's the ability to move from 
>> block to block so-to-speak. FiWi is cheaper than sidewalks, sanitation 
>> systems, etc.
>> 
>> The goal now is very low latency. Higher phy rates can achieve that 
>> and leave the medium free the vast majority of the time and shut down the
>> RRH too. Engineering extra capacity by orders of magnitude is better 
>> than AQM. This has been the case in data centers for decades. 
>> Congestion? Add a zero (or multiply by 10).
> 
> 	I am wary of this kind of trust in continuous exponential growth...
> at one point we reach a limit and will need to figure out how to deal
> with congestion again, so why drop this capability on the way? The
> nice thing about AQMs is if there is no queue build up these basically
> do nothing... (might need some design changes to optimize an AQM to be
> as cheap as possible for the uncontended case)...
> 
>> Note: None of this is done. This is a 5-10 year project with zero 
>> engineering resources assigned.
>> 
>> Bob
>>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
>>> <rjmcmahon@rjmcmahon.com> wrote:
>>>> the AP needs to blast a CTS so every other possible conversation has
>>>> to halt.
>>> The wireless network is not a bus. This still ignores the hidden
>>> transmitter problem because there is a similar network in the next
>>> room.
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat


* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 17:49                                                                   ` rjmcmahon
@ 2023-03-15 17:53                                                                     ` Dave Taht
  2023-03-15 17:59                                                                       ` dan
  2023-03-15 19:39                                                                       ` rjmcmahon
  0 siblings, 2 replies; 183+ messages in thread
From: Dave Taht @ 2023-03-15 17:53 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Sebastian Moeller, Rpm, dan, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

On Wed, Mar 15, 2023 at 10:49 AM rjmcmahon via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
> Agreed, AQM is like an emergency brake. Go ahead and keep it but hope to
> never need to use it.

Tee-hee, flow queuing is like having 1024 lanes that can be used for
everything from pedestrians to bicycles, trucks, and trains. I
would settle for FQ everywhere over AQM.

This has been a very fun conversation and I am struggling to keep up.

I have sometimes thought that LiFi (https://lifi.co/) would suddenly
come out of the woodwork, and we would be networking over it throughout
the household.
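The "1024 lanes" is not just a metaphor: fq_codel defaults to 1024 flow queues, hashing each 5-tuple into its own lane and serving lanes round-robin, so a bulk flow can only occupy its own lane. A minimal sketch (the hash and packet representation here are stand-ins, not the kernel's implementation):

```python
import hashlib
from collections import deque

NUM_QUEUES = 1024  # fq_codel's default flow count

def flow_hash(src, dst, sport, dport, proto):
    """Map a 5-tuple to one of NUM_QUEUES buckets (stand-in for the
    kernel's jhash)."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_QUEUES

class FlowQueues:
    """Round-robin service over per-flow queues: sparse flows (DNS, VoIP,
    acks) are never stuck behind a bulk flow's backlog."""
    def __init__(self):
        self.queues = {}       # bucket -> deque of packets
        self.active = deque()  # round-robin order of busy buckets

    def enqueue(self, pkt, five_tuple):
        b = flow_hash(*five_tuple)
        if b not in self.queues or not self.queues[b]:
            self.queues.setdefault(b, deque())
            self.active.append(b)     # bucket (re)joins the rotation
        self.queues[b].append(pkt)

    def dequeue(self):
        while self.active:
            b = self.active.popleft()
            pkt = self.queues[b].popleft()
            if self.queues[b]:        # flow still busy: back of the line
                self.active.append(b)
            return pkt
        return None
```

With three bulk packets queued ahead of one DNS packet in a different flow, the DNS packet is served second rather than last.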


>
> Bob
> > Hi Bob,
> >
> > I like your design sketch and the ideas behind it.
> >
> >
> >> On Mar 15, 2023, at 18:32, rjmcmahon via Bloat
> >> <bloat@lists.bufferbloat.net> wrote:
> >>
> >> The 6GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) and very
> >> low power (VLP) modes. The pluggable transceiver could be color coded
> >> to a chanspec, then the four color map problem can be used by
> >> installers per those chanspecs.
> >> https://en.wikipedia.org/wiki/Four_color_theorem
> >
> >       Maybe design this to be dual band from the start to avoid the up/down
> > "tdm" approach we currently use? Better yet go full duplex, which
> > might be an option if we get enough radios that not much
> > beamforming/MIMO is necessary? I obviously lack deep enough
> > understanding whether this makes any sense or is just buzzword bingo
> > from my side :)
> >
> >
> >>
> >> There is no CTS with microwave "interference" The high-speed PHY rates
> >> combined with low-density AP/STA ratios, ideally 1/1, decrease the
> >> probability of time signal superpositions. The goal with wireless
> >> isn't high densities but to unleash humans. A bunch of humans stuck in
> >> a dog park isn't really being unleashed. It's the ability to move from
> >> block to block so-to-speak. FiWi is cheaper than sidewalks, sanitation
> >> systems, etc.
> >>
> >> The goal now is very low latency. Higher phy rates can achieve that
> >> and leave the medium free the vast most of the time and shut down the
> >> RRH too. Engineering extra capacity by orders of magnitude is better
> >> than AQM. This has been the case in data centers for decades.
> >> Congestion? Add a zero (or multiply by 10).
> >
> >       I am wary of this kind of trust in continuous exponential growth...
> > at one point we reach a limit and will need to figure out how to deal
> > with congestion again, so why drop this capability on the way? The
> > nice thing about AQMs is if there is no queue build up these basically
> > do nothing... (might need some design changes to optimize an AQM to be
> > as cheap as possible for the uncontended case)...
> >
> >> Note: None of this is done. This is a 5-10 year project with zero
> >> engineering resources assigned.
> >>
> >> Bob
> >>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
> >>> <rjmcmahon@rjmcmahon.com> wrote:
> >>>> the AP needs to blast a CTS so every other possible conversation has
> >>>> to halt.
> >>> The wireless network is not a bus. This still ignores the hidden
> >>> transmitter problem because there is a similar network in the next
> >>> room.
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC


* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 17:53                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
@ 2023-03-15 17:59                                                                       ` dan
  2023-03-15 19:39                                                                       ` rjmcmahon
  1 sibling, 0 replies; 183+ messages in thread
From: dan @ 2023-03-15 17:59 UTC (permalink / raw)
  To: Dave Taht
  Cc: rjmcmahon, Sebastian Moeller, Rpm, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat


On Wed, Mar 15, 2023 at 11:53 AM Dave Taht <dave.taht@gmail.com> wrote:

> On Wed, Mar 15, 2023 at 10:49 AM rjmcmahon via Rpm
> <rpm@lists.bufferbloat.net> wrote:
> >
> > Agreed, AQM is like an emergency brake. Go ahead and keep it but hope to
> > never need to use it.
>
> Tee-hee, flow queuing is like having a 1024 lanes that can be used for
> everything from pedestrians, to bicycles, to trucks and trains. I
> would settle for FQ everywhere over AQM.
>
> This has been a very fun conversation and I am struggling to keep up.
>
> I have sometimes thought that LiFi (https://lifi.co/) would suddenly
> come out of the woodwork,
> and we would be networking over that through the household.
>
I'd rather say it's a traffic cop, and it has value in essentially any
network.  Keeping the costs down on end-user hardware is fundamental, and
those devices will behave however they want (i.e. badly).  AQM is the
'roundabout' that keeps things flowing, each at an appropriate
rate, so it works well.  There will *never* be infinite bandwidth, or even
enough that no service can saturate it.  Even a very small town with everyone
on a 1G plan turns into 20Tb of necessary capacity to avoid needing
AQM, when likely 20Gb is sufficient.
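The 20 Tb vs 20 Gb figures follow from ordinary statistical multiplexing. With an assumed subscriber count and busy-hour average (both illustration, not data):

```python
# Rough statistical-multiplexing arithmetic; the subscriber count and
# per-user busy-hour average are assumptions for illustration.
subscribers = 20_000      # a very small town's worth of 1G subscribers
plan_gbps = 1.0           # everyone sold a 1 Gb/s plan

# If every subscriber could burst at full rate simultaneously, the core
# would need the sum of all plans:
worst_case_tbps = subscribers * plan_gbps / 1000           # 20 Tb/s

# In practice busy-hour demand per subscriber averages a few Mb/s, so
# the aggregate that actually needs carrying is orders of magnitude less:
busy_hour_mbps = 1.0
typical_gbps = subscribers * busy_hour_mbps / 1000         # 20 Gb/s

oversubscription = worst_case_tbps * 1000 / typical_gbps   # 1000:1
```

The 1000:1 gap between the sum of the plans and the typical aggregate is exactly the headroom an AQM manages during the moments demand does pile up.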

There has to be something that addresses the car going 180 MPH on the
freeway.  That car requires everyone else to pull off the road to avoid
disaster, in the same way that one flow chews up a FIFO buffer and wrecks the
rest.  AQM is the solution now, and more evolved AQM is most likely the
answer for many, many years to come.
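The "roundabout" role is what an AQM like CoDel plays on each queue: drop only once the minimum packet sojourn time has exceeded a small target for a full interval, then pace further drops closer together by 1/sqrt(count). A simplified sketch of that control law (the published algorithm's state handling on exit and re-entry is more involved):

```python
import math

TARGET = 0.005    # 5 ms sojourn-time target (CoDel default)
INTERVAL = 0.100  # 100 ms observation window (CoDel default)

class CoDelLite:
    """Simplified CoDel drop logic: if packets have sat in the queue
    longer than TARGET for a whole INTERVAL, start dropping, and shorten
    the gap between drops by 1/sqrt(count) while congestion persists."""
    def __init__(self):
        self.first_above = None   # when sojourn first exceeded TARGET
        self.dropping = False
        self.count = 0
        self.next_drop = 0.0

    def should_drop(self, sojourn, now):
        if sojourn < TARGET:          # queue drained: stand down entirely
            self.first_above = None
            self.dropping = False
            self.count = 0
            return False
        if self.first_above is None:  # start the observation window
            self.first_above = now
        if not self.dropping and now - self.first_above >= INTERVAL:
            self.dropping = True      # persistent queue: first drop
            self.count += 1
            self.next_drop = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.next_drop:
            self.count += 1           # still congested: drop faster
            self.next_drop = now + INTERVAL / math.sqrt(self.count)
            return True
        return False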



* Re: [LibreQoS] [Bloat] [Rpm] [Starlink]   On FiWi
  2023-03-15 17:05                                                                 ` Bruce Perens
  2023-03-15 17:44                                                                   ` rjmcmahon
@ 2023-03-15 19:22                                                                   ` David Lang
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-15 19:22 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Aaron Wood, Rpm, dan, libreqos, Dave Taht via Starlink,
	Robert McMahon, bloat


On Wed, 15 Mar 2023, Bruce Perens via Bloat wrote:

> There is an upper limit on the bandwidth that one user can ever require. It is 
> probably what is needed for full-sphere VR at the perceptual limit.

I would disagree with this. This assumes that you are only streaming the data. 
If you then have your full-sphere VR at the perceptual limit for the Lord of the 
Rings extended edition trilogy that you want to download before you hop on a 
plane in an hour, you want a lot more bandwidth than just what you would need to
watch it in real time.
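The point is simple arithmetic: downloading ahead of time compresses many hours of viewing into the departure window. With an assumed runtime and stream rate (both illustrative):

```python
# Back-of-envelope numbers; the runtime and stream rate are assumptions.
runtime_hours = 11.4        # LOTR extended trilogy, roughly
stream_mbps = 25.0          # a typical 4K streaming rate
window_hours = 1.0          # time before the flight

# Total size if encoded at the streaming rate:
size_gbit = runtime_hours * 3600 * stream_mbps / 1000      # ~1026 Gbit

# Rate needed to pull it all down before boarding:
download_mbps = size_gbit * 1000 / (window_hours * 3600)   # 285 Mb/s

burst_ratio = download_mbps / stream_mbps                  # 11.4x real time
```

So a one-hour download window demands over 11x the real-time streaming rate, which is why "enough for streaming" is not an upper bound on useful bandwidth.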

David Lang


_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


* Re: [LibreQoS] [Bloat] [Starlink]   [Rpm] On FiWi
  2023-03-15 17:42                                                                 ` dan
@ 2023-03-15 19:33                                                                   ` David Lang
  2023-03-15 19:39                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
  0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-15 19:33 UTC (permalink / raw)
  To: dan; +Cc: rjmcmahon, Dave Taht via Starlink, Rpm, bloat, Bruce Perens, libreqos


if you want another example of the failure, look at any conference center, they 
have a small number of APs with wide coverage. It works well when the place is 
empty and they walk around and test it, but when it fills up with users, the 
entire network collapses.

Part of this is that wifi was really designed for sparse environments, so its
solution to "I didn't get my message through" is to talk slower (and louder if
possible), which just creates more interference for other users and reduces the
available airtime.

I just finished the Scale conference in Pasadena, CA. We deployed over 100 APs
for the conference, up to 7 in a room, on the floor (so that the attendees'
bodies attenuate the signal) at low power so that the channels could be re-used
more readily.

in the cell phone world they discovered 'microcells' years ago, but with wifi 
too many people are still trying to cover the max area with the fewest possible 
number of radios. As Dan says, it just doesn't work.
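The per-room, low-power reuse model reduces to graph coloring: APs that can hear each other must get different channels (the same intuition behind Bob's four-color-map remark about installer chanspecs). A greedy sketch, with hypothetical room names and 5 GHz channel numbers:

```python
def assign_channels(adjacency, channels):
    """Greedy graph coloring: give each AP the first channel not already
    used by an assigned neighbor. With enough channels this mirrors the
    installer's 'four color map' intuition for planar AP layouts."""
    assignment = {}
    for ap in sorted(adjacency):              # deterministic order
        used = {assignment[n] for n in adjacency[ap] if n in assignment}
        for ch in channels:
            if ch not in used:
                assignment[ap] = ch
                break
        else:
            raise ValueError(f"not enough channels for {ap}")
    return assignment

# Four rooms in a row; at low power only adjacent rooms hear each other.
rooms = {"r1": ["r2"], "r2": ["r1", "r3"], "r3": ["r2", "r4"], "r4": ["r3"]}
plan = assign_channels(rooms, [36, 52, 100, 149])
```

Turning the power down is what shrinks the adjacency graph: rooms two walls apart stop being neighbors, so channels can repeat 50 ft away, exactly as Dan describes.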

and on mesh radios, you need to not just use a different channel for your
uplink, you need a different band to avoid desense on the connection to your
users. And that uplink is going to have the same hidden transmitter and airtime
problems, competing with the other nodes also doing the uplink, so its
scalability is very limited (even with directional antennas). Wire/fiber for the
uplink is much better.

David Lang



On Wed, 15 Mar 2023, dan via Bloat wrote:

> Trying to do all of what is currently wanted with 1 AP in a house is a huge
> part of the current problems with WiFi networks.  MOAR power to try to
> overcome attenuation and reflections from walls so more power bleeds into
> the next home/suite/apartment etc.
>
> In the MSP space it's been rapidly moving to an AP per room with output
> turned down to minimum.  Doing this we can reuse 5GHz channels 50ft away
> (through 2 walls etc...) without interference.
>
> One issue with the RRH model is that to accomplish this 'light bulb' model,
> ie you put a light bulb in the room you want light, is that it requires
> infrastructure cabling.  1 RRH AP in a house is already a failure today and
> accounts for most access complaints.
>
> Mesh radios have provided a bit of a gap fill, getting the access SSID
> closer to the device and backhauling on a separate channel with better (and
> likely fixed position ) antennas.
>
> regardless of my opinion on the full on failure of moving firewall off prem
> and the associated security risks and liabilities, single AP in a home is
> already a proven failure that has given rise to the mesh systems that are
> top sellers and top performers today.
>
> IMO, there was a scheme that gained a moment of fame and then died out:
> powerline networking, with an AP per room off that powerline network.  I have
> some of these deployed with mikrotik PLA adapters and the model works
> fantastically, but the powerline networking has evolved slowly so I'm
> seeing ~200Mbps practical speeds, and the mikrotik units have 802.11n
> radios in them so also a bit of a struggle for modern speeds.   This model,
> with some development to get ~2.5Gbps practical speeds, and WiFi6 or WiFi7
> per room at very low output power, is a very practical and deployable by
> consumers setup.
>
> WiFi7 also solves some pieces of this with AP coordination and
> co-transmission, sort of like MU-MIMO with multiple APs, and that's in
> early devices already (TPLINK just launched an AP).
>
> IMO, too many hurdles for RRH models, from the massive amounts of infrastructure
> to build, to homes and apartment buildings that need re-wiring, to security and
> liability concerns of homes and businesses not being firewall-isolated by
> stakeholders of those networks.
>
> On Wed, Mar 15, 2023 at 11:32 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
>> The 6GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) and very
>> low power (VLP) modes. The pluggable transceiver could be color coded to
>> a chanspec, then the four color map problem can be used by installers
>> per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem
>>
>> There is no CTS with microwave "interference" The high-speed PHY rates
>> combined with low-density AP/STA ratios, ideally 1/1, decrease the
>> probability of time signal superpositions. The goal with wireless isn't
>> high densities but to unleash humans. A bunch of humans stuck in a dog
>> park isn't really being unleashed. It's the ability to move from block
>> to block so-to-speak. FiWi is cheaper than sidewalks, sanitation
>> systems, etc.
>>
>> The goal now is very low latency. Higher phy rates can achieve that and
>> leave the medium free the vast most of the time and shut down the RRH
>> too. Engineering extra capacity by orders of magnitude is better than
>> AQM. This has been the case in data centers for decades. Congestion? Add
>> a zero (or multiply by 10).
>>
>> Note: None of this is done. This is a 5-10 year project with zero
>> engineering resources assigned.
>>
>> Bob
>>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
>>> <rjmcmahon@rjmcmahon.com> wrote:
>>>
>>>> the AP needs to blast a CTS so every other possible conversation has
>>>> to halt.
>>>
>>> The wireless network is not a bus. This still ignores the hidden
>>> transmitter problem because there is a similar network in the next
>>> room.
>>
>


_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 17:53                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
  2023-03-15 17:59                                                                       ` dan
@ 2023-03-15 19:39                                                                       ` rjmcmahon
  1 sibling, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-15 19:39 UTC (permalink / raw)
  To: Dave Taht
  Cc: Sebastian Moeller, Rpm, dan, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

> I have sometimes thought that LiFi (https://lifi.co/) would suddenly
> come out of the woodwork,
> and we would be networking over that through the household.

I think the wishful thinking is "coming out of the woodwork" vs. coming
from the current and near-future state of engineering. Engineering comes
from humans solving problems, who typically get paid to do so.

FiWi would leverage SFP tech. The Fi side of FiWi comes from mass NRE 
investments into the data center networks. The Wi side from mass 
investment into billions of mobile phones. Leveraging WiFi & SFP parts 
is critical to success as semiconductors are a by-the-pound business. I 
think a 1X25G VCSEL SFP, which is tolerant to dust over MMF, has a 
retail price of $40 today.  The sweet spot for DC SFPs today is driven by
1x100Gb/s SERDES, and I suspect angel investors are trying to significantly
improve the power of the attached lasers. It's been said that one
order of magnitude reduction in laser power gives multiple orders of
magnitude improvement in laser MTBF. So lasers, SERDES & CMOS radios are
not static and will constantly improve year over year, per the thousands
of engineers working on them today, tomorrow & on.

The important parts of FiWi have to be pluggable - just like a light 
bulb is. The socket and wiring last (a la the fiber and antennas) - we 
just swap a bulb if it burns out, if we want a different color, if we 
want a higher foot candle rating, etc. This allows engineering cadences 
to match market cadences and keeps staffs paid. Most engineers don't like to
wait decades between releases so-to-speak and don't like feast & famine 
lifestyles. Moore's law was and is about human cadences too.

I don't see any engineering NRE that LiFi could leverage. Sounds cool 
though.

Bob


* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 19:33                                                                   ` [LibreQoS] [Bloat] [Starlink] " David Lang
@ 2023-03-15 19:39                                                                     ` Dave Taht
  2023-03-15 21:52                                                                       ` David Lang
  0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-15 19:39 UTC (permalink / raw)
  To: David Lang
  Cc: dan, Rpm, libreqos, Bruce Perens, Dave Taht via Starlink, bloat

On Wed, Mar 15, 2023 at 12:33 PM David Lang via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
> if you want another example of the failure, look at any conference center, they
> have a small number of APs with wide coverage. It works well when the place is
> empty and they walk around and test it, but when it fills up with users, the
> entire network collapses.
>
> Part of this is that wifi was really designed for sparse environments, so it's
> solution to "I didn't get my message through" is to talk slower (and louder if
> possible), which just creates more interference for other users and reduces the
> available airtime.
>
> I just finished the Scale conference in Pasadena, CA. We deployed over 100 APs
> for the conference, up to 7 in a room, on the floor (so that the attendees
> bodies attenuate the signal) at low power so that the channels could be re-used
> more readily.

How did it go? You were deploying fq_codel on the wndr3800s there as
of a few years ago, and I remember you got rave reviews... (can you
repost the link to that old data/blog/podcast?)

Did you get any good stats?

Run cake anywhere?
>
> in the cell phone world they discovered 'microcells' years ago, but with wifi
> too many people are still trying to cover the max area with the fewest possible
> number of radios. As Dan says, it just doesn't work.
>
> and on mesh radios, you need to not just use a different channel for your
> uplink, you need a different band to avoid desense on the connection to your
> users. And that uplink is going to have the same hidden transmitter and airtime
> problems competing with the other nodes also doing the uplink that it's
> scalability is very limited (even with directional antennas). Wire/fiber for the
> uplink is much better.
>
> David Lang
>
>
>
> On Wed, 15 Mar 2023, dan via Bloat wrote:
>
> > Trying to do all of what is currently wanted with 1 AP in a house is a huge
> > part of the current problems with WiFi networks.  MOAR power to try to
> > overcome attenuation and reflections from walls so more power bleeds into
> > the next home/suite/apartment etc.
> >
> > In the MSP space it's been rapidly moving to an AP per room with output
> > turned down to minimum.    Doing this we can reused 5Ghz channels 50ft away
> > (through 2 walls etc...) without interference.
> >
> > One issue with the RRH model is that to accomplish this 'light bulb' model,
> > ie you put a light bulb in the room you want light, is that it requires
> > infrastructure cabling.  1 RRH AP in a house is already a failure today and
> > accounts for most access complaints.
> >
> > Mesh radios have provided a bit of a gap fill, getting the access SSID
> > closer to the device and backhauling on a separate channel with better (and
> > likely fixed position ) antennas.
> >
> > regardless of my opinion on the full on failure of moving firewall off prem
> > and the associated security risks and liabilities, single AP in a home is
> > already a proven failure that has given rise to the mesh systems that are
> > top sellers and top performers today.
> >
> > IMO, there was a scheme that gained a moment of fame and then died out of
> > powerline networking and an AP per room off that powerline network.  I have
> > some of these deployed with mikrotik PLA adapters and the model works
> > fantastically, but the powerline networking has evolved slowly so I'm
> > seeing ~200Mbps practical speeds, and the mikrotik units have 802.11n
> > radios in them so also a bit of a struggle for modern speeds.   This model,
> > with some development to get ~2.5Gbps practical speeds, and WiFi6 or WiFi7
> > per room at very low output power, is a very practical and deployable by
> > consumers setup.
> >
> > WiFi7 also solves some pieces of this with AP coordination and
> > co-transmission, sort of like a MUMIMO with multiple APs, and that's in
> > early devices already (TPLINK just launched an AP).
> >
> > IMO, too many hurdles for RRH models from massive amounts of infrastructure
> > to build, homes and apartment buildings that need re-wired, security and
> > liability concerns of homes and business not being firewall isolated by
> > stakeholders of those networks.
> >
> > On Wed, Mar 15, 2023 at 11:32 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> >
> >> The 6GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) and very
> >> low power (VLP) modes. The pluggable transceiver could be color coded to
> >> a chanspec, then the four color map problem can be used by installers
> >> per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem
> >>
> >> There is no CTS with microwave "interference" The high-speed PHY rates
> >> combined with low-density AP/STA ratios, ideally 1/1, decrease the
> >> probability of time signal superpositions. The goal with wireless isn't
> >> high densities but to unleash humans. A bunch of humans stuck in a dog
> >> park isn't really being unleashed. It's the ability to move from block
> >> to block so-to-speak. FiWi is cheaper than sidewalks, sanitation
> >> systems, etc.
> >>
> >> The goal now is very low latency. Higher phy rates can achieve that and
> >> leave the medium free the vast most of the time and shut down the RRH
> >> too. Engineering extra capacity by orders of magnitude is better than
> >> AQM. This has been the case in data centers for decades. Congestion? Add
> >> a zero (or multiply by 10).
> >>
> >> Note: None of this is done. This is a 5-10 year project with zero
> >> engineering resources assigned.
> >>
> >> Bob
> >>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
> >>> <rjmcmahon@rjmcmahon.com> wrote:
> >>>
> >>>> the AP needs to blast a CTS so every other possible conversation has
> >>>> to halt.
> >>>
> >>> The wireless network is not a bus. This still ignores the hidden
> >>> transmitter problem because there is a similar network in the next
> >>> room.
> >>
> >_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC


* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 19:39                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
@ 2023-03-15 21:52                                                                       ` David Lang
  2023-03-15 22:04                                                                         ` Dave Taht
  0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-15 21:52 UTC (permalink / raw)
  To: Dave Taht
  Cc: David Lang, dan, Rpm, libreqos, Bruce Perens,
	Dave Taht via Starlink, bloat


On Wed, 15 Mar 2023, Dave Taht wrote:

> On Wed, Mar 15, 2023 at 12:33 PM David Lang via Rpm
> <rpm@lists.bufferbloat.net> wrote:
>>
>> if you want another example of the failure, look at any conference center, they
>> have a small number of APs with wide coverage. It works well when the place is
>> empty and they walk around and test it, but when it fills up with users, the
>> entire network collapses.
>>
>> Part of this is that wifi was really designed for sparse environments, so it's
>> solution to "I didn't get my message through" is to talk slower (and louder if
>> possible), which just creates more interference for other users and reduces the
>> available airtime.
>>
>> I just finished the Scale conference in Pasadena, CA. We deployed over 100 APs
>> for the conference, up to 7 in a room, on the floor (so that the attendees
>> bodies attenuate the signal) at low power so that the channels could be re-used
>> more readily.
>
> How did it go? You were deploying fq_codel on the wndr3800s there as
> of a few years ago, and I remember you got rave reviews... (can you
> repost the link to that old data/blog/podcast?)

no good stats this year. still using the wndr3800s. Lots of people commenting on 
how well the network did, but we were a bit behind this year and didn't get good 
monitoring in place. No cake yet.

I think this is what you mean
https://www.youtube.com/watch?v=UXvGbEYeWp0

> Did you get any good stats?
>
> Run cake anywhere?
>>
>> in the cell phone world they discovered 'microcells' years ago, but with wifi
>> too many people are still trying to cover the max area with the fewest possible
>> number of radios. As Dan says, it just doesn't work.
>>
>> and on mesh radios, you need to not just use a different channel for your
>> uplink, you need a different band to avoid desense on the connection to your
>> users. And that uplink is going to have the same hidden transmitter and airtime
>> problems competing with the other nodes also doing the uplink that it's
>> scalability is very limited (even with directional antennas). Wire/fiber for the
>> uplink is much better.
>>
>> David Lang
>>
>>
>>
>> On Wed, 15 Mar 2023, dan via Bloat wrote:
>>
>>> Trying to do all of what is currently wanted with 1 AP in a house is a huge
>>> part of the current problems with WiFi networks.  MOAR power to try to
>>> overcome attenuation and reflections from walls so more power bleeds into
>>> the next home/suite/apartment etc.
>>>
>>> In the MSP space it's been rapidly moving to an AP per room with output
>>> turned down to minimum.    Doing this we can reuse 5 GHz channels 50 ft away
>>> (through 2 walls etc...) without interference.
>>>
>>> One issue with the RRH model, i.e. the 'light bulb' model where you put a
>>> light bulb in the room you want light, is that it requires
>>> infrastructure cabling.  1 RRH AP in a house is already a failure today and
>>> accounts for most access complaints.
>>>
>>> Mesh radios have provided a bit of a gap fill, getting the access SSID
>>> closer to the device and backhauling on a separate channel with better (and
>>> likely fixed position ) antennas.
>>>
>>> regardless of my opinion on the full on failure of moving firewall off prem
>>> and the associated security risks and liabilities, single AP in a home is
>>> already a proven failure that has given rise to the mesh systems that are
>>> top sellers and top performers today.
>>>
>>> IMO, there was a scheme that gained a moment of fame and then died out:
>>> powerline networking with an AP per room off that powerline network.  I have
>>> some of these deployed with mikrotik PLA adapters and the model works
>>> fantastically, but the powerline networking has evolved slowly so I'm
>>> seeing ~200Mbps practical speeds, and the mikrotik units have 802.11n
>>> radios in them so also a bit of a struggle for modern speeds.   This model,
>>> with some development to get ~2.5Gbps practical speeds, and WiFi6 or WiFi7
>>> per room at very low output power, is a very practical and deployable by
>>> consumers setup.
>>>
>>> WiFi7 also solves some pieces of this with AP coordination and
>>> co-transmission, sort of like MU-MIMO with multiple APs, and that's in
>>> early devices already (TP-Link just launched an AP).
>>>
>>> IMO, too many hurdles for RRH models: massive amounts of infrastructure
>>> to build, homes and apartment buildings that need rewiring, security and
>>> liability concerns of homes and business not being firewall isolated by
>>> stakeholders of those networks.
>>>
>>> On Wed, Mar 15, 2023 at 11:32 AM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>>>
>>>> The 6 GHz band is a contiguous 1200 MHz. It has low power indoor (LPI) and very
>>>> low power (VLP) modes. The pluggable transceiver could be color coded to
>>>> a chanspec, then the four color map problem can be used by installers
>>>> per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem
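The four-color-map idea above can be sketched as a greedy graph coloring, where APs that can hear each other share an edge and must take different chanspecs. A minimal sketch; the room layout and "U6-*" chanspec labels are invented for illustration:

```python
# Hedged sketch: greedy graph coloring to hand out chanspecs to APs.
# The adjacency graph and "U6-*" chanspec labels below are invented.
def assign_chanspecs(adjacency, chanspecs):
    """Give each AP the first chanspec not used by an already-colored neighbor."""
    assignment = {}
    for ap in adjacency:
        used = {assignment[n] for n in adjacency[ap] if n in assignment}
        for c in chanspecs:
            if c not in used:
                assignment[ap] = c
                break
        else:
            raise ValueError(f"not enough chanspecs at AP {ap}")
    return assignment

# Four rooms in a square; each AP hears only its two direct neighbors.
rooms = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["A", "C"]}
plan = assign_chanspecs(rooms, ["U6-1", "U6-2", "U6-3", "U6-4"])
print(plan)  # no two adjacent rooms share a chanspec
```

For a planar floor plan the four-color theorem guarantees four chanspecs always suffice, which is what makes a color-coded pluggable transceiver plausible for installers.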
>>>>
>>>> There is no CTS with microwave "interference". The high-speed PHY rates
>>>> combined with low-density AP/STA ratios, ideally 1/1, decrease the
>>>> probability of time signal superpositions. The goal with wireless isn't
>>>> high densities but to unleash humans. A bunch of humans stuck in a dog
>>>> park isn't really being unleashed. It's the ability to move from block
>>>> to block so-to-speak. FiWi is cheaper than sidewalks, sanitation
>>>> systems, etc.
>>>>
>>>> The goal now is very low latency. Higher PHY rates can achieve that and
>>>> leave the medium free the vast majority of the time and shut down the RRH
>>>> too. Engineering extra capacity by orders of magnitude is better than
>>>> AQM. This has been the case in data centers for decades. Congestion? Add
>>>> a zero (or multiply by 10).
>>>>
>>>> Note: None of this is done. This is a 5-10 year project with zero
>>>> engineering resources assigned.
>>>>
>>>> Bob
>>>>> On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
>>>>> <rjmcmahon@rjmcmahon.com> wrote:
>>>>>
>>>>>> the AP needs to blast a CTS so every other possible conversation has
>>>>>> to halt.
>>>>>
>>>>> The wireless network is not a bus. This still ignores the hidden
>>>>> transmitter problem because there is a similar network in the next
>>>>> room.
>>>>
>>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>
>
>
>

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 21:52                                                                       ` David Lang
@ 2023-03-15 22:04                                                                         ` Dave Taht
  2023-03-15 22:08                                                                           ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-15 22:04 UTC (permalink / raw)
  To: David Lang
  Cc: dan, Rpm, libreqos, Bruce Perens, Dave Taht via Starlink, bloat

On Wed, Mar 15, 2023 at 2:52 PM David Lang <david@lang.hm> wrote:
>
> On Wed, 15 Mar 2023, Dave Taht wrote:
>
> > On Wed, Mar 15, 2023 at 12:33 PM David Lang via Rpm
> > <rpm@lists.bufferbloat.net> wrote:
> >>
> >> if you want another example of the failure, look at any conference center, they
> >> have a small number of APs with wide coverage. It works well when the place is
> >> empty and they walk around and test it, but when it fills up with users, the
> >> entire network collapses.
> >>
> >> Part of this is that wifi was really designed for sparse environments, so its
> >> solution to "I didn't get my message through" is to talk slower (and louder if
> >> possible), which just creates more interference for other users and reduces the
> >> available airtime.
> >>
> >> I just finished the Scale conference in Pasadena, CA. We deployed over 100 APs
> >> for the conference, up to 7 in a room, on the floor (so that the attendees
> >> bodies attenuate the signal) at low power so that the channels could be re-used
> >> more readily.
> >
> > How did it go? You were deploying fq_codel on the wndr3800s there as
> > of a few years ago, and I remember you got rave reviews... (can you
> > repost the link to that old data/blog/podcast?)
>
> no good stats this year. still using the wndr3800s. Lots of people commenting on
> how well the network did, but we were a bit behind this year and didn't get good
> monitoring in place. No cake yet.
>
> I think this is what you mean
> https://www.youtube.com/watch?v=UXvGbEYeWp0


A point I would like to make for the Africa contingent here is that
you do not need the latest technology for Africa. We get 300 Mbit out
of hardware built in the late 00s, like the wndr3800. The ath9k
chipset is STILL manufactured, the software is mature, and for all I
know millions of routers like these are lying in junk bins worldwide,
ready to be recycled and reflashed.

One libreqos customer deployed it, took a look at the 600+ ubnt AGWs
(ath9k based) sitting on the shelf that could be fq_codeled, especially
on the wifi... built a custom openwrt imagebuilder image for them,
reflashed, and redistributed them.

The wndr3800s were especially well built. I would expect them to last
decades. I had one failure of one that had been in the field for over
10 years... I thought it was the flash chip... no, it was the power
supply!





-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Bloat] [Starlink]  On FiWi
  2023-03-15 22:04                                                                         ` Dave Taht
@ 2023-03-15 22:08                                                                           ` dan
  0 siblings, 0 replies; 183+ messages in thread
From: dan @ 2023-03-15 22:08 UTC (permalink / raw)
  To: Dave Taht
  Cc: Rpm, libreqos, Bruce Perens, Dave Taht via Starlink, bloat, David Lang

[-- Attachment #1: Type: text/plain, Size: 9187 bytes --]

On Mar 15, 2023 at 4:04:27 PM, Dave Taht <dave.taht@gmail.com> wrote:

> A point I would like to make for the africa contingent here is that
> you do not need the latest
> technology for africa. We get 300Mbit out of hardware built in the
> late 00s, like the wndr3800. The ath9k chipset is STILL manufactured,
> the software mature, and for all I know millions of routers
> like these are lying in junk bins worldwide, ready to be recycled and
> reflashed.
>
> One libreqos customer deployed libreqos, and took a look at the 600+
> ubnt AGWs (ath9k based), on the shelf that could be fq_codeled,
> especially on the wifi... built a custom openwrt imagebuilder image
> for em, reflashed and redistributed them.
>
> The wndr3800s were especially well built. I would expect them to last
> decades. I had one failure of one that had been in the field for over
> 10 years... I thought it was the flash chip... no, it was the power
> supply!
>
> --
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC
>

Much of the hardware dumped on the US market in particular is especially
poorly made, i.e. engineered for our disposable market.  Lots of netgear
products, for example, have a typical usable life of just 2-3 years, if
that, before the caps have busted or some patina on the boards has killed
them.

I know Europe, as well as South Korea, has standards on this to give
devices a longer life.  To the point: it’s not realistic to recycle these
items from the US to other places, because they were ‘built to fail’.

[-- Attachment #2: Type: text/html, Size: 16375 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] On FiWi
  2023-03-14 11:10                                       ` [LibreQoS] [Starlink] " Mike Puchol
  2023-03-14 16:54                                         ` [LibreQoS] [Rpm] " Robert McMahon
@ 2023-03-17 16:38                                         ` Dave Taht
  2023-03-17 18:21                                           ` Mike Puchol
  2023-03-17 19:01                                           ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
  1 sibling, 2 replies; 183+ messages in thread
From: Dave Taht @ 2023-03-17 16:38 UTC (permalink / raw)
  To: Mike Puchol; +Cc: Dave Taht via Starlink, Rpm, libreqos, bloat


[-- Attachment #1.1: Type: text/plain, Size: 8135 bytes --]

This is a pretty neat box:

https://mikrotik.com/product/netpower_lite_7r

What are the compelling arguments for fiber vs copper, again?


On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm <
rpm@lists.bufferbloat.net> wrote:

> Hi Bob,
>
> You hit on a set of very valid points, which I'll complement with my views
> on where the industry (the bit of it that affects WISPs) is heading, and
> what I saw at the MWC in Barcelona. Love the FiWi term :-)
>
> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and
> Mimosa, but also newer entrants such as Tarana, increase the performance
> and on-paper specs of their equipment. My examples below are centered on
> the African market, if you operate in Europe or the US, where you can
> charge customers a higher install fee, or even charge them a break-up fee
> if they don't return equipment, the economics work.
>
> Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost
> of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that
> you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the
> evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for
> ~150 endpoints per sector, and ~25 Mbps CIR per endpoint.
>
> If every customer a WISP installs represents, say, $100 CAPEX at install
> time ($50 for the antenna + cabling, router, etc), and you charge a $30
> install fee, you have $70 to recover, and you recover from the monthly
> contribution the customer makes. If the contribution after OPEX is, say,
> $10, it takes you 7 months to recover the full install cost. Not bad,
> doable even in low-income markets.
>
> Fast-forward to the next-generation version. Now, the CAPEX at install is
> $250, you need to recover $220, and it will take you 22 months, which is
> above the usual 18 months that investors look for.
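The break-even arithmetic in the two paragraphs above can be checked in a few lines (the figures are the ones quoted; `breakeven_months` is just a name for this sketch):

```python
import math

def breakeven_months(install_capex, install_fee, monthly_contribution):
    # Months to recover the part of the install cost the fee doesn't cover.
    return math.ceil((install_capex - install_fee) / monthly_contribution)

gen1 = breakeven_months(100, 30, 10)  # current generation: $100 CAPEX at install
gen2 = breakeven_months(250, 30, 10)  # next generation: $250 CAPEX at install
print(gen1, gen2)  # 7 and 22 months, matching the message
```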
>
> The focus, therefore, has to be the lever that has the largest effect on the
> unit economics - which is the per-customer cost. I have drawn what my ideal
> FiWi network would look like:
>
>
>
> Taking you through this - we start with a 1-port, low-cost EPON OLT (or
> you could go for 2, 4, 8 ports as you add capacity). This OLT has capacity
> for 64 ONUs on its single port. Instead of connecting the typical fiber
> infrastructure with kilometers of cables which break, require maintenance,
> etc. we insert an EPON to Ethernet converter (I added "magic" because these
> don't exist AFAIK).
>
> This converter allows us to connect our $2k sector radio, and serve the
> $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away.
> Each ODU then has a reverse converter, which gives us EPON again.
>
> Once we are back on EPON, we can insert splitters, for example,
> pre-connectorized outdoor 1:16 boxes. Every customer install now involves a
> 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU.
>
> Using this deployment method, we could connect up to 16 customers to a
> single $200 endpoint, so the endpoint CAPEX per customer is now $12.5. Add
> the ONU, cable, etc. and we have a per-install CAPEX of $82.5 (assuming the
> same $50 of extras we had before), and an even shorter break-even. In
> addition, as the endpoints support higher capacity, we can provision at
> least the same, if not more, capacity per customer.
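A quick check of the $82.5 figure above, reusing the $30 install fee and $10/month contribution from earlier in the message (assuming the same recovery model applies):

```python
import math

endpoint_share = 200 / 16  # $200 ODU split across a 1:16 splitter
onu = 20                   # $20 EPON ONU per customer
extras = 50                # same $50 of cabling, router, etc. as before

per_customer_capex = endpoint_share + onu + extras
print(per_customer_capex)  # 82.5, as in the message

months = math.ceil((per_customer_capex - 30) / 10)
print(months)  # 6 -- shorter than either radio-only scenario
```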
>
> Other advantages: the $200 ODU is no longer customer equipment and CAPEX,
> but network equipment, and as such, can operate under a longer break-even
> timeline, and be financed by infrastructure PE funds, for example. As a
> result, churn has a much lower financial impact on the operator.
>
> The main reason why this wouldn't work today is that EPON, as we know, is
> synchronous, and requires the OLT to orchestrate the amount of time each
> ONU can transmit, and when. Having wireless hops and media conversions will
> introduce latencies which can break down the communications (e.g. one ONU
> may transmit, get delayed on the radio link, and end up overlapping another
> ONU that transmitted on the next slot). Thus, either the "magic" box needs
> to account for this, or a new hybrid EPON-wireless protocol developed.
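The slot-overlap hazard described above can be illustrated numerically. This is only a sketch with invented microsecond figures, not EPON's actual ranging or grant protocol:

```python
# If the OLT grants back-to-back slots but one ONU sits behind a wireless
# hop whose delay was never ranged, its burst arrives late at the OLT and
# collides with the next grant. All numbers here are invented (microseconds).
def arrival_window(grant_start, grant_len, extra_delay):
    return (grant_start + extra_delay, grant_start + grant_len + extra_delay)

slot, guard = 100.0, 1.0
onu1 = arrival_window(0.0, slot, extra_delay=30.0)          # behind the radio link
onu2 = arrival_window(slot + guard, slot, extra_delay=0.0)  # plain fiber

overlap = onu1[1] > onu2[0]
print(overlap)  # True: ONU 1's tail lands inside ONU 2's burst
```

In real EPON the OLT's ranging process measures each ONU's round-trip time and offsets its grants accordingly, so the "magic" converter would at minimum have to make the radio hop's delay look constant enough for that ranging to stay valid.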
>
> My main point here: the industry is moving away from the unconnected. All
> the claims I heard and saw at MWC about "connecting the unconnected" had
> zero resonance with the financial drivers that the unconnected really
> operate under, on top of IT literacy, digital skills, devices, power...
>
> Best,
>
> Mike
> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <
> starlink@lists.bufferbloat.net>, wrote:
>
> To change the topic - curious to thoughts on FiWi.
>
> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
> Radios, Antennas) and which is point to point inside a building
> connected to virtualized APs fiber hops away. Each remote radio head
> (RRH) would consume 5W or less and only when active. No need for things
> like zigbee, or meshes, or threads as each radio has a fiber connection
> via Corning's actifi or equivalent. Eliminate the AP/Client power
> imbalance. Plastics also can house smoke or other sensors.
>
> Some reminders from Paul Baran in 1994 (and from David Reed)
>
> o) Shorter range rf transceivers connected to fiber could produce a
> significant improvement - - tremendous improvement, really.
> o) a mixture of terrestrial links plus shorter range radio links has the
> effect of increasing by orders and orders of magnitude the amount of
> frequency spectrum that can be made available.
> o) By authorizing high power to support a few users to reach slightly
> longer distances we deprive ourselves of the opportunity to serve the
> many.
> o) Communications systems can be built with a 10 dB signal-to-noise ratio
> o) Digital transmission when properly done allows a small signal to
> noise ratio to be used successfully to retrieve an error free signal.
> o) And, never forget, any transmission capacity not used is wasted
> forever, like water over the dam. Not using such techniques represent
> lost opportunity.
>
> And on waveguides:
>
> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
> independent of modulation"
> o) “Copper cables and PCB traces are very frequency dependent. At
> 100Gb/s, the loss is in dB/inch."
> o) "Free space: the power density of the radio waves decreases with the
> square of distance from the transmitting antenna due to spreading of the
> electromagnetic energy in space according to the inverse square law"
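The fiber-versus-free-space bullets above can be made concrete with the standard Friis free-space path-loss formula; the 5.8 GHz frequency and 1 km distance here are chosen only for illustration:

```python
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB (Friis): inverse-square spreading means
    # loss grows 6 dB per doubling of distance or frequency.
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

radio_loss = fspl_db(1000, 5.8e9)  # ~5.8 GHz link over 1 km
fiber_loss = 0.5 * 1.0             # the 0.5 dB/km single-mode figure quoted above
print(round(radio_loss, 1), fiber_loss)  # ~107.7 dB vs 0.5 dB over the same km
```

This is the quantitative core of the "shorter range rf transceivers connected to fiber" argument: the radio hop's loss budget is spent over meters, the fiber's over tens of kilometers.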
>
> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be
> pluggable, allowing for field upgrades. Just like swapping out SFP in a
> data center.
>
> This approach basically drives out WiFi latency by eliminating shared
> queues and increases capacity by orders of magnitude by leveraging 10dB
> in the spatial dimension, all of which is achieved by a physical design.
> Just place enough RRHs as needed (similar to a pop up sprinkler in an
> irrigation system.)
>
> Start and build this for an MDU and the value of the building improves.
> Sadly, there seems no way to capture that value other than over long
> term use. It doesn't matter whether the leader of the HOA tries to
> capture the value or if a last mile provider tries. The value remains
> sunk or hidden with nothing on the asset side of the balance sheet.
> We've got a CAPEX spend that has to be made up via "OPEX returns" over
> years.
>
> But the asset is there.
>
> How do we do this?
>
> Bob
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
>


-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

[-- Attachment #1.2: Type: text/html, Size: 9641 bytes --]

[-- Attachment #2: Hybrid EPON-Wireless network.png --]
[-- Type: image/png, Size: 149871 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink] On FiWi
  2023-03-17 16:38                                         ` [LibreQoS] [Rpm] " Dave Taht
@ 2023-03-17 18:21                                           ` Mike Puchol
  2023-03-17 19:01                                           ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
  1 sibling, 0 replies; 183+ messages in thread
From: Mike Puchol @ 2023-03-17 18:21 UTC (permalink / raw)
  To: Dave Taht; +Cc: Dave Taht via Starlink, Rpm, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 9113 bytes --]

A four-port EPON OLT with modules goes for $500, serves up to 256 customers.

To serve the same number you need 36 netPowers, at $140 each, for a total CAPEX of about $5,000.

What you then spend on PON splitters you also spend on PoE injectors for the netPower, and drop cable is cheaper than Ethernet (at least if you want it to send power further than 10 meters… no CCA allowed).

It’s not so clear-cut, each can fit a certain deployment scenario, so I would never argue in antagonistic terms.
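A rough per-customer comparison of the head-end electronics in the two approaches, using only the figures quoted above (splitters vs. PoE injectors are treated as the wash the message describes):

```python
customers = 256
epon_olt = 500.0        # 4-port EPON OLT with modules, up to 256 ONUs
netpowers = 36 * 140.0  # 36 netPowers at $140 each, ~$5,040 total

epon_per_customer = epon_olt / customers     # about $1.95
switch_per_customer = netpowers / customers  # about $19.7
print(epon_per_customer, switch_per_customer)
```

Per head-end dollar the EPON side wins by an order of magnitude, but once the roughly $20 ONU each customer needs (per the earlier message in this thread) is added back, the gap narrows, which is the "not so clear-cut" point.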

Best,

Mike
On Mar 17, 2023 at 17:38 +0100, Dave Taht <dave.taht@gmail.com>, wrote:
> This is a pretty neat box:
>
> https://mikrotik.com/product/netpower_lite_7r
>
> What are the compelling arguments for fiber vs copper, again?
>
>
> > On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm <rpm@lists.bufferbloat.net> wrote:
> > > Hi Bob,
> > >
> > > You hit on a set of very valid points, which I'll complement with my views on where the industry (the bit of it that affects WISPs) is heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)
> > >
> > > I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and Mimosa, but also newer entrants such as Tarana, increase the performance and on-paper specs of their equipment. My examples below are centered on the African market; if you operate in Europe or the US, where you can charge customers a higher install fee, or even a break-up fee if they don't return equipment, the economics work.
> > >
> > > Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR per endpoint.
> > >
> > > If every customer a WISP installs represents, say, $100 CAPEX at install time ($50 for the antenna + cabling, router, etc), and you charge a $30 install fee, you have $70 to recover, and you recover from the monthly contribution the customer makes. If the contribution after OPEX is, say, $10, it takes you 7 months to recover the full install cost. Not bad, doable even in low-income markets.
> > >
> > > Fast-forward to the next-generation version. Now, the CAPEX at install is $250, you need to recover $220, and it will take you 22 months, which is above the usual 18 months that investors look for.
> > >
> > > The focus, thereby, has to be the lever that has the largest effect on the unit economics - which is the per-customer cost. I have drawn what my ideal FiWi network would look like:
> > >
> > >
> > > <Hybrid EPON-Wireless network.png>
> > > Taking you through this - we start with a 1-port, low-cost EPON OLT (or you could go for 2, 4, 8 ports as you add capacity). This OLT has capacity for 64 ONUs on its single port. Instead of connecting the typical fiber infrastructure with kilometers of cables which break, require maintenance, etc. we insert an EPON to Ethernet converter (I added "magic" because these don't exist AFAIK).
> > >
> > > This converter allows us to connect our $2k sector radio, and serve the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away. Each ODU then has a reverse converter, which gives us EPON again.
> > >
> > > Once we are back on EPON, we can insert splitters, for example, pre-connectorized outdoor 1:16 boxes. Every customer install now involves a 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU.
> > >
> > > Using this deployment method, we could connect up to 16 customers to a single $200 endpoint, so the endpoint CAPEX per customer is now $12.50. Add the ONU, cable, etc. and we have a per-install CAPEX of $82.50 (assuming the same $50 of extras we had before), and an even shorter break-even. In addition, as the endpoints support higher capacity, we can provision at least the same, if not more, capacity per customer.
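The break-even arithmetic above can be sketched as follows; all figures are taken from the message, and the helper name is illustrative:

```python
import math

def break_even_months(install_capex: float, install_fee: float,
                      monthly_margin: float) -> int:
    """Months to recover the unrecovered install cost from monthly margin."""
    return math.ceil((install_capex - install_fee) / monthly_margin)

# Current generation: $100 CAPEX, $30 install fee, $10/month after OPEX
assert break_even_months(100, 30, 10) == 7   # 7 months
# Next generation: $250 CAPEX at install
assert break_even_months(250, 30, 10) == 22  # above the ~18 investors look for

# FiWi variant: $200 endpoint shared by up to 16 customers via a 1:16 split
install_capex = 200 / 16 + 20 + 50  # $12.50 share + $20 ONU + $50 extras
print(install_capex, break_even_months(install_capex, 30, 10))  # 82.5, 6 months
```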
> > >
> > > Other advantages: the $200 ODU is no longer customer equipment and CAPEX, but network equipment, and as such, can operate under a longer break-even timeline, and be financed by infrastructure PE funds, for example. As a result, churn has a much lower financial impact on the operator.
> > >
> > > The main reason why this wouldn't work today is that EPON, as we know, is synchronous, and requires the OLT to orchestrate the amount of time each ONU can transmit, and when. Having wireless hops and media conversions will introduce latencies which can break down the communications (e.g. one ONU may transmit, get delayed on the radio link, and end up overlapping another ONU that transmitted on the next slot). Thus, either the "magic" box needs to account for this, or a new hybrid EPON-wireless protocol must be developed.
> > >
> > > My main point here: the industry is moving away from the unconnected. All the claims I heard and saw at MWC about "connecting the unconnected" had zero resonance with the financial drivers that the unconnected really operate under, on top of IT literacy, digital skills, devices, power...
> > >
> > > Best,
> > >
> > > Mike
> > > On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <starlink@lists.bufferbloat.net>, wrote:
> > > > To change the topic - curious to thoughts on FiWi.
> > > >
> > > > Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
> > > > Radios, Antennas) and which is point to point inside a building
> > > > connected to virtualized APs fiber hops away. Each remote radio head
> > > > (RRH) would consume 5W or less and only when active. No need for things
> > > > like zigbee, or meshes, or threads as each radio has a fiber connection
> > > > via Corning's actifi or equivalent. Eliminate the AP/Client power
> > > > imbalance. Plastics also can house smoke or other sensors.
> > > >
> > > > Some reminders from Paul Baran in 1994 (and from David Reed)
> > > >
> > > > o) Shorter range rf transceivers connected to fiber could produce a
> > > > significant improvement - - tremendous improvement, really.
> > > > o) a mixture of terrestrial links plus shorter range radio links has the
> > > > effect of increasing by orders and orders of magnitude the amount of
> > > > frequency spectrum that can be made available.
> > > > o) By authorizing high power to support a few users to reach slightly
> > > > longer distances we deprive ourselves of the opportunity to serve the
> > > > many.
> > > > o) Communications systems can be built with 10dB ratio
> > > > o) Digital transmission when properly done allows a small signal to
> > > > noise ratio to be used successfully to retrieve an error free signal.
> > > > o) And, never forget, any transmission capacity not used is wasted
> > > > forever, like water over the dam. Not using such techniques represent
> > > > lost opportunity.
> > > >
> > > > And on waveguides:
> > > >
> > > > o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
> > > > independent of modulation"
> > > > o) “Copper cables and PCB traces are very frequency dependent. At
> > > > 100Gb/s, the loss is in dB/inch."
> > > > o) "Free space: the power density of the radio waves decreases with the
> > > > square of distance from the transmitting antenna due to spreading of the
> > > > electromagnetic energy in space according to the inverse square law"
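The waveguide figures above can be put side by side; a small sketch using the ~0.5 dB/km fiber loss quoted in the message and the standard free-space path-loss formula. The 5 GHz example frequency is my choice for illustration, not from the thread:

```python
import math

def fiber_loss_db(km: float, db_per_km: float = 0.5) -> float:
    """Single-mode fiber loss: linear in distance, ~0.5 dB/km as quoted."""
    return db_per_km * km

def fspl_db(km: float, mhz: float) -> float:
    """Free-space path loss (standard formula: d in km, f in MHz)."""
    return 20 * math.log10(km) + 20 * math.log10(mhz) + 32.44

print(fiber_loss_db(10))            # 5.0 dB over 10 km of fiber
print(round(fspl_db(10, 5000), 1))  # ~126.4 dB over 10 km at 5 GHz
```

The inverse-square spreading is what makes the radio path over 120 dB worse than the fiber over the same 10 km, which is the core of Baran's "short radio hops, fiber for distance" argument.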
> > > >
> > > > The sunk costs & long-lived parts of FiWi are the fiber and the CPE
> > > > plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be
> > > > pluggable, allowing for field upgrades. Just like swapping out SFP in a
> > > > data center.
> > > >
> > > > This approach basically drives out WiFi latency by eliminating shared
> > > > queues and increases capacity by orders of magnitude by leveraging 10dB
> > > > in the spatial dimension, all of which is achieved by a physical design.
> > > > Just place enough RRHs as needed (similar to a pop up sprinkler in an
> > > > irrigation system.)
> > > >
> > > > Start and build this for an MDU and the value of the building improves.
> > > > Sadly, there seems no way to capture that value other than over long
> > > > term use. It doesn't matter whether the leader of the HOA tries to
> > > > capture the value or if a last mile provider tries. The value remains
> > > > sunk or hidden with nothing on the asset side of the balance sheet.
> > > > We've got a CAPEX spend that has to be made up via "OPEX returns" over
> > > > years.
> > > >
> > > > But the asset is there.
> > > >
> > > > How do we do this?
> > > >
> > > > Bob
> > > > _______________________________________________
> > > > Starlink mailing list
> > > > Starlink@lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/starlink
> > > _______________________________________________
> > > Rpm mailing list
> > > Rpm@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/rpm
>
>
> --
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC

[-- Attachment #2: Type: text/html, Size: 10844 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm]  On FiWi
  2023-03-17 16:38                                         ` [LibreQoS] [Rpm] " Dave Taht
  2023-03-17 18:21                                           ` Mike Puchol
@ 2023-03-17 19:01                                           ` Sebastian Moeller
  2023-03-17 19:19                                             ` [LibreQoS] [Rpm] [Starlink] " rjmcmahon
  2023-03-17 23:15                                             ` [LibreQoS] [Bloat] [Starlink] [Rpm] On FiWi David Lang
  1 sibling, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-17 19:01 UTC (permalink / raw)
  To: Dave Täht; +Cc: Mike Puchol, Dave Taht via Starlink, Rpm, libreqos, bloat

Hi Dave,



> On Mar 17, 2023, at 17:38, Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> This is a pretty neat box:
> 
> https://mikrotik.com/product/netpower_lite_7r
> 
> What are the compelling arguments for fiber vs copper, again?

	As far as I can tell:

Copper: 
	can carry electric power

Fiber-PON: 
	much farther reach even without amplifiers (10 km, 20 km, ... depending on loss budget)
	cheaper operation (less active power needed by the headend/OLT)
	less space needed than all active alternatives (AON, copper Ethernet)
	likely only robust passive components in the field
	existing upgrade path: 25G and 50G are on the horizon over the same PON infrastructure
	mostly resistant to RF ingress along the path (as long as a direct lightning hit does not melt the glass ;) )

Fiber-Ethernet: 
	like fiber-PON but 
	no density advantage (needs 1 port per end device)
	even wider upgrade paths


I guess it really depends on how important "carry electric power" is to you ;) Feeding these from the client side is pretty cool for consenting adults, but I would prefer not having to pay the electric bill for my ISP's active gear in the field outside the CPE/ONT...
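The "depending on loss budget" point above can be made concrete with a quick sketch. The ~0.5 dB/km fiber loss is from this thread; the 28 dB budget (typical of class B+ PON optics), the splitter losses, and the connector allowance are typical figures I am assuming, not numbers from the thread:

```python
def pon_link_loss_db(km: float, splitter_db: float,
                     connectors_db: float = 1.0,
                     fiber_db_per_km: float = 0.5) -> float:
    """Total optical loss: fiber + splitter + connector/splice allowance."""
    return fiber_db_per_km * km + splitter_db + connectors_db

BUDGET_DB = 28.0  # class B+ optics, an assumed typical figure

for km, split_db, name in [(20, 14.0, "20 km, 1:16"),
                           (20, 17.5, "20 km, 1:32")]:
    loss = pon_link_loss_db(km, split_db)
    verdict = "OK" if loss <= BUDGET_DB else "over budget"
    print(f"{name}: {loss:.1f} dB -> {verdict}")
```

With these assumed numbers, 20 km with a 1:16 split fits comfortably, while 1:32 at the same distance does not, which is exactly the reach-vs-density trade the loss budget governs.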

Regards
	Sebastian


> 
> 
> On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm <rpm@lists.bufferbloat.net> wrote:
> Hi Bob,
> 
> You hit on a set of very valid points, which I'll complement with my views on where the industry (the bit of it that affects WISPs) is heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)
> 
> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and Mimosa, but also newer entrants such as Tarana, increase the performance and on-paper specs of their equipment. My examples below are centered on the African market, if you operate in Europe or the US, where you can charge customers a higher install fee, or even charge them a break-up fee if they don't return equipment, the economics work.
> 
> Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR per endpoint.
> 
> If every customer a WISP installs represents, say, $100 CAPEX at install time ($50 for the antenna + cabling, router, etc), and you charge a $30 install fee, you have $70 to recover, and you recover from the monthly contribution the customer makes. If the contribution after OPEX is, say, $10, it takes you 7 months to recover the full install cost. Not bad, doable even in low-income markets.
> 
> Fast-forward to the next-generation version. Now, the CAPEX at install is $250, you need to recover $220, and it will take you 22 months, which is above the usual 18 months that investors look for.
> 
> The focus, thereby, has to be the lever that has the largest effect on the unit economics - which is the per-customer cost. I have drawn what my ideal FiWi network would look like:
> 
> 
> <Hybrid EPON-Wireless network.png>
> Taking you through this - we start with a 1-port, low-cost EPON OLT (or you could go for 2, 4, 8 ports as you add capacity). This OLT has capacity for 64 ONUs on its single port. Instead of connecting the typical fiber infrastructure with kilometers of cables which break, require maintenance, etc. we insert an EPON to Ethernet converter (I added "magic" because these don't exist AFAIK).
> 
> This converter allows us to connect our $2k sector radio, and serve the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away. Each ODU then has a reverse converter, which gives us EPON again.
> 
> Once we are back on EPON, we can insert splitters, for example, pre-connectorized outdoor 1:16 boxes. Every customer install now involves a 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU. 
> 
> Using this deployment method, we could connect up to 16 customers to a single $200 endpoint, so the endpoint CAPEX per customer is now $12.50. Add the ONU, cable, etc. and we have a per-install CAPEX of $82.50 (assuming the same $50 of extras we had before), and an even shorter break-even. In addition, as the endpoints support higher capacity, we can provision at least the same, if not more, capacity per customer.
> 
> Other advantages: the $200 ODU is no longer customer equipment and CAPEX, but network equipment, and as such, can operate under a longer break-even timeline, and be financed by infrastructure PE funds, for example. As a result, churn has a much lower financial impact on the operator.
> 
> The main reason why this wouldn't work today is that EPON, as we know, is synchronous, and requires the OLT to orchestrate the amount of time each ONU can transmit, and when. Having wireless hops and media conversions will introduce latencies which can break down the communications (e.g. one ONU may transmit, get delayed on the radio link, and end up overlapping another ONU that transmitted on the next slot). Thus, either the "magic" box needs to account for this, or a new hybrid EPON-wireless protocol must be developed.
> 
> My main point here: the industry is moving away from the unconnected. All the claims I heard and saw at MWC about "connecting the unconnected" had zero resonance with the financial drivers that the unconnected really operate under, on top of IT literacy, digital skills, devices, power...
> 
> Best,
> 
> Mike
> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <starlink@lists.bufferbloat.net>, wrote:
>> To change the topic - curious to thoughts on FiWi.
>> 
>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>> Radios, Antennas) and which is point to point inside a building
>> connected to virtualized APs fiber hops away. Each remote radio head
>> (RRH) would consume 5W or less and only when active. No need for things
>> like zigbee, or meshes, or threads as each radio has a fiber connection
>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>> imbalance. Plastics also can house smoke or other sensors.
>> 
>> Some reminders from Paul Baran in 1994 (and from David Reed)
>> 
>> o) Shorter range rf transceivers connected to fiber could produce a
>> significant improvement - - tremendous improvement, really.
>> o) a mixture of terrestrial links plus shorter range radio links has the
>> effect of increasing by orders and orders of magnitude the amount of
>> frequency spectrum that can be made available.
>> o) By authorizing high power to support a few users to reach slightly
>> longer distances we deprive ourselves of the opportunity to serve the
>> many.
>> o) Communications systems can be built with 10dB ratio
>> o) Digital transmission when properly done allows a small signal to
>> noise ratio to be used successfully to retrieve an error free signal.
>> o) And, never forget, any transmission capacity not used is wasted
>> forever, like water over the dam. Not using such techniques represent
>> lost opportunity.
>> 
>> And on waveguides:
>> 
>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>> independent of modulation"
>> o) “Copper cables and PCB traces are very frequency dependent. At
>> 100Gb/s, the loss is in dB/inch."
>> o) "Free space: the power density of the radio waves decreases with the
>> square of distance from the transmitting antenna due to spreading of the
>> electromagnetic energy in space according to the inverse square law"
>> 
>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be
>> pluggable, allowing for field upgrades. Just like swapping out SFP in a
>> data center.
>> 
>> This approach basically drives out WiFi latency by eliminating shared
>> queues and increases capacity by orders of magnitude by leveraging 10dB
>> in the spatial dimension, all of which is achieved by a physical design.
>> Just place enough RRHs as needed (similar to a pop up sprinkler in an
>> irrigation system.)
>> 
>> Start and build this for an MDU and the value of the building improves.
>> Sadly, there seems no way to capture that value other than over long
>> term use. It doesn't matter whether the leader of the HOA tries to
>> capture the value or if a last mile provider tries. The value remains
>> sunk or hidden with nothing on the asset side of the balance sheet.
>> We've got a CAPEX spend that has to be made up via "OPEX returns" over
>> years.
>> 
>> But the asset is there.
>> 
>> How do we do this?
>> 
>> Bob
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
> 
> 
> -- 
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/ 
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]   On FiWi
  2023-03-17 19:01                                           ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
@ 2023-03-17 19:19                                             ` rjmcmahon
  2023-03-17 20:37                                               ` [LibreQoS] [Starlink] [Rpm] " Bruce Perens
  2023-03-17 23:15                                             ` [LibreQoS] [Bloat] [Starlink] [Rpm] On FiWi David Lang
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-17 19:19 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Dave Täht, Dave Taht via Starlink, Mike Puchol, bloat, Rpm,
	libreqos

I think the low-power transceiver (or RRH) and fiber fronthaul is doable 
within the next 5 years. The difficult part, to me, seems to be the 
virtual APs that could service 12-256 RRHs, including security monitoring 
& customer privacy.

Is there a VMware NSX-style approach to reducing the O&M costs by at 
least half for the FiWi head-end systems?

For power: My approach to the Boston historic neighborhood where my kids 
now live would be AC-wired CPE treated as critical, life-support 
infrastructure. But it may be better to do as modern garage door openers 
do and have standard AC charge a battery, so the unit can operate even 
during power outages.

https://www.rsandrews.com/blog/hardwired-battery-powered-smoke-alarms-you/

Our Recommendation: Hardwired Smoke Alarms
Hardwired smoke alarms, while they require slightly more work upfront, 
are the clear choice if you’re considering replacing your home’s smoke 
alarm system. You’ll hardly ever have to deal with the annoying 
“chirping” that occurs when a battery-powered smoke detector begins to 
go dead, and your entire family will be alerted in the event that a fire 
does occur since hardwire smoke detectors can be interconnected.

Bob
> Hi Dave,
> 
> 
> 
>> On Mar 17, 2023, at 17:38, Dave Taht via Starlink 
>> <starlink@lists.bufferbloat.net> wrote:
>> 
>> This is a pretty neat box:
>> 
>> https://mikrotik.com/product/netpower_lite_7r
>> 
>> What are the compelling arguments for fiber vs copper, again?
> 
> 	As far as I can tell:
> 
> Copper:
> 	can carry electric power
> 
> Fiber-PON:
> 	much farther reach even without amplifiers (10 km, 20 km, ...
> depending on loss budget)
> 	cheaper operation (less active power needed by the headend/OLT)
> 	less space needed than all active alternatives (AON, copper ethernet)
> 	likely only robust passive components in the field
> 	Existing upgrade path for 25G and 50G is on the horizon over the same
> PON infrastructure
> 	mostly resistant to RF ingress along the path (as long as a direct
> lightning hit does not melt the glass ;) )
> 
> Fiber-Ethernet:
> 	like fiber-PON but
> 	no density advantage (needs 1 port per end device)
> 	even wider upgrade paths
> 
> 
> I guess it really depends on how important "carry electric power" is
> to you ;) feeding these from the client side is pretty cool for
> consenting adults, but I would prefer not having to pay the electric
> bill for my ISPs active gear in the field outside the CPE/ONT...
> 
> Regards
> 	Sebastian
> 
> 
>> 
>> 
>> On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm 
>> <rpm@lists.bufferbloat.net> wrote:
>> Hi Bob,
>> 
>> You hit on a set of very valid points, which I'll complement with my 
>> views on where the industry (the bit of it that affects WISPs) is 
>> heading, and what I saw at the MWC in Barcelona. Love the FiWi term 
>> :-)
>> 
>> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, 
>> and Mimosa, but also newer entrants such as Tarana, increase the 
>> performance and on-paper specs of their equipment. My examples below 
>> are centered on the African market, if you operate in Europe or the 
>> US, where you can charge customers a higher install fee, or even 
>> charge them a break-up fee if they don't return equipment, the 
>> economics work.
>> 
>> Where currently a ~$500 sector radio could serve ~60 endpoints, at a 
>> cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the 
>> antenna that you mount on the roof), and supply ~2.5 Mbps CIR per 
>> endpoint, the evolution is now a ~$2,000+ sector radio, a $200 
>> endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR 
>> per endpoint.
>> 
>> If every customer a WISP installs represents, say, $100 CAPEX at 
>> install time ($50 for the antenna + cabling, router, etc), and you 
>> charge a $30 install fee, you have $70 to recover, and you recover 
>> from the monthly contribution the customer makes. If the contribution 
>> after OPEX is, say, $10, it takes you 7 months to recover the full 
>> install cost. Not bad, doable even in low-income markets.
>> 
>> Fast-forward to the next-generation version. Now, the CAPEX at install 
>> is $250, you need to recover $220, and it will take you 22 months, 
>> which is above the usual 18 months that investors look for.
>> 
>> The focus, thereby, has to be the lever that has the largest effect on 
>> the unit economics - which is the per-customer cost. I have drawn what 
>> my ideal FiWi network would look like:
>> 
>> 
>> <Hybrid EPON-Wireless network.png>
>> Taking you through this - we start with a 1-port, low-cost EPON OLT 
>> (or you could go for 2, 4, 8 ports as you add capacity). This OLT has 
>> capacity for 64 ONUs on its single port. Instead of connecting the 
>> typical fiber infrastructure with kilometers of cables which break, 
>> require maintenance, etc. we insert an EPON to Ethernet converter (I 
>> added "magic" because these don't exist AFAIK).
>> 
>> This converter allows us to connect our $2k sector radio, and serve 
>> the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km 
>> away. Each ODU then has a reverse converter, which gives us EPON 
>> again.
>> 
>> Once we are back on EPON, we can insert splitters, for example, 
>> pre-connectorized outdoor 1:16 boxes. Every customer install now 
>> involves a 100 meter roll of pre-connectorized 2-core drop cable, and 
>> a $20 EPON ONU.
>> 
>> Using this deployment method, we could connect up to 16 customers to a 
>> single $200 endpoint, so the endpoint CAPEX per customer is now $12.50. 
>> Add the ONU, cable, etc. and we have a per-install CAPEX of $82.5 
>> (assuming the same $50 of extras we had before), and an even shorter 
>> break-even. In addition, as the endpoints support higher capacity, we 
>> can provision at least the same, if not more, capacity per customer.
>> 
>> Other advantages: the $200 ODU is no longer customer equipment and 
>> CAPEX, but network equipment, and as such, can operate under a longer 
>> break-even timeline, and be financed by infrastructure PE funds, for 
>> example. As a result, churn has a much lower financial impact on the 
>> operator.
>> 
>> The main reason why this wouldn't work today is that EPON, as we know, 
>> is synchronous, and requires the OLT to orchestrate the amount of time 
>> each ONU can transmit, and when. Having wireless hops and media 
>> conversions will introduce latencies which can break down the 
>> communications (e.g. one ONU may transmit, get delayed on the radio 
>> link, and end up overlapping another ONU that transmitted on the next 
>> slot). Thus, either the "magic" box needs to account for this, or a 
>> new hybrid EPON-wireless protocol must be developed.
>> 
>> My main point here: the industry is moving away from the unconnected. 
>> All the claims I heard and saw at MWC about "connecting the 
>> unconnected" had zero resonance with the financial drivers that the 
>> unconnected really operate under, on top of IT literacy, digital 
>> skills, devices, power...
>> 
>> Best,
>> 
>> Mike
>> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink 
>> <starlink@lists.bufferbloat.net>, wrote:
>>> To change the topic - curious to thoughts on FiWi.
>>> 
>>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>>> Radios, Antennas) and which is point to point inside a building
>>> connected to virtualized APs fiber hops away. Each remote radio head
>>> (RRH) would consume 5W or less and only when active. No need for 
>>> things
>>> like zigbee, or meshes, or threads as each radio has a fiber 
>>> connection
>>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>>> imbalance. Plastics also can house smoke or other sensors.
>>> 
>>> Some reminders from Paul Baran in 1994 (and from David Reed)
>>> 
>>> o) Shorter range rf transceivers connected to fiber could produce a
>>> significant improvement - - tremendous improvement, really.
>>> o) a mixture of terrestrial links plus shorter range radio links has 
>>> the
>>> effect of increasing by orders and orders of magnitude the amount of
>>> frequency spectrum that can be made available.
>>> o) By authorizing high power to support a few users to reach slightly
>>> longer distances we deprive ourselves of the opportunity to serve the
>>> many.
>>> o) Communications systems can be built with 10dB ratio
>>> o) Digital transmission when properly done allows a small signal to
>>> noise ratio to be used successfully to retrieve an error free signal.
>>> o) And, never forget, any transmission capacity not used is wasted
>>> forever, like water over the dam. Not using such techniques represent
>>> lost opportunity.
>>> 
>>> And on waveguides:
>>> 
>>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>>> independent of modulation"
>>> o) “Copper cables and PCB traces are very frequency dependent. At
>>> 100Gb/s, the loss is in dB/inch."
>>> o) "Free space: the power density of the radio waves decreases with 
>>> the
>>> square of distance from the transmitting antenna due to spreading of 
>>> the
>>> electromagnetic energy in space according to the inverse square law"
>>> 
>>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could 
>>> be
>>> pluggable, allowing for field upgrades. Just like swapping out SFP in 
>>> a
>>> data center.
>>> 
>>> This approach basically drives out WiFi latency by eliminating shared
>>> queues and increases capacity by orders of magnitude by leveraging 
>>> 10dB
>>> in the spatial dimension, all of which is achieved by a physical 
>>> design.
>>> Just place enough RRHs as needed (similar to a pop up sprinkler in an
>>> irrigation system.)
>>> 
>>> Start and build this for an MDU and the value of the building 
>>> improves.
>>> Sadly, there seems no way to capture that value other than over long
>>> term use. It doesn't matter whether the leader of the HOA tries to
>>> capture the value or if a last mile provider tries. The value remains
>>> sunk or hidden with nothing on the asset side of the balance sheet.
>>> We've got a CAPEX spend that has to be made up via "OPEX returns" 
>>> over
>>> years.
>>> 
>>> But the asset is there.
>>> 
>>> How do we do this?
>>> 
>>> Bob
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>> 
>> 
>> --
>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> 
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-17 19:19                                             ` [LibreQoS] [Rpm] [Starlink] " rjmcmahon
@ 2023-03-17 20:37                                               ` Bruce Perens
  2023-03-17 20:57                                                 ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Bruce Perens @ 2023-03-17 20:37 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Sebastian Moeller, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 696 bytes --]

On Fri, Mar 17, 2023 at 12:19 PM rjmcmahon via Starlink
<starlink@lists.bufferbloat.net> wrote:

> You’ll hardly ever have to deal with the annoying
> “chirping” that occurs when a battery-powered smoke detector begins to
> go dead, and your entire family will be alerted in the event that a fire
> does occur since hardwire smoke detectors can be interconnected.

Off-topic, but the sensors in these hardwired units expire after 10 years,
and they start beeping. The batteries in modern battery-powered units with
wireless links expire after 10 years, along with the rest of the unit, and
they start beeping.
There are exceptions; the first-generation Nest was pretty bad.

[-- Attachment #2: Type: text/html, Size: 1041 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-17 20:37                                               ` [LibreQoS] [Starlink] [Rpm] " Bruce Perens
@ 2023-03-17 20:57                                                 ` rjmcmahon
  2023-03-17 22:50                                                   ` Bruce Perens
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-17 20:57 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Sebastian Moeller, libreqos, Dave Taht via Starlink, Rpm, bloat

I'm curious as to why the detectors have to be replaced every 10 years. 
Regardless, modern sensors could give a thermal map of the entire 
complex 24x7x365. Fire officials would have a better set of eyes when 
they showed up, as the sensor system & network could provide thermals as 
a time series.

Also, another "killer app" for Boston is digital image correlation, 
where cameras monitor stresses and strains on historic buildings valued 
at about $10M each. And that's undervalued because they're really 
irreplaceable. Similar for some in the Netherlands. Monitoring the 
groundwater with samples every 4 months is OK - better to monitor the 
structure itself 24x7x365.

https://www.sciencedirect.com/topics/engineering/digital-image-correlation
https://www.bostongroundwater.org/

Bob

On 2023-03-17 13:37, Bruce Perens wrote:
> On Fri, Mar 17, 2023 at 12:19 PM rjmcmahon via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> 
>> You’ll hardly ever have to deal with the annoying
>> “chirping” that occurs when a battery-powered smoke detector
>> begins to
>> go dead, and your entire family will be alerted in the event that a
>> fire
>> does occur since hardwire smoke detectors can be interconnected.
> 
> Off-topic, but the sensors in these hardwired units expire after 10
> years, and they start beeping. The batteries in modern battery-powered
> units with wireless links expire after 10 years, along with the rest
> of the unit, and they start beeping.
> 
> There are exceptions, the first-generation Nest was pretty bad.


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-17 20:57                                                 ` rjmcmahon
@ 2023-03-17 22:50                                                   ` Bruce Perens
  2023-03-18 18:18                                                     ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Bruce Perens @ 2023-03-17 22:50 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Sebastian Moeller, libreqos, Dave Taht via Starlink, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 786 bytes --]

On Fri, Mar 17, 2023 at 1:57 PM rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> I'm curious as to why the detectors have to be replaced every 10 years.


Dust, grease from cooking oil vapors, insects, mold, etc. accumulate, and
it's so expensive to clean those little sensors, and there is so much
liability associated with them, that it's cheaper to replace the head every
10 years. Electrolytic capacitors have a limited lifetime and that is also
a good reason to replace the device.

The basic sensor architecture is photoelectric; the older ones used an
americium pellet that detected gas ionization, which was changed by the
presence of smoke. The half-life on the americium ones is at least 400
years (there is more than one isotope; that's the shortest-lived one).
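The decay arithmetic behind that half-life claim is easy to check. A minimal sketch, assuming a ~432-year half-life for americium-241 (the message above only says "at least 400 years"):

```python
# Hedged sketch: exponential decay of an americium pellet over a
# 10-year detector service life. The 432-year half-life is an assumption
# (the thread only says "at least 400 years").
HALF_LIFE_YEARS = 432.0

def fraction_remaining(years, half_life=HALF_LIFE_YEARS):
    """N(t)/N0 = 0.5 ** (t / half_life)."""
    return 0.5 ** (years / half_life)

print(round(fraction_remaining(10), 3))  # ~0.984 -- decay is not why heads expire
```

So after 10 years essentially all of the pellet remains, consistent with the point that heads are replaced for contamination and capacitor aging, not decay.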

[-- Attachment #2: Type: text/html, Size: 1143 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] [Rpm]  On FiWi
  2023-03-17 19:01                                           ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
  2023-03-17 19:19                                             ` [LibreQoS] [Rpm] [Starlink] " rjmcmahon
@ 2023-03-17 23:15                                             ` David Lang
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-17 23:15 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Dave Täht, Dave Taht via Starlink, Mike Puchol, bloat, Rpm,
	libreqos

[-- Attachment #1: Type: text/plain, Size: 9784 bytes --]

'can carry electric power' can also be a drawback; it provides a path for a 
problem in one piece of equipment to damage other equipment (power supply short 
to logic, lightning strike, ground loops, etc.)

David Lang

On Fri, 17 Mar 2023, Sebastian Moeller via Bloat wrote:

> Hi Dave,
>
>
>
>> On Mar 17, 2023, at 17:38, Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>> 
>> This is a pretty neat box:
>> 
>> https://mikrotik.com/product/netpower_lite_7r
>> 
>> What are the compelling arguments for fiber vs copper, again?
>
> 	As far as I can tell:
>
> Copper:
> 	can carry electric power
>
> Fiber-PON:
> 	much farther reach even without amplifiers (10 Km, 20 Km, ... depending on loss budget)
> 	cheaper operation (less active power needed by the headend/OLT)
> 	less space need than all active alternatives (AON, copper ethernet)
> 	likely only robust passive components in the field
> 	Existing upgrade path for 25G and 50G is on the horizon over the same PON infrastructure
> 	mostly resistant to RF ingress along the path (as long as a direct lightning hit does not melt the glass ;) )
>
> Fiber-Ethernet:
> 	like fiber-PON but
> 	no density advantage (needs 1 port per end device)
> 	even wider upgrade paths
>
>
> I guess it really depends on how important "carry electric power" is to you ;) feeding these from the client side is pretty cool for consenting adults, but I would prefer not having to pay the electric bill for my ISPs active gear in the field outside the CPE/ONT...
>
> Regards
> 	Sebastian
>
>
>> 
>> 
>> On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm <rpm@lists.bufferbloat.net> wrote:
>> Hi Bob,
>> 
>> You hit on a set of very valid points, which I'll complement with my views on where the industry (the bit of it that affects WISPs) is heading, and what I saw at the MWC in Barcelona. Love the FiWi term :-)
>> 
>> I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, and Mimosa, but also newer entrants such as Tarana, increase the performance and on-paper specs of their equipment. My examples below are centered on the African market, if you operate in Europe or the US, where you can charge customers a higher install fee, or even charge them a break-up fee if they don't return equipment, the economics work.
>> 
>> Where currently a ~$500 sector radio could serve ~60 endpoints, at a cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the antenna that you mount on the roof), and supply ~2.5 Mbps CIR per endpoint, the evolution is now a ~$2,000+ sector radio, a $200 endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR per endpoint.
>> 
>> If every customer a WISP installs represents, say, $100 CAPEX at install time ($50 for the antenna + cabling, router, etc), and you charge a $30 install fee, you have $70 to recover, and you recover from the monthly contribution the customer makes. If the contribution after OPEX is, say, $10, it takes you 7 months to recover the full install cost. Not bad, doable even in low-income markets.
>> 
>> Fast-forward to the next-generation version. Now, the CAPEX at install is $250, you need to recover $220, and it will take you 22 months, which is above the usual 18 months that investors look for.
>> 
>> The focus, thereby, has to be the lever that has the largest effect on the unit economics - which is the per-customer cost. I have drawn what my ideal FiWi network would look like:
>> 
>> 
>> <Hybrid EPON-Wireless network.png>
>> Taking you through this - we start with a 1-port, low-cost EPON OLT (or you could go for 2, 4, 8 ports as you add capacity). This OLT has capacity for 64 ONUs on its single port. Instead of connecting the typical fiber infrastructure with kilometers of cables which break, require maintenance, etc. we insert an EPON to Ethernet converter (I added "magic" because these don't exist AFAIK).
>> 
>> This converter allows us to connect our $2k sector radio, and serve the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km away. Each ODU then has a reverse converter, which gives us EPON again.
>> 
>> Once we are back on EPON, we can insert splitters, for example, pre-connectorized outdoor 1:16 boxes. Every customer install now involves a 100 meter roll of pre-connectorized 2-core drop cable, and a $20 EPON ONU. 
>> 
>> Using this deployment method, we could connect up to 16 customers to a single $200 endpoint, so the endpoint CAPEX per customer is now $12.5. Add the ONU, cable, etc. and we have a per-install CAPEX of $82.5 (assuming the same $50 of extras we had before), and an even shorter break-even. In addition, as the endpoints support higher capacity, we can provision at least the same, if not more, capacity per customer.
>> 
>> Other advantages: the $200 ODU is no longer customer equipment and CAPEX, but network equipment, and as such, can operate under a longer break-even timeline, and be financed by infrastructure PE funds, for example. As a result, churn has a much lower financial impact on the operator.
>> 
>> The main reason why this wouldn't work today is that EPON, as we know, is synchronous, and requires the OLT to orchestrate the amount of time each ONU can transmit, and when. Having wireless hops and media conversions will introduce latencies which can break down the communications (e.g. one ONU may transmit, get delayed on the radio link, and end up overlapping another ONU that transmitted on the next slot). Thus, either the "magic" box needs to account for this, or a new hybrid EPON-wireless protocol must be developed.
>> 
>> My main point here: the industry is moving away from the unconnected. All the claims I heard and saw at MWC about "connecting the unconnected" had zero resonance with the financial drivers that the unconnected really operate under, on top of IT literacy, digital skills, devices, power...
>> 
>> Best,
>> 
>> Mike
>> On Mar 14, 2023 at 05:27 +0100, rjmcmahon via Starlink <starlink@lists.bufferbloat.net>, wrote:
>>> To change the topic - curious to thoughts on FiWi.
>>> 
>>> Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS
>>> Radios, Antennas) and which is point to point inside a building
>>> connected to virtualized APs fiber hops away. Each remote radio head
>>> (RRH) would consume 5W or less and only when active. No need for things
>>> like zigbee, or meshes, or threads as each radio has a fiber connection
>>> via Corning's actifi or equivalent. Eliminate the AP/Client power
>>> imbalance. Plastics also can house smoke or other sensors.
>>> 
>>> Some reminders from Paul Baran in 1994 (and from David Reed)
>>> 
>>> o) Shorter range rf transceivers connected to fiber could produce a
>>> significant improvement - - tremendous improvement, really.
>>> o) a mixture of terrestrial links plus shorter range radio links has the
>>> effect of increasing by orders and orders of magnitude the amount of
>>> frequency spectrum that can be made available.
>>> o) By authorizing high power to support a few users to reach slightly
>>> longer distances we deprive ourselves of the opportunity to serve the
>>> many.
>>> o) Communications systems can be built with 10dB ratio
>>> o) Digital transmission when properly done allows a small signal to
>>> noise ratio to be used successfully to retrieve an error free signal.
>>> o) And, never forget, any transmission capacity not used is wasted
>>> forever, like water over the dam. Not using such techniques represent
>>> lost opportunity.
>>> 
>>> And on waveguides:
>>> 
>>> o) "Fiber transmission loss is ~0.5dB/km for single mode fiber,
>>> independent of modulation"
>>> o) “Copper cables and PCB traces are very frequency dependent. At
>>> 100Gb/s, the loss is in dB/inch."
>>> o) "Free space: the power density of the radio waves decreases with the
>>> square of distance from the transmitting antenna due to spreading of the
>>> electromagnetic energy in space according to the inverse square law"
>>> 
>>> The sunk costs & long-lived parts of FiWi are the fiber and the CPE
>>> plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be
>>> pluggable, allowing for field upgrades. Just like swapping out SFP in a
>>> data center.
>>> 
>>> This approach basically drives out WiFi latency by eliminating shared
>>> queues and increases capacity by orders of magnitude by leveraging 10dB
>>> in the spatial dimension, all of which is achieved by a physical design.
>>> Just place enough RRHs as needed (similar to a pop up sprinkler in an
>>> irrigation system.)
>>> 
>>> Start and build this for an MDU and the value of the building improves.
>>> Sadly, there seems no way to capture that value other than over long
>>> term use. It doesn't matter whether the leader of the HOA tries to
>>> capture the value or if a last mile provider tries. The value remains
>>> sunk or hidden with nothing on the asset side of the balance sheet.
>>> We've got a CAPEX spend that has to be made up via "OPEX returns" over
>>> years.
>>> 
>>> But the asset is there.
>>> 
>>> How do we do this?
>>> 
>>> Bob
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>> 
>> 
>> -- 
>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/ 
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
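The break-even arithmetic Mike quotes above can be sketched in a few lines (the dollar figures are his; the helper function is just an illustration):

```python
import math

def breakeven_months(capex, install_fee, monthly_contribution):
    """Months to recover the unrecovered portion of the install CAPEX."""
    return math.ceil((capex - install_fee) / monthly_contribution)

# Current generation: $100 CAPEX, $30 install fee, $10/month after OPEX.
print(breakeven_months(100, 30, 10))   # 7 months
# Next generation: $250 CAPEX, same fee and contribution.
print(breakeven_months(250, 30, 10))   # 22 months -- past the ~18 investors want
# Splitter scenario: $12.5 endpoint share + $20 ONU + $50 extras = $82.5.
print(breakeven_months(82.5, 30, 10))  # 6 months
```

The lever is exactly as stated in the thread: per-customer CAPEX dominates the break-even timeline, which is why sharing one $200 endpoint across 16 EPON customers changes the economics.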

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-17 22:50                                                   ` Bruce Perens
@ 2023-03-18 18:18                                                     ` rjmcmahon
  2023-03-18 19:57                                                       ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-18 18:18 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Sebastian Moeller, libreqos, Dave Taht via Starlink, Rpm, bloat

>> I'm curious as to why the detectors have to be replaced every 10
>> years.
> 
> Dust, grease from cooking oil vapors, insects, mold, etc. accumulate,
> and it's so expensive to clean those little sensors, and there is so
> much liability associated with them, that it's cheaper to replace the
> head every 10 years. Electrolytic capacitors have a limited lifetime
> and that is also a good reason to replace the device.
> 
> The basic sensor architecture is photoelectric; the older ones used an
> americium pellet that detected gas ionization, which was changed by
> the presence of smoke. The half-life on the americium ones is at least
> 400 years (there is more than one isotope; that's the shortest-lived
> one).

Thanks for this. That makes sense. I do think the FiWi transceivers & 
sensors need to be pluggable & detect failures, particularly early on 
due to infant mortality.

"Infant mortality is a special equipment failure mode that shows the 
probability of failure being highest when the equipment is first 
started, but reduces as time goes on. Eventually, the probability of 
failure levels off after time."

https://www.upkeep.com/blog/infant-mortality-equipment-failure#:~:text=Infant%20mortality%20is%20a%20special,failure%20levels%20off%20after%20time.

Also curious about thermal imaging inside a building - what sensor tech 
to use and at what cost? The Bronx fire occurred because poor people in 
public housing don't have access to electric heat pumps & used a space 
heater instead. It's very sad we as a society do this, i.e. make sure 
rich people can drive Teslas with heat pumps but only provide the worst 
type of heating to children from families that aren't so fortunate.

https://www.cnn.com/2022/01/10/us/nyc-bronx-apartment-fire-monday/index.html

"A malfunctioning electric space heater in a bedroom was the source of 
an apartment building fire Sunday in the Bronx that killed 17 people, 
including 8 children, making it one of the worst fires in the city’s 
history, New York Mayor Eric Adams said Monday."

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-18 18:18                                                     ` rjmcmahon
@ 2023-03-18 19:57                                                       ` dan
  2023-03-18 20:40                                                         ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-18 19:57 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Bruce Perens, Dave Taht via Starlink, Rpm, Sebastian Moeller,
	bloat, libreqos

[-- Attachment #1: Type: text/plain, Size: 2859 bytes --]

On Sat, Mar 18, 2023 at 12:18 PM rjmcmahon via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:

> >> I'm curious as to why the detectors have to be replaced every 10
> >> years.
> >
> > Dust, grease from cooking oil vapors, insects, mold, etc. accumulate,
> > and it's so expensive to clean those little sensors, and there is so
> > much liability associated with them, that it's cheaper to replace the
> > head every 10 years. Electrolytic capacitors have a limited lifetime
> > and that is also a good reason to replace the device.
> >
> > The basic sensor architecture is photoelectric; the older ones used an
> > americium pellet that detected gas ionization, which was changed by
> > the presence of smoke. The half-life on the americium ones is at least
> > 400 years (there is more than one isotope; that's the shortest-lived
> > one).
>
> Thanks for this. That makes sense. I do think the FiWi transceivers &
> sensors need to be pluggable & detect failures, particularly early on
> due to infant mortality.
>
> "Infant mortality is a special equipment failure mode that shows the
> probability of failure being highest when the equipment is first
> started, but reduces as time goes on. Eventually, the probability of
> failure levels off after time."
>
>
> https://www.upkeep.com/blog/infant-mortality-equipment-failure#:~:text=Infant%20mortality%20is%20a%20special,failure%20levels%20off%20after%20time.
>
> Also curious about thermal imaging inside a building - what sensor tech
> to use and at what cost? The Bronx fire occurred because poor people in
> public housing don't have access to electric heat pumps & used a space
> heater instead. It's very sad we as a society do this, i.e. make sure
> rich people can drive Teslas with heat pumps but only provide the worst
> type of heating to children from families that aren't so fortunate.
>
>
> https://www.cnn.com/2022/01/10/us/nyc-bronx-apartment-fire-monday/index.html
>
> "A malfunctioning electric space heater in a bedroom was the source of
> an apartment building fire Sunday in the Bronx that killed 17 people,
> including 8 children, making it one of the worst fires in the city’s
> history, New York Mayor Eric Adams said Monday."
>
> Bob
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
>
All of the stated use cases are already handled by inexpensive LoRaWAN
sensors and are already covered by multiple LoRaWAN networks in NYC and
most urban centers in the US.  There is no need for new infrastructure;
it’s already there.  Not to mention NB-IoT/Cat-M radios.

This is all just general cheapness and lack of liability keeping these out
of widespread deployment. It’s not lack of tech on the market today.

[-- Attachment #2: Type: text/html, Size: 3944 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm] On FiWi
  2023-03-18 19:57                                                       ` dan
@ 2023-03-18 20:40                                                         ` rjmcmahon
  2023-03-19 10:26                                                           ` Michael Richardson
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-18 20:40 UTC (permalink / raw)
  To: dan
  Cc: Bruce Perens, Dave Taht via Starlink, Rpm, Sebastian Moeller,
	bloat, libreqos

> All of the stated use cases are already handled by inexpensive LoRaWAN
> sensors and are already covered by multiple LoRaWAN networks in NYC
> and most urban centers in the US.  There is no need for new
> infrastructure; it’s already there.  Not to mention NB-IoT/Cat-M
> radios.
> 
> This is all just general cheapness and lack of liability keeping these
> out of widespread deployment. It’s not lack of tech on the market
> today.

What is the footprint of LoRaWAN networks, and what's the velocity of 
growth? What's the cost per square foot, both CAPEX and operations, of 
maintaining & monitoring LoRaWAN? How does that compare to the WiFi 
install base? In other words, now we have to train installers and 
maintainers on purpose-built technology rather than just using what most 
people already know because it's common. This all looks like ethernet, 
token ring, FDDI, NetBIOS, DECnet, etc., where the single approach of IP 
over WiFi/ethernet, with fiber waveguides for fronthaul and backhaul per 
the ISP, proved the effective way forward. I don't think it's in 
society's interest to have such disparate network technologies, as we 
have learned from IP and the internet. My guess is LoRaWAN will never get 
built out across the planet as has been done for IP. I can tell that 
every country is adopting IP because they're using a free IP tool to 
measure their networks.

https://sourceforge.net/projects/iperf2/files/stats/map?dates=2014-02-06%20to%202023-03-18&period=daily

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  [Rpm] On FiWi
  2023-03-18 20:40                                                         ` rjmcmahon
@ 2023-03-19 10:26                                                           ` Michael Richardson
  2023-03-19 21:00                                                             ` [LibreQoS] On metrics rjmcmahon
  2023-03-20 20:46                                                             ` [LibreQoS] [Rpm] [Starlink] On FiWi Frantisek Borsik
  0 siblings, 2 replies; 183+ messages in thread
From: Michael Richardson @ 2023-03-19 10:26 UTC (permalink / raw)
  To: rjmcmahon, dan, Rpm, libreqos, Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 582 bytes --]


{lots of lists on the CC}

The problem I have with LoRaWAN is that it's too small for anything but the
smallest sensors.  When it breaks (due to infant mortality or just vandalism),
who is going to notice enough to fix it?  My belief is that people won't
break things that they like/depend upon.  Or at least, that there will be
social pressure not to.

Better to have a protected 1Mb/s sensor LAN within a 144Mb/s WiFi than an
adjacent LoRaWAN.

--
Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works
 -= IPv6 IoT consulting =-                      *I*LIKE*TRAINS*




[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
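The 1 Mb/s-inside-144 Mb/s suggestion above is cheap in airtime terms. A back-of-envelope check (this deliberately ignores Wi-Fi per-frame overhead, which is significant for tiny sensor frames - an assumption worth flagging):

```python
# Rough share of a Wi-Fi cell consumed by a reserved sensor LAN.
# Per-frame/contention overhead is ignored here -- an assumption.
def airtime_fraction(reserved_mbps, phy_mbps):
    return reserved_mbps / phy_mbps

print(f"{airtime_fraction(1, 144):.1%}")  # ~0.7% of the cell's capacity
```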

^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] On metrics
  2023-03-19 10:26                                                           ` Michael Richardson
@ 2023-03-19 21:00                                                             ` rjmcmahon
  2023-03-20  0:26                                                               ` dan
  2023-03-20 20:46                                                             ` [LibreQoS] [Rpm] [Starlink] On FiWi Frantisek Borsik
  1 sibling, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-19 21:00 UTC (permalink / raw)
  To: Michael Richardson; +Cc: dan, Rpm, libreqos, Dave Taht via Starlink, bloat

Hi All,

It seems getting the metrics right is critical. Our industry can't be 
reporting things that mislead or misassign blame. The medical community 
doesn't treat people for cancer without a high degree of confidence that 
they've gotten the diagnostics correct, as an example.

An initial metric, per this group, would be geared towards 
responsiveness, or the speed of causality. Here, we may need to include 
linear distance, the power required to achieve that responsiveness, and 
Pareto efficiency, where one device's better responsiveness can't make 
another's worse.

An example of a possible new & comprehensive FiWi metric: a rating 
could be something like 10K responses per second at 1Km terrestrial 
(fiber) cable / 6m radius free space range / 5W total / 0-impact to 
others. If consumers can learn to read nutrition labels, they can also 
learn to read these.

Maybe a device produces a QR code based upon its e2e measurement, 
and the QR code loads a page with human-interpretable analysis? 
Similar to how we now pull up menus on our mobile phones listing the 
food items and the nutrition information once we're seated at a 
table. Then, in a perfect world, there is a rating per each link hop 
or, better, per network jurisdiction. Each jurisdiction could decide if 
they want to participate or not, similar to connecting up an autonomous 
system or not. I think measurements of network jurisdictions without 
prior agreements are unfair. The lack of measurement capability alone 
is likely enough pressure to motivate action.

Bob

PS. As a side note, and a shameless plug, iperf 2 now supports 
bounceback, and a big issue has been clock sync for one-way delays (OWD). 
Per a comment from Jean Tourrilhes 
(https://sourceforge.net/p/iperf2/tickets/242/) I added some unsync 
detection to the bounceback measurements. Contact me directly if your 
engineering team needs more information on iperf 2.
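The "10K responses per second at 1Km" figure above can be sanity-checked against propagation delay alone. A minimal sketch, assuming light travels at roughly 2e8 m/s in fiber (about c/1.5 in glass - an assumed, not quoted, figure):

```python
# Sanity-check of "10K responses per second at 1Km" against propagation
# delay alone. The 2e8 m/s speed-in-fiber figure is an assumption.
C_FIBER_M_PER_S = 2.0e8

def min_rtt_us(distance_m):
    """Lower bound on round-trip time from propagation alone."""
    return 2 * distance_m / C_FIBER_M_PER_S * 1e6

rtt = min_rtt_us(1000)   # ~10 us over 1 km of fiber, round trip
budget = 1e6 / 10_000    # 100 us per response at 10K responses/s
print(rtt, budget)       # propagation uses ~10% of the per-response budget
```

In other words, the metric bakes distance into the rating in exactly the way the "speed of causality" framing suggests: at 1 km, propagation alone already consumes a tenth of each response's time budget.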

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] On metrics
  2023-03-19 21:00                                                             ` [LibreQoS] On metrics rjmcmahon
@ 2023-03-20  0:26                                                               ` dan
  2023-03-20  3:03                                                                 ` [LibreQoS] [Starlink] " David Lang
  0 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-20  0:26 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Rpm, libreqos, Dave Taht via Starlink, bloat, Michael Richardson

[-- Attachment #1: Type: text/plain, Size: 4607 bytes --]

On Mar 19, 2023 at 3:00:35 PM, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> Hi All,
>
> It seems getting the metrics right is critical. Our industry can't be
> reporting things that mislead or misassign blame. The medical community
> doesn't treat people for cancer without a high degree of confidence that
> they've gotten the diagnostics correct, as an example.
>
> An initial metric, per this group, would be geared towards
> responsiveness or the speed of causality. Here, we may need to include
> linear distance, the power required to achieve a responsiveness and to
> take account of Pareto efficiencies, where one device's better
> responsiveness can't make another's worse.
>
> An example per a possible FiWi new & comprehensive metric: A rating
> could be something like 10K responses per second at 1Km terrestrial
> (fiber) cable / 6m radius free space range / 5W total / 0-impact to
> others. If consumers can learn to read nutrition labels they can also
> learn to read these.
>
> Maybe a device produces a QR code based upon its e2e measurement,
> and the QR code loads a page with human-interpretable analysis?
> Similar to how we now pull up menus on our mobile phones listing the
> food items and the nutrition information that's available to seat at a
> table. Then, in a perfect world, there is a rating per each link hop or
> better, network jurisdiction. Each jurisdiction could decide if they
> want to participate or not, similar to connecting up an autonomous
> system or not. I think measurements of network jurisdictions without
> prior agreements are unfair. The lack of measurement capability is
> likely enough pressure needed to motivate actions.
>
> Bob
>
> PS. As a side note, and a shameless plug, iperf 2 now supports
> bounceback, and a big issue has been clock sync for one-way delays (OWD).
> Per a comment from Jean Tourrilhes
> (https://sourceforge.net/p/iperf2/tickets/242/) I added some unsync
> detection to the bounceback measurements. Contact me directly if your
> engineering team needs more information on iperf 2.
>

A food nutrition label is actually a great example of bad information in
consumer hands.  Since adding those, Americans’ weights have ballooned.  I’m
not saying the two are directly correlated, but that information has
definitely not caused an improvement in health by any measure at all.
Definitely not a model to pursue.

There needs to be a clear distinction between what’s valuable to the
consumer and what’s valuable to the ISP to improve services. These are
dramatically different pieces of data.  For the consumer, information that
directs their choice of product is important.  Details about various
points in the process are useless to them.  How many hops has no value to
them; only the latency, jitter, throughput, and probably some rating on slow
start or other things that are part of the ISP equation, presented entirely
for the purpose of making better choices for their needs.  10k responses in
x seconds is meaningless to a consumer.  I have never, and I’m not
exaggerating, EVER had a home user or IT guy or point person for an MSP
ever ask about packet rates or any of the stats that keep getting brought
up.  This is a solution looking for a problem.

Consumers really need things like published performance specs so they can
assemble their needs like an a la carte menu.  What do you do, what’s
important to you, what details support that need, and they need that in a
simple way.   Like a little app that says “how many 1080p TVs or 4K TVs,
how many gaming consoles, do you take zoom calls or VoIP/phone calls.  Do
you send large emails, videos, or pictures.”

Put another way, all the specs are like telling a soccer mom the torque
curve of their minivan.

If ‘we’ the industry make a nutrition-type label that has numbers on it
that are not useful in the context of a consumer decision-making process in
a direct “x gives you y” way, it creates data that will get misinterpreted.

These stats should be made available to the providers pushing data so that
they can make sure they are delivering on the human-readable and useful
numbers.  I care about the AS path and the latency on my upstream services
to various providers; I can try to make better choices on buying Lumen or
Hurricane and how I will route those things because I understand how they
affect the service.  Those are really useful numbers for me and any ISP
that wants to have higher “A+ for gaming because latency and jitter are
low and bandwidth is adequate” ratings.

[-- Attachment #2: Type: text/html, Size: 5723 bytes --]
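dan's "little app" above could be as simple as summing ballpark per-activity bitrates to size a plan. Every rate in this sketch is an illustrative assumption, not a published spec:

```python
# Hedged sketch of the "little app" idea: sum rough per-activity bitrates
# to suggest a plan size. Every rate here is an illustrative assumption.
RATES_MBPS = {
    "tv_1080p": 5.0,      # typical streaming-service 1080p stream
    "tv_4k": 20.0,        # typical streaming-service 4K stream
    "game_console": 3.0,  # gameplay traffic; downloads are a separate story
    "video_call": 3.0,    # Zoom/VoIP-class call
}

def plan_estimate_mbps(devices, headroom=1.25):
    """Sum concurrent demand, then add headroom so bursts don't queue."""
    return sum(RATES_MBPS[name] * count for name, count in devices.items()) * headroom

print(plan_estimate_mbps({"tv_4k": 2, "game_console": 1, "video_call": 2}))  # 61.25
```

Note that such a tool answers "how much bandwidth" only; the latency/jitter side of the consumer rating dan describes would need separate inputs.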

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] On metrics
  2023-03-20  0:26                                                               ` dan
@ 2023-03-20  3:03                                                                 ` David Lang
  0 siblings, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-20  3:03 UTC (permalink / raw)
  To: dan
  Cc: rjmcmahon, Rpm, Dave Taht via Starlink, Michael Richardson,
	libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 989 bytes --]

> Consumers really need things like published performance specs so they can
> assemble their needs like an a la carte menu.  What do you do, what’s
> important to you, what details support that need, and they need that in a
> simple way.   Like a little app that says “how many 1080p TVs or 4K TVs,
> how many gaming consoles, do you take zoom calls or VoIP/phone calls.  Do
> you send large emails, videos, or pictures.”

The problem is that these needs really are not that heavy. Among my ISP
connections I have an 8/1 DSL connection; even when I fail over to that, I
can run my 4K TV plus a couple of other HD TVs plus email (although it's
at the ragged edge: trying to play 4K at 2x speed can hiccup, and Zoom
calls can stutter when large emails/downloads flow).

Realistically, any router can handle this speed; the question is whether
it has fq_codel/cake to keep the bulk loads from interfering with the
other work.

Even Starlink roaming is higher performance than this :-)

David Lang

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  On FiWi
  2023-03-19 10:26                                                           ` Michael Richardson
  2023-03-19 21:00                                                             ` [LibreQoS] On metrics rjmcmahon
@ 2023-03-20 20:46                                                             ` Frantisek Borsik
  2023-03-20 21:28                                                               ` dan
  1 sibling, 1 reply; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-20 20:46 UTC (permalink / raw)
  Cc: rjmcmahon, dan, Rpm, libreqos, Dave Taht via Starlink, bloat,
	Michael Richardson

[-- Attachment #1: Type: text/plain, Size: 4100 bytes --]

Late to the party, and also not an engineer... but if there's something I
have learned during my time with RF elements:

--- 99% of the vendors out there (and most of the ISPs, I dare to say, as
well) don't know/care/respect something as "simple" as physics.

--- 2.4 GHz was lost because of this, and 5 GHz was saved "5 minutes to
midnight" for ISPs by RF elements Horns (and UltraHorns, UltraDish, and
Asymmetrical Horns later on), which basically inspired ("Imitation is the
sincerest form of flattery that mediocrity can pay to greatness." - Oscar
Wilde) some other antenna vendors to bring their own versions of Horns etc.

--- sure, a lot of improvements to fight noise, to modulate, to virtualise
(like Tarana Wireless) were made on the AP (radio) side, but still:
physics is physics, and it was overlooked and neglected for such a LONG
time.

--- ISPs were told by the vendors to basically BLAST through the noise,
and many more BS like this. So they did as they were told: they were
blasting and blasting. Those that were getting smarter switched to RF
elements Horns, stopped blasting, started being reasonable with topology
("if Your customers are 5 miles away from the AP, You will not blast like
crazy for 10 miles, because You will pick up all the noise"), and even
started to cooperate with other ISPs on the same towers (frequency
coordination, colocation, etc.). The same coordination needs to happen
behind the CPEs now, on the Wi-Fi routers of their customers.

I saw a similar development when I got into Wi-Fi (while at TurrisTech
<https://blog.cerowrt.org/post/tango_on_turris/> - secure, powerful open
source Wi-Fi routers). The same story, basically, for vendors as well as
ISPs: no actual respect for the underlying physics, attempts to blast over
the noise, chasing clouds ("muah, Wi-Fi 6, 6E... oh no, here comes Wi-Fi 7
and this will change EVERYTHING" - see, it was a lot of "fun" to watch
this happen with 5G, and the amount of over-promise and under-delivery BS
was, and still is, staggering).
The whole Wi-Fi industry is chasing (almost) empty numbers (bandwidth)
instead of focusing on bufferbloat (latency, jitter...).
Thanks to Domos for putting together the Understanding Latency webinar
series. I know that most of You are aware of latency as the most important
metric we should focus on nowadays in order to improve the overall Internet
experience, but still...
About six hours of watching. And rewatching:
https://www.youtube.com/watch?v=KdTPz5srJ8M
https://www.youtube.com/watch?v=tAVwmUG21OY
https://www.youtube.com/watch?v=MRmcWyIVXvg

Also, one more thing to add re Wi-Fi: if You can cable, You should always
cable. Mesh as we know it would be a much better Wi-Fi enhancement if the
mesh units were wired as much as possible. We would reduce the noise, grow
smart, and save spectrum.

Thanks for the great discussion.

All the best,

Frank

Frantisek (Frank) Borsik



https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com


On Sun, Mar 19, 2023 at 11:27 AM Michael Richardson via Rpm <
rpm@lists.bufferbloat.net> wrote:

>
> {lots of lists on the CC}
>
> The problem I have with lorawan is that it's too small for anything but the
> smallest sensors.  When it breaks (due to infant death or just vandalism)
> who is going to notice enough to fix it?  My belief is that people won't
> break things that they like/depend upon.  Or at least, that there will be
> social pressure not to.
>
> Better to have a protected 1Mb/s sensor lan within a 144Mb/s wifi than an
> adjacent lorawan.
>
> --
> Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works
>  -= IPv6 IoT consulting =-                      *I*LIKE*TRAINS*
>
>
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
>

[-- Attachment #2: Type: text/html, Size: 6240 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  On FiWi
  2023-03-20 20:46                                                             ` [LibreQoS] [Rpm] [Starlink] On FiWi Frantisek Borsik
@ 2023-03-20 21:28                                                               ` dan
  2023-03-20 21:38                                                                 ` Frantisek Borsik
  2023-03-21  0:10                                                                 ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
  0 siblings, 2 replies; 183+ messages in thread
From: dan @ 2023-03-20 21:28 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: rjmcmahon, Rpm, libreqos, Dave Taht via Starlink, bloat,
	Michael Richardson

[-- Attachment #1: Type: text/plain, Size: 5464 bytes --]

I more or less agree with you, Frantisek. There are throughput numbers
that are needed for current-gen and next-gen services, but those are often
met by the 50-100 Mbps plans of today, which are enough to handle multiple
4K streams plus browsing and so forth. Yet no one talks about latency,
packet loss, and other useful metrics at all, and consumers are not able
to, and never will be able to, understand more than a couple of numbers.
This is an industry problem, and unless we have some sort of working group
pushing this the way the 'got milk?' advertisements did, I'm not sure how
we will ever get there. The big vendors that have pushed DOCSIS to
extremes have no interest in these other details; they win on the big
'speed' number and will advertise all sorts of performance around that
number.

We need a marketing/lobby group. Not WISPA or other individual industry
groups, but one specifically for *ISPs that will contribute as well as
implement policies and put that out on social media, etc. I don't know how
we get there without a big player (e.g. Netflix, Hulu...) contributing.
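The "multiple 4K streams" claim above is easy to sanity-check with
back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 20 Mbps
per 4K stream and a fixed chunk of headroom for everything else (both
figures are illustrative assumptions, not measurements):

```python
# Rough capacity check for a residential plan: how many concurrent
# streams of a given kind fit, leaving headroom for browsing and calls?
# The per-stream rates below are illustrative, not measured values.
STREAM_MBPS = {"4k": 20.0, "hd": 6.0, "zoom": 3.0}

def streams_supported(plan_mbps: float, kind: str = "4k",
                      headroom_mbps: float = 10.0) -> int:
    """Concurrent streams of `kind` the plan can carry after reserving
    `headroom_mbps` for everything else."""
    usable = max(plan_mbps - headroom_mbps, 0.0)
    return int(usable // STREAM_MBPS[kind])

print(streams_supported(50))   # 2 concurrent 4K streams on a 50 Mbps plan
print(streams_supported(100))  # 4 on a 100 Mbps plan
```

Which is the point being made: the throughput box is ticked at 50-100 Mbps;
everything past that is latency and loss.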

On Mon, Mar 20, 2023 at 2:46 PM Frantisek Borsik <frantisek.borsik@gmail.com>
wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 7722 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  On FiWi
  2023-03-20 21:28                                                               ` dan
@ 2023-03-20 21:38                                                                 ` Frantisek Borsik
  2023-03-20 22:02                                                                   ` [LibreQoS] On FiWi power envelope rjmcmahon
  2023-03-21  0:10                                                                 ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
  1 sibling, 1 reply; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-20 21:38 UTC (permalink / raw)
  To: dan
  Cc: rjmcmahon, Rpm, libreqos, Dave Taht via Starlink, bloat,
	Michael Richardson

[-- Attachment #1: Type: text/plain, Size: 6596 bytes --]

Thanks, Dan. So we got here; but how do we get out of this craziness?
The question is what (if anything) we can actually learn from the very
beginning of the Internet. If I remember correctly, there was a part of
this discussion here (or in the other thread) on IP vs LoRaWAN.
Can we use something from the good ole IP framework that would help us do
this?
Also, is it even possible at all?

Btw, here is a good example of explaining and teaching the performance
(latency, jitter, bufferbloat) side of the Internet, shared with the
customers by Robert, the guy behind the beginning of LibreQoS:
https://jackrabbitwireless.com/performance/
Great inspiration.

All the best,

Frank

Frantisek (Frank) Borsik



https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com


On Mon, Mar 20, 2023 at 10:28 PM dan <dandenson@gmail.com> wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 10348 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] On FiWi power envelope
  2023-03-20 21:38                                                                 ` Frantisek Borsik
@ 2023-03-20 22:02                                                                   ` rjmcmahon
  2023-03-20 23:47                                                                     ` [LibreQoS] [Starlink] " Bruce Perens
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-20 22:02 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: dan, Rpm, libreqos, Dave Taht via Starlink, bloat, Michael Richardson

If I'm reading things correctly, the per-fire-alarm power rating is 120 V
at 80 mA, or 9.6 W. The per-FiWi-transceiver power estimate is 2 W per
spatial stream at 160 MHz, plus 1 W for the fiber. It looks like a
retrofit of a fire alarm system would have sufficient power for FiWi
radio heads. Then it's punching a few holes, running fiber, splicing,
patching & painting, which is very straightforward work for the trades.
Rich people as early adopters could show off their infinitely capable
in-home networks. Installers could do a two-for deal: buy one and I'll
install another in a less fortunate community.
https://www.thespruce.com/install-hardwired-smoke-detectors-1152329

Shark Tank passed on the Ring deal - imagine having a real, life-support
capable, & future-proof network vs. just a silly doorbell w/camera.
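The power arithmetic above (a 9.6 W alarm circuit, 2 W per spatial stream,
1 W for the fiber) reduces to a one-line budget check; the function and
its defaults are just an illustrative sketch of that estimate:

```python
# Power-budget check for the retrofit idea: does a hardwired fire-alarm
# circuit (120 V at 80 mA) leave enough headroom for a FiWi radio head?
# Figures follow the estimates given in this thread.
CIRCUIT_W = 120 * 0.080          # 9.6 W available per alarm location
FIBER_W = 1.0                    # fiber optics draw
WATTS_PER_STREAM = 2.0           # per spatial stream at 160 MHz

def max_spatial_streams(budget_w: float = CIRCUIT_W) -> int:
    """Spatial streams that fit after reserving power for the fiber."""
    return int((budget_w - FIBER_W) // WATTS_PER_STREAM)

print(max_spatial_streams())     # 4 streams fit within the 9.6 W budget
```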

Bob

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] On FiWi power envelope
  2023-03-20 22:02                                                                   ` [LibreQoS] On FiWi power envelope rjmcmahon
@ 2023-03-20 23:47                                                                     ` Bruce Perens
  0 siblings, 0 replies; 183+ messages in thread
From: Bruce Perens @ 2023-03-20 23:47 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Dave Taht via Starlink, dan,
	Michael Richardson, libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 1217 bytes --]

It's time to break this discussion off into its own list, isn't it?

On Mon, Mar 20, 2023 at 3:03 PM rjmcmahon via Starlink <
starlink@lists.bufferbloat.net> wrote:

> [...]


-- 
Bruce Perens K6BP

[-- Attachment #2: Type: text/html, Size: 2023 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm]   On FiWi
  2023-03-20 21:28                                                               ` dan
  2023-03-20 21:38                                                                 ` Frantisek Borsik
@ 2023-03-21  0:10                                                                 ` Brandon Butterworth
  2023-03-21  5:21                                                                   ` Frantisek Borsik
  2023-03-21 12:30                                                                   ` [LibreQoS] [Rpm] [Starlink] " Sebastian Moeller
  1 sibling, 2 replies; 183+ messages in thread
From: Brandon Butterworth @ 2023-03-21  0:10 UTC (permalink / raw)
  To: dan
  Cc: Frantisek Borsik, Dave Taht via Starlink, Michael Richardson,
	libreqos, Rpm, rjmcmahon, bloat, brandon

On Mon Mar 20, 2023 at 03:28:57PM -0600, dan via Starlink wrote:
> I more or less agree with you Frantisek.   There are throughput numbers
> that are need for current gen and next gen services, but those are often
> met with 50-100Mbps plans today that are enough to handle multiple 4K
> streams plus browsing and so forth

It is for now; the question is how busy it will get, and whether that
happens before the next upgrade round.

This is why there's a push to sell gigabit in the UK.

It gives newcomer altnets something consumers can understand (a big
number) to market against the incumbents sweating old assets with
incremental upgrades that will become a problem. From my personal point of
view (doing active Ethernet), it seems pointless to make equipment more
expensive in order to enable lower speeds to be sold.

> yet no one talks about latency and packet loss and other useful metrics

Gamers get it and rate ISPs on it; nobody else cares. Part of the reason
for throwing bandwidth at the home is to ensure the hard-to-replace
distribution and house drop is never the problem. Backhaul becomes the
limit, and they can upgrade that more easily when market pressure from
speedtests shows there is a problem.

> We need a marketing/lobby group.  Not wispa or other individual industry
> groups, but one specifically for *ISPs that will contribute as well as
> implement policies and put that out on social media etc etc.  i don't know
> how we get there without a big player (ie Netflix, hulu..) contributing.

Peak-time congestion shows up fairly obviously in playback stats as a
reduction in average stream speed. Any large platform has lots of data on
which ISPs are performing well.

We can share stats with the ISPs and tell A that they are performing worse
than B, C, and D if there is a problem. I wanted to publish it so the
public could choose the best, but legal were not comfortable with that.
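The peak-vs-off-peak comparison described above can be sketched roughly as
follows; the peak window, threshold, and sample data are made-up
illustrations, not how any platform actually computes this:

```python
# Sketch of the playback-stats idea: flag ISPs whose average delivered
# bitrate sags at peak time relative to off-peak. Sample data is invented.
from statistics import mean

def congested_isps(samples, threshold=0.85):
    """samples: list of (isp, hour, mbps). Returns ISPs whose peak-hour
    (19:00-23:00) mean bitrate falls below `threshold` of off-peak."""
    peak, off = {}, {}
    for isp, hour, mbps in samples:
        (peak if 19 <= hour <= 23 else off).setdefault(isp, []).append(mbps)
    return sorted(isp for isp in peak
                  if isp in off and mean(peak[isp]) < threshold * mean(off[isp]))

data = [("A", 3, 24.0), ("A", 21, 15.0),   # ISP A sags badly at peak
        ("B", 3, 24.0), ("B", 21, 23.0)]   # ISP B holds up
print(congested_isps(data))                 # ['A']
```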

brandon

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm]  On FiWi
  2023-03-21  0:10                                                                 ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
@ 2023-03-21  5:21                                                                   ` Frantisek Borsik
  2023-03-21 11:26                                                                     ` [LibreQoS] Annoyed at 5/1 Mbps Rich Brown
  2023-03-21 12:29                                                                     ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
  2023-03-21 12:30                                                                   ` [LibreQoS] [Rpm] [Starlink] " Sebastian Moeller
  1 sibling, 2 replies; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-21  5:21 UTC (permalink / raw)
  To: brandon, dan
  Cc: Michael Richardson, bloat, Rpm, Dave Taht via Starlink, libreqos,
	rjmcmahon

[-- Attachment #1: Type: text/plain, Size: 3586 bytes --]

Even at Friday-evening Netflix time, there’s hardly more than 25/5 Mbps
being consumed.
Also, the real improvements that people will actually feel are on the
bufferbloat front (enterprise as well as residential).

If there’s just one single talk that everyone should watch from that
Understanding Latency webinar series I have shared, it’s this one, with
Gino Dion (Nokia Bell Labs), Magnus Olden (Domos - Latency Management) and
Angus Laurie-Pile (GameBench):
https://m.youtube.com/watch?v=MRmcWyIVXvg&t=1358s
It’s all about the 1-25 Gbps misconception, what we as techies did to put
it out there, and what can be done to show the customers and change
that… 40 minutes, but it’s WORTHWHILE.
It really shows that this goes beyond gamers - they were just the canary
in the coal mine pre-COVID.

Now, I hope to really piss You off with the following statement :-P but:

even sub-5/1 Mbps “broadband” in Africa, with bufferbloat fixed on as many
hops along the Internet journey from the data center to the customer’s
mobile device (or with just a LibreQoS middle box in the ISP’s network),
feels way better than 25 Gbps XG-PON. The only time the XG-PON guy could
really feel like the king of the world would be during his speedtest.



All the best,

Frank
Frantisek (Frank) Borsik


https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com





On 21 March 2023 at 1:10:21 AM, Brandon Butterworth (brandon@rd.bbc.co.uk)
wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 6370 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] Annoyed at 5/1 Mbps...
  2023-03-21  5:21                                                                   ` Frantisek Borsik
@ 2023-03-21 11:26                                                                     ` Rich Brown
  2023-03-21 12:31                                                                       ` [LibreQoS] [Starlink] " Sebastian Moeller
  2023-03-21 12:29                                                                     ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
  1 sibling, 1 reply; 183+ messages in thread
From: Rich Brown @ 2023-03-21 11:26 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: brandon, dan, Dave Taht via Starlink, Michael Richardson,
	libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 1182 bytes --]



> On Mar 21, 2023, at 1:21 AM, Frantisek Borsik via Rpm <rpm@lists.bufferbloat.net> wrote:
> 
> Now, I hope to really piss You off with the following statement  :-P but:
> 
> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as many hops along the internet journey from a data center to the customers mobile device (or with just LibreQoS middle box in the ISP’s network) is feeling way better than 25Gbps XG-PON. The only time the XG-PON guy could really feel like a king of the world would be during his speedtest.

Nope. Sorry - this doesn't piss me off :-) It's just true.

- 7 Mbps/768 kbps DSL with an IQrouter works fine for two simultaneous Zoom conferences (even though no one would think of it as fast).
- I recommend that people on a budget drop their ISP speed tier so they can afford a router that does SQM: https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305/40

The people who get annoyed are those who just upgraded to 1 Gbps service and are still getting fragged in their games.

Rich

[-- Attachment #2: Type: text/html, Size: 2877 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Rpm]  On FiWi
  2023-03-21  5:21                                                                   ` Frantisek Borsik
  2023-03-21 11:26                                                                     ` [LibreQoS] Annoyed at 5/1 Mbps Rich Brown
@ 2023-03-21 12:29                                                                     ` Brandon Butterworth
  1 sibling, 0 replies; 183+ messages in thread
From: Brandon Butterworth @ 2023-03-21 12:29 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: brandon, dan, Michael Richardson, bloat, Rpm,
	Dave Taht via Starlink, libreqos, rjmcmahon

On Mon Mar 20, 2023 at 10:21:10PM -0700, Frantisek Borsik wrote:
> Even at Friday evening Netflix time, there's hardly more than 25/5 Mbps
> consumed.

Today. Today has never been a good target when planning builds that
need to last the next decade. Fibre affords us the luxury of sufficient
capacity to reduce the infrastructure churn where we choose to.

> Also, the real improvements that will be really felt by the people are on
> the bufferbloat front (enterprise as well as residential)

That's a separate matter and needs addressing whatever the delivery
technology and speed.

> If there's just single one talk that everyone should watch from that
> Understanding Latency webinar series I have shared, it's this one, with
> Gino Dion (Nokia Bell Labs), Magnus Olden (Domos - Latency Management) and
> Angus Laurie-Pile (GameBench):
> https://m.youtube.com/watch?v=MRmcWyIVXvg&t=1358s
> It's all about the 1-25Gbps misconception, what we did to put it out there
> as techies, and what can be done to show the customers to change that? 40
> minutes, but it's WORTHWHILE.

TL;DL

I got as far as "how can we monetise latency", which says it all: nothing
gets fixed without a premium, and the way they were talking most people
will not get the fix, as it becomes an incentive to increase latency to
force more payment. The speed is immaterial in that.

> Now, I hope to really piss You off with the following statement  :-P but:
> 
> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as many
> hops along the internet journey from a data center to the customers mobile
> device (or with just LibreQoS middle box in the ISP’s network) is feeling
> way better than 25Gbps XG-PON. The only time the XG-PON guy could really
> feel like a king of the world would be during his speedtest.

So? Some companies will find ways to do things badly regardless, others
make best of what they have. Nothing to get annoyed at nor an argument
to not build faster networks.

I think I may have missed your point. What are you suggesting, we don't
build faster networks? A new (faster) network build is a great opportunity
to fix bufferbloat.

brandon

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]    On FiWi
  2023-03-21  0:10                                                                 ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
  2023-03-21  5:21                                                                   ` Frantisek Borsik
@ 2023-03-21 12:30                                                                   ` Sebastian Moeller
  2023-03-21 17:42                                                                     ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-21 12:30 UTC (permalink / raw)
  To: brandon; +Cc: dan, Rpm, libreqos, Dave Taht via Starlink, bloat

Hi Brandon,


> On Mar 21, 2023, at 01:10, Brandon Butterworth via Rpm <rpm@lists.bufferbloat.net> wrote:
> 
> On Mon Mar 20, 2023 at 03:28:57PM -0600, dan via Starlink wrote:
>> I more or less agree with you Frantisek.   There are throughput numbers
>> that are needed for current gen and next gen services, but those are often
>> met with 50-100Mbps plans today that are enough to handle multiple 4K
>> streams plus browsing and so forth
> 
> It is for now, question is how busy will it get and will that be before
> the next upgrade round.

	I agree these are rates that can work pretty well (assuming the upload is wide enough). This is also orthogonal to the point that both copper access networks have already reached, or are close to reaching, their reasonable end of life, so replacing copper with fiber seems a good way to future-proof the access network. But once you do that, you realize that actual traffic (at least for big ISPs that do not need to buy much transit and get cost-neutral peerings) is not that costly, so offering a 1 Gbps plan instead of a 100 Mbps one is a no-brainer: the customer is unlikely to actually source/sink that much more traffic, and you might get a few pound/EUR/$ more out of essentially the same load.

> 
> This is why there's a push to sell gigabit in the UK.

	I think this also holds for the EU.

> 
> It gives newcomer altnets something the consumers can understand - big
> number - to market against the incumbents sweating old assets
> with incremental upgrades that will become a problem. From my personal
> point of view (doing active ethernet) it seems pointless making
> equipment more expensive to enable lower speeds to be sold.


One additional reason for the "push for the gigabit" is political in nature. The national level of fiber deployment is treated as a sort of digital trump card with which different countries want to look good, taking available capacity (and more so the giga- prefix) as a proxy for digitalization and modernity. So if there are political mandates/desires to have a high average capacity, then ISPs will follow that mandate, especially since it is basically an extension of the existing marketing anyway...


>> yet no one talks about latency and packet loss and other useful metrics

	Fun fact: I am currently diagnosing issues with my ISP regarding packet loss. One of their gateways produces ~1% packet loss in the download direction, independent of load, wreaking havoc with speedtest results (not even BBR will tolerate 1% random loss without a noticeable throughput hit) and hence resulting in months of customer complaints the ISP did not manage to root-cause and fix... Realistically, the packet-loss rate without load should be really close to 0.
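As a rough illustration of why ~1% random loss wrecks speedtests for loss-based TCP flows (BBR is more tolerant, but as noted above not immune), the classic Mathis et al. approximation rate ≈ MSS/(RTT·√p) can be evaluated in a few lines of Python; this sketch is mine, not from the thread, and the 20 ms RTT is an illustrative assumption:

```python
from math import sqrt

def mathis_ceiling_mbps(loss, mss_bytes=1460, rtt_s=0.020):
    """Approximate Reno-style TCP throughput ceiling: MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

for p in (0.0001, 0.001, 0.01):  # 0.01 = the 1% random loss discussed above
    print(f"loss {p:.2%}: ~{mathis_ceiling_mbps(p):.1f} Mbps ceiling")
```

At 1% loss and 20 ms RTT a single loss-based flow tops out below 6 Mbps, so a speedtest through such a gateway looks broken regardless of the plan's headline rate.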


> Gamers get it and rate ISPs on it, nobody else cares. Part of the
> reason for throwing bandwith at the home is to ensure the hard to
> replace distribution and house drop is never the problem. Backhaul
> becomes the limit and they can upgrade that more easily when market
> pressure with speedtests show there is a problem.
> 
>> We need a marketing/lobby group.  Not wispa or other individual industry
>> groups, but one specifically for *ISPs that will contribute as well as
>> implement policies and put that out on social media etc etc.  i don't know
>> how we get there without a big player (ie Netflix, hulu..) contributing.
> 
> Peak time congestion through average stream speed reduction is fairly obvious
> in playback stats. Any large platform has lots of data on which ISPs
> are performing well.
> 
> We can share stats with the ISPs and tell A that they are performing
> worse than B,C,D if there is a problem. I did want to publish it so
> the public could choose the best but legal were not comfortable
> with that.
> 
> brandon
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Annoyed at 5/1 Mbps...
  2023-03-21 11:26                                                                     ` [LibreQoS] Annoyed at 5/1 Mbps Rich Brown
@ 2023-03-21 12:31                                                                       ` Sebastian Moeller
  2023-03-21 12:53                                                                         ` Rich Brown
  2023-03-21 17:22                                                                         ` dan
  0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-21 12:31 UTC (permalink / raw)
  To: Rich Brown; +Cc: Frantisek Borsik, dan, bloat, libreqos

I have to push back gently on this...

XG(S)-PON is gross 10 Gbps (after FEC you are left with around 8.6 Gbps); Nokia's proprietary (i.e., not ITU) 25 Gbps PON seems to be abbreviated 25GS-PON.

Now XGS-PON allows maximally 128 end-nodes in the tree, so:
8600/128 = 67.18 Mbps/subscriber

unless the ISPs royally screwed up the configuration, there should be a CIR per subscriber of around 60 Mbps. So setting your cake shaper to 50 Mbps should give you:
a) 10 times the throughput of the 5/1 Mbps DSL (ignoring overhead compensation for a change, which likely will be in favor of PON)
b) decent low latency, round robin delay for full MTU packets between 128 active nodes would be: 
	packet/sec: ((8.6 * 1000^3)/(1500*8)) = 716666.666667
	millisec/packet: 1000 / ((8.6 * 1000^3)/(1500*8)) = 0.00139534883721
	round-robin delay 128: 128 * 1000 / ((8.6 * 1000^3)/(1500*8)) = 0.178604651163 milliseconds...

	DSL uses a 4KHz clock so 1000/4000 = 0.25 millisecond quantization
So XGS-PON has at least theoretical potential to deliver lower latency than DSL, but the details depend on if/how packets are aggregated. HOWEVER the 125µsec GPON frames can be shared between different ONUs in upstream and downstream direction... so these are not a hard quantisation but more the interval between control information required for the access grant cycle...

c) robustness against RF noise sources and electricity/lightning

So I am not so sure I would prefer the 5/1 (A)DSL over a PON... 

That however is orthogonal to me preferring a competent ISP that takes care of keeping latency under load at bay.
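The round-robin arithmetic above can be re-derived with a short Python sketch (mine, not part of the original mail; the 8.6 Gbps post-FEC rate and 128-way split are taken from the text):

```python
# Re-deriving the XGS-PON round-robin latency numbers quoted above.
rate_bps = 8.6e9      # usable XGS-PON rate after FEC, from the mail
mtu_bits = 1500 * 8   # one full-MTU packet in bits
onus = 128            # maximum end-nodes on the PON tree

pkts_per_sec = rate_bps / mtu_bits   # ~716667 packets/s
ms_per_pkt = 1000 / pkts_per_sec     # ~0.0014 ms per packet
rr_delay_ms = onus * ms_per_pkt      # ~0.179 ms for one full round

print(f"{pkts_per_sec:.0f} pkt/s, {ms_per_pkt:.5f} ms/pkt, "
      f"{rr_delay_ms:.3f} ms for one round over {onus} ONUs")
```

which reproduces the ~0.18 ms figure, comfortably below the 0.25 ms DSL quantization mentioned above.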



> On Mar 21, 2023, at 12:26, Rich Brown via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> 
> 
>> On Mar 21, 2023, at 1:21 AM, Frantisek Borsik via Rpm <rpm@lists.bufferbloat.net> wrote:
>> 
>> Now, I hope to really piss You off with the following statement  :-P but:
>> 
>> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as many hops along the internet journey from a data center to the customers mobile device (or with just LibreQoS middle box in the ISP’s network) is feeling way better than 25Gbps XG-PON. The only time the XG-PON guy could really feel like a king of the world would be during his speedtest.
> 
> Nope. Sorry - this doesn't piss me off :-) It's just true. 
> 
> - 7mbps/768kbps DSL with an IQrouter works fine for two simultaneous Zoom conferences. (Even though no one would think that it's fast.)
> - I recommend people on a budget drop their ISP speed so they can afford a router that does SQM https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305/40

	Even simpler, even on a 100Gbps link nobody stops you from setting your shaper to 50/10 if that is all your router can deliver (and I agree if there are cheaper plans closer to the 50/10 it makes economic sense to scale down the plan)...
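For reference, the "set your shaper below the link rate" idea looks roughly like this with cake on Linux (a sketch only, not from the thread; the eth0/ifb0 interface names are assumptions, and it needs root plus the sch_cake and ifb modules):

```shell
# Upload: shape egress on the WAN interface to 10 Mbps with cake.
tc qdisc replace dev eth0 root cake bandwidth 10mbit

# Download: redirect ingress through an ifb device and shape it to 50 Mbps.
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 50mbit ingress
```

This is essentially what SQM on OpenWrt automates.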


> 
> The people that get annoyed are those who just upgraded to 1Gbps service and still are getting fragged in their games.
> 
> Rich
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Annoyed at 5/1 Mbps...
  2023-03-21 12:31                                                                       ` [LibreQoS] [Starlink] " Sebastian Moeller
@ 2023-03-21 12:53                                                                         ` Rich Brown
  2023-03-21 17:22                                                                         ` dan
  1 sibling, 0 replies; 183+ messages in thread
From: Rich Brown @ 2023-03-21 12:53 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Frantisek Borsik, dan, bloat, libreqos

[-- Attachment #1: Type: text/plain, Size: 452 bytes --]



> On Mar 21, 2023, at 8:31 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
> I have to push back gently on this...
> 
...

> So I am not su sure I would prefer the 5/1 (A)DSL over a PON... 
> 
> That however is orthogonal to me preferring a competent ISP that takes care of keeping latency under load at bay.

OK. I concede. PON (or even a 25/25 Mbps connection) is way better than DSL. As long as I can use a router with SQM :-)

Rich

[-- Attachment #2: Type: text/html, Size: 3890 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Annoyed at 5/1 Mbps...
  2023-03-21 12:31                                                                       ` [LibreQoS] [Starlink] " Sebastian Moeller
  2023-03-21 12:53                                                                         ` Rich Brown
@ 2023-03-21 17:22                                                                         ` dan
  2023-03-21 19:04                                                                           ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: dan @ 2023-03-21 17:22 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Rich Brown, Frantisek Borsik, bloat, libreqos

[-- Attachment #1: Type: text/plain, Size: 3851 bytes --]

GPON is TDMA so the latency is going to be at a minimum the RTT * connected
ONUs, vs DSL which is a fixed ratio/scheduler.

Standard GPON deployments are typically well over 1 second to the OLT.  Not
that it's bad or anything, but in comparison GPON has very 'wireless' like
best case latency but without the wireless variances.

On Tue, Mar 21, 2023 at 6:31 AM Sebastian Moeller <moeller0@gmx.de> wrote:

> I have to push back gently on this...
>
> XG(S)-PON is gross 10 Gbps (after FEC you are left with around 8.6 Gbps);
> Nokia's proprietary (i.e., not ITU) 25 Gbps PON seems to be abbreviated
> 25GS-PON.
>
> Now XGS-PON allows maximally 128 end-nodes in the tree, so:
> 8600/128 = 67.18 Mbps/subscriber
>
> unless the ISPs royally screwed up the configuration there should be a CIR
> per subscriber of around 60 Mbps. So setting your cake shaper to 50 Mbps
> should give you:
> a) 10 times the throughput of the 5/1 Mbps DSL (ignoring overhead
> compensation for a change, which likely will be in favor of PON)
> b) decent low latency, round robin delay for full MTU packets between 128
> active nodes would be:
>         packet/sec: ((8.6 * 1000^3)/(1500*8)) = 716666.666667
>         millisec/packet: 1000 / ((8.6 * 1000^3)/(1500*8)) =
> 0.00139534883721
>         round-robin delay 128: 128 * 1000 / ((8.6 * 1000^3)/(1500*8)) =
> 0.178604651163 milliseconds...
>
>         DSL uses a 4KHz clock so 1000/4000 = 0.25 millisecond quantization
> So XGS-PON has at least theoretical potential to deliver lower latency
> than DSL, but the details depend on if/how packets are aggregated. HOWEVER
> the 125µsec GPON frames can be shared between different ONUs in upstream
> and downstream direction... so these are not a hard quantisation but more
> the interval between control information required for the access grant
> cycle...
>
> c) robustness against RF noise sources and electricity/lightning
>
> So I am not so sure I would prefer the 5/1 (A)DSL over a PON...
>
> That however is orthogonal to me preferring a competent ISP that takes
> care of keeping latency under load at bay.
>
>
>
> > On Mar 21, 2023, at 12:26, Rich Brown via Starlink <
> starlink@lists.bufferbloat.net> wrote:
> >
> >
> >
> >> On Mar 21, 2023, at 1:21 AM, Frantisek Borsik via Rpm <
> rpm@lists.bufferbloat.net> wrote:
> >>
> >> Now, I hope to really piss You off with the following statement  :-P
> but:
> >>
> >> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as
> many hops along the internet journey from a data center to the customers
> mobile device (or with just LibreQoS middle box in the ISP’s network) is
> feeling way better than 25Gbps XG-PON. The only time the XG-PON guy could
> really feel like a king of the world would be during his speedtest.
> >
> > Nope. Sorry - this doesn't piss me off :-) It's just true.
> >
> > - 7mbps/768kbps DSL with an IQrouter works fine for two simultaneous
> Zoom conferences. (Even though no one would think that it's fast.)
> > - I recommend people on a budget drop their ISP speed so they can afford
> a router that does SQM
> https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305/40
>
>         Even simpler, even on a 100Gbps link nobody stops you from setting
> your shaper to 50/10 if that is all your router can deliver (and I agree if
> there are cheaper plans closer to the 50/10 it makes economic sense to
> scale down the plan)...
>
>
> >
> > The people that get annoyed are those who just upgraded to 1Gbps service
> and still are getting fragged in their games.
> >
> > Rich
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>

[-- Attachment #2: Type: text/html, Size: 4755 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]    On FiWi
  2023-03-21 12:30                                                                   ` [LibreQoS] [Rpm] [Starlink] " Sebastian Moeller
@ 2023-03-21 17:42                                                                     ` rjmcmahon
  2023-03-21 18:08                                                                       ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-21 17:42 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: brandon, Rpm, Dave Taht via Starlink, dan, libreqos, bloat

I think we may all be still stuck on numbers. Since infinity is taken, 
the new marketing number is "infinity & beyond" per Buzz Lightyear

Here's what I want, I'm sure others have ideas too:

o) We all deserve COPPA. Get the advertiser & their cohorts to stop 
mining my data & communications - limit or prohibit access to my 
information by those who continue to violate privacy rights
o) An unlimited storage offering with the lowest possible latency paid 
for annually. That equipment ends up as close as possible to my main 
home per speed of light limits.
o) Security of my network including 24x7x365 monitoring for breaches and 
for performance
  o) Access to any cloud software app. Google & Apple are getting 
something like 30% for every app on a phone. Seems like a last-mile 
provider should get a revenue share for hosting apps that aren't being 
downloaded. Blockbuster did this for DVDs before streaming took over. 
Revenue shares done properly, while imperfect, can work.
o) A life-support capable, future proof, componentized, leash-free, 
in-home network that is dual-homed over the last mile for redundancy
o) Per room FiWi and sensors that can be replaced and upgraded by me 
ordering and swapping the parts without an ISP getting all my neighbors' 
consensus & buy in
o) VPN capabilities & offerings to the content rights owners' 
intellectual property for when the peering agreements fall apart
o) Video conferencing that works 24x7x365 on all devices
o) A single & robust shut-off circuit

Bob

PS. I think the sweet spot may turn out to be 100Gb/s when considering 
climate impact. Type 2 emissions are a big deal so we need to deliver 
the fastest causality possible (incl. no queueing) at the lowest energy 
consumption engineers can achieve.


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]    On FiWi
  2023-03-21 17:42                                                                     ` rjmcmahon
@ 2023-03-21 18:08                                                                       ` rjmcmahon
  2023-03-21 18:51                                                                         ` Frantisek Borsik
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-21 18:08 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, brandon,
	libreqos, Rpm, bloat

Also, I want my network to be the color clear because I value 
transparency, honesty, and clarity.

https://carbuzz.com/news/car-colors-are-more-important-to-buyers-than-you-think

"There are many factors to consider when buying a new car, from price 
and comfort to safety equipment. For many people, color is another 
important factor since it reflects their personality."

"In a study by Automotive Color Preferences 2021 Consumer Survey, 4,000 
people aged 25 to 60 in four of the largest car markets in the world 
(China, Germany, Mexico and the US) were asked about their car color 
preferences. Out of these, 88 percent said that color is a key deciding 
factor when buying a car."

Bob
> I think we may all be still stuck on numbers. Since infinity is taken,
> the new marketing number is "infinity & beyond" per Buzz Lightyear
> 
> Here's what I want, I'm sure others have ideas too:
> 
> o) We all deserve COPPA. Get the advertiser & their cohorts to stop
> mining my data & communications - limit or prohibit access to my
> information by those who continue to violate privacy rights
> o) An unlimited storage offering with the lowest possible latency paid
> for annually. That equipment ends up as close as possible to my main
> home per speed of light limits.
> o) Security of my network including 24x7x365 monitoring for breaches
> and for performance
>  o) Access to any cloud software app. Google & Apple are getting
> something like 30% for every app on a phone. Seems like a last-mile
> provider should get a revenue share for hosting apps that aren't being
> downloaded. Blockbuster did this for DVDs before streaming took over.
> Revenue shares done properly, while imperfect, can work.
> o) A life-support capable, future proof, componentized, leash-free,
> in-home network that is dual-homed over the last mile for redundancy
> o) Per room FiWi and sensors that can be replaced and upgraded by me
> ordering and swapping the parts without an ISP getting all my
> neighbors' consensus & buy in
> o) VPN capabilities & offerings to the content rights owners'
> intellectual property for when the peering agreements fall apart
> o) Video conferencing that works 24x7x365 on all devices
> o) A single & robust shut-off circuit
> 
> Bob
> 
> PS. I think the sweet spot may turn out to be 100Gb/s when considering
> climate impact. Type 2 emissions are a big deal so we need to deliver
> the fastest causality possible (incl. no queueing) at the lowest
> energy consumption engineers can achieve.
> 
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  On FiWi
  2023-03-21 18:08                                                                       ` rjmcmahon
@ 2023-03-21 18:51                                                                         ` Frantisek Borsik
  2023-03-21 19:58                                                                           ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-21 18:51 UTC (permalink / raw)
  Cc: Rpm, dan, brandon, libreqos, Dave Taht via Starlink, bloat, rjmcmahon

[-- Attachment #1: Type: text/plain, Size: 5688 bytes --]

I do believe that we all want to get the best - latency and speed,
hopefully in this particular order :-)
The problem is that from the very beginning of the Internet (yeah, I was
not here yet, on this planet, when it all started), everything was
optimised for speed, bandwidth and other numbers, but not so much for
bufferbloat in general.
Some of the things that go into the need for speed work directly against
fixing latency... the Internet was simply not set up for it. Gamers and
Covid (work from home - the need for an enterprise-grade network, but in
homes...) brought it into the conversation, thankfully, and now we will
deal with it.

Also, there is another thing I see, and it's *a negative sentiment
against anything business* (monetisation of, say, lower-latency
solutions) in general. If it comes from the general geeky/open
source/etc. folks, I can understand it a bit. But it comes also from
business people - assuming some of You work in big corporations or run
ISPs. I'm all against cronyism, but to throw out the baby with the
bathwater - to say that doing business (i.e. getting paid for delivering
something that is missing/fixing something that is implemented
insufficiently) is wrong, to look at it with disdain - is asinine.

This is connected to the general "Net Neutrality" (NN) sentiment. I
have 2 suggestions for reading from the other side of the aisle on this
topic: https://www.martingeddes.com/1261-2 (Martin was censored by all
major social media back then, during the days of the NN fight in the FCC
and elsewhere.) The second is written by the one and only Dave Taht:
https://blog.cerowrt.org/post/net_neutrality_customers/

*To conclude, we need to find a way to benchmark and/or communicate
(translate, if You will) the whole variety of network-quality
statistics/metrics (which are complex)* like QoE, QoS, latency, jitter,
bufferbloat... into something that is meaningful for the end user. See
this short proposition of the *Quality of Outcome* by Domos:
https://www.youtube.com/watch?app=desktop&v=MRmcWyIVXvg&t=4185s
There is definitely a lot of work left on this - also on finding the
right benchmark and its actual measurement - but it's a step in the
right direction.

*Looking forward to seeing Your take on that proposed Quality of Outcome.
Thanks a lot.*

All the best,

Frank

Frantisek (Frank) Borsik



https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com


On Tue, Mar 21, 2023 at 7:08 PM rjmcmahon via Rpm <rpm@lists.bufferbloat.net>
wrote:

> Also, I want my network to be the color clear because I value
> transparency, honesty, and clarity.
>
>
> https://carbuzz.com/news/car-colors-are-more-important-to-buyers-than-you-think
>
> "There are many factors to consider when buying a new car, from price
> and comfort to safety equipment. For many people, color is another
> important factor since it reflects their personality."
>
> "In a study by Automotive Color Preferences 2021 Consumer Survey, 4,000
> people aged 25 to 60 in four of the largest car markets in the world
> (China, Germany, Mexico and the US) were asked about their car color
> preferences. Out of these, 88 percent said that color is a key deciding
> factor when buying a car."
>
> Bob
> > I think we may all be still stuck on numbers. Since infinity is taken,
> > the new marketing number is "infinity & beyond" per Buzz Lightyear
> >
> > Here's what I want, I'm sure others have ideas too:
> >
> > o) We all deserve COPPA. Get the advertiser & their cohorts to stop
> > mining my data & communications - limit or prohibit access to my
> > information by those who continue to violate privacy rights
> > o) An unlimited storage offering with the lowest possible latency paid
> > for annually. That equipment ends up as close as possible to my main
> > home per speed of light limits.
> > o) Security of my network including 24x7x365 monitoring for breaches
> > and for performance
> >  o) Access to any cloud software app. Google & Apple are getting
> > something like 30% for every app on a phone. Seems like a last-mile
> > provider should get a revenue share for hosting apps that aren't being
> > downloaded. Blockbuster did this for DVDs before streaming took over.
> > Revenue shares done properly, while imperfect, can work.
> > o) A life-support capable, future proof, componentized, leash-free,
> > in-home network that is dual-homed over the last mile for redundancy
> > o) Per room FiWi and sensors that can be replaced and upgraded by me
> > ordering and swapping the parts without an ISP getting all my
> > neighbors' consensus & buy in
> > o) VPN capabilities & offerings to the content rights owners'
> > intellectual property for when the peering agreements fall apart
> > o) Video conferencing that works 24x7x365 on all devices
> > o) A single & robust shut-off circuit
> >
> > Bob
> >
> > PS. I think the sweet spot may turn out to be 100Gb/s when considering
> > climate impact. Type 2 emissions are a big deal so we need to deliver
> > the fastest causality possible (incl. no queueing) at the lowest
> > energy consumption engineers can achieve.
> >
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
>

[-- Attachment #2: Type: text/html, Size: 8176 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Annoyed at 5/1 Mbps...
  2023-03-21 17:22                                                                         ` dan
@ 2023-03-21 19:04                                                                           ` Sebastian Moeller
  2023-03-23 18:23                                                                             ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-21 19:04 UTC (permalink / raw)
  To: dan; +Cc: Rich Brown, Frantisek Borsik, bloat, libreqos

Hi Dan,


> On Mar 21, 2023, at 18:22, dan <dandenson@gmail.com> wrote:
> 
> GPON is TDMA so the latency is going to be at a minimum the RTT * connected ONUs, vs DSL which is a fixed ratio/scheduler.  

	Assuming no proactive grants (are these a thing in PON, or only in DOCSIS?)... but since GPON frames can be shared between ONUs, how do you derive the "RTT * connected ONUs" formula?


> Standard GPON deployments are typically well over 1 second to the OLT.

	I read that as millisecond, which would mean 8 GPON frames... enough for sending the request, processing and arbitrating all requests, assigning transmit slots and sending the transmit maps back to the ONUs, which then actually need to send the packets... RTT should not be all that noticeable: at 20 km the wave propagation of light in fiber would be around 2*(20000/300000000 * 3/2)*1000 = 0.2 milliseconds... (not sure what a realistic maximum length for a PON tree is, which probably depends on a number of things anyway, but Google says up to 20 km for GPON)... but that RTT would be the same for active ethernet...
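The propagation figure can be sanity-checked in a couple of lines of Python (my sketch, not from the mail):

```python
# Round-trip light propagation over 20 km of fiber (refractive index ~1.5).
c_vacuum = 3e8             # speed of light in vacuum, m/s
v_fiber = c_vacuum / 1.5   # ~2e8 m/s in glass
dist_m = 20_000            # 20 km maximum GPON reach

rtt_ms = 2 * (dist_m / v_fiber) * 1000
print(f"fiber RTT over {dist_m / 1000:.0f} km: {rtt_ms:.2f} ms")
```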

>  Not that it's bad or anything, but in comparison GPON has very 'wireless' like best case latency but without the wireless variances.

	All centrally scheduled link layers will have similar challenges I guess?

Kind Regards
	Sebastian

> 
> On Tue, Mar 21, 2023 at 6:31 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> I have to push back gently on this...
> 
> XG(S)-PON is gross 10 Gbps (after FEC you are left with around 8.6 Gbps); Nokia's proprietary (i.e., not ITU) 25 Gbps PON seems to be abbreviated 25GS-PON.
> 
> Now XGS-PON allows maximally 128 end-nodes in the tree, so:
> 8600/128 = 67.18 Mbps/subscriber
> 
> unless the ISPs royally screwed up the configuration, there should be a CIR per subscriber of around 60 Mbps. So setting your cake shaper to 50 Mbps should give you:
> a) 10 times the throughput of the 5/1 Mbps DSL (ignoring overhead compensation for a change, which likely will be in favor of PON)
> b) decent low latency, round robin delay for full MTU packets between 128 active nodes would be: 
>         packet/sec: ((8.6 * 1000^3)/(1500*8)) = 716666.666667
>         millisec/packet: 1000 / ((8.6 * 1000^3)/(1500*8)) = 0.00139534883721
>         round-robin delay 128: 128 * 1000 / ((8.6 * 1000^3)/(1500*8)) = 0.178604651163 milliseconds...
> 
>         DSL uses a 4KHz clock so 1000/4000 = 0.25 millisecond quantization
> So XGS-PON has at least theoretical potential to deliver lower latency than DSL, but the details depend on if/how packets are aggregated. HOWEVER the 125µsec GPON frames can be shared between different ONUs in upstream and downstream direction... so these are not a hard quantisation but more the interval between control information required for the access grant cycle...
> 
> c) robustness against RF noise sources and electricity/lightning
> 
> So I am not so sure I would prefer the 5/1 (A)DSL over a PON... 
> 
> That however is orthogonal to me preferring a competent ISP that takes care of keeping latency under load at bay.
> 
> 
> 
> > On Mar 21, 2023, at 12:26, Rich Brown via Starlink <starlink@lists.bufferbloat.net> wrote:
> > 
> > 
> > 
> >> On Mar 21, 2023, at 1:21 AM, Frantisek Borsik via Rpm <rpm@lists.bufferbloat.net> wrote:
> >> 
> >> Now, I hope to really piss You off with the following statement  :-P but:
> >> 
> >> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as many hops along the internet journey from a data center to the customers mobile device (or with just LibreQoS middle box in the ISP’s network) is feeling way better than 25Gbps XG-PON. The only time the XG-PON guy could really feel like a king of the world would be during his speedtest.
> > 
> > Nope. Sorry - this doesn't piss me off :-) It's just true. 
> > 
> > - 7mbps/768kbps DSL with an IQrouter works fine for two simultaneous Zoom conferences. (Even though no one would think that it's fast.)
> > - I recommend people on a budget drop their ISP speed so they can afford a router that does SQM https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305/40
> 
>         Even simpler, even on a 100Gbps link nobody stops you from setting your shaper to 50/10 if that is all your router can deliver (and I agree if there are cheaper plans closer to the 50/10 it makes economic sense to scale down the plan)...
> 
> 
> > 
> > The people that get annoyed are those who just upgraded to 1Gbps service and still are getting fragged in their games.
> > 
> > Rich
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> 


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Rpm] [Starlink]  On FiWi
  2023-03-21 18:51                                                                         ` Frantisek Borsik
@ 2023-03-21 19:58                                                                           ` rjmcmahon
  2023-03-21 20:06                                                                             ` [LibreQoS] [Bloat] " David Lang
  2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
  0 siblings, 2 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-21 19:58 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: Rpm, dan, brandon, libreqos, Dave Taht via Starlink, bloat

I was around when BGP & other critical junctures 
(https://en.wikipedia.org/wiki/Critical_juncture_theory) shaped the 
commercial internet. Here's a short write-up from another thread with 
some thoughts 
(Note: there are no queues in the Schramm Model 
https://en.wikipedia.org/wiki/Schramm%27s_model_of_communication )

On why we're here.

I think Stuart's point about not having the correct framing is spot on. 
I also think part of that may come from the internet's origin story 
so-to-speak. In the early days of the commercial internet, ISPs formed 
by buying MODEM banks from suppliers and connecting them to the 
telephone company central offices (thanks Strowger!) and then leasing T1 
lines from the same telco, connecting the two.  Products like a Cisco 
Access Gateway were used for the MODEM side. The 4K independent ISPs 
formed in the U.S. took advantage of statistical multiplexing per IP 
packets to optimize the PSTN's time division multiplexing (TDM) design. 
That design had a lot of extra capacity because of the mother's day 
problem - the network had to carry the peak volume of calls. It was 
always odd to me that the telephone companies basically contracted out 
the statistical-to-TDM coupling of networks rather than doing it themselves. 
This was rectified with broadband and most all the independent ISPs went 
out of business.

IP statistical multiplexing was great except for one thing. The attached 
computers were faster than their network i/o so TCP had to do things 
like congestion control to avoid network collapse based on congestion 
signals (and a very imperfect control loop.) Basically, that extra TDM 
capacity for voice calls was consumed very quickly. This set in motion 
the idea that network channel capacity is a proxy for computer speed as 
when networks are underprovisioned and congested that's basically 
accurate. Van Jacobson's work was most always about congestion on what 
today are bandwidth constrained networks.

This also started a bit of a cultural war colloquially known as 
Bellheads vs Netheads. The human engineers took sides more or less. The 
netheads mostly kept increasing capacity. The market demand curve for 
computer connections drove this. It's come to a head though, in that 
netheads most always overprovisioned similar to solving the mother's day 
problem. (This is different from the electric build out where the goal 
is to drive peak and average loads to merge in order to keep generators 
efficient at a constant speed.)

Many were first stuck with the concept of bandwidth scarcity per those 
origins. But then came bandwidth abundance and many haven't adjusted. 
Mental block number one. Mental block two occurs when one sees all that 
bandwidth and says, let's use it all as it's going to be scarce, like a 
Great Depression-era person hoarding basic items.

A digression; This isn't that much different in the early days before 
Einstein. Einstein changed thinking by realizing that the speed of 
causality was defined or limited by the speed of massless particles, 
i.e. energy or photons. We all come from energy in one way or another. 
So of course it makes sense that our causality system, e.g. aging, is 
determined by that speed. It had to be relative for Maxwell's equations 
to be held true - which Einstein agreed with as true irrelevant of 
inertial frame. A leap for us comes when we realize that the speed of 
causality, i.e. time, is fundamentally the speed of energy.  It's true 
for all clocks, objects, etc. even computers.

So when we engineer systems that queue information, we don't slow down 
energy, we slow down information. Computers are mass information tools 
so slowing down information slows down distributed compute. As Stuart 
says, "It's the latency, stupid".  It's physics too.

I was trying to explain to a dark fiber provider that I wanted 100Gb/s 
SFPs to a residential building in Boston. They said, nobody needs 
100Gb/s and that's correct from a link capacity perspective. But the 
economics & energy required for the lowest latency per bit delivered 
actually is 100Gb/s SERDES attached to lasers attached to fiber.

What we really want is low latency at the lowest energy possible, and 
also to be unleashed from cables (as we're not dogs.) Hence FiWi.

Bob

> I do believe that we all want to get the best - latency and speed,
> hopefully, in this particular order :-)
> The problem was that from the very beginning of the Internet (yeah, I
> was still not here, on this planet, when it all started), everything
> was optimised for speed, bandwidth and other numbers, but not so much
> for bufferbloat in general.
> Some of the things that go into the need for speed work
> directly against fixing latency...and it was not set up for it.
> Gamers and Covid (work from home, the need for the enterprise network
> but in homes...) brings it into conversation, thankfully, and now we
> will deal with it.
> 
> Also, there is another thing I see and it's a negative sentiment
> against anything business (monetisation of, say - lower latency
> solutions) in general. If it comes from the general geeky/open
> source/etc folks, I can understand it a bit. But it comes also from
> the business people - assuming some of You works in big corporations
> or run ISPs. I'm all against cronyism, but to throw out the baby with
> the bathwater - to say that doing business (i.e. getting paid for
> delivering something that is missing/fixing something that is
> implementing insufficiently) is wrong, to look at it with disdain, is
> asinine.
> 
> This has the connection with the general "Net Neutrality" (NN)
> sentiment. I have 2 suggestions for reading from the other side of the
> aisle, on this topic: https://www.martingeddes.com/1261-2 [1]/ (Martin
> was censored by all major social media back then, during the days of
> NN fight in the FCC and elsewhere.) Second thing is written by one and
> only Dave Taht:
> https://blog.cerowrt.org/post/net_neutrality_customers/
> 
> To conclude, we need to find the way how to benchmark and/or
> communicate (translate, if You will) the whole variety of the quality
> of network statistics/metrics (which are complex) like QoE, QoS,
> latency, jitter, bufferbloat...to something, that is meaningful for
> the end user. See this short proposition of the Quality of Outcome by
> Domos: https://www.youtube.com/watch?app=desktop&v=MRmcWyIVXvg&t=4185s
> There is definitely a lot of work on this - and also on the finding
> the right benchmark and its actual measurement side, but it's a step
> in the right direction.
> 
> Looking forward to seeing Your take on that proposed Quality of
> Outcome. Thanks a lot.
> 
> All the best,
> 
> Frank
> 
> Frantisek (Frank) Borsik
> 
> https://www.linkedin.com/in/frantisekborsik
> 
> Signal, Telegram, WhatsApp: +421919416714
> 
> iMessage, mobile: +420775230885
> 
> Skype: casioa5302ca
> 
> frantisek.borsik@gmail.com
> 
> On Tue, Mar 21, 2023 at 7:08 PM rjmcmahon via Rpm
> <rpm@lists.bufferbloat.net> wrote:
> 
>> Also, I want my network to be the color clear because I value
>> transparency, honesty, and clarity.
>> 
>> 
> https://carbuzz.com/news/car-colors-are-more-important-to-buyers-than-you-think
>> 
>> "There are many factors to consider when buying a new car, from
>> price
>> and comfort to safety equipment. For many people, color is another
>> important factor since it reflects their personality."
>> 
>> "In a study by Automotive Color Preferences 2021 Consumer Survey,
>> 4,000
>> people aged 25 to 60 in four of the largest car markets in the world
>> 
>> (China, Germany, Mexico and the US) were asked about their car color
>> 
>> preferences. Out of these, 88 percent said that color is a key
>> deciding
>> factor when buying a car."
>> 
>> Bob
>>> I think we may all be still stuck on numbers. Since infinity is
>> taken,
>>> the new marketing number is "infinity & beyond" per Buzz Lightyear
>>> 
>>> Here's what I want, I'm sure others have ideas too:
>>> 
>>> o) We all deserve COPPA. Get the advertiser & their cohorts to
>> stop
>>> mining my data & communications - limit or prohibit access to my
>>> information by those who continue to violate privacy rights
>>> o) An unlimited storage offering with the lowest possible latency
>> paid
>>> for annually. That equipment ends up as close as possible to my
>> main
>>> home per speed of light limits.
>>> o) Security of my network including 24x7x365 monitoring for
>> breaches
>>> and for performance
>>> o) Access to any cloud software app. Google & Apple are getting
>>> something like 30% for every app on a phone. Seems like a
>> last-mile
>>> provider should get a revenue share for hosting apps that aren't
>> being
>>> downloaded. Blockbuster did this for DVDs before streaming took
>> over.
>>> Revenue shares done properly, while imperfect, can work.
>>> o) A life-support capable, future proof, componentized,
>> leash-free,
>>> in-home network that is dual-homed over the last mile for
>> redundancy
>>> o) Per room FiWi and sensors that can be replaced and upgraded by
>> me
>>> ordering and swapping the parts without an ISP getting all my
>>> neighbors' consensus & buy in
>>> o) VPN capabilities & offerings to the content rights owners'
>>> intellectual property for when the peering agreements fall apart
>>> o) Video conferencing that works 24x7x365 on all devices
>>> o) A single & robust shut-off circuit
>>> 
>>> Bob
>>> 
>>> PS. I think the sweet spot may turn out to be 100Gb/s when
>> considering
>>> climate impact. Type 2 emissions are a big deal so we need to
>> deliver
>>> the fastest causality possible (incl. no queueing) at the lowest
>>> energy consumption engineers can achieve.
>>> 
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> 
> 
> Links:
> ------
> [1] https://www.martingeddes.com/1261-2

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Rpm] [Starlink]  On FiWi
  2023-03-21 19:58                                                                           ` rjmcmahon
@ 2023-03-21 20:06                                                                             ` David Lang
  2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-21 20:06 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Dave Taht via Starlink, dan, brandon, libreqos,
	Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 11635 bytes --]

I'll point out that pre-Internet, there was UUCP and dialups between the 
computers, not even always-on links. So latency was 'wait until the next dialup 
session' and bandwidth was the critical issue.

most of the early applications worked with this environment, so the transition 
to always-connected still didn't have a strong latency driver. It's only as the 
web grew (and other real-time apps were introduced) that latency began to be 
more significant than bulk bandwidth.

but as you say, people haven't wrapped their heads around 'bandwidth is 
available' yet.

David Lang


On Tue, 21 Mar 2023, rjmcmahon via Bloat wrote:

> Date: Tue, 21 Mar 2023 12:58:17 -0700
> From: rjmcmahon via Bloat <bloat@lists.bufferbloat.net>
> Reply-To: rjmcmahon <rjmcmahon@rjmcmahon.com>
> To: Frantisek Borsik <frantisek.borsik@gmail.com>
> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>,
>     dan <dandenson@gmail.com>, brandon@rd.bbc.co.uk,
>     libreqos <libreqos@lists.bufferbloat.net>,
>     Rpm <rpm@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] On FiWi
> 
> I was around when BGP & other critical junctures 
> https://en.wikipedia.org/wiki/Critical_juncture_theory  the commercial 
> internet. Here's a short write-up from another thread with some thoughts 
> (Note: there are no queues in the Schramm Model 
> https://en.wikipedia.org/wiki/Schramm%27s_model_of_communication )
>
> On why we're here.
>
> I think Stuart's point about not having the correct framing is spot on. 
> I also think part of that may come from the internet's origin story 
> so-to-speak. In the early days of the commercial internet, ISPs formed 
> by buying MODEM banks from suppliers and connecting them to the 
> telephone company central offices (thanks Strowger!) and then leasing T1 
> lines from the same telco, connecting the two.  Products like a Cisco 
> Access Gateway were used for the MODEM side. The 4K independent ISPs 
> formed in the U.S. took advantage of statistical multiplexing per IP 
> packets to optimize the PSTN's time division multiplexing (TDM) design. 
> That design had a lot of extra capacity because of the mother's day 
> problem - the network had to carry the peak volume of calls. It was 
> always odd to me that the telephone companies basically contracted out 
> statistical to TDM coupling of networks and didn't do it themselves. 
> This was rectified with broadband and most all the independent ISPs went 
> out of business.
>
> IP statistical multiplexing was great except for one thing. The attached 
> computers were faster than their network i/o so TCP had to do things 
> like congestion control to avoid network collapse based on congestion 
> signals (and a very imperfect control loop.) Basically, that extra TDM 
> capacity for voice calls was consumed very quickly. This set in motion 
> the idea that network channel capacity is a proxy for computer speed as 
> when networks are underprovisioned and congested that's basically 
> accurate. Van Jacobson's work was most always about congestion on what 
> today are bandwidth constrained networks.
>
> This also started a bit of a cultural war colloquially known as 
> Bellheads vs Netheads. The human engineers took sides more or less. The 
> netheads mostly kept increasing capacity. The market demand curve for 
> computer connections drove this. It's come to a head though, in that 
> netheads most always overprovisioned similar to solving the mother's day 
> problem. (This is different from the electric build out where the goal 
> is to drive peak and average loads to merge in order to keep generators 
> efficient at a constant speed.)
>
> Many were first stuck with the concept of bandwidth scarcity per those 
> origins. But then came bandwidth abundance and many haven't adjusted. 
> Mental block number one. Mental block two occurs when one sees all that 
> bandwidth and says, let's use it all as it's going to be scarce, like a 
> Great Depression-era person hoarding basic items.
>
> A digression; This isn't that much different in the early days before 
> Einstein. Einstein changed thinking by realizing that the speed of 
> causality was defined or limited by the speed of massless particles, 
> i.e. energy or photons. We all come from energy in one way or another. 
> So of course it makes sense that our causality system, e.g. aging, is 
> determined by that speed. It had to be relative for Maxwell's equations 
> to be held true - which Einstein agreed with as true irrelevant of 
> inertial frame. A leap for us comes when we realize that the speed of 
> causality, i.e. time, is fundamentally the speed of energy.  It's true 
> for all clocks, objects, etc. even computers.
>
> So when we engineer systems that queue information, we don't slow down 
> energy, we slow down information. Computers are mass information tools 
> so slowing down information slows down distributed compute. As Stuart 
> says, "It's the latency, stupid".  It's physics too.
>
> I was trying to explain to a dark fiber provider that I wanted 100Gb/s 
> SFPs to a residential building in Boston. They said, nobody needs 
> 100Gb/s and that's correct from a link capacity perspective. But the 
> economics & energy required for the lowest latency per bit delivered 
> actually is 100Gb/s SERDES attached to lasers attached to fiber.
>
> What we really want is low latency at the lowest energy possible, and 
> also to be unleashed from cables (as we're not dogs.) Hence FiWi.
>
> Bob
>
>> I do believe that we all want to get the best - latency and speed,
>> hopefully, in this particular order :-)
>> The problem was that from the very beginning of the Internet (yeah, I
>> was still not here, on this planet, when it all started), everything
>> was optimised for speed, bandwidth and other numbers, but not so much
>> for bufferbloat in general.
>> Some of the things that goes into it in the need for speed, are
>> directly against the fixing latency...and it was not setup for it.
>> Gamers and Covid (work from home, the need for the enterprise network
>> but in homes...) brings it into conversation, thankfully, and now we
>> will deal with it.
>> 
>> Also, there is another thing I see and it's a negative sentiment
>> against anything business (monetisation of, say - lower latency
>> solutions) in general. If it comes from the general geeky/open
>> source/etc folks, I can understand it a bit. But it comes also from
>> the business people - assuming some of You works in big corporations
>> or run ISPs. I'm all against cronyism, but to throw out the baby with
>> the bathwater - to say that doing business (i.e. getting paid for
>> delivering something that is missing/fixing something that is
>> implementing insufficiently) is wrong, to look at it with disdain, is
>> asinine.
>> 
>> This has the connection with the general "Net Neutrality" (NN)
>> sentiment. I have 2 suggestions for reading from the other side of the
>> aisle, on this topic: https://www.martingeddes.com/1261-2 [1]/ (Martin
>> was censored by all major social media back then, during the days of
>> NN fight in the FCC and elsewhere.) Second thing is written by one and
>> only Dave Taht:
>> https://blog.cerowrt.org/post/net_neutrality_customers/
>> 
>> To conclude, we need to find the way how to benchmark and/or
>> communicate (translate, if You will) the whole variety of the quality
>> of network statistics/metrics (which are complex) like QoE, QoS,
>> latency, jitter, bufferbloat...to something, that is meaningful for
>> the end user. See this short proposition of the Quality of Outcome by
>> Domos: https://www.youtube.com/watch?app=desktop&v=MRmcWyIVXvg&t=4185s
>> There is definitely a lot of work on this - and also on the finding
>> the right benchmark and its actual measurement side, but it's a step
>> in the right direction.
>> 
>> Looking forward to seeing Your take on that proposed Quality of
>> Outcome. Thanks a lot.
>> 
>> All the best,
>> 
>> Frank
>> 
>> Frantisek (Frank) Borsik
>> 
>> https://www.linkedin.com/in/frantisekborsik
>> 
>> Signal, Telegram, WhatsApp: +421919416714
>> 
>> iMessage, mobile: +420775230885
>> 
>> Skype: casioa5302ca
>> 
>> frantisek.borsik@gmail.com
>> 
>> On Tue, Mar 21, 2023 at 7:08 PM rjmcmahon via Rpm
>> <rpm@lists.bufferbloat.net> wrote:
>> 
>>> Also, I want my network to be the color clear because I value
>>> transparency, honesty, and clarity.
>>> 
>>> 
>> 
> https://carbuzz.com/news/car-colors-are-more-important-to-buyers-than-you-think
>>> 
>>> "There are many factors to consider when buying a new car, from
>>> price
>>> and comfort to safety equipment. For many people, color is another
>>> important factor since it reflects their personality."
>>> 
>>> "In a study by Automotive Color Preferences 2021 Consumer Survey,
>>> 4,000
>>> people aged 25 to 60 in four of the largest car markets in the world
>>> 
>>> (China, Germany, Mexico and the US) were asked about their car color
>>> 
>>> preferences. Out of these, 88 percent said that color is a key
>>> deciding
>>> factor when buying a car."
>>> 
>>> Bob
>>>> I think we may all be still stuck on numbers. Since infinity is
>>> taken,
>>>> the new marketing number is "infinity & beyond" per Buzz Lightyear
>>>> 
>>>> Here's what I want, I'm sure others have ideas too:
>>>> 
>>>> o) We all deserve COPPA. Get the advertiser & their cohorts to
>>> stop
>>>> mining my data & communications - limit or prohibit access to my
>>>> information by those who continue to violate privacy rights
>>>> o) An unlimited storage offering with the lowest possible latency
>>> paid
>>>> for annually. That equipment ends up as close as possible to my
>>> main
>>>> home per speed of light limits.
>>>> o) Security of my network including 24x7x365 monitoring for
>>> breaches
>>>> and for performance
>>>> o) Access to any cloud software app. Google & Apple are getting
>>>> something like 30% for every app on a phone. Seems like a
>>> last-mile
>>>> provider should get a revenue share for hosting apps that aren't
>>> being
>>>> downloaded. Blockbuster did this for DVDs before streaming took
>>> over.
>>>> Revenue shares done properly, while imperfect, can work.
>>>> o) A life-support capable, future proof, componentized,
>>> leash-free,
>>>> in-home network that is dual-homed over the last mile for
>>> redundancy
>>>> o) Per room FiWi and sensors that can be replaced and upgraded by
>>> me
>>>> ordering and swapping the parts without an ISP getting all my
>>>> neighbors' consensus & buy in
>>>> o) VPN capabilities & offerings to the content rights owners'
>>>> intellectual property for when the peering agreements fall apart
>>>> o) Video conferencing that works 24x7x365 on all devices
>>>> o) A single & robust shut-off circuit
>>>> 
>>>> Bob
>>>> 
>>>> PS. I think the sweet spot may turn out to be 100Gb/s when
>>> considering
>>>> climate impact. Type 2 emissions are a big deal so we need to
>>> deliver
>>>> the fastest causality possible (incl. no queueing) at the lowest
>>>> energy consumption engineers can achieve.
>>>> 
>>>> _______________________________________________
>>>> Rpm mailing list
>>>> Rpm@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/rpm
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> 
>> 
>> Links:
>> ------
>> [1] https://www.martingeddes.com/1261-2
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] Annoyed at 5/1 Mbps...
  2023-03-21 19:04                                                                           ` Sebastian Moeller
@ 2023-03-23 18:23                                                                             ` dan
  0 siblings, 0 replies; 183+ messages in thread
From: dan @ 2023-03-23 18:23 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Rich Brown, Frantisek Borsik, bloat, libreqos

[-- Attachment #1: Type: text/plain, Size: 6033 bytes --]

All TDMA has to keep all clients connected in each scheduling window, so
it's at a minimum '(client count x .5RTT) + root->client broadcast'.
Most of the time it's actually client count x RTT, because lots of TDMA tech
only talks to one client per frame, i.e. all 802.11ac and earlier radios,
even most proprietary products. 802.11ax Wi-Fi, or more specifically
OFDMA-based products, get the better .5RTT numbers.

Active Ethernet is not this. It's a 1:1 full-duplex link hop to hop. If
those are home runs, great, but if it's a routed network then latency will
always scale with the ports crossed on each side times the hop count, plus
whatever routing delays the routers/switches add. It all adds up to more
latency, but via different paths.

I can tell you that putting 10 hardware-accelerated routers in a path (I
used Mikrotik NP16 in my bench test) adds up to about the same latency as a
GPON system.
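A toy model of the comparison above; the client counts, per-frame behaviour, and per-hop delays are illustrative assumptions, not measurements from the test mentioned:

```python
# Toy comparison of TDMA scheduling delay vs. a routed active-Ethernet
# path. All parameters are illustrative assumptions, not measurements.

def tdma_cycle_ms(clients: int, rtt_ms: float, broadcast_ms: float,
                  clients_per_frame: int = 1) -> float:
    """Worst-case wait for a transmit slot when the root serves
    `clients_per_frame` clients per frame (1 for 802.11ac and earlier,
    several for OFDMA-based 802.11ax gear)."""
    frames = clients / clients_per_frame
    return frames * rtt_ms + broadcast_ms

def routed_path_ms(hops: int, per_hop_ms: float) -> float:
    """Active Ethernet: full-duplex point-to-point links, so latency
    grows with hop count and per-hop switching/routing delay instead
    of with the number of attached clients."""
    return hops * per_hop_ms

# Illustrative numbers: 30 clients, 1 ms RTT, 0.5 ms broadcast.
legacy = tdma_cycle_ms(clients=30, rtt_ms=1.0, broadcast_ms=0.5)  # 30.5 ms
ofdma = tdma_cycle_ms(30, 0.5, 0.5, clients_per_frame=4)          # 4.25 ms
routed = routed_path_ms(hops=10, per_hop_ms=0.05)                 # ~0.5 ms
```

The point the model makes is the structural one from the email: TDMA delay scales with client count per scheduling window, while a routed path scales with hop count.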




On Tue, Mar 21, 2023 at 1:04 PM Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi Dan,
>
>
> > On Mar 21, 2023, at 18:22, dan <dandenson@gmail.com> wrote:
> >
> > GPON is TDMA so the latency is going to be at a minimum the RTT *
> connected ONUs, vs DSL which is a fixed ratio/scheduler.
>
>         Assuming no proactive grants... are these a thing in PON or only
> in DOCSIS?, but since GPON frames can be shared between ONUs how do you
> derive the "RTT * connected ONUs" formula?
>
>
> > Standard GPON deployments are typically well over 1 second to the OLT.
>
>         I read that as millisecond, which would mean 8 GPON frames... for
> sending the request, processing and arbitrating all requests, assign
> transmit slots and send the transmit maps back to the ONUs, which then
> actually need to send the packets... RTT should not be all that noticeable,
> at 20 Km the wave propagation of light in fiber would be around
> 2*(20000/300000000 * 3/2)*1000 = 0.2 milliseconds... (not sure what a
> realistic maximum length for a PON tree is, which probably depends on a
> number of things anyway, but google says up to 20 Km for GPON)... but that
> RTT would be the same for active ethernet...
>
> >  Not that it's bad or anything, but in comparison GPON has very
> 'wireless' like best case latency but without the wireless variances.
>
>         All centrally scheduled link layers will have similar challenges I
> guess?
>
> Kind Regards
>         Sebastian
>
> >
> > On Tue, Mar 21, 2023 at 6:31 AM Sebastian Moeller <moeller0@gmx.de>
> wrote:
> > I have to push back gently on this...
> >
> > XG(S)-PON is gross 10Gbps (after FEC you are left with around 8.6 Gbps),
> Nokia's proprietary (aka not ITU) 25 Gbps PON seems to be abbreviated
> 25GS-PON.
> >
> > Now XGS-PON allows maximally 128 end-nodes in the tree, so:
> > 8600/128 = 67.19 Mbps/subscriber
> >
> > unless the ISPs royally screwed up the configuration there should be a
> CIR per subscriber of around 60 Mbps. So setting your cake shaper to 50
> Mbps should give you:
> > a) 10 times the throughput of the 5/1 Mbps DSL (ignoring overhead
> compensation for a change, which likely will be in favor of PON)
> > b) decent low latency, round robin delay for full MTU packets between
> 128 active nodes would be:
> >         packet/sec: ((8.6 * 1000^3)/(1500*8)) = 716666.666667
> >         millisec/packet: 1000 / ((8.6 * 1000^3)/(1500*8)) =
> 0.00139534883721
> >         round-robin delay 128: 128 * 1000 / ((8.6 * 1000^3)/(1500*8)) =
> 0.178604651163 milliseconds...
> >
> >         DSL uses a 4KHz clock so 1000/4000 = 0.25 millisecond
> quantization
> > So XGS-PON has at least theoretical potential to deliver lower latency
> than DSL, but the details depend on if/how packets are aggregated. HOWEVER
> the 125µsec GPON frames can be shared between different ONUs in upstream
> and downstream direction... so these are not a hard quantisation but more
> the interval between control information required for the access grant
> cycle...
> >
> > c) robustness against RF noise sources and electricity/lightning
> >
> > So I am not so sure I would prefer the 5/1 (A)DSL over a PON...
> >
> > That however is orthogonal to me preferring a competent ISP that takes
> care of keeping latency under load at bay.
> >
> >
> >
> > > On Mar 21, 2023, at 12:26, Rich Brown via Starlink <
> starlink@lists.bufferbloat.net> wrote:
> > >
> > >
> > >
> > >> On Mar 21, 2023, at 1:21 AM, Frantisek Borsik via Rpm <
> rpm@lists.bufferbloat.net> wrote:
> > >>
> > >> Now, I hope to really piss You off with the following statement  :-P
> but:
> > >>
> > >> even sub 5/1 Mbps “broadband” in Africa with bufferbloat fixed on as
> many hops along the internet journey from a data center to the customers
> mobile device (or with just LibreQoS middle box in the ISP’s network) is
> feeling way better than 25Gbps XG-PON. The only time the XG-PON guy could
> really feel like a king of the world would be during his speedtest.
> > >
> > > Nope. Sorry - this doesn't piss me off :-) It's just true.
> > >
> > > - 7mbps/768kbps DSL with an IQrouter works fine for two simultaneous
> Zoom conferences. (Even though no one would think that it's fast.)
> > > - I recommend people on a budget drop their ISP speed so they can
> afford a router that does SQM
> https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305/40
> >
> >         Even simpler, even on a 100Gbps link nobody stops you from
> setting your shaper to 50/10 if that is all your router can deliver (and I
> agree if there are cheaper plans closer to the 50/10 it makes economic
> sense to scale down the plan)...
> >
> >
> > >
> > > The people that get annoyed are those who just upgraded to 1Gbps
> service and still are getting fragged in their games.
> > >
> > > Rich
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink
> >
>
>

[-- Attachment #2: Type: text/html, Size: 7332 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] On fiber as critical infrastructure w/Comcast chat
  2023-03-21 19:58                                                                           ` rjmcmahon
  2023-03-21 20:06                                                                             ` [LibreQoS] [Bloat] " David Lang
@ 2023-03-25 19:39                                                                             ` rjmcmahon
  2023-03-25 20:09                                                                               ` [LibreQoS] [Starlink] " Bruce Perens
                                                                                                 ` (2 more replies)
  1 sibling, 3 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-25 19:39 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Dave Taht via Starlink, dan, brandon, libreqos,
	Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 1290 bytes --]

Hi All,

I've been trying to modernize a building in Boston, where I'm an HOA 
board member, over the last 18 months. I perceive the broadband network 
as critical infrastructure for our 5-unit building.

Unfortunately, Comcast staff doesn't seem to agree. The agent basically 
closed the chat on me mid-stream (chat attached). I've been at this for 
about 18 months now.

While I think bufferbloat is a big issue, the bigger issue is that our 
last-mile providers must change their cultures to understand that life 
support use cases, which require proper pathways, conduits & cabling, 
can no longer be ignored. These buildings have coaxial cable thrown over 
the exterior walls, work done in the 80s, with holes drilled without 
consideration of the structures. This and the lack of environmental 
protection for our HOA's critical infrastructure is disheartening. It's 
past time to remove this shoddy work on our building and all buildings 
in Boston, as well as across the globe.

My hope was by now I'd have shown through actions what a historic 
building in Boston looks like when we, as humans in our short lives, act 
as both stewards of history and as responsible guardians to those that 
share living spaces and neighborhoods today & tomorrow. Motivating 
humans to better serve one another is hard.

Bob

[-- Attachment #2: comcast.pdf --]
[-- Type: application/pdf, Size: 115724 bytes --]


* Re: [LibreQoS] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
@ 2023-03-25 20:09                                                                               ` Bruce Perens
  2023-03-25 20:47                                                                                 ` rjmcmahon
  2023-03-25 20:15                                                                               ` [LibreQoS] [Bloat] " Sebastian Moeller
  2023-03-25 20:27                                                                               ` [LibreQoS] " rjmcmahon
  2 siblings, 1 reply; 183+ messages in thread
From: Bruce Perens @ 2023-03-25 20:09 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Rpm, dan, Frantisek Borsik, libreqos, Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 2995 bytes --]

I've never met a Comcast sales person who was able to operate at the level
you're talking about. I think you would do better with a smaller company.

I think you were also unrealistic if not disingenuous about lives put at
risk. Alarms do not require more than 300 baud.

Comcast would actually like to sell individual internet service for each of
the five units. That's what they're geared to do. You're not going to get
that very high speed rate for that ridiculously low price and fan it out to
five domiciles. They would offer that for a single home and the users that
could be expected in a single home, or maybe a small business, but I think
they would charge a business more. I pay Comcast more for a very small
business, at a lower data rate.

I think, realistically, the fiber connection you're talking about, at the
data rate you request, with the privilege of fanning out to five domiciles,
should cost about $2400 per month.

I get the complaint about wires on the outside etc. But who are you
expecting to do that work? If you expect Comcast and their competitors to
do that as part of their standard installation, you're asking for tens of
thousands of dollars of work, and if that is to be the standard then
everyone must pay much more than today. Nobody wants that, and most folks
don't care about the current standard of installation. If this mattered
enough to your homeowners association, they could pay for it.




On Sat, Mar 25, 2023, 12:39 rjmcmahon via Starlink <
starlink@lists.bufferbloat.net> wrote:

> Hi All,
>
> I've been trying to modernize a building in Boston where I'm an HOA
> board member over the last 18 mos. I perceive the broadband network as a
> critical infrastructure to our 5 unit building.
>
> Unfortunately, Comcast staff doesn't seem to agree. The agent basically
> closed the chat on me mid-stream (chat attached.) I've been at this for
> about 18 mos now.
>
> While I think bufferbloat is a big issue, the bigger issue is that our
> last-mile providers must change their cultures to understand that life
> support use cases that require proper pathways, conduits & cabling can
> no longer be ignored. These buildings have coaxial thrown over the
> exterior walls done in the 80s then drilling holes without consideration
> of structures. This and the lack of environmental protections for our
> HOA's critical infrastructure is disheartening. It's past time to remove
> this shoddy work on our building and all buildings in Boston as well as
> across the globe.
>
> My hope was by now I'd have shown through actions what a historic
> building in Boston looks like when we, as humans in our short lives, act
> as both stewards of history and as responsible guardians to those that
> share living spaces and neighborhoods today & tomorrow. Motivating
> humans to better serve one another is hard.
>
> Bob_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 3897 bytes --]


* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
  2023-03-25 20:09                                                                               ` [LibreQoS] [Starlink] " Bruce Perens
@ 2023-03-25 20:15                                                                               ` Sebastian Moeller
  2023-03-25 20:43                                                                                 ` rjmcmahon
  2023-03-25 20:27                                                                               ` [LibreQoS] " rjmcmahon
  2 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-25 20:15 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Rpm, dan, Frantisek Borsik, brandon, libreqos,
	Dave Taht via Starlink, bloat

Hi Bob,


somewhat sad. Have you considered that your described requirements and use-case might be outside of the mass-market envelope for which the big ISPs tailor/rig their processes? Maybe (not sure that is an option) if you approach this as a "business"* asking for a fiber uplink for an already "wired" 5-unit property, you might get better service? You would still need to do the in-house re-wiring, but you would likely avoid scripted hotlines that hang up when, in the allotted time, the agent sees little chance of "closing" the call. All (big) ISPs I know treat the hotline as a cost factor and not as the first line of customer retention...
I would also not be amazed if Boston had smaller ISPs that are willing and able to listen to customers (though they might be a bit more expensive than the big ISPs).
That, or try to get your foot into Comcast's PR department to sell them on a "reference installation" for all of Boston's historic buildings, so they can offset the custom tailoring effort with the expected good press of publicly doing the "right thing".

Good luck
	Sebastian


*) I understand you are not, but I assume the business units have more leeway to actually offer bespoke solutions than the residential customer unit, which is likely cost-optimized to Mars and back.


> On Mar 25, 2023, at 20:39, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> Hi All,
> 
> I've been trying to modernize a building in Boston where I'm an HOA board member over the last 18 mos. I perceive the broadband network as a critical infrastructure to our 5 unit building.
> 
> Unfortunately, Comcast staff doesn't seem to agree. The agent basically closed the chat on me mid-stream (chat attached.) I've been at this for about 18 mos now.
> 
> While I think bufferbloat is a big issue, the bigger issue is that our last-mile providers must change their cultures to understand that life support use cases that require proper pathways, conduits & cabling can no longer be ignored. These buildings have coaxial thrown over the exterior walls done in the 80s then drilling holes without consideration of structures. This and the lack of environmental protections for our HOA's critical infrastructure is disheartening. It's past time to remove this shoddy work on our building and all buildings in Boston as well as across the globe.
> 
> My hope was by now I'd have shown through actions what a historic building in Boston looks like when we, as humans in our short lives, act as both stewards of history and as responsible guardians to those that share living spaces and neighborhoods today & tomorrow. Motivating humans to better serve one another is hard.
> 
> Bob<comcast.pdf>_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



* Re: [LibreQoS] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
  2023-03-25 20:09                                                                               ` [LibreQoS] [Starlink] " Bruce Perens
  2023-03-25 20:15                                                                               ` [LibreQoS] [Bloat] " Sebastian Moeller
@ 2023-03-25 20:27                                                                               ` rjmcmahon
  2 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-25 20:27 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Dave Taht via Starlink, dan, brandon, libreqos,
	Rpm, bloat

To be fair, this isn't unique to Comcast. I hit similar issues in NYC 
with Verizon.

I think we really need to educate people that life support capable 
communications networks are now critical infrastructure.

And, per climate impact, we may want to add Jaffe's network power 
(capacity over delay) to our metrics, alongside distance & energy. Fixed 
wireless offerings are an energy waste and generate excessive scope 2 
emissions. A cell tower draws about 1-5kW for 60 connections, or roughly 
100-500W per remote client at 1 Gb/s with high latencies. A FiWi network 
will require 3-5W for 2.8 Gb/s, with speed-of-light-over-fiber ultra-low 
latencies.
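For reference, Jaffe's power metric is just capacity divided by delay, so the comparison can be made concrete. A minimal sketch in Python; the throughput figures are the ones quoted here, while the latency values are illustrative assumptions, not measurements:

```python
# Jaffe's "network power" metric: throughput divided by delay.
# Throughputs follow the figures above (1 Gb/s cellular, 2.8 Gb/s
# fiber-to-WiFi); the latencies are hypothetical round-trip delays.

def network_power(throughput_bps: float, delay_s: float) -> float:
    """Capacity over delay; higher is better."""
    return throughput_bps / delay_s

cellular = network_power(1.0e9, 30e-3)  # 1 Gb/s at an assumed 30 ms
fiwi = network_power(2.8e9, 2e-3)       # 2.8 Gb/s at an assumed 2 ms

print(f"FiWi/cellular network-power ratio: {fiwi / cellular:.0f}x")
```

Under these assumed latencies the fiber-to-WiFi link scores roughly 42x higher, since it wins on both terms of the metric at once.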

I think we really need our broadband providers to lead here and that 
fiber to WiFi is the only viable end game if we care about our impacts.

"The average cellular base station, which comprises the tower and the 
radio equipment attached to it, can use anywhere from about one to five 
kilowatts (kW), depending on whether the radio equipment is housed in an 
air-conditioned building, how old the tower is and how many transceivers 
are in the base station. Most of the energy is used by the radio to 
transmit and receive cell-phone signals."

Bob
> Hi All,
> 
> I've been trying to modernize a building in Boston where I'm an HOA
> board member over the last 18 mos. I perceive the broadband network as
> a critical infrastructure to our 5 unit building.
> 
> Unfortunately, Comcast staff doesn't seem to agree. The agent
> basically closed the chat on me mid-stream (chat attached.) I've been
> at this for about 18 mos now.
> 
> While I think bufferbloat is a big issue, the bigger issue is that our
> last-mile providers must change their cultures to understand that life
> support use cases that require proper pathways, conduits & cabling can
> no longer be ignored. These buildings have coaxial thrown over the
> exterior walls done in the 80s then drilling holes without
> consideration of structures. This and the lack of environmental
> protections for our HOA's critical infrastructure is disheartening.
> It's past time to remove this shoddy work on our building and all
> buildings in Boston as well as across the globe.
> 
> My hope was by now I'd have shown through actions what a historic
> building in Boston looks like when we, as humans in our short lives,
> act as both stewards of history and as responsible guardians to those
> that share living spaces and neighborhoods today & tomorrow.
> Motivating humans to better serve one another is hard.
> 
> Bob


* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 20:15                                                                               ` [LibreQoS] [Bloat] " Sebastian Moeller
@ 2023-03-25 20:43                                                                                 ` rjmcmahon
  2023-03-25 21:08                                                                                   ` [LibreQoS] [Starlink] " Bruce Perens
  2023-03-26 10:34                                                                                   ` [LibreQoS] [Bloat] " Sebastian Moeller
  0 siblings, 2 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-25 20:43 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Rpm, dan, Frantisek Borsik, brandon, libreqos,
	Dave Taht via Starlink, bloat

It's not just one phone call. I've been figuring this out for about two 
years now. I've been working with some strategic people in Boston: colos 
& dark fiber providers; professional installers that wired up many of 
the Boston universities; some universities themselves, to offer co-ops 
to students to run networks, plus trainings for DIC and other high-value 
IoT offerings; and blue-collar principals (with staffs of about 100), to 
help them learn to install fiber and provide better jobs for their 
employees.

My conclusion is that Comcast is best suited for the job as the 
broadband provider, at least in Boston, for multiple reasons. One chat 
isn't going to block me ;)

The point of the thread is that we still do not treat digital 
communications infrastructure as life support critical. It reminds me of 
Elon Musk and his claims on FSD. I could do the whole thing myself - but 
that's not going to achieve what's needed. We need systems that our 
loved ones can call, and those systems will care for them. Similar to how 
the medical community, though imperfect, works in caring for our loved 
ones and their health.

I think we all are responsible for changing our belief sets & developing 
ourselves to better serve others. Most won't act until they can actually 
see what's possible. So let's start to show them.

Bob

> Hi Bob,
> 
> 
> somewhat sad. Have you considered that your described requirements and
> the use-case might be outside of the mass-market envelope for which
> the big ISPs taylor/rig their processes? Maybe, not sure that is an
> option, if you approach this as a "business"* asking for a fiber
> uplink for an already "wired" 5 unit property you might get better
> service? You still would need to do the in-house re-wiring, but you
> likely would avoid scripted hot-lines that hang up when in the
> allotted time the agent sees little chance of "closing" the call. All
> (big) ISPs I know treat hotline as a cost factor and not as the first
> line of customer retention...
> I would also not be amazed if Boston had smaller ISPs that are willing
> and able to listen to customers (but that might be a bit more
> expensive than the big ISPs).
> That or try to get your foot into Comcast's PR department to sell them
> on the "reference installation" for all Boston historic buildings, so
> they can offset the custom tailoring effort with the expected good
> press of doing the "right thing" publicly.
> 
> Good luck
> 	Sebastian
> 
> 
> *) I understand you are not, but I assume the business units to have
> more leeway to actually offer more bespoke solutions than the likely
> cost-optimized to Mars and back residental customer unit.
> 
> 
>> On Mar 25, 2023, at 20:39, rjmcmahon via Bloat 
>> <bloat@lists.bufferbloat.net> wrote:
>> 
>> Hi All,
>> 
>> I've been trying to modernize a building in Boston where I'm an HOA 
>> board member over the last 18 mos. I perceive the broadband network as 
>> a critical infrastructure to our 5 unit building.
>> 
>> Unfortunately, Comcast staff doesn't seem to agree. The agent 
>> basically closed the chat on me mid-stream (chat attached.) I've been 
>> at this for about 18 mos now.
>> 
>> While I think bufferbloat is a big issue, the bigger issue is that our 
>> last-mile providers must change their cultures to understand that life 
>> support use cases that require proper pathways, conduits & cabling can 
>> no longer be ignored. These buildings have coaxial thrown over the 
>> exterior walls done in the 80s then drilling holes without 
>> consideration of structures. This and the lack of environmental 
>> protections for our HOA's critical infrastructure is disheartening. 
>> It's past time to remove this shoddy work on our building and all 
>> buildings in Boston as well as across the globe.
>> 
>> My hope was by now I'd have shown through actions what a historic 
>> building in Boston looks like when we, as humans in our short lives, 
>> act as both stewards of history and as responsible guardians to those 
>> that share living spaces and neighborhoods today & tomorrow. 
>> Motivating humans to better serve one another is hard.
>> 
>> Bob<comcast.pdf>_______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat


* Re: [LibreQoS] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 20:09                                                                               ` [LibreQoS] [Starlink] " Bruce Perens
@ 2023-03-25 20:47                                                                                 ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-25 20:47 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Rpm, dan, Frantisek Borsik, libreqos, Dave Taht via Starlink, bloat

The cost of the labor is less than one might think. I've found it's 
cheaper to train young people in the trades to do this work vs using an 
overpriced company that mostly targets "rich corporations."

It's also a question of the golden egg, or of geese that can lay golden 
eggs. Let's train our youth well here. Some of us will be pushing up 
daisies before they finish. None of us have a guarantee of tomorrow.

Bob
> I've never met a Comcast sales person who was able to operate at the
> level you're talking about. I think you would do better with a smaller
> company.
> 
> I think you were also unrealistic if not disingenuous about lives put
> at risk. Alarms do not require more than 300 baud.
> 
> Comcast would actually like to sell individual internet service for
> each of the five units. That's what they're geared to do. You're not
> going to get that very high speed rate for that ridiculously low price
> and fan it out to five domiciles. They would offer that for a single
> home and the users that could be expected in a single home, or maybe a
> small business but I think they would charge a business more. I pay
> Comcast more for a very small business at a lower rate.
> 
> I think realistically the fiber connections you're talking about at
> the data rate you request in the privilege of fanning out to five
> domiciles should cost about $2400 per month.
> 
> I get the complaint about wires on the outside etc. But who are you
> expecting to do that work? If you expect Comcast and their competitors
> to do that as part of their standard installation, you're asking for
> tens of thousands of dollars of work, and if that is to be the
> standard then everyone must pay much more than today. Nobody wants
> that, and most folks don't care about the current standard of
> installation. If this mattered enough to your homeowners association,
> they could pay for it.
> 
> On Sat, Mar 25, 2023, 12:39 rjmcmahon via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> 
>> Hi All,
>> 
>> I've been trying to modernize a building in Boston where I'm an HOA
>> board member over the last 18 mos. I perceive the broadband network
>> as a
>> critical infrastructure to our 5 unit building.
>> 
>> Unfortunately, Comcast staff doesn't seem to agree. The agent
>> basically
>> closed the chat on me mid-stream (chat attached.) I've been at this
>> for
>> about 18 mos now.
>> 
>> While I think bufferbloat is a big issue, the bigger issue is that
>> our
>> last-mile providers must change their cultures to understand that
>> life
>> support use cases that require proper pathways, conduits & cabling
>> can
>> no longer be ignored. These buildings have coaxial thrown over the
>> exterior walls done in the 80s then drilling holes without
>> consideration
>> of structures. This and the lack of environmental protections for
>> our
>> HOA's critical infrastructure is disheartening. It's past time to
>> remove
>> this shoddy work on our building and all buildings in Boston as well
>> as
>> across the globe.
>> 
>> My hope was by now I'd have shown through actions what a historic
>> building in Boston looks like when we, as humans in our short lives,
>> act
>> as both stewards of history and as responsible guardians to those
>> that
>> share living spaces and neighborhoods today & tomorrow. Motivating
>> humans to better serve one another is hard.
>> 
>> Bob_______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 20:43                                                                                 ` rjmcmahon
@ 2023-03-25 21:08                                                                                   ` Bruce Perens
  2023-03-25 22:04                                                                                     ` Robert McMahon
  2023-03-26 10:34                                                                                   ` [LibreQoS] [Bloat] " Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Bruce Perens @ 2023-03-25 21:08 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, Frantisek Borsik,
	libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 1731 bytes --]

On Sat, Mar 25, 2023 at 1:44 PM rjmcmahon via Starlink <
starlink@lists.bufferbloat.net> wrote:

> The point of the thread is that we still do not treat digital
> communications infrastructure as life support critical.


When I was younger there was a standard way to do this. Fire alarms had a
dedicated pair directly to the fire department or a local alarm station.
This wasn't dial-tone, it was a DC pair that would drop a trouble
notification if DC was interrupted, and I think it would reverse polarity
to indicate alarm. If DC was interrupted, that would also turn off the
boiler in the building.
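The supervision logic of that DC pair reduces to a two-bit truth table. A hypothetical sketch in Python; the encoding follows the description above, not any real alarm-panel protocol:

```python
from enum import Enum

# Sketch of the DC-pair fire-alarm supervision described above:
# steady DC means normal, loss of DC drops a trouble notification,
# and reversed polarity signals an alarm. Hypothetical encoding only.

class LoopState(Enum):
    NORMAL = "normal"
    TROUBLE = "trouble"  # open loop: cut wire, tamper, or power loss
    ALARM = "alarm"      # alarm box reverses the loop polarity

def classify(dc_present: bool, polarity_reversed: bool) -> LoopState:
    if not dc_present:
        # Any interruption is fail-safe: it reads as trouble
        # (and, per the description above, also shuts off the boiler).
        return LoopState.TROUBLE
    return LoopState.ALARM if polarity_reversed else LoopState.NORMAL
```

The fail-safe property is the point: a cut wire cannot go unnoticed, which is exactly what a best-effort IP path does not give you by default.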

Today my home fire alarms are wireless and have cellular backup to their
main Comcast connection, and detect CO, smoke, and temperature. This would
not meet insurance requirements for a commercial building; there they still
have all of the sensors wired, with cellular backup.

I don't think you are considering what life-support-critical digital
communications would really cost. Start with metal conduit and
fire-resistant wiring throughout the structure. Provide redundant power for
*every* fan-out box (we just had a 24-hour power interruption here due to
storms). AT&T provides 4-hour power for "Lightspeed" tombstone boxes that
fan out telephone; beyond that, a truck has to drive out and plug in a
generator, or you are out of luck if it's a wide-area outage like we just
had. Wire areas in a redundant loop rather than a tree. Supervise every
home so that interruptions are serviced automatically. Provide a 4-hour SLA.

The phone company used to do what you are asking for. The high prices this
required are the main reason that everyone has jumped off of using the
legacy telco for telephony.

[-- Attachment #2: Type: text/html, Size: 2145 bytes --]


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 21:08                                                                                   ` [LibreQoS] [Starlink] " Bruce Perens
@ 2023-03-25 22:04                                                                                     ` Robert McMahon
  2023-03-25 22:50                                                                                       ` dan
                                                                                                         ` (2 more replies)
  0 siblings, 3 replies; 183+ messages in thread
From: Robert McMahon @ 2023-03-25 22:04 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, Frantisek Borsik,
	libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 3837 bytes --]

Hi Bruce,

I think you may be the right guy to solve this. I too remember the days of dry wire sold by the RBOCs.

I found a structured-wire fire alarm install would cost $100k for our building, or $20k per unit. The labor and materials are about $25k. The other $75k is liability-related costs, similar to a bike helmet: $10 in parts, $40 in insurance. So it's not labor nor equipment that drives the expenses. My opinion is poor people shouldn't have to pay for insurance to insurance companies, companies that figure figures for a living.

A digression: I could do an LMR-600 passive cable system, looped with Wilkinson power dividers, patch antennas and nests, to protect the egress escape ladder for about $10 to $15K. Don't need an SLA. We've basically priced protecting human lives so that only rich people can afford it.

We need to use technology and our cleverness to fix this version of "expense bloat."

Look at Boston public water for an example. It was way too expensive to pipe water in from 15 miles away in the early days. So the people who did it claimed alcoholism (and its "immorality") would be eliminated by providing clean and pure potable public water: alcoholics would choose pathogen-free water over spirits. Rich people got enough water for themselves and even for their private fountains, so society stopped this initiative.

It was a motivated doctor who taught rich people that their health was tied to public health, and that public health was being impacted because pathogens were spreading among poor people who didn't get potable public water; that would be addressed by ubiquitous potable water supplies. The fire chief was put in charge. See Ties That Bind:

https://upittpress.org/books/9780822961475/

Now, in the U.S., most do get potable water, even to flush a toilet. It's taken for granted.

I think it's on us to do similar for digital communication networks. They're needed far beyond entertainment, and we need to get public safety elements engaged too.

Bob

On Mar 25, 2023, 2:08 PM, at 2:08 PM, Bruce Perens <bruce@perens.com> wrote:
>On Sat, Mar 25, 2023 at 1:44 PM rjmcmahon via Starlink <
>starlink@lists.bufferbloat.net> wrote:
>
>> The point of the thread is that we still do not treat digital
>> communications infrastructure as life support critical.
>
>
>When I was younger there was a standard way to do this. Fire alarms had
>a
>dedicated pair directly to the fire department or a local alarm
>station.
>This wasn't dial-tone, it was a DC pair that would drop a trouble
>notification if DC was interrupted, and I think it would reverse
>polarity
>to indicate alarm. If DC was interrupted, that would also turn off the
>boiler in the building.
>
>Today my home fire alarms are wireless and have cellular back to their
>main
>Comcast connection, and detect CO, smoke, and temperature. This would
>not
>meet insurance requirements for a commercial building, they still have
>all
>of the sensors wired, with cellular backup.
>
>I don't think you are considering what life-support-critical digital
>communications would really cost. Start with metal conduit and
>fire-resistant wiring throughout the structure. Provide redundant power
>for
>*every* fan-out box (we just had a 24-hour power interruption here due
>to
>storms). AT&T provides 4 hour power for "Lightspeed" tombstone boxes
>that
>fan out telephone, beyond that a truck has to drive out and plug in a
>generator, or you are out of luck if it's a wide-are outage like we
>just
>had. Wire areas in a redundant loop rather than a tree. Supervise every
>home so that interruptions are serviced automatically. Provide a 4-hour
>SLA.
>
>The phone company used to do what you are asking for. The high prices
>this
>required are the main reason that everyone has jumped off of using the
>legacy telco for telephony.

[-- Attachment #2: Type: text/html, Size: 4995 bytes --]


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:04                                                                                     ` Robert McMahon
@ 2023-03-25 22:50                                                                                       ` dan
  2023-03-25 23:21                                                                                         ` Robert McMahon
  2023-03-25 22:57                                                                                       ` [LibreQoS] [Starlink] [Bloat] " Bruce Perens
  2023-03-25 23:20                                                                                       ` [LibreQoS] [Bloat] [Starlink] " David Lang
  2 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-25 22:50 UTC (permalink / raw)
  To: Robert McMahon
  Cc: Bruce Perens, Sebastian Moeller, Dave Taht via Starlink,
	Frantisek Borsik, libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 6152 bytes --]

I'm not quite following on this.  It's really not Comcast's responsibility
to do maintenance on old cables etc.  Once installed, those are fixtures
and the responsibility of the building owner.  Comcast etc. are only
pulling wire in to enable their primary business of selling voice, TV, and
data.  All of these other pieces are clearly the responsibility of the
property owner to install.  Trying to put this sort of thing on an ISP
would dramatically increase the cost of delivering services.

I read the chat log and I would have closed it too.  An HOA is a business
in legal terms. for profit or non-profit, but still a business.  The cost
to bring all products to every home and business would dramatically
increase the average cost of services.  The CSR offered a 2Gbit service and
you replied that you want the lower latencies of the 6Gbit service for your
fire alarm?  Firstly, why would the 6Gbit have lower latency than the
2Gbit, and secondly how much data do you think a fire alarm uses?  As the
CSR I would be telling jokes about you with my co-workers.  I'm not meaning
to be too antagonistic here, but this is a bit over the top don't you
think?  You're getting jostled around because you are demanding a service
they don't offer at the address.  You could have taken the 2Gbit plan offer
and been installed in a few days and still had a product that is literally
1000x more than your fire circuit needs.  The moment you started in on the
Boston fire I'd have been done.   Irrelevant and sensationalist.  Fire
alarms in all 50 states require either a hard-wired telephone line or a
redundant data link (ISP+Cell, for example), so the whole "6Gbit to prevent
everyone from dying" line is so over the top it made me switch teams
mid-read.

"I don't have what you are asking for" / "connect me to someone who does" is
the "Karen: I want to talk to your manager" equivalent for an ISP's CSR to
hear.

I could continue with how absurd a lot of what has been said is, but I don't
want to get kicked out of the group for being unfriendly, so I'll let it be.

On Sat, Mar 25, 2023 at 4:04 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
wrote:

> Hi Bruce,
>
> I think you may be the right guy to solve this. I too remember the days of
> dry wire sold by the RBOCs.
>
> I found a structured wire fire alarm install to cost $100k for our
> building or $20k per unit. The labor and materials are about $25k. The other
> $75k is liability related costs, similar to a bike helmet, $10 in parts,
> $40 in insurance. So it's not labor nor equipment that drives the expenses.
> My opinion is poor people shouldn't have to pay for insurance to insurance
> companies, companies that figure figures for a living.
>
> A digression: I could do an LMR 600 passive cable system looped with
> Wilkinson power dividers, patch antennas and nests to protect the egress
> escape ladder for about $10 to $15K. Don't need an SLA. We've basically
> priced protecting human lives to only rich people.
>
> We need to use technology and our cleverness to fix this version of
> "expense bloat."
>
> Look at Boston public water for an example. Way too expensive to pipe
> water in from 15 miles away in the early days. So people who did it claimed
> alcoholism (and that "immorality") would be eliminated by providing clean
> and pure potable public water.  Alcoholics would choose pathogen-free water
> over spirits. Rich people got enough water for themselves and even for
> their private fountains so society stopped this initiative.
>
> It was a motivated doctor who taught rich people that their health was
> tied to public health. And public health was being impacted because
> pathogens being spread to poor people who didn't get potable public water
> would be addressed by ubiquitous potable water supplies. The fire chief was
> put in charge. See Ties That Bind
>
> https://upittpress.org/books/9780822961475/
>
> Now, in the U.S., most do get potable water even to flush a toilet. It's
> taken for granted.
>
> I think it's on us to do similar for digital communication networks.
> They're needed far beyond entertainment, and we need to get public safety
> elements engaged too.
>
> Bob
> On Mar 25, 2023, at 2:08 PM, Bruce Perens <bruce@perens.com> wrote:
>>
>>
>>
>> On Sat, Mar 25, 2023 at 1:44 PM rjmcmahon via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> The point of the thread is that we still do not treat digital
>>> communications infrastructure as life support critical.
>>
>>
>> When I was younger there was a standard way to do this. Fire alarms had a
>> dedicated pair directly to the fire department or a local alarm station.
>> This wasn't dial-tone, it was a DC pair that would drop a trouble
>> notification if DC was interrupted, and I think it would reverse polarity
>> to indicate alarm. If DC was interrupted, that would also turn off the
>> boiler in the building.
>>
>> Today my home fire alarms are wireless and have cellular back to their
>> main Comcast connection, and detect CO, smoke, and temperature. This would
>> not meet insurance requirements for a commercial building, they still have
>> all of the sensors wired, with cellular backup.
>>
>> I don't think you are considering what life-support-critical digital
>> communications would really cost. Start with metal conduit and
>> fire-resistant wiring throughout the structure. Provide redundant power for
>> *every* fan-out box (we just had a 24-hour power interruption here due
>> to storms). AT&T provides 4 hour power for "Lightspeed" tombstone boxes
>> that fan out telephone, beyond that a truck has to drive out and plug in a
>> generator, or you are out of luck if it's a wide-area outage like we just
>> had. Wire areas in a redundant loop rather than a tree. Supervise every
>> home so that interruptions are serviced automatically. Provide a 4-hour
>> SLA.
>>
>> The phone company used to do what you are asking for. The high prices
>> this required are the main reason that everyone has jumped off of using the
>> legacy telco for telephony.
>>
>

[-- Attachment #2: Type: text/html, Size: 7579 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:04                                                                                     ` Robert McMahon
  2023-03-25 22:50                                                                                       ` dan
@ 2023-03-25 22:57                                                                                       ` Bruce Perens
  2023-03-25 23:33                                                                                         ` [LibreQoS] [Bloat] [Starlink] " David Lang
  2023-03-25 23:38                                                                                         ` [LibreQoS] [Starlink] [Bloat] " Robert McMahon
  2023-03-25 23:20                                                                                       ` [LibreQoS] [Bloat] [Starlink] " David Lang
  2 siblings, 2 replies; 183+ messages in thread
From: Bruce Perens @ 2023-03-25 22:57 UTC (permalink / raw)
  To: Robert McMahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, Frantisek Borsik,
	libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 2196 bytes --]

On Sat, Mar 25, 2023 at 3:04 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
wrote:

> My opinion is poor people shouldn't have to pay for insurance to insurance
> companies, companies that figure figures for a living.
>

Be sure to be out there campaigning at every election. Really off-topic,
but IMO the current scheme of health insurance keeps many people from going
into their own business, and keeps them working for large companies. I'm
sure that's deliberate. Valerie works for the University, which is the only
thing that kept me under health insurance - I'm just a consultant.
California would give me a plan today, but most states would not.
November's heart attack cost $359K, the insurance company negotiated $60K
away and paid the rest, charging me $125. Prices aren't going to fall
unless we get single-payer like most civilized countries. Somebody *does* have
to pay for health care, though, and the choices are out of your pocket, or
in your taxes, or through inflation.

A digression: I could do an LMR 600 passive cable system looped with
> Wilkinson power dividers, patch antennas and nests to protect the egress
> escape ladder for about $10 to $15K. Don't need an SLA. We've basically
> priced protecting human lives to only rich people.
>

If it's going indoors between the units, you need plenum-rated cable. LMR
is really pricey in plenum-rated, RG-6 is more than adequate and more
reasonably priced. RF between units is a legacy medium, though, there
should be plenum-rated CAT-8. The dividers can be of the sort specified for
cable TV. Wilkinson would be overkill and this is just a tiny toroid
transformer in the box.

The very best way to future proof is not with any sort of wire or fiber,
but with conduit with lots of room, that can be re-pulled.

I think it's on us to do similar for digital communication networks.
> They're needed far beyond entertainment, and we need to get public safety
> elements engaged too.
>

I'm really dubious. Anyone who has to cope with the cost is going to hear
the siren call of wireless no matter how inappropriate it is to the task.
You will be lucky if fiber makes it to urban buildings.

[-- Attachment #2: Type: text/html, Size: 3046 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:04                                                                                     ` Robert McMahon
  2023-03-25 22:50                                                                                       ` dan
  2023-03-25 22:57                                                                                       ` [LibreQoS] [Starlink] [Bloat] " Bruce Perens
@ 2023-03-25 23:20                                                                                       ` David Lang
  2023-03-26 18:29                                                                                         ` rjmcmahon
  2 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-25 23:20 UTC (permalink / raw)
  To: Robert McMahon
  Cc: Bruce Perens, Rpm, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 4118 bytes --]

if you want to eliminate insurance, then you need to eliminate the liability, 
which I don't think you want to do if you want to claim that this is 'life 
critical'

David Lang

On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:

> Hi Bruce,
>
> I think you may be the right guy to solve this. I too remember the days of dry wire sold by the RBOCs.
>
> I found a structured wire fire alarm install to cost $100k for our building or $20k per unit. The labor and materials are about $25k. The other $75k is liability related costs, similar to a bike helmet, $10 in parts, $40 in insurance. So it's not labor nor equipment that drives the expenses. My opinion is poor people shouldn't have to pay for insurance to insurance companies, companies that figure figures for a living.
>
> A digression: I could do an LMR 600 passive cable system looped with Wilkinson power dividers, patch antennas and nests to protect the egress escape ladder for about $10 to $15K. Don't need an SLA. We've basically priced protecting human lives to only rich people.
>
> We need to use technology and our cleverness to fix this version of "expense bloat."
>
> Look at Boston public water for an example. Way too expensive to pipe water in from 15 miles away in the early days. So people who did it claimed alcoholism (and that "immorality") would be eliminated by providing clean and pure potable public water.  Alcoholics would choose pathogen-free water over spirits. Rich people got enough water for themselves and even for their private fountains so society stopped this initiative.
>
> It was a motivated doctor who taught rich people that their health was tied to public health. And public health was being impacted because pathogens being spread to poor people who didn't get potable public water would be addressed by ubiquitous potable water supplies. The fire chief was put in charge. See Ties That Bind
>
> https://upittpress.org/books/9780822961475/
>
> Now, in the U.S., most do get potable water even to flush a toilet. It's taken for granted.
>
> I think it's on us to do similar for digital communication networks. They're needed far beyond entertainment, and we need to get public safety elements engaged too.
>
> Bob
>
> On Mar 25, 2023, 2:08 PM, at 2:08 PM, Bruce Perens <bruce@perens.com> wrote:
>> On Sat, Mar 25, 2023 at 1:44 PM rjmcmahon via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> The point of the thread is that we still do not treat digital
>>> communications infrastructure as life support critical.
>>
>>
>> When I was younger there was a standard way to do this. Fire alarms had
>> a
>> dedicated pair directly to the fire department or a local alarm
>> station.
>> This wasn't dial-tone, it was a DC pair that would drop a trouble
>> notification if DC was interrupted, and I think it would reverse
>> polarity
>> to indicate alarm. If DC was interrupted, that would also turn off the
>> boiler in the building.
>>
>> Today my home fire alarms are wireless and have cellular back to their
>> main
>> Comcast connection, and detect CO, smoke, and temperature. This would
>> not
>> meet insurance requirements for a commercial building, they still have
>> all
>> of the sensors wired, with cellular backup.
>>
>> I don't think you are considering what life-support-critical digital
>> communications would really cost. Start with metal conduit and
>> fire-resistant wiring throughout the structure. Provide redundant power
>> for
>> *every* fan-out box (we just had a 24-hour power interruption here due
>> to
>> storms). AT&T provides 4 hour power for "Lightspeed" tombstone boxes
>> that
>> fan out telephone, beyond that a truck has to drive out and plug in a
>> generator, or you are out of luck if it's a wide-area outage like we
>> just
>> had. Wire areas in a redundant loop rather than a tree. Supervise every
>> home so that interruptions are serviced automatically. Provide a 4-hour
>> SLA.
>>
>> The phone company used to do what you are asking for. The high prices
>> this
>> required are the main reason that everyone has jumped off of using the
>> legacy telco for telephony.
>

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:50                                                                                       ` dan
@ 2023-03-25 23:21                                                                                         ` Robert McMahon
  2023-03-25 23:35                                                                                           ` [LibreQoS] [Bloat] [Starlink] " David Lang
  0 siblings, 1 reply; 183+ messages in thread
From: Robert McMahon @ 2023-03-25 23:21 UTC (permalink / raw)
  To: dan
  Cc: Bruce Perens, Sebastian Moeller, Dave Taht via Starlink,
	Frantisek Borsik, libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 7461 bytes --]

Read the arguments on potable public water supplies. You're missing the forest for the trees.

Also, Comcast offers full wifi services. The demarc at the property line and right of way is artificial.

The economics is a 100Gb/s SFP, not 2G or 6G. I'm asking for a roof of shingles vs. a thatch roof. Many may laugh until they realize we're talking about real-life issues. A 100Gb/s link drives queues to empty. Compute moves to the speed of causality. That's the best we can do today and it'll be the best done 50 or 100 years from now, assuming the optics are pluggable.

We need to stop conflating capacity with latency. Doing so is a basic engineering flaw.
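(A back-of-the-envelope sketch, not from the thread: the only thing a higher link rate buys per packet is serialization delay, which is already tiny at the rates under discussion. Queueing delay under load is governed by the queue, not the headline rate. The 2/6/100 Gb/s figures are the ones being argued about above.)

```python
# Serialization delay of a single 1500-byte packet at several link rates.
# This is the per-packet latency component that actually changes with
# capacity; it shows why "more Gbit" is not the same thing as "lower latency".
PACKET_BITS = 1500 * 8  # one full-size Ethernet payload, in bits

for gbps in (2, 6, 100):
    delay_us = PACKET_BITS / (gbps * 1e9) * 1e6  # seconds -> microseconds
    print(f"{gbps:>3} Gb/s: {delay_us:.2f} us per 1500-byte packet")
```

At 2 Gb/s a full-size packet serializes in 6 µs and at 6 Gb/s in 2 µs; a bufferbloated queue, by contrast, can add tens of milliseconds at either rate.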

The fiber has basically infinite capacity.  Where it ends, where it starts, and who gets to decide on the optics is a non-trivial problem. And that choice matters. But hey, many men think it's their womb too, which is no longer funny.

I want 6Gb/s optics. You laugh. Comcast says I can't have it. Why am I not in charge of this choice?

Bob


On Mar 25, 2023, 3:50 PM, at 3:50 PM, dan <dandenson@gmail.com> wrote:
>I'm not quite following on this.  It's really not comcast's
>responsibility
>to do maintenance on old cables etc.  Once installed, those are
>fixtures
>and the responsibility of the building owner.    Comcast etc are only
>pulling wire in to enable their primary business of selling voice, tv,
>and
>data.  All of these other pieces are clearly the responsibility of the
>property owner to install.  Trying to put this sort of thing on an ISP
>would dramatically increase the cost of delivering services.
>
>I read the chat log and I would have closed it too.  An HOA is a
>business
>in legal terms: for-profit or non-profit, but still a business.  The
>cost
>to bring all products to every home and business would dramatically
>increase the average cost of services.  The CSR offered a 2Gbit service
>and
>you replied that you want the lower latencies of the 6Gbit service for
>your
>fire alarm?  Firstly, why would the 2Gbit have lower latency than the
>6Gbit, and secondly how much data do you think a fire alarm uses?  As
>the
>CSR I would be telling jokes about you with my co-workers.  I'm not
>meaning
>to be too antagonistic here, but this is a bit over the top don't you
>think?  You're getting jostled around because you are demanding a
>service
>they don't offer at the address.  You could have taken the 2Gbit plan
>offer
>and been installed in a few days and still had a product that is
>literally
>1000x more than your fire circuit needs.  The moment you started in on
>the
>Boston fire I'd have been done.   Irrelevant and sensationalist.  Fire
>alarms in all 50 states require either a hard-wired telephone line or a
>redundant data link (ISP+Cell, for example), so the whole "6Gbit to prevent
>everyone from dying" line is so over the top it made me switch teams
>mid-read.
>
>"I don't have what you are asking for" / "connect me to someone who
>does" is
>the "Karen: I want to talk to your manager" equivalent for an ISP's CSR
>to
>hear.
>
>I could continue with how absurd a lot of what has been said is but I
>don't
>want to get kicked out of the group for being unfriendly, so I'll let it be.
>
>On Sat, Mar 25, 2023 at 4:04 PM Robert McMahon
><rjmcmahon@rjmcmahon.com>
>wrote:
>
>> Hi Bruce,
>>
>> I think you may be the right guy to solve this. I too remember the
>days of
>> dry wire sold by the RBOCs.
>>
>> I found a structured wire fire alarm install to cost $100k for our
>> building or $20k per unit. The labor and materials are about $25k. The
>other
>> $75k is liability related costs, similar to a bike helmet, $10 in
>parts,
>> $40 in insurance. So it's not labor nor equipment that drives the
>expenses.
>> My opinion is poor people shouldn't have to pay for insurance to
>insurance
>> companies, companies that figure figures for a living.
>>
>> A digression: I could do an LMR 600 passive cable system looped with
>> Wilkinson power dividers, patch antennas and nests to protect the
>egress
>> escape ladder for about $10 to $15K. Don't need an SLA. We've
>basically
>> priced protecting human lives to only rich people.
>>
>> We need to use technology and our cleverness to fix this version of
>> "expense bloat."
>>
>> Look at Boston public water for an example. Way too expensive to pipe
>> water in from 15 miles away in the early days. So people who did it
>claimed
>> alcoholism (and that "immorality") would be eliminated by providing
>clean
>> and pure potable public water.  Alcoholics would choose pathogen-free
>water
>> over spirits. Rich people got enough water for themselves and even
>for
>> their private fountains so society stopped this initiative.
>>
>> It was a motivated doctor who taught rich people that their health
>was
>> tied to public health. And public health was being impacted because
>> pathogens being spread to poor people who didn't get potable public
>water
>> would be addressed by ubiquitous potable water supplies. The fire
>chief was
>> put in charge. See Ties That Bind
>>
>> https://upittpress.org/books/9780822961475/
>>
>> Now, in the U.S., most do get potable water even to flush a toilet.
>It's
>> taken for granted.
>>
>> I think it's on us to do similar for digital communication networks.
>> They're needed far beyond entertainment, and we need to get public
>safety
>> elements engaged too.
>>
>> Bob
>> On Mar 25, 2023, at 2:08 PM, Bruce Perens <bruce@perens.com> wrote:
>>>
>>>
>>>
>>> On Sat, Mar 25, 2023 at 1:44 PM rjmcmahon via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>> The point of the thread is that we still do not treat digital
>>>> communications infrastructure as life support critical.
>>>
>>>
>>> When I was younger there was a standard way to do this. Fire alarms
>had a
>>> dedicated pair directly to the fire department or a local alarm
>station.
>>> This wasn't dial-tone, it was a DC pair that would drop a trouble
>>> notification if DC was interrupted, and I think it would reverse
>polarity
>>> to indicate alarm. If DC was interrupted, that would also turn off
>the
>>> boiler in the building.
>>>
>>> Today my home fire alarms are wireless and have cellular back to
>their
>>> main Comcast connection, and detect CO, smoke, and temperature. This
>would
>>> not meet insurance requirements for a commercial building, they
>still have
>>> all of the sensors wired, with cellular backup.
>>>
>>> I don't think you are considering what life-support-critical digital
>>> communications would really cost. Start with metal conduit and
>>> fire-resistant wiring throughout the structure. Provide redundant
>power for
>>> *every* fan-out box (we just had a 24-hour power interruption here
>due
>>> to storms). AT&T provides 4 hour power for "Lightspeed" tombstone
>boxes
>>> that fan out telephone, beyond that a truck has to drive out and
>plug in a
>>> generator, or you are out of luck if it's a wide-area outage like we
>just
>>> had. Wire areas in a redundant loop rather than a tree. Supervise
>every
>>> home so that interruptions are serviced automatically. Provide a
>4-hour
>>> SLA.
>>>
>>> The phone company used to do what you are asking for. The high
>prices
>>> this required are the main reason that everyone has jumped off of
>using the
>>> legacy telco for telephony.
>>>
>>

[-- Attachment #2: Type: text/html, Size: 9690 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:57                                                                                       ` [LibreQoS] [Starlink] [Bloat] " Bruce Perens
@ 2023-03-25 23:33                                                                                         ` David Lang
  2023-03-25 23:38                                                                                         ` [LibreQoS] [Starlink] [Bloat] " Robert McMahon
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-25 23:33 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Robert McMahon, Rpm, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 2824 bytes --]

On Sat, 25 Mar 2023, Bruce Perens via Bloat wrote:

> On Sat, Mar 25, 2023 at 3:04 PM Robert McMahon <rjmcmahon@rjmcmahon.com>
> wrote:
>
>> My opinion is poor people shouldn't have to pay for insurance to insurance
>> companies, companies that figure figures for a living.
>>
>
> Be sure to be out there campaigning at every election. Really off-topic,
> but IMO the current scheme of health insurance keeps many people from going
> into their own business, and keeps them working for large companies. I'm
> sure that's deliberate. Valerie works for the University, which is the only
> thing that kept me under health insurance - I'm just a consultant.
> California would give me a plan today, but most states would not.
> November's heart attack cost $359K, the insurance company negotiated 60K
> away and paid the rest, charging me $125. Prices aren't going to fall
> unless we get single-payer like most civilized countries. Somebody *does *have
> to pay for health care, though, and the choices are out of your pocket, or
> in your taxes, or through inflation.

History: employer-provided health insurance started in WWII as a way for 
companies to get around government wage controls to attract employees.

I've been saying for a long time that if I could pay what the insurance 
companies pay, I wouldn't need health insurance (other than a catastrophic policy 
that didn't kick in until $10k or something like that)

my suggestion (which I spout off when the topic comes up to get more people to 
think about in the hope that the idea spreads) is that if you are willing to pay 
at the time of service (including by CC) you should not have to pay more than x% 
(50-100% could even be reasonable) more than the lowest negotiated price that 
they have with any insurance company (not counting government-run 
Medicaid/Medicare, since those aren't negotiations).

rationale
1. bill collection is expensive (including the cost of people who never 
pay), so the price that they have to charge needs to account for these losses.

2. the 'list price' is getting inflated so that they can claim that they are 
getting a huge discount (so if the actual cost of the service to the provider 
goes up 10%, and the insurance company price negotiator wants an extra 10% 
discount this year, the service provider just increases the list price by 20% 
and everyone is happy, except the person paying list price out of pocket)

3. the reason for allowing > lowest negotiated prices is that there are some 
legitimate reasons for discounts (volume, directing people to you) that don't 
come into play for individuals. Given that I've seen negotiated prices around 
10% of list price, being able to pay 15-20% of list price is still such a huge 
win that it's acceptable to still be up to double the insurance negotiated 
minimum.
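
(A minimal sketch of the pricing cap proposed above; the function name and figures are illustrative, not from any real billing system: a cash-at-service payer owes at most the lowest insurer-negotiated price plus an x% markup, never more than list price.)

```python
# Hypothetical cap on cash-at-service pricing: never more than
# (1 + markup) times the lowest negotiated rate, never more than list.
def cash_price(list_price: float, lowest_negotiated: float,
               markup: float = 0.5) -> float:
    """Out-of-pocket price capped at markup over the best negotiated rate."""
    return min(list_price, lowest_negotiated * (1.0 + markup))

# Example: list price $10,000, best negotiated rate 10% of list ($1,000).
# With a 50% markup cap, the cash payer owes $1,500 -- 15% of list,
# still up to double the insurance-negotiated minimum, as argued above.
print(cash_price(10_000, 1_000))  # 1500.0
```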

David Lang

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 23:21                                                                                         ` Robert McMahon
@ 2023-03-25 23:35                                                                                           ` David Lang
  2023-03-26  0:04                                                                                             ` Robert McMahon
  0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-25 23:35 UTC (permalink / raw)
  To: Robert McMahon
  Cc: dan, Rpm, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 273 bytes --]

On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:

> The fiber has basically infinite capacity.

in theory, but once you start aggregating it and having to pay for equipment 
that can handle the rates, your 'infinite capacity' starts to run out really 
fast.

David Lang

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 22:57                                                                                       ` [LibreQoS] [Starlink] [Bloat] " Bruce Perens
  2023-03-25 23:33                                                                                         ` [LibreQoS] [Bloat] [Starlink] " David Lang
@ 2023-03-25 23:38                                                                                         ` Robert McMahon
  1 sibling, 0 replies; 183+ messages in thread
From: Robert McMahon @ 2023-03-25 23:38 UTC (permalink / raw)
  To: Bruce Perens
  Cc: Sebastian Moeller, Dave Taht via Starlink, dan, Frantisek Borsik,
	libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 3557 bytes --]

Sorry about your healthcare experiences. It sucks it's rationed. Far from perfect. We've got a ways to go for sure. Thankfully, today's medical communities aren't using shit-laden water sources. Previous generations of leaders in their field got that right.


My view is that leaders in our industry should actually lead and stop making excuses about how the world sucks and how it's not doable. I know it sucks for many; been there, done that. Let's focus our energies every day on making it better to the extent we can.

Then go to our graves in repose with Thomas Gray's epitaph:

THE EPITAPH
Here rests his head upon the lap of Earth
       A youth to Fortune and to Fame unknown.
Fair Science frown'd not on his humble birth,
       And Melancholy mark'd him for her own.

Large was his bounty, and his soul sincere,
       Heav'n did a recompense as largely send:
He gave to Mis'ry all he had, a tear,
       He gain'd from Heav'n ('twas all he wish'd) a friend.

No farther seek his merits to disclose,
       Or draw his frailties from their dread abode,
(There they alike in trembling hope repose)
       The bosom of his Father and his God.

Bob

On Mar 25, 2023, 3:57 PM, at 3:57 PM, Bruce Perens <bruce@perens.com> wrote:
>On Sat, Mar 25, 2023 at 3:04 PM Robert McMahon
><rjmcmahon@rjmcmahon.com>
>wrote:
>
>> My opinion is poor people shouldn't have to pay for insurance to
>insurance
>> companies, companies that figure figures for a living.
>>
>
>Be sure to be out there campaigning at every election. Really
>off-topic,
>but IMO the current scheme of health insurance keeps many people from
>going
>into their own business, and keeps them working for large companies.
>I'm
>sure that's deliberate. Valerie works for the University, which is the
>only
>thing that kept me under health insurance - I'm just a consultant.
>California would give me a plan today, but most states would not.
>November's heart attack cost $359K, the insurance company negotiated
>$60K
>away and paid the rest, charging me $125. Prices aren't going to fall
>unless we get single-payer like most civilized countries. Somebody
>*does* have
>to pay for health care, though, and the choices are out of your pocket,
>or
>in your taxes, or through inflation.
>
>A digression: I could do an LMR 600 passive cable system looped with
>> Wilkinson power dividers, patch antennas and nests to protect the
>egress
>> escape ladder for about $10 to $15K. Don't need an SLA. We've
>basically
>> priced protecting human lives to only rich people.
>>
>
>If it's going indoors between the units, you need plenum-rated cable.
>LMR
>is really pricey in plenum-rated, RG-6 is more than adequate and more
>reasonably priced. RF between units is a legacy medium, though, there
>should be plenum-rated CAT-8. The dividers can be of the sort specified
>for
>cable TV. Wilkinson would be overkill and this is just a tiny toroid
>transformer in the box.
>
>The very best way to future proof is not with any sort of wire or
>fiber,
>but with conduit with lots of room, that can be re-pulled.
>
>I think it's on us to do similar for digital communication networks.
>> They're needed far beyond entertainment, and we need to get public
>safety
>> elements engaged too.
>>
>
>I'm really dubious. Anyone who has to cope with the cost is going to
>hear
>the siren call of wireless no matter how inappropriate it is to the
>task.
>You will be lucky if fiber makes it to urban buildings.

[-- Attachment #2: Type: text/html, Size: 5324 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 23:35                                                                                           ` [LibreQoS] [Bloat] [Starlink] " David Lang
@ 2023-03-26  0:04                                                                                             ` Robert McMahon
  2023-03-26  0:07                                                                                               ` Nathan Owens
  2023-03-26  0:28                                                                                               ` David Lang
  0 siblings, 2 replies; 183+ messages in thread
From: Robert McMahon @ 2023-03-26  0:04 UTC (permalink / raw)
  To: David Lang
  Cc: dan, Rpm, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 677 bytes --]

The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go

Bob

On Mar 25, 2023, 4:35 PM, at 4:35 PM, David Lang <david@lang.hm> wrote:
>On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>
>> The fiber has basically infinite capacity.
>
>in theory, but once you start aggregating it and having to pay for
>equipment
>that can handle the rates, your 'infinite capacity' starts to run out
>really
>fast.
>
>David Lang
>
>------------------------------------------------------------------------
>
>_______________________________________________
>Bloat mailing list
>Bloat@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/bloat

[-- Attachment #2: Type: text/html, Size: 1124 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  0:04                                                                                             ` Robert McMahon
@ 2023-03-26  0:07                                                                                               ` Nathan Owens
  2023-03-26  0:50                                                                                                 ` Robert McMahon
  2023-03-26  8:45                                                                                                 ` Livingood, Jason
  2023-03-26  0:28                                                                                               ` David Lang
  1 sibling, 2 replies; 183+ messages in thread
From: Nathan Owens @ 2023-03-26  0:07 UTC (permalink / raw)
  To: Robert McMahon
  Cc: David Lang, Dave Taht via Starlink, dan, Frantisek Borsik,
	Bruce Perens, libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 1335 bytes --]

Comcast's 6Gbps service is a niche product with probably <1000 customers.
It requires knowledge and persistence from the customer to actually get it
installed, a process that can take many months (It's basically MetroE). It
requires you to be within 1760ft of available fiber, with some limit on
install cost if trenching is required. In some cases, you may be able to
trench yourself, or cover some of the costs (usually thousands to tens of
thousands).

On Sat, Mar 25, 2023 at 5:04 PM Robert McMahon via Bloat <
bloat@lists.bufferbloat.net> wrote:

> The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go
>
> Bob
> On Mar 25, 2023, at 4:35 PM, David Lang <david@lang.hm> wrote:
>>
>> On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>>
>>  The fiber has basically infinite capacity.
>>>
>>
>> in theory, but once you start aggregating it and having to pay for equipment
>> that can handle the rates, your 'infinite capacity' starts to run out really
>> fast.
>>
>> David Lang
>>
>> ------------------------------
>>
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 2316 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  0:04                                                                                             ` Robert McMahon
  2023-03-26  0:07                                                                                               ` Nathan Owens
@ 2023-03-26  0:28                                                                                               ` David Lang
  2023-03-26  0:57                                                                                                 ` Robert McMahon
  1 sibling, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-26  0:28 UTC (permalink / raw)
  To: Robert McMahon
  Cc: David Lang, dan, Rpm, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

No, the primary cost (other than laying the fiber) is in the electronics to 
route the packets around once they leave the optics, and the upstream bandwidth 
and peering to other ISPs.

laying the fiber is expensive, optics are trivially cheap in comparison, but 
while the theoretical bandwidth of the fiber is huge, that's only for the one 
hop; once you get past that hop and have to deal with the aggregate bandwidth of 
multiple endpoints, something has to give.
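That aggregation point can be put in numbers with a toy oversubscription calculation (a sketch only; the subscriber count, access rate, and uplink size below are hypothetical, not figures from this thread):

```python
def oversubscription_ratio(subscribers: int, access_rate_gbps: float,
                           uplink_gbps: float) -> float:
    """Ratio of total sold access capacity to the shared upstream capacity."""
    return (subscribers * access_rate_gbps) / uplink_gbps

# Hypothetical aggregation point: 512 subscribers sold 1 Gb/s each,
# all funneled through a single 100 Gb/s uplink.
ratio = oversubscription_ratio(512, 1.0, 100.0)
print(f"{ratio:.2f}:1")  # 5.12:1 -- the per-hop "infinite" capacity is gone
```

Any ratio above 1:1 means the access network is betting that subscribers don't all peak at once; the fiber itself is never the constraint, the shared uplink and routing gear are.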

David Lang

On Sat, 25 Mar 2023, Robert McMahon wrote:

> The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go
>
> Bob
>
> On Mar 25, 2023, 4:35 PM, at 4:35 PM, David Lang <david@lang.hm> wrote:
>> On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>>
>>> The fiber has basically infinite capacity.
>>
>> in theory, but once you start aggregating it and having to pay for
>> equipment
>> that can handle the rates, your 'infinite capacity' starts to run out
>> really
>> fast.
>>
>> David Lang
>>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  0:07                                                                                               ` Nathan Owens
@ 2023-03-26  0:50                                                                                                 ` Robert McMahon
  2023-03-26  8:45                                                                                                 ` Livingood, Jason
  1 sibling, 0 replies; 183+ messages in thread
From: Robert McMahon @ 2023-03-26  0:50 UTC (permalink / raw)
  To: Nathan Owens
  Cc: David Lang, Dave Taht via Starlink, dan, Frantisek Borsik,
	Bruce Perens, libreqos, Rpm, bloat

[-- Attachment #1: Type: text/plain, Size: 2602 bytes --]

Our building is 100 meters from multiple fallow strands. I've got the kmz map from the dark fiber guys.

The Juniper switch was designed in 2012 and used old mfg processes, not sure of the nanometers but likely 28. Newer ASICs improve power per bit per distance dramatically simply by using current foundries' 5 nm (on the way to 3 nm) processes. Then on top of that there is a lot of NRE to further reduce power. That's why a 100 Gb/s link without gearboxes can run at 1 W for all parts, SerDes, laser, etc., and at distance. Not 5 kW like a tower.
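The power comparison works out to several orders of magnitude in energy per bit. A back-of-the-envelope sketch (the 1 W / 100 Gb/s optic and 5 kW tower power figures are from the paragraph above; the tower's aggregate throughput is a hypothetical placeholder):

```python
def picojoules_per_bit(power_watts: float, throughput_bps: float) -> float:
    """Energy per transported bit in picojoules (1 W = 1 J/s)."""
    return power_watts / throughput_bps * 1e12

# 100 Gb/s optical link drawing ~1 W (figure from the message above)
optic = picojoules_per_bit(1.0, 100e9)       # ~10 pJ/bit

# 5 kW tower pushing, say, 1 Gb/s aggregate (hypothetical throughput)
tower = picojoules_per_bit(5_000.0, 1e9)     # ~5,000,000 pJ/bit

print(f"optic: {optic:.0f} pJ/bit, tower: {tower:.0f} pJ/bit")
```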

2 Gb/s to 6 Gb/s really adds nothing. My point was to see how flexible they were in optics per a customer ask. I suspect both use 10 Gb/s optics and rate limiting. 10 Gb/s optic parts are a decade old now. No improvements are coming in 10 Gb/s. Kinda like buying one of the last incandescent bulbs. Best to go LED if possible.

FiWi connected via 100 Gb/s is the answer for the next ten years. The last-mile providers will figure it out if given quality information, if they ask the right questions, and if they demand the optimal metrics be used. They may have fallen victim to their own marketing. Hard to know from the outside.

Bob

On Mar 25, 2023, 5:07 PM, at 5:07 PM, Nathan Owens <nathan@nathan.io> wrote:
>Comcast's 6Gbps service is a niche product with probably <1000
>customers.
>It requires knowledge and persistence from the customer to actually get
>it
>installed, a process that can take many months (It's basically MetroE).
>It
>requires you to be within 1760ft of available fiber, with some limit on
>install cost if trenching is required. In some cases, you may be able
>to
>trench yourself, or cover some of the costs (usually thousands to tens
>of
>thousands).
>
>On Sat, Mar 25, 2023 at 5:04 PM Robert McMahon via Bloat <
>bloat@lists.bufferbloat.net> wrote:
>
>> The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go
>>
>> Bob
>> On Mar 25, 2023, at 4:35 PM, David Lang <david@lang.hm> wrote:
>>>
>>> On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>>>
>>>  The fiber has basically infinite capacity.
>>>>
>>>
>>> in theory, but once you start aggregating it and having to pay for
>equipment
>>> that can handle the rates, your 'infinite capacity' starts to run
>out really
>>> fast.
>>>
>>> David Lang
>>>
>>> ------------------------------
>>>
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>

[-- Attachment #2: Type: text/html, Size: 4106 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  0:28                                                                                               ` David Lang
@ 2023-03-26  0:57                                                                                                 ` Robert McMahon
  0 siblings, 0 replies; 183+ messages in thread
From: Robert McMahon @ 2023-03-26  0:57 UTC (permalink / raw)
  To: David Lang
  Cc: dan, Rpm, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 1751 bytes --]

Sure, this isn't about peering. It's about treating last-mile infrastructure as critical infrastructure and paying those who work on it, construct it, maintain it, and manage it to meet high standards, like we try to do for hospitals and water supplies.

Peering, ad insertions, etc. are important but not relevant to this analysis unless I'm missing something.

Bob

On Mar 25, 2023, 5:28 PM, at 5:28 PM, David Lang <david@lang.hm> wrote:
>No, the primary cost (other than laying the fiber) is in the
>electronics to
>route the packets around once they leave the optics, and the upstream
>bandwidth
>and peering to other ISPs.
>
>laying the fiber is expensive, optics are trivially cheap in
>comparison, but
>while the theoretical bandwidth of the fiber is huge, that's only for
>the one
>hop, once you get past that hop and have to deal with the aggregate
>bandwidth of
>multiple endpoints, something has to give.
>
>David Lang
>
>On Sat, 25 Mar 2023, Robert McMahon wrote:
>
>> The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go
>>
>> Bob
>>
>> On Mar 25, 2023, 4:35 PM, at 4:35 PM, David Lang <david@lang.hm>
>wrote:
>>> On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>>>
>>>> The fiber has basically infinite capacity.
>>>
>>> in theory, but once you start aggregating it and having to pay for
>>> equipment
>>> that can handle the rates, your 'infinite capacity' starts to run
>out
>>> really
>>> fast.
>>>
>>> David Lang
>>>
>>>
>------------------------------------------------------------------------
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>

[-- Attachment #2: Type: text/html, Size: 2542 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  0:07                                                                                               ` Nathan Owens
  2023-03-26  0:50                                                                                                 ` Robert McMahon
@ 2023-03-26  8:45                                                                                                 ` Livingood, Jason
  2023-03-26 18:54                                                                                                   ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Livingood, Jason @ 2023-03-26  8:45 UTC (permalink / raw)
  To: Nathan Owens, Robert McMahon
  Cc: Rpm, dan, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 2665 bytes --]

Happy to help (you can ping me off-list). The main products are DOCSIS and PON these days and it kind of depends where you are, whether it is a new build, etc. As others said, it gets super complicated in MDUs and the infrastructure in place and the building agreements vary quite a bit.

Jason

From: Bloat <bloat-bounces@lists.bufferbloat.net> on behalf of Nathan Owens via Bloat <bloat@lists.bufferbloat.net>
Reply-To: Nathan Owens <nathan@nathan.io>
Date: Sunday, March 26, 2023 at 09:07
To: Robert McMahon <rjmcmahon@rjmcmahon.com>
Cc: Rpm <rpm@lists.bufferbloat.net>, dan <dandenson@gmail.com>, Frantisek Borsik <frantisek.borsik@gmail.com>, Bruce Perens <bruce@perens.com>, libreqos <libreqos@lists.bufferbloat.net>, Dave Taht via Starlink <starlink@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

Comcast's 6Gbps service is a niche product with probably <1000 customers. It requires knowledge and persistence from the customer to actually get it installed, a process that can take many months (It's basically MetroE). It requires you to be within 1760ft of available fiber, with some limit on install cost if trenching is required. In some cases, you may be able to trench yourself, or cover some of the costs (usually thousands to tens of thousands).

On Sat, Mar 25, 2023 at 5:04 PM Robert McMahon via Bloat <bloat@lists.bufferbloat.net> wrote:
The primary cost is the optics. That's why they're pluggable (SFP) and pay-as-you-go
Bob
On Mar 25, 2023, at 4:35 PM, David Lang <david@lang.hm> wrote:

On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:

 The fiber has basically infinite capacity.

in theory, but once you start aggregating it and having to pay for equipment
that can handle the rates, your 'infinite capacity' starts to run out really
fast.

David Lang

________________________________

Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

[-- Attachment #2: Type: text/html, Size: 6745 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 20:43                                                                                 ` rjmcmahon
  2023-03-25 21:08                                                                                   ` [LibreQoS] [Starlink] " Bruce Perens
@ 2023-03-26 10:34                                                                                   ` Sebastian Moeller
  2023-03-26 18:12                                                                                     ` rjmcmahon
  2023-03-26 20:57                                                                                     ` David Lang
  1 sibling, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-26 10:34 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Rpm, dan, Frantisek Borsik, brandon, libreqos,
	Dave Taht via Starlink, bloat

Hi Bob,


> On Mar 25, 2023, at 21:43, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
> It's not just one phone call. I've been figuring this out for about two years now. I've been working with some strategic people in Boston, colos & dark fiber providers, and professional installers that wired up many of the Boston universities, some universities themselves to offer co-ops to students to run networks, trainings for DIC and other high value IoT offerings, blue collar principals (with staffs of about 100) to help them learn to install fiber and provide better jobs for their employees.
> 
> My conclusion is that Comcast is best suited for the job as the broadband provider, at least in Boston, for multiple reasons. One chat isn't going to block me ;)

	Yes, but they clearly are not the party best selected to do the internal wiring... this is a question of incentives and cost... if you pay their technicians by the hour to do the internal wiring according to your plan (assuming that they would accept that) then your goals are aligned; if the cost of the installation is to be carried by the ISP, they likely are motivated to do the kind of job I saw in California*.
	Over here the situation is slightly different: in-house cabling from the first demarcation socket (which is considered to be ISP owned) is clearly the responsibility of the owner/resident, not the ISP. ISPs offer to route cables, but on a per-hour basis, or for MDUs they often used to make contracts with the owner that they would build the internal wiring (in an agreed-upon fashion) for the right to be sole provider of e.g. cable TV services (with the cable fees mandatorily folded into the rent) for a fixed multi-year period (10-15 years IIRC); after that the plant would end up the property of the building owner. Recent changes in law made the "mandatory cable fees as part of the rent" much harder/impossible, turning the in-house wiring back into an owner/resident problem.


> 
> The point of the thread is that we still do not treat digital communications infrastructure as life support critical.

	Well, let's keep things in perspective, unlike power, water (fresh and waste), and often gas, communications infrastructure is mostly not critical yet. But I agree that we are clearly on a path in that direction, so it is time to look at that from a different perspective. 
	Personally, I am a big fan of putting the access network into communal hands, as these guys already do a decent job with other critical infrastructure (see list above, plus roads), and I see a PtP fiber access network terminating in some CO-like locations as a viable way to allow ISPs to compete in the internet service field, all the while using the communally built access network for a fee. IIRC this is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has been essentially unchanged for decades, I estimate that current fiber access lines would also last for decades requiring no active component changes in the field, making them candidates for communal management. (With all my love for communal ownership and maintenance, these typically are not very nimble and hence best when we talk about lifetimes of decades.)


> It reminds me of Elon Musk and his claims on FSD.

	;) I had to look up FSD, I guess full self driving (aka pie-in-the-sky)?


> I could do the whole thing myself - but that's not going to achieve what's needed. We need systems that our loved ones can call and those systems will care for them. Similar to how the medical community works, though imperfect, in caring for our loved one's and their healths.

	I think I get your point. The question is how do we get from where we are now to that place you are describing here and in the FiWi concept?


> I think we all are responsible for changing our belief sets & developing ourselves to better serve others. Most won't act until they can actually see what's possible. So let's start to show them.

	Sure, having real implemented examples always helps!

Regards
	Sebastian


> 
> Bob


P.S.: Bruce's point about placing ducts/conduits seems like the only way to gain some future-proofness. For multi-story and/or multi-dweller units this introduces the question of how to stop fire using these conduits to "jump" between levels, but I assume that is a solved problem already, and can be squelched by throwing money in its direction.



*) IIRC a Charter technician routed coaxial cable on the outside of the two-story building and drilled through the (wooden) wall to set the cable socket inside, all the while casually cutting the Dish coaxial cable that was still connected to a satellite dish... Not that I cared, we were using ADSL at the time, and in accordance with the old "when in Rome..." rule, I bridged over the deteriorated in-house phone wiring by running a 30m Cat5 cable on the outside of the building to the first hand-over box.


> 
>> Hi Bob,
>> somewhat sad. Have you considered that your described requirements and
>> the use-case might be outside of the mass-market envelope for which
>> the big ISPs tailor/rig their processes? Maybe, not sure that is an
>> option, if you approach this as a "business"* asking for a fiber
>> uplink for an already "wired" 5 unit property you might get better
>> service? You still would need to do the in-house re-wiring, but you
>> likely would avoid scripted hot-lines that hang up when in the
>> allotted time the agent sees little chance of "closing" the call. All
>> (big) ISPs I know treat hotline as a cost factor and not as the first
>> line of customer retention...
>> I would also not be amazed if Boston had smaller ISPs that are willing
>> and able to listen to customers (but that might be a bit more
>> expensive than the big ISPs).
>> That or try to get your foot into Comcast's PR department to sell them
>> on the "reference installation" for all Boston historic buildings, so
>> they can offset the custom tailoring effort with the expected good
>> press of doing the "right thing" publicly.
>> Good luck
>> 	Sebastian
>> *) I understand you are not, but I assume the business units to have
>> more leeway to actually offer more bespoke solutions than the likely
>> cost-optimized to Mars and back residential customer unit.
>>> On Mar 25, 2023, at 20:39, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
>>> Hi All,
>>> I've been trying to modernize a building in Boston where I'm an HOA board member over the last 18 mos. I perceive the broadband network as a critical infrastructure to our 5 unit building.
>>> Unfortunately, Comcast staff doesn't seem to agree. The agent basically closed the chat on me mid-stream (chat attached.) I've been at this for about 18 mos now.
>>> While I think bufferbloat is a big issue, the bigger issue is that our last-mile providers must change their cultures to understand that life support use cases that require proper pathways, conduits & cabling can no longer be ignored. These buildings have coaxial thrown over the exterior walls done in the 80s then drilling holes without consideration of structures. This and the lack of environmental protections for our HOA's critical infrastructure is disheartening. It's past time to remove this shoddy work on our building and all buildings in Boston as well as across the globe.
>>> My hope was by now I'd have shown through actions what a historic building in Boston looks like when we, as humans in our short lives, act as both stewards of history and as responsible guardians to those that share living spaces and neighborhoods today & tomorrow. Motivating humans to better serve one another is hard.
>>> Bob<comcast.pdf>_______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-26 10:34                                                                                   ` [LibreQoS] [Bloat] " Sebastian Moeller
@ 2023-03-26 18:12                                                                                     ` rjmcmahon
  2023-03-26 20:57                                                                                     ` David Lang
  1 sibling, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-26 18:12 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Rpm, dan, Frantisek Borsik, brandon, libreqos,
	Dave Taht via Starlink, bloat

On who pays & does the internal wiring: I agree & agree. This is a capex 
spend and asset improvement so payments come from the property owner(s) 
somehow. My thoughts are this is a new industry for the trades. My 
interactions with many in their 20s suggest that starting or working for 
a fiber & wifi install company is something they'd like.

On investor owned vs publicly owned: The broadband providers in the U.S. 
are mostly investor owned. Our water supplies are publicly owned but in 
Europe mostly privately owned. Similar, but not exact, for medical. 
These outcomes are per critical junctures, e.g. London was bombed in 
WWII but the U.S. really wasn't so British society turned to their govt 
to provide the NHS. So, I don't think there is a universal answer.

Over the last 20+ years in the U.S., the major investment in broadband 
has been investor-owned companies and not from the regulated RBOCs.

Note: The point of this thread is to show the state of where we are now 
to help us focus our energies on the next ten years. That's what my plan 
is, leave things a little better than what it was when we showed up.

Bob

> Hi Bob,
> 
> 
>> On Mar 25, 2023, at 21:43, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>> 
>> It's not just one phone call. I've been figuring this out for about 
>> two years now. I've been working with some strategic people in Boston, 
>> colos & dark fiber providers, and professional installers that wired 
>> up many of the Boston universities, some universities themselves to 
>> offer co-ops to students to run networks, trainings for DIC and other 
>> high value IoT offerings, blue collar principals (with staffs of about 
>> 100) to help them learn to install fiber and provide better jobs for 
>> their employees.
>> 
>> My conclusion is that Comcast is best suited for the job as the 
>> broadband provider, at least in Boston, for multiple reasons. One chat 
>> isn't going to block me ;)
> 
> 	Yes, but they clearly are not the party best selected to do the
> internal wiring... this is a question of incentives and cost... if you
> pay their technicians by the hour to do the internal wiring according
> to your plan (assuming that they would accept that) then your goals
> are aligned, if the cost of the installation is to be carried by the
> ISP, they likely are motivated to do the kind of job I saw in
> California*.
> 	Over here the situation is slightly different, in-house cabling from
> the first demarking socket (which is considered to be ISP owned) is
> clearly the responsibility of the owner/resident not the ISP. ISPs
> offer to route cables, but on a per-hour basis, or for MDUs often used
> to make contracts with the owner that they would build the internal
> wiring (in an agreed upon fashion) for the right to be sole provider
> of e.g. cable TV services (with the cable fees mandatorily folded into
> the rent) for a fixed multi-year period (10-15 IIRC), after that the
> plant would end up the property of the building owner. Recent changes in
> law made the "mandatory cable fees as part of the rent" much
> harder/impossible, turning the in-house wiring back into an
> owner/resident problem.
> 
> 
>> 
>> The point of the thread is that we still do not treat digital 
>> communications infrastructure as life support critical.
> 
> 	Well, let's keep things in perspective, unlike power, water (fresh
> and waste), and often gas, communications infrastructure is mostly not
> critical yet. But I agree that we are clearly on a path in that
> direction, so it is time to look at that from a different perspective.
> 	Personally, I am a big fan of putting the access network into
> communal hands, as these guys already do a decent job with other
> critical infrastructure (see list above, plus roads) and I see a PtP
> fiber access network terminating in some CO-like locations as a viable
> way to allow ISPs to compete in the internet service field, all the
> while using the communally built access network for a fee. IIRC this
> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
> been essentially unchanged for decades, I estimate that current fiber
> access lines would also last for decades requiring no active component
> changes in the field, making them candidates for communal management.
> (With all my love for communal ownership and maintenance, these
> typically are not very nimble and hence best when we talk about life
> times of decades).
> 
> 
>> It reminds me of Elon Musk and his claims on FSD.
> 
> 	;) I had to look up FSD, I guess full self driving (aka 
> pie-in-the-sky)?
> 
> 
>> I could do the whole thing myself - but that's not going to achieve 
>> what's needed. We need systems that our loved ones can call and those 
>> systems will care for them. Similar to how the medical community 
>> works, though imperfect, in caring for our loved one's and their 
>> healths.
> 
> 	I think I get your point. The question is how do we get from where we
> are now to that place you are describing here and in the FiWi
> concept?
> 
> 
>> I think we all are responsible for changing our belief sets & 
>> developing ourselves to better serve others. Most won't act until they 
>> can actually see what's possible. So let's start to show them.
> 
> 	Sure, having real implemented examples always helps!
> 
> Regards
> 	Sebastian
> 
> 
>> 
>> Bob
> 
> 
> P.S.: Bruce's point about placing ducts/conduits seems like the only
> way to gain some future-proofness. For multi-story and/or
> multi-dweller units this introduces the question of how to stop fire
> using these conduits to "jump" between levels, but I assume that is a
> solved problem already, and can be squelched by throwing money in
> its direction.
> 
> 
> 
> *) IIRC a Charter technician routed coaxial cable on the outside of
> the two-story building and drilled through the (wooden) wall to set
> the cable socket inside, all the while casually cutting the Dish
> coaxial cable that was still connected to a satellite dish... Not that
> I cared, we were using ADSL at the time, and in accordance with the
> old "when in Rome..." rule, I bridged over the deteriorated in-house
> phone wiring by running a 30m Cat5 cable on the outside of the
> building to the first hand-over box.
> 
> 
>> 
>>> Hi Bob,
>>> somewhat sad. Have you considered that your described requirements 
>>> and
>>> the use-case might be outside of the mass-market envelope for which
>>> the big ISPs tailor/rig their processes? Maybe, not sure that is an
>>> option, if you approach this as a "business"* asking for a fiber
>>> uplink for an already "wired" 5 unit property you might get better
>>> service? You still would need to do the in-house re-wiring, but you
>>> likely would avoid scripted hot-lines that hang up when in the
>>> allotted time the agent sees little chance of "closing" the call. All
>>> (big) ISPs I know treat hotline as a cost factor and not as the first
>>> line of customer retention...
>>> I would also not be amazed if Boston had smaller ISPs that are 
>>> willing
>>> and able to listen to customers (but that might be a bit more
>>> expensive than the big ISPs).
>>> That or try to get your foot into Comcast's PR department to sell 
>>> them
>>> on the "reference installation" for all Boston historic buildings, so
>>> they can offset the custom tailoring effort with the expected good
>>> press of doing the "right thing" publicly.
>>> Good luck
>>> 	Sebastian
>>> *) I understand you are not, but I assume the business units to have
>>> more leeway to actually offer more bespoke solutions than the likely
>>> cost-optimized to Mars and back residential customer unit.
>>>> On Mar 25, 2023, at 20:39, rjmcmahon via Bloat 
>>>> <bloat@lists.bufferbloat.net> wrote:
>>>> Hi All,
>>>> I've been trying to modernize a building in Boston where I'm an HOA 
>>>> board member over the last 18 mos. I perceive the broadband network 
>>>> as a critical infrastructure to our 5 unit building.
>>>> Unfortunately, Comcast staff doesn't seem to agree. The agent 
>>>> basically closed the chat on me mid-stream (chat attached.) I've 
>>>> been at this for about 18 mos now.
>>>> While I think bufferbloat is a big issue, the bigger issue is that 
>>>> our last-mile providers must change their cultures to understand 
>>>> that life support use cases that require proper pathways, conduits & 
>>>> cabling can no longer be ignored. These buildings have coaxial 
>>>> thrown over the exterior walls, done in the 80s, with holes drilled 
>>>> without consideration of the structures. This and the lack of 
>>>> environmental protections for our HOA's critical infrastructure is 
>>>> disheartening. It's past time to remove this shoddy work on our 
>>>> building and all buildings in Boston as well as across the globe.
>>>> My hope was by now I'd have shown through actions what a historic 
>>>> building in Boston looks like when we, as humans in our short lives, 
>>>> act as both stewards of history and as responsible guardians of 
>>>> those who share living spaces and neighborhoods today & tomorrow. 
>>>> Motivating humans to better serve one another is hard.
>>>> Bob<comcast.pdf>_______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-25 23:20                                                                                       ` [LibreQoS] [Bloat] [Starlink] " David Lang
@ 2023-03-26 18:29                                                                                         ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-26 18:29 UTC (permalink / raw)
  To: David Lang
  Cc: Bruce Perens, Rpm, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

I don't think so. The govt. just bailed out SVB for billionaires who 
were woefully underinsured. The claim is that it protected our financial 
system. Their risk officers didn't price in inflation and those impacts, 
i.e. they eliminated insurance without eliminating the liability.

Texas govt sells windstorm insurance https://www.twia.org/ so the real 
estate industry will build houses in hurricane-prone areas. Society is 
good with that.

Liabilities that will stop people from installing quality FiWi fire 
alarms are a failure that needs to be fixed too.

We've got a lot of ground to cover.

Bob

> if you want to eliminate insurance, then you need to eliminate the
> liability, which I don't think you want to do if you want to claim
> that this is 'life critical'
> 
> David Lang


* Re: [LibreQoS] [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat
  2023-03-26  8:45                                                                                                 ` Livingood, Jason
@ 2023-03-26 18:54                                                                                                   ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-26 18:54 UTC (permalink / raw)
  To: Livingood, Jason
  Cc: Nathan Owens, Rpm, dan, Frantisek Borsik, Bruce Perens, libreqos,
	Dave Taht via Starlink, bloat

Thanks for this. Yeah, I can understand MDUs are complex and present 
unique issues both for their boards and for the companies that service 
them: condo trusts, LLC non-profits, co-ops, etc. Too many attorneys to 
boot. My attorney fees cost more than training youth to install FiWi infra. 
The expensive, existing cos are asking $80K per building. The fire alarm 
installer is asking $100K per building. I figure we can get both for 
less than $180K but it's going to take some figuring out. And once we 
sink the money, it needs to be world-class with swappable parts. Others 
may then notice and follow suit.

Then for dark fiber to a private colo about 1.5 miles away the ask is 
$5K per month. Buy my own switch and SFPs. Peering and ISP services are 
not included.

So I do see the value Comcast brings. I think a challenge is that 
different options are needed for different customers. That's why I think 
pluggable optics, SerDes, and CMOS radios are critical to the design for 
when we eventually go full fiber & wireless for the last meters.

Bob
> Happy to help (you can ping me off-list). The main products are DOCSIS
> and PON these days and it kind of depends where you are, whether it is
> a new build, etc. As others said, it gets super complicated in MDUs
> and the infrastructure in place and the building agreements vary quite
> a bit.
> 
> Jason
> 
> From: Bloat <bloat-bounces@lists.bufferbloat.net> on behalf of Nathan
> Owens via Bloat <bloat@lists.bufferbloat.net>
> Reply-To: Nathan Owens <nathan@nathan.io>
> Date: Sunday, March 26, 2023 at 09:07
> To: Robert McMahon <rjmcmahon@rjmcmahon.com>
> Cc: Rpm <rpm@lists.bufferbloat.net>, dan <dandenson@gmail.com>,
> Frantisek Borsik <frantisek.borsik@gmail.com>, Bruce Perens
> <bruce@perens.com>, libreqos <libreqos@lists.bufferbloat.net>, Dave
> Taht via Starlink <starlink@lists.bufferbloat.net>, bloat
> <bloat@lists.bufferbloat.net>
> Subject: Re: [Bloat] [Starlink] On fiber as critical infrastructure
> w/Comcast chat
> 
> Comcast's 6Gbps service is a niche product with probably <1000
> customers. It requires knowledge and persistence from the customer to
> actually get it installed, a process that can take many months (It's
> basically MetroE). It requires you to be within 1760ft of available
> fiber, with some limit on install cost if trenching is required. In
> some cases, you may be able to trench yourself, or cover some of the
> costs (usually thousands to tens of thousands).
> 
> On Sat, Mar 25, 2023 at 5:04 PM Robert McMahon via Bloat
> <bloat@lists.bufferbloat.net> wrote:
> 
>> The primary cost is the optics. That's why they're pluggable in SFP
>> and pay as you go.
>> 
>> Bob
>> 
>> On Mar 25, 2023, at 4:35 PM, David Lang <david@lang.hm> wrote:
>> 
>> On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:
>> 
>> The fiber has basically infinite capacity.
>> 
>> in theory, but once you start aggregating it and having to pay for
>> equipment
>> that can handle the rates, your 'infinite capacity' starts to run
>> out really
>> fast.
>> 
>> David Lang
>> 
>> -------------------------
>> 
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat [1]
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat [1]
> 
> Links:
> ------
> [1] 
https://lists.bufferbloat.net/listinfo/bloat


* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-26 10:34                                                                                   ` [LibreQoS] [Bloat] " Sebastian Moeller
  2023-03-26 18:12                                                                                     ` rjmcmahon
@ 2023-03-26 20:57                                                                                     ` David Lang
  2023-03-26 21:11                                                                                       ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-26 20:57 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: rjmcmahon, Dave Taht via Starlink, dan, Frantisek Borsik,
	brandon, libreqos, Rpm, bloat

On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:

>> The point of the thread is that we still do not treat digital communications infrastructure as life support critical.
>
> 	Well, let's keep things in perspective, unlike power, water (fresh and waste), and often gas, communications infrastructure is mostly not critical yet. But I agree that we are clearly on a path in that direction, so it is time to look at that from a different perspective.
> 	Personally, I am a big fan of putting the access network into communal hands, as these guys already do a decent job with other critical infrastructure (see list above, plus roads) and I see a PtP fiber access network terminating in some CO-like locations as a viable way to allow ISPs to compete in the internet service field all the while using the communally built access network for a fee. IIRC this is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has been essentially unchanged for decades, I estimate that current fiber access lines would also last for decades requiring no active component changes in the field, making them candidates for communal management. (With all my love for communal ownership and maintenance, these typically are not very nimble and hence best when we talk about lifetimes of decades).

This is happening in some places (the town where I live is doing such a 
rollout), but the incumbent ISPs are fighting this and in many states have 
gotten laws created that prohibit towns from building such systems.

David Lang


* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-26 20:57                                                                                     ` David Lang
@ 2023-03-26 21:11                                                                                       ` Sebastian Moeller
  2023-03-26 21:26                                                                                         ` David Lang
  2023-03-28 17:06                                                                                         ` [LibreQoS] [Starlink] " Larry Press
  0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-26 21:11 UTC (permalink / raw)
  To: David Lang
  Cc: rjmcmahon, Dave Taht via Starlink, dan, Frantisek Borsik,
	brandon, libreqos, bloat

Hi David,


> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
> 
> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
> 
>>> The point of the thread is that we still do not treat digital communications infrastructure as life support critical.
>> 
>> 	Well, let's keep things in perspective, unlike power, water (fresh and waste), and often gas, communications infrastructure is mostly not critical yet. But I agree that we are clearly on a path in that direction, so it is time to look at that from a different perspective.
>> 	Personally, I am a big fan of putting the access network into communal hands, as these guys already do a decent job with other critical infrastructure (see list above, plus roads) and I see a PtP fiber access network terminating in some CO-like locations as a viable way to allow ISPs to compete in the internet service field all the while using the communally built access network for a fee. IIRC this is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has been essentially unchanged for decades, I estimate that current fiber access lines would also last for decades requiring no active component changes in the field, making them candidates for communal management. (With all my love for communal ownership and maintenance, these typically are not very nimble and hence best when we talk about lifetimes of decades).
> 
> This is happening in some places (the town where I live is doing such a rollout), but the incumbent ISPs are fighting this and in many states have gotten laws created that prohibit towns from building such systems.

	A resistance that in the current system is understandable*... btw, my point is not wanting to get rid of ISPs, I really just think that the access network is more of a natural monopoly and if we want actual ISP competition, the access network is the wrong place to implement it... as it is unlikely that we will see multiple ISPs running independent fibers to all/most dwelling units... There are two ways I see to address this structural problem:
a) require ISPs to rent the access links to their competitors for "reasonable" prices
b) as I proposed have some non-ISP entity build and maintain the access network

None of these is terribly attractive to current ISPs, but we already see how the economically more attractive PON approach throws a spanner into a), on a PON the competitors might get bitstream access, but will not be able to "light up" the fiber any way they see fit (as would be possible in a PtP deployment, at least in theory). My subjective preference is b) as I mentioned before, as I think that would offer a level playing field for ISPs to compete doing what they do best, offer internet access service while not pushing the cost of the access network build-out to all-fiber onto the ISPs. This would allow a fairer, less revenue driven approach to select which areas to convert to FTTH first....

However this is pretty much orthogonal to Bob's idea, as I understand it, as this subthread really is only about getting houses hooked up to the internet and ignores his proposal for how to do the in-house network design in a future-proof way...

Regards
	Sebastian


*) I am not saying such resistance is nice or the right thing, just that I can see why it is happening.


> 
> David Lang



* Re: [LibreQoS] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-26 21:11                                                                                       ` Sebastian Moeller
@ 2023-03-26 21:26                                                                                         ` David Lang
  2023-03-28 17:06                                                                                         ` [LibreQoS] [Starlink] " Larry Press
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-26 21:26 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: David Lang, rjmcmahon, Dave Taht via Starlink, dan,
	Frantisek Borsik, brandon, libreqos, bloat

On Sun, 26 Mar 2023, Sebastian Moeller wrote:

>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>>
>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>>
>>>> The point of the thread is that we still do not treat digital communications infrastructure as life support critical.
>>>
>>> 	Well, let's keep things in perspective, unlike power, water (fresh and waste), and often gas, communications infrastructure is mostly not critical yet. But I agree that we are clearly on a path in that direction, so it is time to look at that from a different perspective.
>>> 	Personally, I am a big fan of putting the access network into communal hands, as these guys already do a decent job with other critical infrastructure (see list above, plus roads) and I see a PtP fiber access network terminating in some CO-like locations as a viable way to allow ISPs to compete in the internet service field all the while using the communally built access network for a fee. IIRC this is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has been essentially unchanged for decades, I estimate that current fiber access lines would also last for decades requiring no active component changes in the field, making them candidates for communal management. (With all my love for communal ownership and maintenance, these typically are not very nimble and hence best when we talk about lifetimes of decades).
>>
>> This is happening in some places (the town where I live is doing such a rollout), but the incumbent ISPs are fighting this and in many states have gotten laws created that prohibit towns from building such systems.
>
> 	A resistance that in the current system is understandable*... btw, my 
> point is not wanting to get rid of ISPs, I really just think that the access 
> network is more of a natural monopoly and if we want actual ISP competition, 
> the access network is the wrong place to implement it... as it is unlikely 
> that we will see multiple ISPs running independent fibers to all/most dwelling 
> units... There are two ways I see to address this structural problem:
>
> a) require ISPs to rent the access links to their competitors for "reasonable" prices
> b) as I proposed have some non-ISP entity build and maintain the access network

In my town, the city is building the network, connecting every house, and then 
there are going to be multiple ISPs available (at least 3 that I've seen, I 
haven't dug into it since I'm not yet connected)

David Lang


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-26 21:11                                                                                       ` Sebastian Moeller
  2023-03-26 21:26                                                                                         ` David Lang
@ 2023-03-28 17:06                                                                                         ` Larry Press
  2023-03-28 17:47                                                                                           ` rjmcmahon
  1 sibling, 1 reply; 183+ messages in thread
From: Larry Press @ 2023-03-28 17:06 UTC (permalink / raw)
  To: David Lang, Sebastian Moeller
  Cc: dan, Frantisek Borsik, libreqos, Dave Taht via Starlink,
	rjmcmahon, bloat

[-- Attachment #1: Type: text/plain, Size: 4842 bytes --]

Here is an old (2014) post on Stockholm from my class "textbook":
https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html

Stockholm: 19 years of municipal broadband success<https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html>
The Stokab report should be required reading for all local government officials. Stockholm is one of the  top Internet cities in the worl...
cis471.blogspot.com


________________________________
From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
Sent: Sunday, March 26, 2023 2:11 PM
To: David Lang <david@lang.hm>
Cc: dan <dandenson@gmail.com>; Frantisek Borsik <frantisek.borsik@gmail.com>; libreqos <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>; bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat

Hi David,


> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>
> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>
>>> The point of the thread is that we still do not treat digital communications infrastructure as life support critical.
>>
>>       Well, let's keep things in perspective, unlike power, water (fresh and waste), and often gas, communications infrastructure is mostly not critical yet. But I agree that we are clearly on a path in that direction, so it is time to look at that from a different perspective.
>>       Personally, I am a big fan of putting the access network into communal hands, as these guys already do a decent job with other critical infrastructure (see list above, plus roads) and I see a PtP fiber access network terminating in some CO-like locations as a viable way to allow ISPs to compete in the internet service field all the while using the communally built access network for a fee. IIRC this is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has been essentially unchanged for decades, I estimate that current fiber access lines would also last for decades requiring no active component changes in the field, making them candidates for communal management. (With all my love for communal ownership and maintenance, these typically are not very nimble and hence best when we talk about lifetimes of decades).
>
> This is happening in some places (the town where I live is doing such a rollout), but the incumbent ISPs are fighting this and in many states have gotten laws created that prohibit towns from building such systems.

        A resistance that in the current system is understandable*... btw, my point is not wanting to get rid of ISPs, I really just think that the access network is more of a natural monopoly and if we want actual ISP competition, the access network is the wrong place to implement it... as it is unlikely that we will see multiple ISPs running independent fibers to all/most dwelling units... There are two ways I see to address this structural problem:
a) require ISPs to rent the access links to their competitors for "reasonable" prices
b) as I proposed have some non-ISP entity build and maintain the access network

None of these is terribly attractive to current ISPs, but we already see how the economically more attractive PON approach throws a spanner into a), on a PON the competitors might get bitstream access, but will not be able to "light up" the fiber any way they see fit (as would be possible in a PtP deployment, at least in theory). My subjective preference is b) as I mentioned before, as I think that would offer a level playing field for ISPs to compete doing what they do best, offer internet access service while not pushing the cost of the access network build-out to all-fiber onto the ISPs. This would allow a fairer, less revenue driven approach to select which areas to convert to FTTH first....

However this is pretty much orthogonal to Bob's idea, as I understand it, as this subthread really is only about getting houses hooked up to the internet and ignores his proposal how to do the in-house network design in a future-proof way...

Regards
        Sebastian


*) I am not saying such resistance is nice or the right thing, just that I can see why it is happening.


>
> David Lang

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 8863 bytes --]


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 17:06                                                                                         ` [LibreQoS] [Starlink] " Larry Press
@ 2023-03-28 17:47                                                                                           ` rjmcmahon
  2023-03-28 18:11                                                                                             ` Frantisek Borsik
  2023-03-29  8:28                                                                                             ` Sebastian Moeller
  0 siblings, 2 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-28 17:47 UTC (permalink / raw)
  To: Larry Press
  Cc: David Lang, Sebastian Moeller, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

Interesting. I'm skeptical that our cities in the U.S. can get this 
(structural separation) right.

Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the 
major media companies and placed a news obligation on them for these OTA 
rights. A society can't run a democracy well without quality, factual 
information reaching its constituents. Sadly, contract carriage got rid of 
that news as a public service obligation as predicted by Eli Noam. 
http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January 
6th and an insurrection.

It takes a staff of 300 to produce 30 minutes of news three times a day. 
The coaxial franchise agreements in each city traded this obligation 
for a community access channel, a small studio, and annual franchise 
fees. History has shown this is insufficient for a city to provide 
quality news to its citizens. Community access channels failed 
miserably.

Another requirement was two cables so there would be "competition" in 
the coaxial offerings. This rarely happened because of natural monopoly 
both in the last mile and in negotiating broadcast rights (mostly for 
sports.) There is only one broadcast rights winner, e.g. NBC for the 
Olympics, and only one last mile winner. That's been proven empirically 
in the U.S.

Now cities are dependent on those franchise fees for their budgets. And 
the cable cos rolled up to a national level. So it's mostly the FCC that 
regulates all of this where they care more about Janet Jackson's breast 
than providing accurate news to help a democracy function well. 
https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy

It gets worse as people are moving to unicast networks for their "news." 
But we're really not getting news at all, we're gravitating to emotional 
validations per our dysfunctions. Facebook et al happily provide this 
because it sells more ads. And then the major equipment providers claim 
they're doing great engineering because they can carry "AI loads!!" and 
their stock goes up in value.  This means ads & news feeds that trigger 
dopamine hits for addicts are driving the money flows. Which is a sad 
theme for undereducated populations.

And ChatGPT is not the answer to our lack of education and of a public 
obligation to support that education, which includes addiction 
recovery programs and the ability to think critically for ourselves.

Bob
> Here is an old (2014) post on Stockholm to my class "textbook":
>  
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
> 
> 
>  [1]
>  Stockholm: 19 years of municipal broadband success [1]
>  The Stokab report should be required reading for all local government
> officials. Stockholm is one of the  top Internet cities in the worl...
> 
>  cis471.blogspot.com
> 
> -------------------------
> 
> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
> Sent: Sunday, March 26, 2023 2:11 PM
> To: David Lang <david@lang.hm>
> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
> <frantisek.borsik@gmail.com>; libreqos
> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
> <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>;
> bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
> w/Comcast chat
> 
> Hi David,
> 
>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>> 
>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>> 
>>>> The point of the thread is that we still do not treat digital
> communications infrastructure as life support critical.
>>> 
>>>       Well, let's keep things in perspective, unlike power, water
> (fresh and waste), and often gas, communications infrastructure is
> mostly not critical yet. But I agree that we are clearly on a path in
> that direction, so it is time to look at that from a different
> perspective.
>>>       Personally, I am a big fan of putting the access network into
> communal hands, as these guys already do a decent job with other
> critical infrastructure (see list above, plus roads) and I see a PtP
> fiber access network terminating in some CO-like locations as a viable
> way to allow ISPs to compete in the internet service field all the
> while using the communally built access network for a fee. IIRC this
> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
> been essentially unchanged for decades, I estimate that current fiber
> access lines would also last for decades requiring no active component
> changes in the field, making them candidates for communal management.
> (With all my love for communal ownership and maintenance, these
> typically are not very nimble and hence best when we talk about life
> times of decades).
>> 
>> This is happening in some places (the town where I live is doing
> such a rollout), but the incumbent ISPs are fighting this and in many
> states have gotten laws created that prohibit towns from building such
> systems.
> 
>         A resistance that in the current system is understandable*...
> btw, my point is not wanting to get rid of ISPs, I really just think
> that the access network is more of a natural monopoly and if we want
> actual ISP competition, the access network is the wrong place to
> implement it... as it is unlikely that we will see multiple ISPs
> running independent fibers to all/most dwelling units... There are two
> ways I see to address this structural problem:
> a) require ISPs to rent the access links to their competitors for
> "reasonable" prices
> b) as I proposed have some non-ISP entity build and maintain the
> access network
> 
> None of these is terribly attractive to current ISPs, but we already
> see how the economically more attractive PON approach throws a spanner
> into a), on a PON the competitors might get bitstream access, but will
> not be able to "light up" the fiber any way they see fit (as would be
> possible in a PtP deployment, at least in theory). My subjective
> preference is b) as I mentioned before, as I think that would offer a
> level playing field for ISPs to compete doing what they do best, offer
> internet access service while not pushing the cost of the access
> network build-out to all-fiber onto the ISPs. This would allow a
> fairer, less revenue driven approach to select which areas to convert
> to FTTH first....
> 
> However this is pretty much orthogonal to Bob's idea, as I understand
> it, as this subthread really is only about getting houses hooked up to
> the internet and ignores his proposal how to do the in-house network
> design in a future-proof way...
> 
> Regards
>         Sebastian
> 
> *) I am not saying such resistance is nice or the right thing, just
> that I can see why it is happening.
> 
>> 
>> David Lang
> 
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
> 
> 
> Links:
> ------
> [1] 
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html


* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 17:47                                                                                           ` rjmcmahon
@ 2023-03-28 18:11                                                                                             ` Frantisek Borsik
  2023-03-28 18:46                                                                                               ` rjmcmahon
  2023-03-29  8:28                                                                                             ` Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-28 18:11 UTC (permalink / raw)
  To: rjmcmahon, Larry Press
  Cc: Dave Taht via Starlink, bloat, dan, David Lang, libreqos,
	Sebastian Moeller

[-- Attachment #1: Type: text/plain, Size: 7935 bytes --]

Christopher Mitchell (https://www.linkedin.com/in/christopher-mitchell-79078b5) and people like him are
doing a pretty good job (given the circumstances) here in the US. At least,
that’s my understanding of his work.


All the best,

Frank
Frantisek (Frank) Borsik


https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com





On 28 March 2023 at 7:47:33 PM, rjmcmahon (rjmcmahon@rjmcmahon.com) wrote:

> Interesting. I'm skeptical that our cities in the U.S. can get this
> (structural separation) right.
>
> Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the
> major media companies and placed a news obligation on them for these OTA
> rights. A society can't run a democracy well without quality and factual
> information to the constituents. Sadly, contract carriage got rid of
> that news as a public service obligation as predicted by Eli Noam.
> http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January
> 6th and an insurrection.
>
> It takes a staff of 300 to produce 30 minutes of news three times a day.
> The co-axial franchise agreements per each city traded this obligation
> for a community access channel and a small studio, and annual franchise
> fees. History has shown this is insufficient for a city to provide
> quality news to its citizens. Community access channels failed
> miserably.
>
> Another requirement was two cables so there would be "competition" in
> the coaxial offerings. This rarely happened because of natural monopoly
> both in the last mile and in negotiating broadcast rights (mostly for
> sports.) There is only one broadcast rights winner, e.g. NBC for the
> Olympics, and only one last mile winner. That's been proven empirically
> in the U.S.
>
> Now cities are dependent on those franchise fees for their budgets. And
> the cable cos rolled up to a national level. So it's mostly the FCC that
> regulates all of this where they care more about Janet Jackson's breast
> than providing accurate news to help a democracy function well.
> https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
>
> It gets worse as people are moving to unicast networks for their "news."
> But we're really not getting news at all, we're gravitating to emotional
> validations per our dysfunctions. Facebook et al happily provide this
> because it sells more ads. And then the major equipment providers claim
> they're doing great engineering because they can carry "AI loads!!" and
> their stock goes up in value. This means ads & news feeds that trigger
> dopamine hits for addicts are driving the money flows. Which is a sad
> theme for undereducated populations.
>
> And ChatGPT is not the answer for our lack of education and a public
> obligation to support those educations, which includes addiction
> recovery programs, and the ability to think critically for ourselves.
>
> Bob
>
> Here is an old (2014) post on Stockholm to my class "textbook":
>
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>
>
> [1]
> Stockholm: 19 years of municipal broadband success [1]
> The Stokab report should be required reading for all local government
> officials. Stockholm is one of the top Internet cities in the worl...
>
> cis471.blogspot.com
>
> -------------------------
>
> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
> Sent: Sunday, March 26, 2023 2:11 PM
> To: David Lang <david@lang.hm>
> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
> <frantisek.borsik@gmail.com>; libreqos
> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
> <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>;
> bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
> w/Comcast chat
>
> Hi David,
>
> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>
> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>
> The point of the thread is that we still do not treat digital
>
> communications infrastructure as life support critical.
>
>
> Well, let's keep things in perspective, unlike power, water
>
> (fresh and waste), and often gas, communications infrastructure is
> mostly not critical yet. But I agree that we are clearly on a path in
> that direction, so it is time to look at that from a different
> perspective.
>
> Personally, I am a big fan of putting the access network into
>
> communal hands, as these guys already do a decent job with other
> critical infrastructure (see list above, plus roads) and I see a PtP
> fiber access network terminating in some CO-like locations as a viable
> way to allow ISPs to compete in the internet service field, all the
> while using the communally built access network for a fee. IIRC this
> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
> been essentially unchanged for decades, I estimate that current fiber
> access lines would also last for decades, requiring no active component
> changes in the field, making them candidates for communal management.
> (With all my love for communal ownership and maintenance, these
> typically are not very nimble and hence best when we talk about life
> times of decades).
>
>
> This is happening in some places (the town where I live is doing
>
> such a rollout), but the incumbent ISPs are fighting this and in many
> states have gotten laws passed that prohibit towns from building such
> systems.
>
> A resistance that in the current system is understandable*...
> btw, my point is not wanting to get rid of ISPs, I really just think
> that the access network is more of a natural monopoly and if we want
> actual ISP competition, the access network is the wrong place to
> implement it... as it is unlikely that we will see multiple ISPs
> running independent fibers to all/most dwelling units... There are two
> ways I see to address this structural problem:
> a) require ISPs to rent the access links to their competitors for
> "reasonable" prices
> b) as I proposed have some non-ISP entity build and maintain the
> access network
>
> None of these is terribly attractive to current ISPs, but we already
> see how the economically more attractive PON approach throws a spanner
> into a), on a PON the competitors might get bitstream access, but will
> not be able to "light up" the fiber any way they see fit (as would be
> possible in a PtP deployment, at least in theory). My subjective
> preference is b) as I mentioned before, as I think that would offer a
> level playing field for ISPs to compete doing what they do best, offer
> internet access service while not pushing the cost of the access
> network build-out to all-fiber onto the ISPs. This would allow a
> fairer, less revenue driven approach to select which areas to convert
> to FTTH first....
>
> However this is pretty much orthogonal to Bob's idea, as I understand
> it, as this subthread really is only about getting houses hooked up to
> the internet and ignores his proposal how to do the in-house network
> design in a future-proof way...
>
> Regards
> Sebastian
>
> *) I am not saying such resistance is nice or the right thing, just
> that I can see why it is happening.
>
>
> David Lang
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
>
> https://urldefense.com/v3/__https://lists.bufferbloat.net/listinfo/starlink__;!!P7nkOOY!vFtTwFdYBTFjrJCFqT0rp0o2dtaz2m-dskeRLX2dIW_Pujge6ZU8eOIxtkN_spTDlqyyzClrVbEMFFbvL3NlUgIHOg$
>
>
>
> Links:
> ------
> [1]
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>
>

[-- Attachment #2: Type: text/html, Size: 12428 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 18:11                                                                                             ` Frantisek Borsik
@ 2023-03-28 18:46                                                                                               ` rjmcmahon
  2023-03-28 20:37                                                                                                 ` David Lang
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-28 18:46 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: Larry Press, Dave Taht via Starlink, bloat, dan, David Lang,
	libreqos, Sebastian Moeller

There are municipal broadband projects. Most are in rural areas, 
partially funded by the federal government via the USDA. Glasgow started 
a few decades ago, similar to LUS in Lafayette, LA. 
https://www.usda.gov/broadband

Rural areas get a lot of federal money for things, à la the farm bill, 
which also pays for food stamps, instituted as part of the New Deal 
after the Great Depression.

https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/

None of this is really relevant to the vast majority of our urban 
populations, who get broadband from investor-owned companies. These 
companies don't receive federal subsidies, though they sometimes get 
access to municipal revenue bonds when building city infrastructure.

Bob
> https://www.linkedin.com/in/christopher-mitchell-79078b5 and the like
> are doing a pretty good job (given the circumstances) here in the US.
> At least, that’s my understanding of his work.
> 
> All the best,
> 
> Frank
> Frantisek (Frank) Borsik
> 
> https://www.linkedin.com/in/frantisekborsik
> 
> Signal, Telegram, WhatsApp: +421919416714 [2]
> 
> iMessage, mobile: +420775230885 [3]
> 
> Skype: casioa5302ca
> 
> frantisek.borsik@gmail.com
> 
> On 28 March 2023 at 7:47:33 PM, rjmcmahon (rjmcmahon@rjmcmahon.com)
> wrote:
> 
>> [...]
> 
> Links:
> ------
> [1] http://cis471.blogspot.com
> [2] tel:+421919416714
> [3] tel:+420775230885

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 18:46                                                                                               ` rjmcmahon
@ 2023-03-28 20:37                                                                                                 ` David Lang
  2023-03-28 21:31                                                                                                   ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2023-03-28 20:37 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Larry Press, Dave Taht via Starlink, bloat,
	dan, David Lang, libreqos, Sebastian Moeller

[-- Attachment #1: Type: text/plain, Size: 9338 bytes --]

https://sifinetworks.com/residential/cities/simi-valley-ca/

I'm due to get it in my area in Q2 (or so). We're a suburb outside LA, but with 
100k+ people, so not tiny.

David Lang


On Tue, 28 Mar 2023, rjmcmahon wrote:

> There are municipal broadband projects. Most are in rural areas partially 
> funded by the federal government via the USDA. Glasgow started a few decades 
> ago. Similar to LUS in Lafayette, LA. https://www.usda.gov/broadband
>
> Rural areas get a lot of federal money for things, a la the farm bill which 
> also pays for food stamps instituted as part of the New Deal after the Great 
> Depression.
>
> https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/
>
> None of this is really relevant to the vast majority of our urban populations 
> that get broadband from investor-owned companies. These companies don't 
> receive federal subsidies though sometimes they get access to municipal 
> revenue bonds when doing city infrastructures.
>
> Bob
>> [...]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 20:37                                                                                                 ` David Lang
@ 2023-03-28 21:31                                                                                                   ` rjmcmahon
  2023-03-28 22:18                                                                                                     ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-28 21:31 UTC (permalink / raw)
  To: David Lang
  Cc: Frantisek Borsik, Larry Press, Dave Taht via Starlink, bloat,
	dan, libreqos, Sebastian Moeller

Agreed, though from a semiconductor perspective, 100K units over ten-plus 
years isn't going to drive a foundry to produce the parts required. 
Then, a small staff makes the same decisions for all 100K premises, 
regardless of things like customers' ability to pay for differentiators, 
because there are no differentiators on offer (we all get Model T 
black). These staffs are also trying to predict the future without any 
real ability to affect that future. It's worse than a tragedy of the 
commons, because the sunk mistakes get magnified with every passing 
year.

A FiWi architecture with pluggable components may have the opportunity 
to address these issues in volume and at fair prices, and also to reduce 
climate impact, taking into account capacity / (latency * distance * 
power), by making that aspect field-upgradeable.
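
As a rough illustration of that figure of merit, here is a toy sketch. The function name, units, and example numbers are illustrative assumptions, not measurements; the expression just follows the formula as written above:

```python
# Toy sketch of the capacity / (latency * distance * power) figure of
# merit. Units and the comparison values below are assumptions.

def fiwi_merit(capacity_gbps: float, latency_ms: float,
               distance_km: float, power_w: float) -> float:
    """Higher is better: capacity per unit of latency, distance, and power."""
    return capacity_gbps / (latency_ms * distance_km * power_w)

# Hypothetical before/after for one access link, same 2 km run:
legacy = fiwi_merit(capacity_gbps=1, latency_ms=10, distance_km=2, power_w=15)
upgraded = fiwi_merit(capacity_gbps=10, latency_ms=2, distance_km=2, power_w=10)
print(upgraded / legacy)  # the field-upgraded module improves the metric 75x
```

The point of making the electronics pluggable is that only the module driving this ratio gets replaced, not the fiber plant itself.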

Bob
> https://sifinetworks.com/residential/cities/simi-valley-ca/
> 
> I'm due to get it to my area Q2 (or so). we're a suburb outside LA,
> but 100k+ people so not tiny.
> 
> David Lang
> 
> 
> On Tue, 28 Mar 2023, rjmcmahon wrote:
> 
>> There are municipal broadband projects. Most are in rural areas 
>> partially funded by the federal government via the USDA. Glasgow 
>> started a few decades ago. Similar to LUS in Lafayette, LA. 
>> https://www.usda.gov/broadband
>> 
>> Rural areas get a lot of federal money for things, a la the farm bill 
>> which also pays for food stamps instituted as part of the New Deal 
>> after the Great Depression.
>> 
>> https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/
>> 
>> None of this is really relevant to the vast majority of our urban 
>> populations that get broadband from investor-owned companies. These 
>> companies don't receive federal subsidies though sometimes they get 
>> access to municipal revenue bonds when doing city infrastructures.
>> 
>> Bob
>>> https://www.linkedin.com/in/christopher-mitchell-79078b5 and the like
>>> are doing a pretty good job (given the circumstances) here in the US.
>>> At least, that’s my understanding of his work.
>>> 
>>> All the best,
>>> 
>>> Frank
>>> Frantisek (Frank) Borsik
>>> 
>>> https://www.linkedin.com/in/frantisekborsik
>>> 
>>> Signal, Telegram, WhatsApp: +421919416714 [2]
>>> 
>>> iMessage, mobile: +420775230885 [3]
>>> 
>>> Skype: casioa5302ca
>>> 
>>> frantisek.borsik@gmail.com
>>> 
>>> On 28 March 2023 at 7:47:33 PM, rjmcmahon (rjmcmahon@rjmcmahon.com)
>>> wrote:
>>> 
>>>> Interesting. I'm skeptical that our cities in the U.S. can get this
>>>> (structural separation) right.
>>>> 
>>>> Pre-coaxial cable & contract carriage, the FCC licensed spectrum to
>>>> the
>>>> major media companies and placed a news obligation on them for these
>>>> OTA
>>>> rights. A society can't run a democracy well without quality and
>>>> factual
>>>> information to the constituents. Sadly, contract carriage got rid of
>>>> 
>>>> that news as a public service obligation as predicted by Eli Noam.
>>>> http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get
>>>> January
>>>> 6th and an insurrection.
>>>> 
>>>> It takes a staff of 300 to produce 30 minutes of news three times a
>>>> day.
>>>> The co-axial franchise agreements per each city traded this
>>>> obligation
>>>> for a community access channel and a small studio, and annual
>>>> franchise
>>>> fees. History has shown this is insufficient for a city to provide
>>>> quality news to its citizens. Community access channels failed
>>>> miserably.
>>>> 
>>>> Another requirement was two cables so there would be "competition" in
>>>> the coaxial offerings. This rarely happened because of natural monopoly
>>>> both in the last mile and in negotiating broadcast rights (mostly for
>>>> sports.) There is only one broadcast rights winner, e.g. NBC for the
>>>> Olympics, and only one last mile winner. That's been proven empirically
>>>> in the U.S.
>>>> 
>>>> Now cities are dependent on those franchise fees for their budgets. And
>>>> the cable cos rolled up to a national level. So it's mostly the FCC that
>>>> regulates all of this, where they care more about Janet Jackson's breast
>>>> than providing accurate news to help a democracy function well.
>>>> 
>>>> https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
>>>> 
>>>> 
>>>> It gets worse as people are moving to unicast networks for their "news."
>>>> But we're really not getting news at all, we're gravitating to emotional
>>>> validations per our dysfunctions. Facebook et al happily provide this
>>>> because it sells more ads. And then the major equipment providers claim
>>>> they're doing great engineering because they can carry "AI loads!!" and
>>>> their stock goes up in value. This means ads & news feeds that trigger
>>>> dopamine hits for addicts are driving the money flows. Which is a sad
>>>> theme for undereducated populations.
>>>> 
>>>> And ChatGPT is not the answer for our lack of education and a public
>>>> obligation to support those educations, which includes addiction
>>>> recovery programs, and the ability to think critically for ourselves.
>>>> 
>>>> Bob
>>>> Here is an old (2014) post on Stockholm to my class "textbook":
>>>> 
>>>> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>>>> Stockholm: 19 years of municipal broadband success [1]
>>>> The Stokab report should be required reading for all local government
>>>> officials. Stockholm is one of the top Internet cities in the world...
>>>> 
>>>> -------------------------
>>>> 
>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
>>>> Sent: Sunday, March 26, 2023 2:11 PM
>>>> To: David Lang <david@lang.hm>
>>>> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
>>>> <frantisek.borsik@gmail.com>; libreqos
>>>> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
>>>> <starlink@lists.bufferbloat.net>; rjmcmahon
>>>> <rjmcmahon@rjmcmahon.com>;
>>>> bloat <bloat@lists.bufferbloat.net>
>>>> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
>>>> w/Comcast chat
>>>> 
>>>> Hi David,
>>>> 
>>>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>>>> 
>>>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>>>> 
>>>> The point of the thread is that we still do not treat digital
>>>> communications infrastructure as life support critical.
>>> 
>>> Well, let's keep things in perspective, unlike power, water
>>> (fresh and waste), and often gas, communications infrastructure is
>>> mostly not critical yet. But I agree that we are clearly on a path in
>>> that direction, so it is time to look at that from a different
>>> perspective.
>>> 
>>> Personally, I am a big fan of putting the access network into
>>> communal hands, as these guys already do a decent job with other
>>> critical infrastructure (see list above, plus roads), and I see a PtP
>>> fiber access network terminating in some CO-like locations as a viable
>>> way to allow ISPs to compete in the internet service field, all the
>>> while using the communally built access network for a fee. IIRC this
>>> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
>>> been essentially unchanged for decades, I estimate that current fiber
>>> access lines would also last for decades requiring no active component
>>> changes in the field, making them candidates for communal management.
>>> (With all my love for communal ownership and maintenance, these
>>> typically are not very nimble and hence best suited when we talk about
>>> lifetimes of decades).
>>> 
>>>> This is happening in some places (the town where I live is doing
>>>> such a rollout), but the incumbent ISPs are fighting this and in many
>>>> states have gotten laws created that prohibit towns from building
>>>> such systems.
>>> 
>>> A resistance that in the current system is understandable*...
>>> btw, my point is not wanting to get rid of ISPs, I really just think
>>> that the access network is more of a natural monopoly and if we want
>>> actual ISP competition, the access network is the wrong place to
>>> implement it... as it is unlikely that we will see multiple ISPs
>>> running independent fibers to all/most dwelling units... There are two
>>> ways I see to address this structural problem:
>>> a) require ISPs to rent the access links to their competitors for
>>> "reasonable" prices
>>> b) as I proposed have some non-ISP entity build and maintain the
>>> access network
>>> 
>>> None of these is terribly attractive to current ISPs, but we already
>>> see how the economically more attractive PON approach throws a spanner
>>> into a): on a PON the competitors might get bitstream access, but will
>>> not be able to "light up" the fiber any way they see fit (as would be
>>> possible in a PtP deployment, at least in theory). My subjective
>>> preference is b), as I mentioned before, as I think that would offer a
>>> level playing field for ISPs to compete doing what they do best: offer
>>> internet access service, while not pushing the cost of the access
>>> network build-out to all-fiber onto the ISPs. This would allow a
>>> fairer, less revenue driven approach to select which areas to convert
>>> to FTTH first....
>>> 
>>> However this is pretty much orthogonal to Bob's idea, as I understand
>>> it, as this subthread really is only about getting houses hooked up to
>>> the internet and ignores his proposal for how to do the in-house network
>>> design in a future-proof way...
>>> 
>>> Regards
>>> Sebastian
>>> 
>>> *) I am not saying such resistance is nice or the right thing, just
>>> that I can see why it is happening.
>>> 
>>>> David Lang
>>> 
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://urldefense.com/v3/__https://lists.bufferbloat.net/listinfo/starlink__;!!P7nkOOY!vFtTwFdYBTFjrJCFqT0rp0o2dtaz2m-dskeRLX2dIW_Pujge6ZU8eOIxtkN_spTDlqyyzClrVbEMFFbvL3NlUgIHOg$
>>> 
>>> 
>>> Links:
>>> ------
>>> [1]
>>> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>>> 
>>> 
>>> 
>>> Links:
>>> ------
>>> [1] http://cis471.blogspot.com
>>> [2] tel:+421919416714
>>> [3] tel:+420775230885
>> 

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 21:31                                                                                                   ` rjmcmahon
@ 2023-03-28 22:18                                                                                                     ` dan
  2023-03-28 22:42                                                                                                       ` rjmcmahon
  0 siblings, 1 reply; 183+ messages in thread
From: dan @ 2023-03-28 22:18 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Frantisek Borsik, Larry Press, Dave Taht via Starlink, bloat,
	libreqos, Sebastian Moeller, David Lang

[-- Attachment #1: Type: text/plain, Size: 12263 bytes --]

IMO, there is a very near zero chance of this ‘FiWi’ coming to fruition.
No one wants it. I don’t want it; I see nothing but flaws: single points
of failure, security issues, erosion of privacy in homes and businesses, and
general consumer mistrust of such a model, as well as consolidation and
monopolization of internet access. I will actively speak out against this;
it is bad in just about every way you can talk about. I cannot find a single
benefit it offers.




On Mar 28, 2023 at 3:31:40 PM, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> Agreed though, from a semiconductor perspective, 100K units over ten+
> years isn't going to drive a foundry to produce the parts required.
> Then, a small staff makes the same decisions for all 100K premises
> regardless of things like the ability to pay for differentiators as they
> have no differentiators (we all get Model T black.) These staffs are
> also trying to predict the future without any real ability to affect
> that future. It's worse than a tragedy of the commons because the sunk
> mistakes get magnified every passing year.
>
> A FiWi architecture with pluggable components may have the opportunity
> to address these issues and do it in volume and at fair prices and also
> reduce climate impacts by taking into account capacity / (latency *
> distance * power), by making that aspect field upgradeable.
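
[Editor's note: the figure of merit quoted above can be sketched numerically. The function name, units, and example values below are illustrative assumptions for this archive only; the original post specifies none of them.]

```python
# Hypothetical sketch of the figure of merit quoted above:
#   capacity / (latency * distance * power)
# Units (Gb/s, ms, km, W) are assumptions for illustration.

def fiwi_figure_of_merit(capacity_gbps: float,
                         latency_ms: float,
                         distance_km: float,
                         power_watts: float) -> float:
    """Higher is better: more capacity delivered per unit of
    latency, reach, and energy spent."""
    return capacity_gbps / (latency_ms * distance_km * power_watts)

# Example: a 10 Gb/s, 2 ms, 5 W link vs. a 1 Gb/s, 10 ms, 10 W link,
# both over 1 km. Upgrading a pluggable component that improves any
# one factor improves the score without a forklift replacement.
fast = fiwi_figure_of_merit(10.0, 2.0, 1.0, 5.0)   # 1.0
slow = fiwi_figure_of_merit(1.0, 10.0, 1.0, 10.0)  # 0.01
assert fast > slow
```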
>
> Bob
>
> https://sifinetworks.com/residential/cities/simi-valley-ca/
>
> I'm due to get it to my area Q2 (or so). we're a suburb outside LA,
> but 100k+ people so not tiny.
>
> David Lang
>
> On Tue, 28 Mar 2023, rjmcmahon wrote:
>
>
> > There are municipal broadband projects. Most are in rural areas
> > partially funded by the federal government via the USDA. Glasgow
> > started a few decades ago. Similar to LUS in Lafayette, LA.
> > https://www.usda.gov/broadband
> >
> > Rural areas get a lot of federal money for things, a la the farm bill
> > which also pays for food stamps instituted as part of the New Deal
> > after the Great Depression.
> >
> > https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/
> >
> > None of this is really relevant to the vast majority of our urban
> > populations that get broadband from investor-owned companies. These
> > companies don't receive federal subsidies though sometimes they get
> > access to municipal revenue bonds when doing city infrastructures.
> >
> > Bob
> >
> > [...]

[-- Attachment #2: Type: text/html, Size: 25168 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 22:18                                                                                                     ` dan
@ 2023-03-28 22:42                                                                                                       ` rjmcmahon
  0 siblings, 0 replies; 183+ messages in thread
From: rjmcmahon @ 2023-03-28 22:42 UTC (permalink / raw)
  To: dan
  Cc: Frantisek Borsik, Larry Press, Dave Taht via Starlink, bloat,
	libreqos, Sebastian Moeller, David Lang

If it doesn't align with privacy & security, with what we know of physics, 
with what can be achieved by world-class engineering, and with what will 
be funded by market models or behaviors based upon payments & receipts, or 
if it doesn't increase job creation for blue collar workers, reduce power 
consumption, etc., then I agree FiWi should, and likely will, fail.

Russia came very late to the industrial revolution because its leaders 
were against technological progress, e.g. trains. That was a critical 
juncture for them. 
https://blogs.lt.vt.edu/jhoran/2014/08/31/transportation-and-industrialization/

It seems likely to me we are at our own critical juncture. I hope we get 
it more or less right, so that inclusive human societies, societies that 
learn to care for others, built from our technologies, which are in turn 
derived from the works & ideas of those who came before us, can benefit 
long after we each depart, as has been done with potable water supplies 
for many (but not all).

Bob

PS. I tend to ignore things that have no chance. I find it better to 
spend my time & energy on things that do have some possibility of 
impact. I find our lives are too short to do otherwise.

> IMO, there is a very near zero chance of this ‘FiWi’ coming to
> fruition.  No one wants it.  I don’t want it, I see nothing but
> flaws, single points of failure, security issues, erosion of privacy
> in homes and business,  and general consumer mistrust of such a model
> and well as consolidation and monopolization of internet access.  I
> will actively speak out against this, is bad in just about every way
> you can talk about.  I cannot find a single benefit it offers.
> 
> On Mar 28, 2023 at 3:31:40 PM, rjmcmahon <rjmcmahon@rjmcmahon.com>
> wrote:
> 
>> [...]
>> 
>>>>> 
>> 
>>>>> 
>> 
>>>>> 
>> 
>>>>> Links:
>> 
>>>>> ------
>> 
>>>>> [1] http://cis471.blogspot.com
>> 
>>>>> [2] tel:+421919416714
>> 
>>>>> [3] tel:+420775230885
>> 
>>>> 
> 
> 
> Links:
> ------
> [1] http://cis471.blogspot.com

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-28 17:47                                                                                           ` rjmcmahon
  2023-03-28 18:11                                                                                             ` Frantisek Borsik
@ 2023-03-29  8:28                                                                                             ` Sebastian Moeller
       [not found]                                                                                               ` <a2857ec4-a6ea-e9eb-cf99-17ef7ea08ef2@indexexchange.com>
                                                                                                                 ` (2 more replies)
  1 sibling, 3 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-29  8:28 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Larry Press, David Lang, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

Hi Bob,


> On Mar 28, 2023, at 19:47, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
> Interesting. I'm skeptical that our cities in the U.S. can get this (structural separation) right.

There really isn't that much to get wrong: you build the access network and terminate the per-household fibers in large enough "exchanges", where you offer ISPs the ability to light up the fibers on the premise that customers can use any ISP they want (that is present in the exchange)... and an ISP change will just be patched differently in the exchange.
While I think that local "government" also could successfully run internet access services, I see no reason why they should do so (unless there is no competition).
The goal here is to move the "natural monopoly" of the access network out of the hands of the "market" (as markets simply fail as resource-allocation instruments under mono- and oligopoly conditions, on either side).
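To make the "an ISP change is just a re-patch" point concrete, here is a toy sketch (all names and structure are my own illustration, not any real exchange's provisioning system): the communal exchange is essentially a mapping from household fiber to ISP, and switching providers rewrites one entry.

```python
# Toy model of a structurally separated access network: the communal
# exchange owns the patch panel; ISPs only light up fibers patched to them.
# Purely illustrative -- all names here are invented for this sketch.

class Exchange:
    def __init__(self, isps):
        self.isps = set(isps)  # ISPs present in this exchange
        self.patches = {}      # household fiber id -> ISP name

    def patch(self, fiber_id, isp):
        """Initial hookup and a later ISP change are the same operation."""
        if isp not in self.isps:
            raise ValueError(f"{isp} is not present in this exchange")
        self.patches[fiber_id] = isp

ex = Exchange(["ISP-A", "ISP-B"])
ex.patch("household-42", "ISP-A")   # initial hookup
ex.patch("household-42", "ISP-B")   # customer switches: just a re-patch
print(ex.patches["household-42"])   # -> ISP-B
```

No trucks rolled, no new fiber pulled: the competitive action happens entirely at the exchange, which is the whole point of the structural separation.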


> 
> Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the major media companies and placed a news obligation on them for these OTA rights. A society can't run a democracy well without quality and factual information to the constituents. Sadly, contract carriage got rid of that news as a public service obligation as predicted by Eli Noam. http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January 6th and an insurrection.



> 
> It takes a staff of 300 to produce 30 minutes of news three times a day. The co-axial franchise agreements per each city traded this obligation for a community access channel and a small studio, and annual franchise fees. History has shown this is insufficient for a city to provide quality news to its citizens. Community access channels failed miserably.

	I would argue that there are things where cities excel and some where they simply are mediocre... managing monopoly infrastructure (like roads, water, sometimes power) with long amortization times is something they do well (either directly or via companies they own and operate).

> Another requirement was two cables so there would be "competition" in the coaxial offerings. This rarely happened because of natural monopoly both in the last mile and in negotiating broadcast rights (mostly for sports.) There is only one broadcast rights winner, e.g. NBC for the Olympics, and only one last mile winner. That's been proven empirically in the U.S.

	Yes, that is why the operator of the last mile should really not offer services over that mile itself. Real competition on the access lines themselves is not going to happen (at least not in sufficient numbers to make a market solution viable), but there is precedent for getting enough service providers to offer their services over access lines (e.g. Amsterdam).

> Now cities are dependent on those franchise fees for their budgets. And the cable cos rolled up to a national level. So it's mostly the FCC that regulates all of this where they care more about Janet Jackson's breast than providing accurate news to help a democracy function well. https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
> 
> It gets worse as people are moving to unicast networks for their "news." But we're really not getting news at all, we're gravitating to emotional validations per our dysfunctions. Facebook et al happily provide this because it sells more ads. And then the major equipment providers claim they're doing great engineering because they can carry "AI loads!!" and their stock goes up in value.  This means ads & news feeds that trigger dopamine hits for addicts are driving the money flows. Which is a sad theme for undereducated populations.

	I am not 100% sure this is a uni- versus broadcast issue... even on unicast I can consume traditional middle-of-the-road news, and even on broadcast I can opt for pretend-news. Sure, the social media explosion comes with auto-bias-amplification incentives (they care about time spent on the platform and will show anything they believe will make people stay longer, and guess what, that is not a strategy that rhymes well with objective information transmission, but with emotional engagement, often negative - but I think we all know this).


> 
> And ChatGPT is not the answer for our lack of education and a public obligation to support those educations, which includes addiction recovery programs, and the ability to think critically for ourselves.

	Yes, for sure not ;) This is a fad mostly, and will go away some time in the future, once people realize that this flavor of machine learning is great for what it is, but what it is is not what we are prone to believe it is...

Regards
	Sebastian


> 
> Bob
>> Here is an old (2014) post on Stockholm to my class "textbook":
>> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>> [1]
>> Stockholm: 19 years of municipal broadband success [1]
>> The Stokab report should be required reading for all local government
>> officials. Stockholm is one of the  top Internet cities in the worl...
>> cis471.blogspot.com
>> -------------------------
>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
>> Sent: Sunday, March 26, 2023 2:11 PM
>> To: David Lang <david@lang.hm>
>> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
>> <frantisek.borsik@gmail.com>; libreqos
>> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
>> <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>;
>> bloat <bloat@lists.bufferbloat.net>
>> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
>> w/Comcast chat
>> Hi David,
>>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>>>>> The point of the thread is that we still do not treat digital
>> communications infrastructure as life support critical.
>>>>      Well, let's keep things in perspective, unlike power, water
>> (fresh and waste), and often gas, communications infrastructure is
>> mostly not critical yet. But I agree that we are clearly on a path in
>> that direction, so it is time to look at that from a different
>> perspective.
>>>>      Personally, I am a big fan of putting the access network into
>> communal hands, as these guys already do a decent job with other
>> critical infrastructure (see list above, plus roads) and I see a PtP
>> fiber access network terminating in some CO-like locations a viable
>> way to allow ISPs to compete in the internet service field all the
>> while using the communally build access network for a few. IIRC this
>> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
>> beed essentially unchanged for decades, I estimate that current fiber
>> access lines would also last for decades requiring no active component
>> changes in the field, making them candidates for communal management.
>> (With all my love for communal ownership and maintenance, these
>> typically are not very nimble and hence best when we talk about life
>> times of decades).
>>> This is happening in some places (the town where I live is doing
>> such a rollout), but the incumbant ISPs are fighting this and in many
>> states have gotten laws created that prohibit towns from building such
>> systems.
>>        A resistance that in the current system is understandable*...
>> btw, my point is not wanting to get rid of ISPs, I really just think
>> that the access network is more of a natural monopoly and if we want
>> actual ISP competition, the access network is the wrong place to
>> implement it... as it is unlikely that we will see multiple ISPs
>> running independent fibers to all/most dwelling units... There are two
>> ways I see to address this structural problem:
>> a) require ISPs to rent the access links to their competitors for
>> "reasonable" prices
>> b) as I proposed have some non-ISP entity build and maintain the
>> access network
>> None of these is terribly attractive to current ISPs, but we already
>> see how the economically more attractive PON approach throws a spanner
>> into a), on a PON the competitors might get bitstream access, but will
>> not be able to "light up" the fiber any way they see fit (as would be
>> possible in a PtP deployment, at least in theory). My subjective
>> preference is b) as I mentioned before, as I think that would offer a
>> level playing field for ISPs to compete doing what they do best, offer
>> internet access service while not pushing the cost of the access
>> network build-out to all-fiber onto the ISPs. This would allow a
>> fairer, less revenue driven approach to select which areas to convert
>> to FTTH first....
>> However this is pretty much orthogonal to Bob's idea, as I understand
>> it, as this subthread really is only about getting houses hooked up to
>> the internet and ignores his proposal how to do the in-house network
>> design in a future-proof way...
>> Regards
>>        Sebastian
>> *) I am not saying such resistance is nice or the right thing, just
>> that I can see why it is happening.
>>> David Lang
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://urldefense.com/v3/__https://lists.bufferbloat.net/listinfo/starlink__;!!P7nkOOY!vFtTwFdYBTFjrJCFqT0rp0o2dtaz2m-dskeRLX2dIW_Pujge6ZU8eOIxtkN_spTDlqyyzClrVbEMFFbvL3NlUgIHOg$
>> Links:
>> ------
>> [1] https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html


^ permalink raw reply	[flat|nested] 183+ messages in thread

* [LibreQoS] Enabling a production model
       [not found]                                                                                                 ` <716ECAAD-E2EE-4647-9E73-D60BF8BF9C1E@searls.com>
@ 2023-03-29 13:40                                                                                                   ` Dave Taht
  2023-03-29 14:54                                                                                                     ` dan
  0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-29 13:40 UTC (permalink / raw)
  To: Doc Searls; +Cc: Dave Collier-Brown, Dave Taht via Starlink, libreqos, bloat

Doc: thank you for giving me a way to express the promise of fiber to
enable a better "production model",
in what you wrote below.

Btw, folks, I am doing an AMA with broadband.io on friday, with a live
chat. It is a chance for us techies to engage more directly with the
state directors with $70B of government funding as part of the NTIA
BEAD program and others like internet4all - and to help focus them on
things that would result in a genuinely better internet. I plan to
focus more on reducing latency and improving interoperability than
bufferbloat, but I have no idea what will happen. "This broadband of
which you speak... does it have IPv6?".

Please come! I would love it if more folks with experience from all around
the world, in what can be done right and wrong with a broadband rollout,
showed up to help us here in the USA.

https://www.broadband.io/c/broadband-grant-events/dave-taht

On Wed, Mar 29, 2023 at 6:22 AM Doc Searls via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> Always a mistake to generalize from a sample of one, but in my case I have four, because I live in four places. So I like to think that, to some degree, I represent a kind of market demand.
>
> All those places—Santa Barbara (CA), New York (NY), Bloomington (IN), and San Marino (CA)—are served by cable monopolies (Cox, Spectrum, Comcast/Xfinity) that provide (or at least claim) 1 Gb service... downstream of course. One (Cox) provides 36 Mb of upstream capacity. The other two provide just 10 Mb.  Because of that, residents have no option to do much work, or to store large amounts of data, in clouds (to mention just one grace of upstream capacity). The market is rigged for consumption, not production, on the TV model. Same as it has been since commercial activity began to explode in 1995, when John Perry Barlow wrote Death From Above. It's killer. Please read it: https://dl.acm.org/doi/pdf/10.1145/203356.203358.

I have been citing that piece left and right lately.

> But here in Bloomington, where I am writing now, the city has come up with a public/private arrangement that has much promise:
>
> https://www.bloomington.in.gov/fiber

I think the smartest thing any city can do to start with is to
establish a good ole-fashioned internet exchange point there, and require
those providing service in the city to interconnect.
>
> See what you think.
>
> For me, the promise of fiber is a huge attraction to living and working here. And I am not alone.
>
> Doc
>
> On Mar 29, 2023, at 8:27 AM, Dave Collier-Brown via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>
> On 3/29/23 04:28, Sebastian Moeller via Starlink wrote:
>
> Hi Bob,
>
>
> On Mar 28, 2023, at 19:47, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>
> Interesting. I'm skeptical that our cities in the U.S. can get this (structural separation) right.
>
> There really isn't that much to get wrong, you built the access network and terminate the per household fibers in arge enough "exchanges" there you offer ISPs to lighten up the fibers on the premise that customers can use any ISP they want (that is present in the exchange)... and on ISP change will just be patched differently in the exchange.
> While I think that local "government" also could successfully run internet access services, I see no reason why they should do so (unless there is no competition).
> The goal here is to move the "natural monopoly" of the access network out of the hand of the "market" (as markets simply fail as optimizing resource allocation instruments under mono- and oligopoly conditions, on either side).
>
>
> We see  the same issue in Canada: some provinces and cities happily
> manage the delivery of services over cables hung from province-owned
> poles (eg, TCP/IP in New Brunswick).  Other provinces did less well, and
> we have "telephone poles" and "hydro poles" on the same street (in
> Toronto, Ontario)
>
> There is no real difference between New Brunswick, Ontario or, for that
> matter, Minnesota. If a province or city has operated natural monopolies
> like the last mile for water and sewer, it can operate the last mile for
> any other monopoly.
>
> --dave
>
> --
> David Collier-Brown,         | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> dave.collier-brown@indexexchange.com |              -- Mark Twain
>
>
> CONFIDENTIALITY NOTICE AND DISCLAIMER : This telecommunication, including any and all attachments, contains confidential information intended only for the person(s) to whom it is addressed. Any dissemination, distribution, copying or disclosure is strictly prohibited and is not a waiver of confidentiality. If you have received this telecommunication in error, please notify the sender immediately by return electronic mail and delete the message from your inbox and deleted items folders. This telecommunication does not constitute an express or implied agreement to conduct transactions by electronic means, nor does it constitute a contract offer, a contract amendment or an acceptance of a contract offer. Contract terms contained in this telecommunication are subject to legal review and the completion of formal documentation and are not binding until same is confirmed in writing and has been signed by an authorized signatory.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-29  8:28                                                                                             ` Sebastian Moeller
       [not found]                                                                                               ` <a2857ec4-a6ea-e9eb-cf99-17ef7ea08ef2@indexexchange.com>
@ 2023-03-29 13:46                                                                                               ` Frantisek Borsik
  2023-03-29 14:57                                                                                                 ` Dave Taht
  2023-03-29 19:02                                                                                               ` rjmcmahon
  2 siblings, 1 reply; 183+ messages in thread
From: Frantisek Borsik @ 2023-03-29 13:46 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: rjmcmahon, Larry Press, David Lang, dan, libreqos,
	Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 12601 bytes --]

Guys, tell me why - besides that it's just the usual human nature - every
discussion here ends with our version of the "reductio ad Hitlerum",
which is, in my mind, a more or less subtle attack on capitalism,
entrepreneurship, corporations, markets and the like.
Also, more importantly, we all want to close that goddamn digital divide.
And we are never gonna do it with fiber ONLY...not to mention FiWi.

Also, if there are some fruitful attempts to build community
broadband, be it fiber, wireless or a mix...we end up with "yeah, but it's
not done in big cities, just in some rural areas."

We need to close the digital divide - which is mostly located in the rural
areas - e.g. to bring broadband where it's not, or where it's not sufficient.
There are a lot of tools in the toolbox, not just fiber, and every
single one of them has its place and should be used and funded by the grant
money. The majority of these places need to be served quickly and on a
best-effort basis, a.k.a. what is actually possible and feasible in their
respective territory and terrain...and not on the BS notion of "GIGABIT or
NOTHING", or even 100/20 or nothing, when 25/5 would be more than enough,
for most of the cases, in the foreseeable future.

To let me bitch a bit about those bad corporations :) - just take a look at
the market for WiFi routers. Most of the mainstream vendors ship old HW
with old SW - it can be even an 8-10 year old kernel - they don't care about
CVEs, they barely do any security updates, not to mention regular SW
upgrades (adding new features), and they don't build to last...they want You
to buy a new router every year or two. Dave's write-up of this is here:
https://blog.cerowrt.org/post/tango_on_turris/
And what did Starlink do? Crazy, ridiculous story
<https://www.youtube.com/watch?v=c9gLo6Xrwgw>. It has been improved a bit,
but it was meant to be good right out of the box, bufferbloat fixed and all
that jazz, because OpenWrt has it fixed, right?

BUT still, handing over even more control of the Internet infrastructure to
the government is nonsense. Government can be a good servant, but a bad
master. Exactly like the corporate world.


All the best,

Frank

Frantisek (Frank) Borsik



https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com


On Wed, Mar 29, 2023 at 10:28 AM Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi Bob,
>
>
> > On Mar 28, 2023, at 19:47, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> >
> > Interesting. I'm skeptical that our cities in the U.S. can get this
> (structural separation) right.
>
> There really isn't that much to get wrong, you built the access network
> and terminate the per household fibers in arge enough "exchanges" there you
> offer ISPs to lighten up the fibers on the premise that customers can use
> any ISP they want (that is present in the exchange)... and on ISP change
> will just be patched differently in the exchange.
> While I think that local "government" also could successfully run internet
> access services, I see no reason why they should do so (unless there is no
> competition).
> The goal here is to move the "natural monopoly" of the access network out
> of the hand of the "market" (as markets simply fail as optimizing resource
> allocation instruments under mono- and oligopoly conditions, on either
> side).
>
>
> >
> > Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the
> major media companies and placed a news obligation on them for these OTA
> rights. A society can't run a democracy well without quality and factual
> information to the constituents. Sadly, contract carriage got rid of that
> news as a public service obligation as predicted by Eli Noam.
> http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January
> 6th and an insurrection.
>
>
>
> >
> > It takes a staff of 300 to produce 30 minutes of news three times a day.
> The co-axial franchise agreements per each city traded this obligation for
> a community access channel and a small studio, and annual franchise fees.
> History has shown this is insufficient for a city to provide quality news
> to its citizens. Community access channels failed miserably.
>
>         I would argue this is that there are things where cities excel and
> some where they simply are mediocre... managing monopoly infrastructure
> (like roads, water, sometime power) with long amortization times is
> something they do well (either directly or via companies they own and
> operate).
>
> > Another requirement was two cables so there would be "competition" in
> the coaxial offerings. This rarely happened because of natural monopoly
> both in the last mile and in negotiating broadcast rights (mostly for
> sports.) There is only one broadcast rights winner, e.g. NBC for the
> Olympics, and only one last mile winner. That's been proven empirically in
> the U.S.
>
>         Yes, that is why the operator of the last mile, should really not
> offer services over that mile itself. Real competition on the access lines
> themselves is not going to happen (at least not is sufficient number to
> make a market solution viable), but there is precedence of getting enough
> service providers to offer their services over access lines (e.g.
> Amsterdam).
>
> > Now cities are dependent on those franchise fees for their budgets. And
> the cable cos rolled up to a national level. So it's mostly the FCC that
> regulates all of this where they care more about Janet Jackson's breast
> than providing accurate news to help a democracy function well.
> https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
> >
> > It gets worse as people are moving to unicast networks for their "news."
> But we're really not getting news at all, we're gravitating to emotional
> validations per our dysfunctions. Facebook et al happily provide this
> because it sells more ads. And then the major equipment providers claim
> they're doing great engineering because they can carry "AI loads!!" and
> their stock goes up in value.  This means ads & news feeds that trigger
> dopamine hits for addicts are driving the money flows. Which is a sad theme
> for undereducated populations.
>
>         I am not 100% sure this is a uni- versus broadcast issue... even
> on uni-cast I can consume traditional middle-of the road news and even on
> broadcast I can opt for pretend-news. Sure the social media explosion with
> its auto-bias-amplification incentives (they care for time spend on the
> platform and will show anything they believe will people stay longer, and
> guess what that is not a strategy to rhymes well with objective information
> transmission, but emotional engagement, often negative, but I think we all
> know this).
>
>
> >
> > And ChatGPT is not the answer for our lack of education and a public
> obligation to support those educations, which includes addiction recovery
> programs, and the ability to think critically for ourselves.
>
>         Yes, for sure not ;) This is a fad mostly, and will go away some
> time in the future, once people realize that this flavor of machine
> learning is great for what it is, but what it is is not what we are prone
> to believe it is...
>
> Regards
>         Sebastian
>
>
> >
> > Bob
> >> Here is an old (2014) post on Stockholm to my class "textbook":
> >>
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
> >> [1]
> >> Stockholm: 19 years of municipal broadband success [1]
> >> The Stokab report should be required reading for all local government
> >> officials. Stockholm is one of the  top Internet cities in the worl...
> >> cis471.blogspot.com
> >> -------------------------
> >> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> >> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
> >> Sent: Sunday, March 26, 2023 2:11 PM
> >> To: David Lang <david@lang.hm>
> >> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
> >> <frantisek.borsik@gmail.com>; libreqos
> >> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
> >> <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>;
> >> bloat <bloat@lists.bufferbloat.net>
> >> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
> >> w/Comcast chat
> >> Hi David,
> >>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
> >>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
> >>>>> The point of the thread is that we still do not treat digital
> >> communications infrastructure as life support critical.
> >>>>      Well, let's keep things in perspective, unlike power, water
> >> (fresh and waste), and often gas, communications infrastructure is
> >> mostly not critical yet. But I agree that we are clearly on a path in
> >> that direction, so it is time to look at that from a different
> >> perspective.
> >>>>      Personally, I am a big fan of putting the access network into
> >> communal hands, as these guys already do a decent job with other
> >> critical infrastructure (see list above, plus roads) and I see a PtP
> >> fiber access network terminating in some CO-like locations a viable
> >> way to allow ISPs to compete in the internet service field all the
> >> while using the communally build access network for a few. IIRC this
> >> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
> >> beed essentially unchanged for decades, I estimate that current fiber
> >> access lines would also last for decades requiring no active component
> >> changes in the field, making them candidates for communal management.
> >> (With all my love for communal ownership and maintenance, these
> >> typically are not very nimble and hence best when we talk about life
> >> times of decades).
> >>> This is happening in some places (the town where I live is doing
> >> such a rollout), but the incumbant ISPs are fighting this and in many
> >> states have gotten laws created that prohibit towns from building such
> >> systems.
> >>        A resistance that in the current system is understandable*...
> >> btw, my point is not wanting to get rid of ISPs, I really just think
> >> that the access network is more of a natural monopoly and if we want
> >> actual ISP competition, the access network is the wrong place to
> >> implement it... as it is unlikely that we will see multiple ISPs
> >> running independent fibers to all/most dwelling units... There are two
> >> ways I see to address this structural problem:
> >> a) require ISPs to rent the access links to their competitors for
> >> "reasonable" prices
> >> b) as I proposed have some non-ISP entity build and maintain the
> >> access network
> >> None of these is terribly attractive to current ISPs, but we already
> >> see how the economically more attractive PON approach throws a spanner
> >> into a), on a PON the competitors might get bitstream access, but will
> >> not be able to "light up" the fiber any way they see fit (as would be
> >> possible in a PtP deployment, at least in theory). My subjective
> >> preference is b) as I mentioned before, as I think that would offer a
> >> level playing field for ISPs to compete doing what they do best, offer
> >> internet access service while not pushing the cost of the access
> >> network build-out to all-fiber onto the ISPs. This would allow a
> >> fairer, less revenue driven approach to select which areas to convert
> >> to FTTH first....
> >> However this is pretty much orthogonal to Bob's idea, as I understand
> >> it, as this subthread really is only about getting houses hooked up to
> >> the internet and ignores his proposal how to do the in-house network
> >> design in a future-proof way...
> >> Regards
> >>        Sebastian
> >> *) I am not saying such resistance is nice or the right thing, just
> >> that I can see why it is happening.
> >>> David Lang
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >> Links:
> >> ------
> >> [1]
> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>
>

[-- Attachment #2: Type: text/html, Size: 16699 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] Enabling a production model
  2023-03-29 13:40                                                                                                   ` [LibreQoS] Enabling a production model Dave Taht
@ 2023-03-29 14:54                                                                                                     ` dan
  2023-03-29 16:53                                                                                                       ` Jeremy Austin
  2023-03-29 17:13                                                                                                       ` [LibreQoS] [Bloat] " David Lang
  0 siblings, 2 replies; 183+ messages in thread
From: dan @ 2023-03-29 14:54 UTC (permalink / raw)
  To: Dave Taht
  Cc: Doc Searls, Dave Taht via Starlink, Dave Collier-Brown, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 3863 bytes --]

>
>
>
> > Always a mistake to generalize from a sample of one, but in my case I
> have four, because I live in four places. So I like to think that, to some
> degree, I represent a kind of market demand.
> >
> > All those places—Santa Barbara (CA), New York (NY), Bloomington (IN),
> and San Marino (CA)—are served by cable monopolies (Cox, Spectrum,
> Comcast/Xfinity) that provide (or at least claim) 1 Gb service...
> downstream of course. One (Cox) provides 36 Mb of upstream capacity. The
> other two provide just 10 Mb.  Because of that, residents have no option to
> do much work, or to store large amounts of data, in clouds (to mention just
> one grace of upstream capacity). The market is rigged for consumption, not
> production, on the TV model. Same as it has been since commercial activity
> began to explode in 1995, when John Perry Barlow wrote Death From Above.
> It's killer. Please read it:
> https://dl.acm.org/doi/pdf/10.1145/203356.203358.
>
> I have been citing that piece left and right lately.
>


The problem is that this 'FiWi' model or the municipal backhaul model
FORCES this model.   The reason you are stuck with those providers is
because there is a monopoly designed into the system.  Without competition,
10Mbps is good enough.  There is no way for consumers to 'vote' with their
money because they can't pick another product or provider.


> I think the smartest thing any city can do to start with, is to
> establish a good ole-fashioned internet exchange point there, require
> those providing service in the city to interconnect,
> >
> > See what you think.
> >
> > For me, the promise of fiber is a huge attraction to living and working
> here. And I am not alone.
> >
>
>
This makes the municipality the internet provider.  Even if you get to pick
who does the upstream on the bits, it's ultimately the muni that repairs the
lines, handles the CPE, and handles the switching infrastructure in the
exchange.  So an ISP run by a city council?  A council who got elected to
'Karen' away about how cell towers give them 5G poisoning?  Disaster.

Take any city listed above and look at the water and waste facilities.  The
pockets of the city that are not served or are poorly served.  The
Flint, Michigans with one source of water that is contaminated.  How those
services just stop; homes beyond are on septic tanks and hauled water.
When you've destroyed all the ISPs, who's going to bring services to those
beyond the core?  The county?  Not sure if you've ever dealt with county
officials...

This entirely removes all choice.  The entire job of the ISP is the last
mile, there is no point in selling bits to individual users at the
exchange.  Take that away and the city itself is necessarily the ISP.  The
'exchange' model is fundamentally flawed because there's no money in it.
The city is going to have to raise taxes or charge for the last mile at the
same rates as the ISPs do, except more, because government is inefficient and
inflexible.   The upstream connectivity is the simplest and cheapest part
of being an ISP.

The solution to having monopolies control internet service isn't to create
a different monopoly to control internet service.

The obvious solution is to foster competition.  Anywhere you overlay cable
companies with fiber BOTH companies remain and compete against each other
and the cable company increases upload speeds.  If fiber was so naturally
superior, the cable companies would be erased.   I have MSP customers in
multiple markets with competing techs and it's VERY nice to be able to get
fiber and cable or terragraph and cable to a business for resilience.  I
cannot do that on single product dominated markets.  The 'exchange' model
above doesn't do it because of that single point of failure of the
municipal fiber.

[-- Attachment #2: Type: text/html, Size: 4428 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-29 13:46                                                                                               ` [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat Frantisek Borsik
@ 2023-03-29 14:57                                                                                                 ` Dave Taht
  2023-03-29 19:23                                                                                                   ` Sebastian Moeller
  0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-29 14:57 UTC (permalink / raw)
  To: Frantisek Borsik
  Cc: Sebastian Moeller, David Lang, Dave Taht via Starlink, libreqos,
	Larry Press, rjmcmahon, bloat

I ended up top posting, sorry. Frank: every conversation here does not
end as you say - there have been 14 years worth of conversation
here....

As for finding ways out of this mess, Olle, the future of the internet
as we know it is uncertain. It has long been obvious that "a
declaration of freedom of cyberspace" wouldn't work, but, overall, I
think we can continue to "shrink the world", and connect people
ever-better together.

I liked the approach we took in the mid-90s, in particular,
establishing non-profits to attempt to be neutral arbiters of how we
hooked the internet up together, ranging from a multiplicity of orgs
like ICANN (managing numbering), the RIR's (fairly distributing the
numbers), and the IETF and IEEE.

Places where we went commercial (like the name registrars) were pretty
competitive, but where we handed out "natural monopolies", somewhat
less so - .com for example, the gold rush for .tlds, but .org, at
least, worked out more or less, helping support isoc and the ietf.
Some RIRs were good (apnic, ripe, ARIN), some, like AFRINIC, not so
much. The commercial ISP market that started in the 90s was fueled by
a flat market for phone services, and the AT&Ts of then were caught
flatfooted by the sudden demand for phone lines that were nailed up
for 3 hours, rather than 3 minutes, as had been the case for voice
calls.

On the other hand, makers of needed infrastructure software, like
isc.org (makers of bind9 and dhcp), only survived due to the support of
a beneficent millionaire. DNS has kind of fallen into disrepair
along the edge. I would have really liked it had it remained viable
and email in particular, continued to transit all the way into the
home, where in the US at least, it would have had strong 4th amendment
protections.

It has been a bad decade or two for non-profits. They cannot lobby,
the structure of corporate and personal taxation has shifted away from
support for it, and the work they do to sustain the internet, far too
invisible to too many. There have been meetings for years about
internet governance from folk that wish to govern, but by design and
intent, I think, from those days, we attempted to make the Net
ungovernable, which I do think, remains a good thing - connecting
people to people - with as few intermediaries and influencers as
possible.

I can certainly now make a compelling argument for capital forces
distributing IPv4 address spaces better (which it is doing), but that
scarcity market excludes new entrants from getting online. I shudder
at whatever convolutions new broadband builders are going to have to
go through to provide decent ipv4 access...

It is also increasingly a bad-seeming market for the cell companies and
ISPs, with cries for subsidy or a two-way market billing the more
profitable cloudy service providers.

And so it goes.

A bit more below.


On Wed, Mar 29, 2023 at 6:46 AM Frantisek Borsik via LibreQoS
<libreqos@lists.bufferbloat.net> wrote:
>
> Guys, tell me why - besides that it's just the usual, human nature - why every discussion here ends with our version of the "reductio ad Hitlerum", which is, in my mind, more or less subtle attack on capitalism, entrepreneurship, corporations, market and the like.
> Also, more importantly, we all want to close that goddamn digital divide. And we are never gonna do it with fiber ONLY...not to mention FiWi.

The digital divide, if you count tethering to a cellphone, is largely
crossed in the USA, IMHO.

> Also, if there are some fruitful attempts to build some community broadband, be it fiber, wireless or mix...we end up with "yeah, but it's not done in big cities, just in some rural areas."

I look at the fiber effort in Bloomington, IL, that Doc just praised.
They have been at it now for 14 years.... I would really like a
starting point for cities to be merely enabling a local internet
exchange point and/or small data center.

>
> We need to close the digital divide - which is, mostly, located in the rural areas, e.g. to bring broadband where it's not or where it's not sufficient. There are a lot of tools in the toolbox, not just fiber, and every single one of them has its place and should be used and funded by the grant money. The majority of these places need to be served quickly and on the best effort a.k.a. what is actually possible and feasible in their respective territory, terrain...and not on the BS notion "GIGABIT or NOTHING", or even 100/20 or nothing, when 25/5 would be more than enough, for most of the cases, in the foreseeable future.

0) frank is quoting me from a BOFH-influenced new piece that I posted
the other day:
 https://blog.cerowrt.org/post/trouble_in_paradise/ that is so cynical
and depressing that I would prefer it not be spread around much. It is
funny, in spots, though.

1) I am really impressed with starlink's evolution. Someone can get
one, run a few radios or wires to their neighbors, and be sufficiently
online. That is not quite starlink's business model, but as they
cannot have high density in the first place, I wish they would
embrace it.

2) We have long shown here that 25/5 is more than enough for nearly
all present day uses of the internet... with good queueing. We have
not won that argument anywhere outside this community, as yet, but I
like to think the tide is turning.
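
As a back-of-envelope illustration of that claim (the per-application
bitrates below are rough nominal figures assumed for illustration, not
measurements from this thread), summing typical concurrent household
usage against a 25/5 Mbps link:

```python
# Illustrative check: does a household's concurrent usage fit in 25/5 Mbps?
# All bitrates are assumed nominal figures, not measured values.
DOWN_MBPS, UP_MBPS = 25.0, 5.0

# (application, downstream Mbps, upstream Mbps)
apps = [
    ("HD video stream",          5.0, 0.1),
    ("Video call",               2.5, 2.5),
    ("Web browsing + email",     2.0, 0.3),
    ("Music streaming",          0.3, 0.1),
    ("Cloud sync (background)",  1.0, 1.0),
]

down = sum(d for _, d, _ in apps)  # total downstream demand
up   = sum(u for _, _, u in apps)  # total upstream demand

print(f"downstream: {down:.1f}/{DOWN_MBPS:.0f} Mbps")
print(f"upstream:   {up:.1f}/{UP_MBPS:.0f} Mbps")
print("fits:", down <= DOWN_MBPS and up <= UP_MBPS)
```

The aggregate fits with headroom; the caveat in the text ("with good
queueing") is what keeps latency low in the moments when the link does
saturate.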

However, issues with backhaul remain, and we have other failure modes
emerging from layering umpteen layers of NAT on top of our overstressed
IPv4 networks (far, far worse in India and China).

Fiber is great for long distances, it is great in high-density
environments, and it is also great within a datacenter or internet
exchange point. As for to the home, I'm still of two minds regarding
GPON vs active Ethernet: I vastly prefer the idea of an interoperable
network with active fiber Ethernet gear you can get at Best Buy, but
nearly everyone with actual deployment experience is saying GPON...

> To let me bitch a bit about those bad corporations :) - just take a look at the market for WiFi routers. Most of the mainstream vendors ship old HW with old SW, it can be even an 8-10 year old kernel, they don't care about CVEs, they barely do some security updates - not to mention the regular SW upgrades (adding new features), they don't build to last...they want You to buy a new router every year or two. Dave's write up of this is here: https://blog.cerowrt.org/post/tango_on_turris/

This is actually a place where I think state governments could step up
and set minimum standards (much like california set emission standards
for cars, leading the nation) for the kind of gear that they are
willing to import, develop, or fund. IPv6, mandated. Good queuing,
also. And probably the one mandate that would establish a decent,
sustainable market for better gear, would be to mandate that all gear
sold here have a prompt (say 45 day) response to CVEs, and regular
software updates, for new features and other bugs. Software designed
around the world, but "built in america" would be a start towards me
sleeping a lot better about iot.

Actual federal involvement in the consumer space here would boot 95%
of the scary cheap stuff out of Amazon.

> And what Starlink did? Crazy, ridiculous story. It has been improved a bit, but it was meant to be good right from the box, bufferbloat fixed and all that jazz, because OpenWrt has it fixed, right?

I think they are NOT optimizing for speedtest anymore, which in part
is due to them no longer attempting to comply with the stupid RDOF
regulations regarding that - just providing an ever better service to
the folk that need it. They are really good nowadays at low levels
(e.g. videoconferencing) of bandwidth, and only get flaky when you
stress it out or are in areas with too many terminals.

Yes it could be much better.

More ISPs should flat out disregard speedtest results on building
their networks.

Plug - please see the latest demos of the stats we get out of libreqos
now up at https://payne.taht.net

>
> BUT still, to hand over even more control of the Internet infrastructure to the government is nonsense. Government can be a good servant, but a bad master. Exactly like the corporate world.

We always need balance in the farce.

>
> All the best,
>
> Frank
>
> Frantisek (Frank) Borsik
>
>
>
> https://www.linkedin.com/in/frantisekborsik
>
> Signal, Telegram, WhatsApp: +421919416714
>
> iMessage, mobile: +420775230885
>
> Skype: casioa5302ca
>
> frantisek.borsik@gmail.com
>
>
>
> On Wed, Mar 29, 2023 at 10:28 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Bob,
>>
>>
>> > On Mar 28, 2023, at 19:47, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>> >
>> > Interesting. I'm skeptical that our cities in the U.S. can get this (structural separation) right.
>>
>> There really isn't that much to get wrong: you build the access network and terminate the per-household fibers in large enough "exchanges", where you offer ISPs the chance to light up the fibers, on the premise that customers can use any ISP they want (that is present in the exchange)... and an ISP change will just be patched differently in the exchange.
>> While I think that local "government" also could successfully run internet access services, I see no reason why they should do so (unless there is no competition).
>> The goal here is to move the "natural monopoly" of the access network out of the hand of the "market" (as markets simply fail as optimizing resource allocation instruments under mono- and oligopoly conditions, on either side).
>>
>>
>> >
>> > Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the major media companies and placed a news obligation on them for these OTA rights. A society can't run a democracy well without quality and factual information to the constituents. Sadly, contract carriage got rid of that news as a public service obligation as predicted by Eli Noam. http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January 6th and an insurrection.
>>
>>
>>
>> >
>> > It takes a staff of 300 to produce 30 minutes of news three times a day. The co-axial franchise agreements per each city traded this obligation for a community access channel and a small studio, and annual franchise fees. History has shown this is insufficient for a city to provide quality news to its citizens. Community access channels failed miserably.
>>
>>         I would argue that there are things where cities excel and some where they are simply mediocre... managing monopoly infrastructure (like roads, water, sometimes power) with long amortization times is something they do well (either directly or via companies they own and operate).
>>
>> > Another requirement was two cables so there would be "competition" in the coaxial offerings. This rarely happened because of natural monopoly both in the last mile and in negotiating broadcast rights (mostly for sports.) There is only one broadcast rights winner, e.g. NBC for the Olympics, and only one last mile winner. That's been proven empirically in the U.S.
>>
>>         Yes, that is why the operator of the last mile should really not offer services over that mile itself. Real competition on the access lines themselves is not going to happen (at least not in sufficient numbers to make a market solution viable), but there is precedence of getting enough service providers to offer their services over access lines (e.g. Amsterdam).
>>
>> > Now cities are dependent on those franchise fees for their budgets. And the cable cos rolled up to a national level. So it's mostly the FCC that regulates all of this where they care more about Janet Jackson's breast than providing accurate news to help a democracy function well. https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
>> >
>> > It gets worse as people are moving to unicast networks for their "news." But we're really not getting news at all, we're gravitating to emotional validations per our dysfunctions. Facebook et al happily provide this because it sells more ads. And then the major equipment providers claim they're doing great engineering because they can carry "AI loads!!" and their stock goes up in value.  This means ads & news feeds that trigger dopamine hits for addicts are driving the money flows. Which is a sad theme for undereducated populations.
>>
>>         I am not 100% sure this is a uni- versus broadcast issue... even on uni-cast I can consume traditional middle-of-the-road news, and even on broadcast I can opt for pretend-news. Sure, the social media explosion comes with its auto-bias-amplification incentives (they care about time spent on the platform and will show anything they believe will make people stay longer, and guess what, that is not a strategy that rhymes well with objective information transmission, but with emotional engagement, often negative, but I think we all know this).
>>
>>
>> >
>> > And ChatGPT is not the answer for our lack of education and a public obligation to support those educations, which includes addiction recovery programs, and the ability to think critically for ourselves.
>>
>>         Yes, for sure not ;) This is a fad mostly, and will go away some time in the future, once people realize that this flavor of machine learning is great for what it is, but what it is is not what we are prone to believe it is...
>>
>> Regards
>>         Sebastian
>>
>>
>> >
>> > Bob
>> >> Here is an old (2014) post on Stockholm to my class "textbook":
>> >> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>> >> [1]
>> >> Stockholm: 19 years of municipal broadband success [1]
>> >> The Stokab report should be required reading for all local government
>> >> officials. Stockholm is one of the  top Internet cities in the worl...
>> >> cis471.blogspot.com
>> >> -------------------------
>> >> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>> >> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
>> >> Sent: Sunday, March 26, 2023 2:11 PM
>> >> To: David Lang <david@lang.hm>
>> >> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
>> >> <frantisek.borsik@gmail.com>; libreqos
>> >> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
>> >> <starlink@lists.bufferbloat.net>; rjmcmahon <rjmcmahon@rjmcmahon.com>;
>> >> bloat <bloat@lists.bufferbloat.net>
>> >> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
>> >> w/Comcast chat
>> >> Hi David,
>> >>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>> >>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>> >>>>> The point of the thread is that we still do not treat digital
>> >> communications infrastructure as life support critical.
>> >>>>      Well, let's keep things in perspective, unlike power, water
>> >> (fresh and waste), and often gas, communications infrastructure is
>> >> mostly not critical yet. But I agree that we are clearly on a path in
>> >> that direction, so it is time to look at that from a different
>> >> perspective.
>> >>>>      Personally, I am a big fan of putting the access network into
>> >> communal hands, as these guys already do a decent job with other
>> >> critical infrastructure (see list above, plus roads) and I see a PtP
>> >> fiber access network terminating in some CO-like locations a viable
>> >> way to allow ISPs to compete in the internet service field all the
>> >> while using the communally build access network for a few. IIRC this
>> >> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
>> >> been essentially unchanged for decades, I estimate that current fiber
>> >> access lines would also last for decades requiring no active component
>> >> changes in the field, making them candidates for communal management.
>> >> (With all my love for communal ownership and maintenance, these
>> >> typically are not very nimble and hence best when we talk about life
>> >> times of decades).
>> >>> This is happening in some places (the town where I live is doing
>> >> such a rollout), but the incumbent ISPs are fighting this and in many
>> >> states have gotten laws created that prohibit towns from building such
>> >> systems.
>> >>        A resistance that in the current system is understandable*...
>> >> btw, my point is not wanting to get rid of ISPs, I really just think
>> >> that the access network is more of a natural monopoly and if we want
>> >> actual ISP competition, the access network is the wrong place to
>> >> implement it... as it is unlikely that we will see multiple ISPs
>> >> running independent fibers to all/most dwelling units... There are two
>> >> ways I see to address this structural problem:
>> >> a) require ISPs to rent the access links to their competitors for
>> >> "reasonable" prices
>> >> b) as I proposed have some non-ISP entity build and maintain the
>> >> access network
>> >> None of these is terribly attractive to current ISPs, but we already
>> >> see how the economically more attractive PON approach throws a spanner
>> >> into a), on a PON the competitors might get bitstream access, but will
>> >> not be able to "light up" the fiber any way they see fit (as would be
>> >> possible in a PtP deployment, at least in theory). My subjective
>> >> preference is b) as I mentioned before, as I think that would offer a
>> >> level playing field for ISPs to compete doing what they do best, offer
>> >> internet access service while not pushing the cost of the access
>> >> network build-out to all-fiber onto the ISPs. This would allow a
>> >> fairer, less revenue driven approach to select which areas to convert
>> >> to FTTH first....
>> >> However this is pretty much orthogonal to Bob's idea, as I understand
>> >> it, as this subthread really is only about getting houses hooked up to
>> >> the internet and ignores his proposal how to do the in-house network
>> >> design in a future-proof way...
>> >> Regards
>> >>        Sebastian
>> >> *) I am not saying such resistance is nice or the right thing, just
>> >> that I can see why it is happening.
>> >>> David Lang
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/starlink
>> >> Links:
>> >> ------
>> >> [1] https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>>
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos



--
AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] Enabling a production model
  2023-03-29 14:54                                                                                                     ` dan
@ 2023-03-29 16:53                                                                                                       ` Jeremy Austin
  2023-03-29 18:33                                                                                                         ` [LibreQoS] [Starlink] " Sebastian Moeller
  2023-03-29 17:13                                                                                                       ` [LibreQoS] [Bloat] " David Lang
  1 sibling, 1 reply; 183+ messages in thread
From: Jeremy Austin @ 2023-03-29 16:53 UTC (permalink / raw)
  To: dan
  Cc: Dave Collier-Brown, Dave Taht, Dave Taht via Starlink,
	Doc Searls, bloat, libreqos

[-- Attachment #1: Type: text/plain, Size: 2086 bytes --]

On Wed, Mar 29, 2023 at 6:54 AM dan via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:

> The obvious solution is to foster competition.  Anywhere you overlay cable
>> companies with fiber BOTH companies remain and compete against each other
>> and the cable company increases upload speeds.  If fiber was so naturally
>> superior, the cable companies would be erased.   I have MSP customers in
>> multiple markets with competing techs and it's VERY nice to be able to get
>> fiber and cable or terragraph and cable to a business for resilience.  I
>> cannot do that on single product dominated markets.  The 'exchange' model
>> above doesn't do it because of that single point of failure of the
>> municipal fiber.
>
>
To say categorically that competition is the only solution disenfranchises
the sparse edge where it doesn’t pay to have a *single* terrestrial
incumbent, let alone two.

Yes, we will have StarLink, and perhaps eventually some competition to it
(Bezos), but there is no escaping the reality that competition in the last
mile destroys value.

Between StarLink densities and this utopia where both fiber and cable can
afford to build (and maintain!) enough customers lies a giant wasteland —
not enough customers for lines, too many for LEO. Fixed Wireless Access
helps, but even in that context competition destroys value.

You can have subsidy (“Broadband for All”) OR consumer choice, not both.

At this point I would hold up an Omnibus-podcast-like sign “Compatible With
Marxism”, or “Not Compatible With Marxism”, but I’m not sure which.

$.02
Jeremy

-- 
*Jeremy Austin*
Sr. Product Manager
*Preseem | Aterlo Networks*
Book a call: https://app.hubspot.com/meetings/jeremy548
1-833-773-7336 ext 718 *|* 1-907-803-5422
jeremy@aterlo.com
www.preseem.com

[-- Attachment #2: Type: text/html, Size: 5698 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat]  Enabling a production model
  2023-03-29 14:54                                                                                                     ` dan
  2023-03-29 16:53                                                                                                       ` Jeremy Austin
@ 2023-03-29 17:13                                                                                                       ` David Lang
  2023-03-29 17:34                                                                                                         ` dan
  2023-03-29 17:46                                                                                                         ` Rich Brown
  1 sibling, 2 replies; 183+ messages in thread
From: David Lang @ 2023-03-29 17:13 UTC (permalink / raw)
  To: dan
  Cc: Dave Taht, Dave Taht via Starlink, Doc Searls,
	Dave Collier-Brown, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 1811 bytes --]

On Wed, 29 Mar 2023, dan via Bloat wrote:

> The obvious solution is to foster competition.  Anywhere you overlay cable
> companies with fiber BOTH companies remain and compete against each other
> and the cable company increases upload speeds.  If fiber was so naturally
> superior, the cable companies would be erased.   I have MSP customers in
> multiple markets with competing techs and it's VERY nice to be able to get
> fiber and cable or terragraph and cable to a business for resilience.  I
> cannot do that on single product dominated markets.  The 'exchange' model
> above doesn't do it because of that single point of failure of the
> municipal fiber.

The problem is that laying cable (or provisioning wifi access to cover the area) 
is expensive, and if you try to have multiple different companies doing it, they 
each need a minimum density of users to make it worth their while.

In the current monopoly approach, they are required by contract to serve less 
profitable areas in order to be given the monopoly for the profitable ones, take 
away that monopoly, and further dilute the user density by having multiple 
companies provide service, and the result isn't good.

Even in the big cities where there is enough density, the results aren't pretty. 
Go back in history and look at what was happening with phone and power lines 
in places like New York City before the monopolies were set up. Moving to the 
regulated monopolies was hailed by users as a win from that chaos (including 
deliberate sabotage of competitors).

I'm in a Los Angeles suburb, and until recently, I couldn't even get fast cable 
service to my home; the city-owned fiber will be a huge win for me, and I can 
still have my starlink dish, cell phone, or (once they cover my area) a wireless 
ISP as a backup.

David Lang

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat]  Enabling a production model
  2023-03-29 17:13                                                                                                       ` [LibreQoS] [Bloat] " David Lang
@ 2023-03-29 17:34                                                                                                         ` dan
  2023-03-29 20:03                                                                                                           ` David Lang
  2023-04-02 12:00                                                                                                           ` [LibreQoS] [Starlink] " Sebastian Moeller
  2023-03-29 17:46                                                                                                         ` Rich Brown
  1 sibling, 2 replies; 183+ messages in thread
From: dan @ 2023-03-29 17:34 UTC (permalink / raw)
  To: David Lang
  Cc: Dave Taht, Dave Taht via Starlink, Doc Searls,
	Dave Collier-Brown, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 3415 bytes --]

On Mar 29, 2023 at 11:13:07 AM, David Lang <david@lang.hm> wrote:

> On Wed, 29 Mar 2023, dan via Bloat wrote:
>
> Even in the big cities where there is enough density, the results aren't pretty.
> Go back in history and look at what was happening with phone and power lines
> in places like New York City before the monopolies were set up. Moving to the
> regulated monopolies was hailed by users as a win from that chaos (including
> deliberate sabotage of competitors)
>
> I'm in a Los Angeles suburb, and until recently, I couldn't even get fast cable
> service to my home. The city-owned fiber will be a huge win for me, and I can
> still have my Starlink dish, cell phone, or (once they cover my area) a wireless
> ISP as a backup
>
> David Lang
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

When you said ‘even with’ you negated the previous point.  ‘Even with’
incredible density, the monopoly structure of broadband in America today
makes competition bureaucratically hard.  That should be the place where we
see fierce competition.  Or, that should be the place where fiber has
completely wiped out cable, yet it hasn’t.   There are only so many
conclusions available here: fiber isn’t actually that much better than
cable, or the monopolies have non-monetary protections so competition can’t
move in, or maybe those areas are already properly served 😕. The
commonality in non-rural or small-town-rural areas that have connectivity
struggles is the monopoly that is in the way.  Rural areas often have few
options because the returns aren’t there for big companies, but they are
for small companies if they were actually able to get into those markets.
If you build in a monopoly in the rural areas, when they grow they will
have the same issue the urban areas have: a monopoly that was paid to
deliver last decade’s services, and the only way they’ll upgrade is either
government money and mandates, or competition, which you’ve prevented.  You
put a monopoly in place and that will be nearly permanent.  Outside the
scope of this debate, but I’d rather see individual subsidies to promote
competition vs the government building out a monopoly.

I’ll remind you, I run 3 ISPs.  What limits my expansion is generally
protections given to a monopoly by local government.  You might ask Jeremy
from the previous comment; he has a direct view of 2 of these networks and
might attest that we do reasonably well and are one of the ISPs putting in
real effort.   We welcome competition because it gives us an opportunity to
be the best.  Nothing drives positive reviews for your company better
than being better than the other guys.

Also, in MOST of America, there is no shortage of money.  There is nothing
limiting multiple providers from building in.  You can find places where this
isn’t true, but for 90%+ it is.  I run my businesses covering mostly rural areas
in a red state that is on the lower end of incomes, and I’ve done this out
of pocket, operating in the black, and upgrading and expanding constantly.
I have 3 other WISPs, Spectrum, TDS, and CenturyLink in the area.  None of us
are hurting for money to expand services.  Also, I’m beating the
competition to the door vs their government money.

[-- Attachment #2: Type: text/html, Size: 4251 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 17:13                                                                                                       ` [LibreQoS] [Bloat] " David Lang
  2023-03-29 17:34                                                                                                         ` dan
@ 2023-03-29 17:46                                                                                                         ` Rich Brown
  2023-03-29 19:02                                                                                                           ` tom
  2023-03-29 19:11                                                                                                           ` Dave Collier-Brown
  1 sibling, 2 replies; 183+ messages in thread
From: Rich Brown @ 2023-03-29 17:46 UTC (permalink / raw)
  To: David Lang
  Cc: dan, Dave Collier-Brown, libreqos, Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 1159 bytes --]


> On Mar 29, 2023, at 1:13 PM, David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> The problem is that laying cable (or provisioning wifi access to cover the area) is expensive, and if you try to have multiple different companies doing it, they each need a minimum density of users to make it worth their while.

Yes, this stuff is expensive. Here is a reasonably current order-of-magnitude cost breakdown for a rural NH town nearby:

1) $55,000 per road-mile to design the system, get licenses to install on the utility poles, "make ready" (to check that the poles are ready for new facilities) and to hang the fiber on the pole. Installing coax would save $5K to $8K per mile.

2) $2,000 to $4,000 per premise to install the drop from the utility pole to the building, bring the fiber into the building and install the router. 

3) Pole rental (in NH) is about $10/pole/year. Figure one pole per 200 feet of road (about 26 poles per road-mile) to estimate the number of poles.

So density of customers is critical for the business case. That's why there are so many monopoly providers - it's costly to overbuild an already served area.
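The numbers above can be plugged into a quick back-of-the-envelope model. This is purely an illustrative sketch: the function name, the road-mile and premise counts in the example, and the $3K drop midpoint are my assumptions, not data for the actual town.

```python
# Back-of-the-envelope FTTH build-cost model using the order-of-magnitude
# figures quoted above. Example inputs (50 road miles, 400 premises) are
# made up for illustration only.

POLE_SPACING_FT = 200   # rough spacing between utility poles
FEET_PER_MILE = 5280

def build_cost(road_miles, premises,
               per_mile=55_000,         # design, licensing, make-ready, hanging fiber
               per_drop=3_000,          # midpoint of the $2K-$4K per-premise drop
               pole_rent_per_year=10):  # NH pole rental, per pole per year
    """Return (capex, annual pole rent, capex per premise)."""
    poles = road_miles * FEET_PER_MILE / POLE_SPACING_FT
    capex = road_miles * per_mile + premises * per_drop
    annual_opex = poles * pole_rent_per_year
    return capex, annual_opex, capex / premises

# ~8 premises per road-mile, typical of a sparse rural town
capex, opex, per_premise = build_cost(road_miles=50, premises=400)
print(f"capex ${capex:,.0f}, pole rent ${opex:,.0f}/yr, ${per_premise:,.0f} per premise")
# → capex $3,950,000, pole rent $13,200/yr, $9,875 per premise
```

Halving the density (same 50 road miles, 200 premises) pushes the per-premise capex from roughly $10K to nearly $17K, which is exactly why overbuilding an already served area rarely pencils out.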


[-- Attachment #2: Type: text/html, Size: 2209 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink]  Enabling a production model
  2023-03-29 16:53                                                                                                       ` Jeremy Austin
@ 2023-03-29 18:33                                                                                                         ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-29 18:33 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: dan, Dave Collier-Brown, libreqos, Dave Taht via Starlink, bloat

Hi Jeremy,


> On Mar 29, 2023, at 18:53, Jeremy Austin via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> 
> 
> On Wed, Mar 29, 2023 at 6:54 AM dan via LibreQoS <libreqos@lists.bufferbloat.net> wrote:
> The obvious solution is to foster competition.  Anywhere you overlay cable companies with fiber BOTH companies remain and compete against each other and the cable company increases upload speeds.  If fiber was so naturally superior, the cable companies would be erased.   I have MSP customers in multiple markets with competing techs and it's VERY nice to be able to get fiber and cable or terragraph and cable to a business for resilience.  I cannot do that on single product dominated markets.  The 'exchange' model above doesn't do it because of that single point of failure of the municipal fiber.
> 
> To say categorically that competition is the only solution disenfranchises the sparse edge where it doesn’t pay to have a *single* terrestrial incumbent, let alone two.
> 
> Yes, we will have StarLink, and perhaps eventually some competition to it (Bezos), but there is no escaping the reality that competition in the last mile destroys value.
> 
> Between StarLink densities and this utopia where both fiber and cable can afford to build (and maintain!) enough customers lie a giant wasteland — not enough customers for lines, too many for LEO. Fixed Wireless Access helps, but even in that context competition destroys value.

	Let's be real: even a dwelling unit that can choose between LTE/5G, DOCSIS cable, and FTTH will really be limited to a low single-digit number of ISPs. That is still an oligopoly situation, and we know that competition/markets do not work well in such situations.


> You can have subsidy (“Broadband for All”) OR consumer choice, not both.

	I argue that if, e.g., the same set of "hands" that builds/maintains the access roads to the dwelling units also deployed dark fiber concentrated in a few large enough "exchanges", that could actually offer consumer choice (by enabling ISPs to do what they do best: offer internet access service, lighting up those dark fibers) and broadband for all... (sooner or later; roll-out still takes a long while...)


> At this point I would hold up an Omnibus-podcast-like sign “Compatible With Marxism”, or “Not Compatible With Marxism”, but I’m not sure which.  \

	;)

> 
> $.02
> Jeremy
> 
> -- 
> 	
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> Book a call: https://app.hubspot.com/meetings/jeremy548
> 1-833-773-7336 ext 718 | 1-907-803-5422
> jeremy@aterlo.com
> www.preseem.com
>      
> 
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 17:46                                                                                                         ` Rich Brown
@ 2023-03-29 19:02                                                                                                           ` tom
  2023-03-29 19:08                                                                                                             ` Dave Taht
  2023-03-29 19:11                                                                                                           ` Dave Collier-Brown
  1 sibling, 1 reply; 183+ messages in thread
From: tom @ 2023-03-29 19:02 UTC (permalink / raw)
  To: 'Rich Brown', 'David Lang'
  Cc: 'Dave Taht via Starlink', 'dan',
	'Dave Collier-Brown', 'libreqos', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 2214 bytes --]

What's missing in this math is how much cheaper (and better) the
installation is if you displace, or hang from, the existing copper, which is
usually in a great position below the electricity, with almost no make-ready in
this case. The problem is getting rid of the almost-but-not-quite-unused copper,
plus ownership problems. I was on an FCC TAC which tried to plan for this 14
years ago, but it came to nothing.

 

Fiber and electric could also be buried along with road repaving, which is way
over-funded, to increase reliability and decrease ongoing maintenance costs.

 

[-- Attachment #2: Type: text/html, Size: 4980 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-29  8:28                                                                                             ` Sebastian Moeller
       [not found]                                                                                               ` <a2857ec4-a6ea-e9eb-cf99-17ef7ea08ef2@indexexchange.com>
  2023-03-29 13:46                                                                                               ` [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat Frantisek Borsik
@ 2023-03-29 19:02                                                                                               ` rjmcmahon
  2023-03-29 19:37                                                                                                 ` dan
  2 siblings, 1 reply; 183+ messages in thread
From: rjmcmahon @ 2023-03-29 19:02 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Larry Press, David Lang, dan, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat

Hi Sebastian,

I'm fine with municipal broadband projects. I do think they'll need to 
leverage the economy of scale driven by others. An ASIC tape-out, just 
for the design, is ~$80M and a minimum of 18 months of high-skill 
engineering work by many specialties, signal integrity, etc. Then, after 
all that, one has to get in line with a foundry that needs to produce in 
volume per their manufacturing economies of scale. These markets fundamentally 
have to be driven by large orders from providers with millions of 
subscribers. That's just the market & engineering reality of things.

An aspect of the FiWi argument is that these NRE spends today and 
tomorrow are mostly from SERDES & lasers/optics in the data centers and 
the CMOS radios & PHYs in handsets. Let us look here for the thousands 
of engineers needed and for the supply of parts for the next decade+. I 
don't see it coming from anywhere else.

Then we need the in-premise fiber installers and the OSP labor forces 
who are critical to our success.

And finally, it's the operations & management and the reduction of those 
expenses in a manner that scales.

Bob
> Hi Bob,
> 
> 
>> On Mar 28, 2023, at 19:47, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>> 
>> Interesting. I'm skeptical that our cities in the U.S. can get this 
>> (structural separation) right.
> 
> There really isn't that much to get wrong: you build the access
> network and terminate the per-household fibers in large enough
> "exchanges", where you offer ISPs the chance to light up the fibers on the
> premise that customers can use any ISP they want (that is present in
> the exchange)... and an ISP change will just be patched differently in
> the exchange.
> While I think that local "government" also could successfully run
> internet access services, I see no reason why they should do so
> (unless there is no competition).
> The goal here is to move the "natural monopoly" of the access network
> out of the hands of the "market" (as markets simply fail as optimizing
> resource-allocation instruments under mono- and oligopoly conditions,
> on either side).
> 
> 
>> 
>> Pre-coaxial cable & contract carriage, the FCC licensed spectrum to 
>> the major media companies and placed a news obligation on them for 
>> these OTA rights. A society can't run a democracy well without quality 
>> and factual information to the constituents. Sadly, contract carriage 
>> got rid of that news as a public service obligation as predicted by 
>> Eli Noam. http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we 
>> get January 6th and an insurrection.
> 
> 
> 
>> 
>> It takes a staff of 300 to produce 30 minutes of news three times a 
>> day. The co-axial franchise agreements per each city traded this 
>> obligation for a community access channel and a small studio, and 
>> annual franchise fees. History has shown this is insufficient for a 
>> city to provide quality news to its citizens. Community access 
>> channels failed miserably.
> 
> 	I would argue that there are things where cities excel and
> some where they are simply mediocre... managing monopoly
> infrastructure (like roads, water, sometimes power) with long
> amortization times is something they do well (either directly or via
> companies they own and operate).
> 
>> Another requirement was two cables so there would be "competition" in 
>> the coaxial offerings. This rarely happened because of natural 
>> monopoly both in the last mile and in negotiating broadcast rights 
>> (mostly for sports.) There is only one broadcast rights winner, e.g. 
>> NBC for the Olympics, and only one last mile winner. That's been 
>> proven empirically in the U.S.
> 
> 	Yes, that is why the operator of the last mile should really not
> offer services over that mile itself. Real competition on the access
> lines themselves is not going to happen (at least not in sufficient
> numbers to make a market solution viable), but there is precedent for
> getting enough service providers to offer their services over access
> lines (e.g. Amsterdam).
> 
>> Now cities are dependent on those franchise fees for their budgets. 
>> And the cable cos rolled up to a national level. So it's mostly the 
>> FCC that regulates all of this where they care more about Janet 
>> Jackson's breast than providing accurate news to help a democracy 
>> function well. 
>> https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy
>> 
>> It gets worse as people are moving to unicast networks for their 
>> "news." But we're really not getting news at all, we're gravitating to 
>> emotional validations per our dysfunctions. Facebook et al happily 
>> provide this because it sells more ads. And then the major equipment 
>> providers claim they're doing great engineering because they can carry 
>> "AI loads!!" and their stock goes up in value.  This means ads & news 
>> feeds that trigger dopamine hits for addicts are driving the money 
>> flows. Which is a sad theme for undereducated populations.
> 
> 	I am not 100% sure this is a uni- versus broadcast issue... even on
> unicast I can consume traditional middle-of-the-road news, and even on
> broadcast I can opt for pretend-news. Sure, the social media explosion
> has its auto-bias-amplification incentives (they care about time spent
> on the platform and will show anything they believe will make people
> stay longer, and guess what, that is not a strategy that rhymes well with
> objective information transmission, but with emotional engagement, often
> negative... but I think we all know this).
> 
> 
>> 
>> And ChatGPT is not the answer for our lack of education and a public 
>> obligation to support those educations, which includes addiction 
>> recovery programs, and the ability to think critically for ourselves.
> 
> 	Yes, for sure not ;) This is a fad mostly, and will go away some time
> in the future, once people realize that this flavor of machine
> learning is great for what it is, but what it is is not what we are
> prone to believe it is...
> 
> Regards
> 	Sebastian
> 
> 
>> 
>> Bob
>>> Here is an old (2014) post on Stockholm to my class "textbook":
>>> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html
>>> [1]
>>> Stockholm: 19 years of municipal broadband success [1]
>>> The Stokab report should be required reading for all local government
>>> officials. Stockholm is one of the  top Internet cities in the 
>>> worl...
>>> cis471.blogspot.com
>>> -------------------------
>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>> Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>
>>> Sent: Sunday, March 26, 2023 2:11 PM
>>> To: David Lang <david@lang.hm>
>>> Cc: dan <dandenson@gmail.com>; Frantisek Borsik
>>> <frantisek.borsik@gmail.com>; libreqos
>>> <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink
>>> <starlink@lists.bufferbloat.net>; rjmcmahon 
>>> <rjmcmahon@rjmcmahon.com>;
>>> bloat <bloat@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
>>> w/Comcast chat
>>> Hi David,
>>>> On Mar 26, 2023, at 22:57, David Lang <david@lang.hm> wrote:
>>>> On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:
>>>>>> The point of the thread is that we still do not treat digital
>>> communications infrastructure as life support critical.
>>>>>      Well, let's keep things in perspective, unlike power, water
>>> (fresh and waste), and often gas, communications infrastructure is
>>> mostly not critical yet. But I agree that we are clearly on a path in
>>> that direction, so it is time to look at that from a different
>>> perspective.
>>>>>      Personally, I am a big fan of putting the access network into
>>> communal hands, as these guys already do a decent job with other
>>> critical infrastructure (see list above, plus roads) and I see a PtP
>>> fiber access network terminating in some CO-like locations a viable
>>> way to allow ISPs to compete in the internet service field all the
>>> while using the communally build access network for a few. IIRC this
>>> is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
>>> been essentially unchanged for decades, I estimate that current fiber
>>> access lines would also last for decades requiring no active 
>>> component
>>> changes in the field, making them candidates for communal management.
>>> (With all my love for communal ownership and maintenance, these
>>> typically are not very nimble and hence best when we talk about life
>>> times of decades).
>>>> This is happening in some places (the town where I live is doing
>>> such a rollout), but the incumbent ISPs are fighting this and in many
>>> states have gotten laws created that prohibit towns from building 
>>> such
>>> systems.
>>>        A resistance that in the current system is understandable*...
>>> btw, my point is not wanting to get rid of ISPs, I really just think
>>> that the access network is more of a natural monopoly and if we want
>>> actual ISP competition, the access network is the wrong place to
>>> implement it... as it is unlikely that we will see multiple ISPs
>>> running independent fibers to all/most dwelling units... There are 
>>> two
>>> ways I see to address this structural problem:
>>> a) require ISPs to rent the access links to their competitors for
>>> "reasonable" prices
>>> b) as I proposed have some non-ISP entity build and maintain the
>>> access network
>>> None of these is terribly attractive to current ISPs, but we already
>>> see how the economically more attractive PON approach throws a 
>>> spanner
>>> into a), on a PON the competitors might get bitstream access, but 
>>> will
>>> not be able to "light up" the fiber any way they see fit (as would be
>>> possible in a PtP deployment, at least in theory). My subjective
>>> preference is b) as I mentioned before, as I think that would offer a
>>> level playing field for ISPs to compete doing what they do best, 
>>> offer
>>> internet access service while not pushing the cost of the access
>>> network build-out to all-fiber onto the ISPs. This would allow a
>>> fairer, less revenue driven approach to select which areas to convert
>>> to FTTH first....
>>> However this is pretty much orthogonal to Bob's idea, as I understand
>>> it, as this subthread really is only about getting houses hooked up 
>>> to
>>> the internet and ignores his proposal how to do the in-house network
>>> design in a future-proof way...
>>> Regards
>>>        Sebastian
>>> *) I am not saying such resistance is nice or the right thing, just
>>> that I can see why it is happening.
>>>> David Lang
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>> Links:
>>> ------
>>> [1] 
>>> https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 19:02                                                                                                           ` tom
@ 2023-03-29 19:08                                                                                                             ` Dave Taht
  2023-03-29 19:31                                                                                                               ` tom
  0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2023-03-29 19:08 UTC (permalink / raw)
  To: tom
  Cc: Rich Brown, David Lang, Dave Taht via Starlink, dan,
	Dave Collier-Brown, libreqos, bloat

On Wed, Mar 29, 2023 at 12:02 PM Tom Evslin via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> What’s missing in this math is how much cheaper (and better) the installation is if you displace or hang from the existing copper usually in great position below the electricity and almost no makeready in this case. Problem is getting rid of the almost but not quite unused copper plus ownership problems. I was on an FCC TAC which tried to plan for this 14 years ago but came to nothing.

What was the name of that?

I have been trying to find a great talk by Henning Schulzrinne about
the copper plant, that I think took place at IETF in the 2013? 2015?
timeframe, that so far I have had no luck in finding. Maybe I am
remembering the wrong conference...

Btw Henning is my nominee for the 5th FCC commissioner, if only we had
a vote: see: https://twitter.com/mtaht/status/1640480264760741889

It really bothers me that STILL both the CTO for the USA and the CTO
of the FCC, are only "acting".







-- 
AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 17:46                                                                                                         ` Rich Brown
  2023-03-29 19:02                                                                                                           ` tom
@ 2023-03-29 19:11                                                                                                           ` Dave Collier-Brown
  2023-04-02 11:39                                                                                                             ` [LibreQoS] [Bloat] [Starlink] " Sebastian Moeller
  1 sibling, 1 reply; 183+ messages in thread
From: Dave Collier-Brown @ 2023-03-29 19:11 UTC (permalink / raw)
  To: Rich Brown, David Lang; +Cc: dan, libreqos, Dave Taht via Starlink, bloat

[-- Attachment #1: Type: text/plain, Size: 3181 bytes --]

It can be worse than that: if a monopoly owns the poles, you're going to have to bury your fibre. That will cost you something like $800,000 per mile, more if you have to cross a road.

In my home town, Chatham, Ontario, the local ISP is installing fibre underground because the duopoly of cable and telephone companies won't rent them pole space, much less bandwidth on their existing fibre.

This works for Chatham and Blenheim and a few others, but not for the smaller towns of Bothwell or Dresden, much less any of the villages or individual farms. They're out of luck.

--dave


On 3/29/23 13:46, Rich Brown wrote:


On Mar 29, 2023, at 1:13 PM, David Lang via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:

The problem is that laying cable (or provisioning wifi access to cover the area) is expensive, and if you try to have multiple different companies doing it, they each need a minimum density of users to make it worth their while.

Yes, this stuff is expensive. Here is a reasonably current order-of-magnitude cost breakdown for a rural NH town nearby:

1) $55,000 per road-mile to design the system, get licenses to install on the utility poles, "make ready" (to check that the poles are ready for new facilities) and to hang the fiber on the pole. Installing coax would save $5K to $8K per mile.

2) $2,000 to $4,000 per premise to install the drop from the utility pole to the building, bring the fiber into the building and install the router.

3) Pole rental (in NH) is about $10/pole/year. Divide miles of road by 200 feet between poles to get an estimate of the number of poles.

So density of customers is critical for the business case. That's why there are so many monopoly providers - it's costly to overbuild an already served area.


--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dave.collier-brown@indexexchange.com<mailto:dave.collier-brown@indexexchange.com> |              -- Mark Twain



[-- Attachment #2: Type: text/html, Size: 4932 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-29 14:57                                                                                                 ` Dave Taht
@ 2023-03-29 19:23                                                                                                   ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-03-29 19:23 UTC (permalink / raw)
  To: Dave Täht
  Cc: Frantisek Borsik, David Lang, Dave Taht via Starlink, libreqos,
	Larry Press, rjmcmahon, bloat

Hi Dave,

edited down to a single point

> On Mar 29, 2023, at 16:57, Dave Taht <dave.taht@gmail.com> wrote:
> [...]
> Fiber is great for long distances, it is great in high density
> environments, and it is also great within a datacenter or internet
> exchange point. As for to the home, I'm still of two minds regarding
> GPON vs active ethernet, I vastly prefer the idea of an interoperable
> network with active fiber ethernet gear you can get at best buy, but
> nearly everyone with actual deployment experience is saying gpon...
> [...]
> --
> AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
> Dave Täht CEO, TekLibre, LLC

I agree with you, fully standardized ethernet over PtP fiber is preferable from an end-user perspective. The PONs really are making inroads for a number of reasons that are mostly attractive to those deploying them (and that seem to be related mostly to cost).
	The good thing is that at least GPON and XGS-PON (where I bothered to read up a bit; oh boy, ITU documents are not reader friendly) seem "good enough", and deploying even those requires pulling the hottest chestnuts out of the fire (per-dwelling-unit fiber deployment), so any switch back to a fully point-to-point network in the future (should that ever be required) should be considerably cheaper than the initial PON roll-out. However, I predict that PON will be good enough for quite a while....
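The "good enough" claim can be made concrete with a back-of-envelope share calculation. The downstream line rates are the standard GPON/XGS-PON figures; the split ratios are typical but illustrative assumptions:

```python
# Worst-case guaranteed downstream share per subscriber on a shared PON,
# vs a dedicated point-to-point (active Ethernet) link.
# Line rates are the nominal GPON / XGS-PON downstream rates; the 1:32
# and 1:64 split ratios are common but assumed here for illustration.

def per_sub_floor_mbps(line_rate_gbps, split):
    """Downstream Mbps per subscriber if every subscriber on the split
    pulls traffic simultaneously (the guaranteed floor, not the burst rate)."""
    return line_rate_gbps * 1000 / split

gpon   = per_sub_floor_mbps(2.488, 32)   # GPON at a 1:32 split
xgspon = per_sub_floor_mbps(9.953, 64)   # XGS-PON at a 1:64 split
ptp    = per_sub_floor_mbps(1.0, 1)      # active Ethernet, dedicated 1G
print(f"GPON 1:32 floor ~{gpon:.0f} Mbps, XGS-PON 1:64 floor ~{xgspon:.0f} Mbps, PtP {ptp:.0f} Mbps")
```

Even the worst-case GPON floor is tens of Mbps per subscriber, and in practice subscribers rarely all burst at once, which is why the shared medium holds up for now.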

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 19:08                                                                                                             ` Dave Taht
@ 2023-03-29 19:31                                                                                                               ` tom
  0 siblings, 0 replies; 183+ messages in thread
From: tom @ 2023-03-29 19:31 UTC (permalink / raw)
  To: 'Dave Taht'
  Cc: 'Rich Brown', 'David Lang',
	'Dave Taht via Starlink', 'dan',
	'Dave Collier-Brown', 'libreqos', 'bloat'

It was a TAC set up by Genachowski in 2010. Wheeler chaired the TAC before he was FCC chair. I don't remember if Henning was on it. Vint Cerf was.  A post I wrote about the TAC and the end of the PSTN is here: https://blog.tomevslin.com/2011/07/tac-to-fcc-set-a-date-certain-for-the-end-of-the-pstn.html

-----Original Message-----
From: Dave Taht <dave.taht@gmail.com> 
Sent: Wednesday, March 29, 2023 3:09 PM
To: tom@evslin.com
Cc: Rich Brown <richb.hanover@gmail.com>; David Lang <david@lang.hm>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>; dan <dandenson@gmail.com>; Dave Collier-Brown <dave.collier-Brown@indexexchange.com>; libreqos <libreqos@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Starlink] [Bloat] [LibreQoS] Enabling a production model

On Wed, Mar 29, 2023 at 12:02 PM Tom Evslin via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> What’s missing in this math is how much cheaper (and better) the installation is if you displace or hang from the existing copper, which usually sits in a great position below the electricity lines, with almost no make-ready needed in this case. The problem is getting rid of the almost-but-not-quite-unused copper, plus ownership problems. I was on an FCC TAC which tried to plan for this 14 years ago, but it came to nothing.

What was the name of that?

I have been trying to find a great talk by Henning Schulzrinne about the copper plant, which I think took place at an IETF meeting in the 2013? 2015?
timeframe, but so far I have had no luck finding it. Maybe I am remembering the wrong conference...

Btw Henning is my nominee for the 5th FCC commissioner, if only we had a vote: see: https://twitter.com/mtaht/status/1640480264760741889

It really bothers me that STILL both the CTO for the USA and the CTO of the FCC are only "acting".




>
>
> Also could be burying fiber and electric with road repaving which is way over-funded to increase reliability and decrease ongoing maintenance costs.
>
>
>
> From: Starlink <starlink-bounces@lists.bufferbloat.net> On Behalf Of 
> Rich Brown via Starlink
> Sent: Wednesday, March 29, 2023 1:46 PM
> To: David Lang <david@lang.hm>
> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>; dan 
> <dandenson@gmail.com>; Dave Collier-Brown 
> <dave.collier-Brown@indexexchange.com>; libreqos 
> <libreqos@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Starlink] [Bloat] [LibreQoS] Enabling a production model
>
>
>
>
>
> On Mar 29, 2023, at 1:13 PM, David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>
>
> The problem is that laying cable (or provisioning wifi access to cover the area) is expensive, and if you try to have multiple different companies doing it, they each need a minimum density of users to make it worth their while.
>
>
>
> Yes, this stuff is expensive, Here is reasonably current order-of-magnitude cost breakdown for a rural NH town nearby:
>
>
>
> 1) $55,000 per road-mile to design the system, get licenses to install on the utility poles, "make ready" (to check that the poles are ready for new facilities) and to hang the fiber on the pole. Installing coax would save $5K to $8K per mile.
>
>
>
> 2) $2,000 to $4,000 per premise to install the drop from the utility pole to the building, bring the fiber into the building and install the router.
>
>
>
> 3) Pole rental (in NH) is about $10/pole/year. Divide miles of road by 200 feet between poles to get an estimate of the number of poles.
>
>
>
> So density of customers is critical for the business case. That's why there are so many monopoly providers - it's costly to overbuild an already served area.
>
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



--
AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
Dave Täht CEO, TekLibre, LLC


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat
  2023-03-29 19:02                                                                                               ` rjmcmahon
@ 2023-03-29 19:37                                                                                                 ` dan
  0 siblings, 0 replies; 183+ messages in thread
From: dan @ 2023-03-29 19:37 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Larry Press, David Lang, Frantisek Borsik, libreqos,
	Dave Taht via Starlink, bloat, Sebastian Moeller

[-- Attachment #1: Type: text/plain, Size: 4249 bytes --]

On Mar 29, 2023 at 1:02:51 PM, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:

> Hi Sebastian,
>
> I'm fine with municipal broadband projects. I do think they'll need to
> leverage the economy of scale driven by others. An ASIC tape out, just
> for the design, is ~$80M and a minimum of 18 mos of high-skill,
> engineering work by many specialties, signal integrity, etc. Then, after
> all that, one has to get in line with a foundry that needs to produce in
> volume per their mfg economies of scale. These markets fundamentally
> have to be driven by large orders from providers with millions of
> subscribers. That's just the market & engineering reality of things.
>
>
Every ASIC necessary to deploy is already on the market in high volume.  No
additional ~$80M needs to be spent; that ~$80M MUST come from the customer at
the end of the day.  Another increase in broadband costs.   Every massive
change you suggest will pull money from actually running mainline fiber to
communities where various technologies can already deliver huge speeds at
low latency.  I’m an operator; my primary logistical limitation is the
inability to get 10Gbps+ fiber off the existing fiber footprint.  Even
the lowly DSL footprint could be upgraded with relative ease to get a few
hundred Mbps if, and forgive me for leaning on this so hard, the previously
designed monopoly that owned not only the copper plant but also the fiber
that is already there weren’t waiting around for the next government handout
before upgrading.   Fiber to the DSLAM and VDSL would be a nearly
instant upgrade to 100+ x 50+ speeds for easily 80% of rural users.

We don’t need a completely different model (FiWi) when we have all of the
parts and pieces in mass production and available right now; instead, we have a
political system that promotes monopoly and actively encourages monopolies to
wait until either a self-funded competitor moves in or government money
shows up with mandates.  There is no reason at all to have 3-7Mbps DSL in
most of America.  This is not a technical limit.

> An aspect of the FiWi argument is that these NRE spends today and
> tomorrow are mostly from SERDES & lasers/optics in the data centers and
> the CMOS radios & PHYs in handsets. Let us look here for the thousands
> of engineers needed and for the supply of parts for the next decade+. I
> don't see it coming from anywhere else.
>
We have 100G hardware routers from multiple vendors: Qualcomm, Broadcom,
Marvell.  We have 1-100G optics on the market today for cheap.  Marvell
makes a line of chips that can do 40Gbps hardware-switched or -routed for like
$20, and gets put in $200 MikroTik devices today. A grand gets you into a
device that can do 100G today.  Obviously that’s from the cheapest vendor,
but 2-10x that price will get you into the ‘good stuff’.  We already have
this.


> Then we need the in-premise fiber installers and the OSP labor forces
> who are critical to our success.
>
> And finally, it's the operations & management and the reduction of those
> expenses in a manner that scales.
>
>
Where exactly are the costs, operations, and management savings here?

Basically this leads me to a question, which I ask with an attempt to
avoid condescension: do you/have you run an ISP?  My operations and
management costs are primarily customer service and logistics (vehicles,
labor, and so on), not network management.

Fiber in-premise has negative value.  It’s more expensive to terminate
and repair, port costs are higher, you are vastly (like 100x) more likely to
damage a fiber patch cable vs cat5e, and the advantages of fiber are lost on
short distances.  1, 2.5, 5, and 10G copper is easy, cheap to terminate, cheap
to install, with cheap ports in switches, cheap ports on devices, and fast.  The
entire ‘need’ for fiber in this context is the FiWi concept of centralized
networking, which again IMO is something ALL IT/MSPs will outright reject,
killing it off for business uses, and which will not fare well for consumers
who are concerned more and more about privacy.

Just my opinion here, but the entirety of the FiWi concept will be dead on
arrival with almost all opposing it and only a few supporters.

[-- Attachment #2: Type: text/html, Size: 5612 bytes --]

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat]  Enabling a production model
  2023-03-29 17:34                                                                                                         ` dan
@ 2023-03-29 20:03                                                                                                           ` David Lang
  2023-04-02 12:00                                                                                                           ` [LibreQoS] [Starlink] " Sebastian Moeller
  1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2023-03-29 20:03 UTC (permalink / raw)
  To: dan
  Cc: David Lang, Dave Taht, Dave Taht via Starlink, Doc Searls,
	Dave Collier-Brown, libreqos, bloat

[-- Attachment #1: Type: text/plain, Size: 3898 bytes --]

On Wed, 29 Mar 2023, dan wrote:

> On Mar 29, 2023 at 11:13:07 AM, David Lang <david@lang.hm> wrote:
>
>> On Wed, 29 Mar 2023, dan via Bloat wrote:
>>
>> Even in the big cities where there is enough density, the results aren't
>> pretty.
>> Go back in history and look at what was happening with phone and power
>> lines
>> in places like New York City before the monopolies were setup. Moving to
>> the
>> regulated monoopolies was hailed by users as a win from that chaos
>> (including
>> deliberate sabatage of competitors)
>>
>> I'm in a Los Angeles Suburb, and until recently, I couldn't even get fast
>> cable
>> service to my home, the city owned fiber will be a huge win for me, and I
>> can
>> still have my starlink dish, cell phone, or (once they cover my area) a
>> wireless
>> ISP as a backup
>>
>> David Lang
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
> When you said ‘even with’ you negated the previous point.  ‘Even with’
> incredible density, the monopoly structure of broadband in America today
> makes competition bureaucratically hard.  That should be the place where we
> see fierce competition.

the monopoly structure prevents the competition; what was I not clear about 
related to that?

even in places where google fiber attempted to be competition, the incumbent 
monopoly blocked them by just being inconvenient with positioning fiber and got 
away with it. That's better than the old days of telephone service in NYC, where 
competitors would cut other people's wires, but not by a lot.

David Lang

>  Or, that should be the place the fiber has
> completely wiped out cable, yet it hasn’t.   There are only so many
> conclusions available here.  Fiber isn’t actually that much better than
> cable, or the monopolies have non-monetary protections so competition can’t
> move in,  or maybe those areas are already properly served 😕 . The
> commonality in non-rural or small-town-rural areas that have connectivity
> struggles is the monopoly that is in the way.  Rural areas often have few
> options because the returns aren’t there for big companies, but they are
> for small companies if they were actually able to get into those markets.
> If you build in a monopoly in the rural areas, when they grow they will
> have the same issue the urban areas have, a monopoly that was paid to
> deliver last decades services and the only way they’ll upgrade is either
> government money and mandates, or competition which you’ve prevented.  You
> put a monopoly in place and that will be nearly permanent.  Outside the
> scope of this debate but I’d rather see individual subsidies to promote
> competition vs the government building out a monopoly.
>
> I’ll remind you, I run 3 ISPs.  What limits my expansion is generally
> protections given to a monopoly by local government.  You might ask Jeremy
> from the previous comment, he has direct view to 2 of these networks and
> might attest that we do reasonably well and are one of the ISPs putting in
> real effort.   We welcome competition because it gives us an opportunity to
> be the best.  Nothing better to drive positive reviews for your company
> than being better than the other guys.
>
> Also, in MOST of America, there is no shortage of money.  There is nothing
> limiting multiple providers from building in.  You can find places this
> isn’t true but 90%+ is it.  I run my businesses covering mostly rural areas
> in a red state that is on the lower end of incomes and I’ve done this out
> of pocket, operating in the black, and upgrading and expanding constantly.
> I have 3 other wisps, spectrum, TDS, Century Link  in the area.  None of us
> are hurting for money to expand services.  Also, I’m beating the
> competition to the door vs their government money.
>

^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Bloat] [Starlink]   Enabling a production model
  2023-03-29 19:11                                                                                                           ` Dave Collier-Brown
@ 2023-04-02 11:39                                                                                                             ` Sebastian Moeller
  0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-04-02 11:39 UTC (permalink / raw)
  To: Dave Collier-Brown
  Cc: Rich Brown, David Lang, Dave Taht via Starlink, dan, libreqos

Hi Dave,


> On Mar 29, 2023, at 21:11, Dave Collier-Brown via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> It can be worse than that: if a monopoly owns the poles, you're going to have to bury your fibre. That will cost you something like $800,000 per mile, more if you have to cross a road.
> 
> In my home town, Chatham, Ontario, the local ISP is installing fibre underground because the duopoly of cable and telephone companies won't rent them pole space, much less bandwidth on their existing fibre.

	And that is why we can't have nice things... IMHO this also nicely demonstrates that giving monopoly power to private companies has side-effects. Side-effects that can be remedied by sufficiently strong rules and regulations (which companies fight tooth and claw against).

> This works for Chatham and Blenheim and a few others, but not for the smaller towns of Bothwell or Dresden, much less any of the villages or individual farms. They're out of luck.

	Yes, it seems pretty clear that the way to assure near-universal internet access is not relying on "the free market" alone, and likely requires not treating each individual link as an individual project that needs to break even (over a reasonable amortization horizon).

	Now, I am sure there are ISPs around that do not abuse their position and that aim at giving even rural users internet access at acceptable cost, but the big incumbents that deliver internet to the masses, in my limited experience, do not excel at that unless "prodded" by rules and regulations.

Regards
	Sebastian

P.S.: dropped the bloat list, trying to appease Jan ;)


> 
> --dave
> 
> 
> 
> On 3/29/23 13:46, Rich Brown wrote:
>> [EXTERNAL] This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>> 
>> 
>>> On Mar 29, 2023, at 1:13 PM, David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> 
>>> The problem is that laying cable (or provisioning wifi access to cover the area) is expensive, and if you try to have multiple different companies doing it, they each need a minimum density of users to make it worth their while.
>> 
>> Yes, this stuff is expensive, Here is reasonably current order-of-magnitude cost breakdown for a rural NH town nearby:
>> 
>> 1) $55,000 per road-mile to design the system, get licenses to install on the utility poles, "make ready" (to check that the poles are ready for new facilities) and to hang the fiber on the pole. Installing coax would save $5K to $8K per mile.
>> 
>> 2) $2,000 to $4,000 per premise to install the drop from the utility pole to the building, bring the fiber into the building and install the router. 
>> 
>> 3) Pole rental (in NH) is about $10/pole/year. Divide miles of road by 200 feet between poles to get an estimate of the number of poles.
>> 
>> So density of customers is critical for the business case. That's why there are so many monopoly providers - it's costly to overbuild an already served area.
>> 
> -- 
> David Collier-Brown,         | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> 
> dave.collier-brown@indexexchange.com |              -- Mark Twain
> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 183+ messages in thread

* Re: [LibreQoS] [Starlink] [Bloat]  Enabling a production model
  2023-03-29 17:34                                                                                                         ` dan
  2023-03-29 20:03                                                                                                           ` David Lang
@ 2023-04-02 12:00                                                                                                           ` Sebastian Moeller
  1 sibling, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2023-04-02 12:00 UTC (permalink / raw)
  To: dan; +Cc: David Lang, Dave Collier-Brown, libreqos, Dave Taht via Starlink

Hi Dan,


> On Mar 29, 2023, at 19:34, dan via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> 
> 
> 
> 
> 
> On Mar 29, 2023 at 11:13:07 AM, David Lang <david@lang.hm> wrote:
>> On Wed, 29 Mar 2023, dan via Bloat wrote:
>> 
>> Even in the big cities where there is enough density, the results aren't pretty. 
>> Go back in history and look at what was happening with phone and power lines 
>> in places like New York City before the monopolies were set up. Moving to the 
>> regulated monopolies was hailed by users as a win from that chaos (including 
>> deliberate sabotage of competitors)
>> 
>> I'm in a Los Angeles Suburb, and until recently, I couldn't even get fast cable 
>> service to my home, the city owned fiber will be a huge win for me, and I can 
>> still have my starlink dish, cell phone, or (once they cover my area) a wireless 
>> ISP as a backup
>> 
>> David Lang
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 
> When you said ‘even with’ you negated the previous point.  ‘Even with’ incredible density the monopoly structure of broadband in America today makes competition bureaucratically hard.  That should be the place where we see fierce competition.  Or, that should be the place the fiber has completely wiped out cable, yet it hasn’t.   There are only so many conclusions available here.  Fiber isn’t actually that much better than cable, or the monopolies have non-monetary protections so competition can’t move in,  or maybe those areas are already properly served 😕 .

	Let's rephrase that: DOCSIS HFC networks currently allow sufficient service quality (aka speed, but we all agree that it is not actually "speed" nor what end-users should desire ;) ) to allow prices that make it economically problematic to deploy other costly access networks. This is orthogonal to the fact that in the intermediate term fiber will become more attractive, as it is getting harder to increase the rate of copper infrastructure (g.fast, DOCSIS 4.0), taking more and more heroic efforts, signal processing and power consumption. So the point IMHO is not fiber or something else, but only when fiber... ;) (I agree that both DOCSIS and VDSL2 can work just fine for today, but neither is terribly future proof, and both are essentially in the process of moving fiber closer and closer to the end-points already). So IMHO the long game clearly is fiber, and macro-economically every dollar spent on extension of copper networks instead of deploying fiber is a dollar wasted... (it still can be micro-economically in the interest of a company to extend the life of a copper plant).


> The commonality in non-rural or small-town-rural areas that have connectivity struggles is the monopoly that is in the way.  Rural areas often have few options because the returns aren’t there for big companies, but they are for small companies if they were actually able to get into those markets.  If you build in a monopoly in the rural areas, when they grow they will have the same issue the urban areas have, a monopoly that was paid to deliver last decades services and the only way they’ll upgrade is either government money and mandates, or competition which you’ve prevented.  

	The point is that (reasonably) fast access networks are "natural monopolies"; that is, once one ISP has wired up a dwelling unit it becomes harder to justify the cost of additional "wires". IMHO the reason why we often see both POTS and cable is that these were initially non-competing and tapping into different pools of end-users' willingness to pay. So even if no ISP is given true monopoly power over the access link, the effects are still similar. Plus, even if 3 ISPs would independently wire up a unit, that still leaves us deep in oligopoly territory, and we know that market mechanisms will still not work well to deliver internet access at reasonable cost.


> You put a monopoly in place and that will be nearly permanent.  Outside the scope of this debate but I’d rather see individual subsidies to promote competition vs the government building out a monopoly.

	In theory that sounds nice, but we will not see sufficient competition and choice in the access network to get us out of the monopoly/oligopoly regime. And there I subjectively favor monopolies in government hands, as government actually has checks and balances...
	That said, over here we end up giving subsidies, but at least encourage ISPs deploying fiber to offer bitstream access to their competitors. (Over POTS the incumbent is not merely encouraged but required via ex-ante regulation to offer bitstream access at controlled wholesale prices; for FTTH this currently is still only encouraged, but it seems clear that blatant abuse will result in ex-ante regulation again; let's see how well this works.)


> I’ll remind you, I run 3 ISPs.

	Thanks, that is why the discussion with you is so fruitful and interesting: you offer a perspective and well-founded arguments that as a pure end-user I do not see. So, let me take the opportunity to thank you.


>  What limits my expansion is generally protections given to a monopoly by local government.  

	Well, how would you fare in a situation like Amsterdam's, where a municipality could offer you dark fibers to each dwelling unit, terminating in a few data centers? That is, if you had equal access to the monopoly access network as all other ISPs?


> You might ask Jeremy from the previous comment, he has direct view to 2 of these networks and might attest that we do reasonably well and are one of the ISPs putting in real effort.   We welcome competition because it gives us an opportunity to be the best.  Nothing better to drive positive reviews for your company than being better than the other guys. 

	+1; alas, I do not see that spirit in the local incumbents... and here in Germany smaller ISPs are a mixed bag, ranging from enlightened ones that do not fear competition to those that try to build their own quasi-monopoly fiefdoms.


> Also, in MOST of America, there is no shortage of money.  There is nothing limiting multiple providers from building in.

	ROI... if you are the only one wiring up a place you essentially have a captive audience that (within reason) needs to accept your prices; if you are the second ISP wiring up a place, you now have to deal with that other ISP's pricing. As an example, the incumbent DOCSIS ISP in Germany a few years ago pushed down the monthly price for "gigabit internet" (~1000/50 Mbps) to ~40EUR/month, setting a price point that makes it hard for fiber ISPs to establish prices above. As an end-customer I do not complain, but I understand that this is intended to a) increase the customer base (DOCSIS ISPs are still well below the DSL ISPs even if just looking inside the cable footprint) and b) make it harder for the FTTH competition to quickly recoup their costs (this one is speculative, as nobody would openly admit that ;) ).
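The first-mover-vs-second-entrant dynamic can be sketched as a payback calculation. The ~40 EUR/month price point is from the message; the per-premise capex, take rates, and opex share are all hypothetical illustrations:

```python
# Illustrative payback model for wiring up a premise.  Per-premise capex,
# take rates, and the opex share are assumed numbers; only the ~40 EUR/month
# gigabit price point comes from the discussion above.

def payback_months(capex_per_premise, monthly_price, take_rate, opex_share=0.5):
    """Months needed to recoup per-premise capex from subscriber margin."""
    margin_per_premise = monthly_price * (1 - opex_share) * take_rate
    return capex_per_premise / margin_per_premise

# First mover captures most of the market; a second entrant splits it.
first  = payback_months(1500, 40, take_rate=0.6)
second = payback_months(1500, 40, take_rate=0.25)
print(f"first mover: {first:.0f} months, second entrant: {second:.0f} months")
```

With these (made-up) inputs the second entrant's payback horizon is more than double the first mover's, which is the "natural monopoly" effect in miniature: the economics, not regulation alone, deter the overbuilder.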


>  You can find places this isn’t true but 90%+ is it.  I run my businesses covering mostly rural areas in a red state that is on the lower end of incomes and I’ve done this out of pocket, operating in the black, and upgrading and expanding constantly.  I have 3 other wisps, spectrum, TDS, Century Link  in the area.  None of us are hurting for money to expand services.  Also, I’m beating the competition to the door vs their government money.  

	+1; good for your customers! Less so for customers only served by the incumbents, no?

Regards
	Sebastian



> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 183+ messages in thread

end of thread, other threads:[~2023-04-02 12:00 UTC | newest]

Thread overview: 183+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <mailman.2651.1672779463.1281.starlink@lists.bufferbloat.net>
     [not found] ` <1672786712.106922180@apps.rackspace.com>
     [not found]   ` <F4CA66DA-516C-438A-8D8A-5F172E5DFA75@cable.comcast.com>
2023-01-09 15:26     ` [LibreQoS] [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
2023-01-09 17:00       ` Sebastian Moeller
2023-01-09 17:04       ` Jeremy Austin
2023-01-09 18:33         ` Dave Taht
2023-01-09 18:54       ` [LibreQoS] [EXTERNAL] " Livingood, Jason
2023-01-09 19:19         ` [LibreQoS] [Rpm] " rjmcmahon
2023-01-09 19:56           ` dan
2023-01-09 21:00             ` rjmcmahon
2023-03-13 10:02             ` Sebastian Moeller
2023-03-13 15:08               ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] " Jeremy Austin
2023-03-13 15:50                 ` Sebastian Moeller
2023-03-13 16:06                   ` [LibreQoS] [Bloat] " Dave Taht
2023-03-13 16:19                     ` Sebastian Moeller
2023-03-13 16:12                   ` [LibreQoS] " dan
2023-03-13 16:36                     ` Sebastian Moeller
2023-03-13 17:26                       ` dan
2023-03-13 17:37                         ` Jeremy Austin
2023-03-13 18:34                           ` Sebastian Moeller
2023-03-13 18:14                         ` Sebastian Moeller
2023-03-13 18:42                           ` rjmcmahon
2023-03-13 18:51                             ` Sebastian Moeller
2023-03-13 19:32                               ` rjmcmahon
2023-03-13 20:00                                 ` Sebastian Moeller
2023-03-13 20:28                                   ` rjmcmahon
2023-03-14  4:27                                     ` [LibreQoS] On FiWi rjmcmahon
2023-03-14 11:10                                       ` [LibreQoS] [Starlink] " Mike Puchol
2023-03-14 16:54                                         ` [LibreQoS] [Rpm] " Robert McMahon
2023-03-14 17:06                                           ` Robert McMahon
2023-03-14 17:11                                             ` [LibreQoS] [Bloat] " Sebastian Moeller
2023-03-14 17:35                                               ` Robert McMahon
2023-03-14 17:54                                                 ` dan
2023-03-14 18:14                                                   ` Robert McMahon
2023-03-14 19:18                                                     ` dan
2023-03-14 19:30                                                       ` Dave Taht
2023-03-14 20:06                                                         ` rjmcmahon
2023-03-14 19:30                                                       ` rjmcmahon
2023-03-14 23:30                                                         ` [LibreQoS] [Starlink] [Bloat] [Rpm] " Bruce Perens
2023-03-15  0:11                                                           ` Robert McMahon
2023-03-15  5:20                                                             ` Bruce Perens
2023-03-15 16:17                                                               ` [LibreQoS] [Rpm] [Starlink] [Bloat] " Aaron Wood
2023-03-15 17:05                                                                 ` Bruce Perens
2023-03-15 17:44                                                                   ` rjmcmahon
2023-03-15 19:22                                                                   ` [LibreQoS] [Bloat] [Rpm] [Starlink] " David Lang
2023-03-15 17:32                                                               ` [LibreQoS] [Starlink] [Bloat] [Rpm] " rjmcmahon
2023-03-15 17:42                                                                 ` dan
2023-03-15 19:33                                                                   ` [LibreQoS] [Bloat] [Starlink] " David Lang
2023-03-15 19:39                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
2023-03-15 21:52                                                                       ` David Lang
2023-03-15 22:04                                                                         ` Dave Taht
2023-03-15 22:08                                                                           ` dan
2023-03-15 17:43                                                                 ` [LibreQoS] [Bloat] [Starlink] [Rpm] " Sebastian Moeller
2023-03-15 17:49                                                                   ` rjmcmahon
2023-03-15 17:53                                                                     ` [LibreQoS] [Rpm] [Bloat] [Starlink] " Dave Taht
2023-03-15 17:59                                                                       ` dan
2023-03-15 19:39                                                                       ` rjmcmahon
2023-03-17 16:38                                         ` [LibreQoS] [Rpm] " Dave Taht
2023-03-17 18:21                                           ` Mike Puchol
2023-03-17 19:01                                           ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
2023-03-17 19:19                                             ` [LibreQoS] [Rpm] [Starlink] " rjmcmahon
2023-03-17 20:37                                               ` [LibreQoS] [Starlink] [Rpm] " Bruce Perens
2023-03-17 20:57                                                 ` rjmcmahon
2023-03-17 22:50                                                   ` Bruce Perens
2023-03-18 18:18                                                     ` rjmcmahon
2023-03-18 19:57                                                       ` dan
2023-03-18 20:40                                                         ` rjmcmahon
2023-03-19 10:26                                                           ` Michael Richardson
2023-03-19 21:00                                                             ` [LibreQoS] On metrics rjmcmahon
2023-03-20  0:26                                                               ` dan
2023-03-20  3:03                                                                 ` [LibreQoS] [Starlink] " David Lang
2023-03-20 20:46                                                             ` [LibreQoS] [Rpm] [Starlink] On FiWi Frantisek Borsik
2023-03-20 21:28                                                               ` dan
2023-03-20 21:38                                                                 ` Frantisek Borsik
2023-03-20 22:02                                                                   ` [LibreQoS] On FiWi power envelope rjmcmahon
2023-03-20 23:47                                                                     ` [LibreQoS] [Starlink] " Bruce Perens
2023-03-21  0:10                                                                 ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
2023-03-21  5:21                                                                   ` Frantisek Borsik
2023-03-21 11:26                                                                     ` [LibreQoS] Annoyed at 5/1 Mbps Rich Brown
2023-03-21 12:31                                                                       ` [LibreQoS] [Starlink] " Sebastian Moeller
2023-03-21 12:53                                                                         ` Rich Brown
2023-03-21 17:22                                                                         ` dan
2023-03-21 19:04                                                                           ` Sebastian Moeller
2023-03-23 18:23                                                                             ` dan
2023-03-21 12:29                                                                     ` [LibreQoS] [Starlink] [Rpm] On FiWi Brandon Butterworth
2023-03-21 12:30                                                                   ` [LibreQoS] [Rpm] [Starlink] " Sebastian Moeller
2023-03-21 17:42                                                                     ` rjmcmahon
2023-03-21 18:08                                                                       ` rjmcmahon
2023-03-21 18:51                                                                         ` Frantisek Borsik
2023-03-21 19:58                                                                           ` rjmcmahon
2023-03-21 20:06                                                                             ` [LibreQoS] [Bloat] " David Lang
2023-03-25 19:39                                                                             ` [LibreQoS] On fiber as critical infrastructure w/Comcast chat rjmcmahon
2023-03-25 20:09                                                                               ` [LibreQoS] [Starlink] " Bruce Perens
2023-03-25 20:47                                                                                 ` rjmcmahon
2023-03-25 20:15                                                                               ` [LibreQoS] [Bloat] " Sebastian Moeller
2023-03-25 20:43                                                                                 ` rjmcmahon
2023-03-25 21:08                                                                                   ` [LibreQoS] [Starlink] " Bruce Perens
2023-03-25 22:04                                                                                     ` Robert McMahon
2023-03-25 22:50                                                                                       ` dan
2023-03-25 23:21                                                                                         ` Robert McMahon
2023-03-25 23:35                                                                                           ` [LibreQoS] [Bloat] [Starlink] " David Lang
2023-03-26  0:04                                                                                             ` Robert McMahon
2023-03-26  0:07                                                                                               ` Nathan Owens
2023-03-26  0:50                                                                                                 ` Robert McMahon
2023-03-26  8:45                                                                                                 ` Livingood, Jason
2023-03-26 18:54                                                                                                   ` rjmcmahon
2023-03-26  0:28                                                                                               ` David Lang
2023-03-26  0:57                                                                                                 ` Robert McMahon
2023-03-25 22:57                                                                                       ` [LibreQoS] [Starlink] [Bloat] " Bruce Perens
2023-03-25 23:33                                                                                         ` [LibreQoS] [Bloat] [Starlink] " David Lang
2023-03-25 23:38                                                                                         ` [LibreQoS] [Starlink] [Bloat] " Robert McMahon
2023-03-25 23:20                                                                                       ` [LibreQoS] [Bloat] [Starlink] " David Lang
2023-03-26 18:29                                                                                         ` rjmcmahon
2023-03-26 10:34                                                                                   ` [LibreQoS] [Bloat] " Sebastian Moeller
2023-03-26 18:12                                                                                     ` rjmcmahon
2023-03-26 20:57                                                                                     ` David Lang
2023-03-26 21:11                                                                                       ` Sebastian Moeller
2023-03-26 21:26                                                                                         ` David Lang
2023-03-28 17:06                                                                                         ` [LibreQoS] [Starlink] " Larry Press
2023-03-28 17:47                                                                                           ` rjmcmahon
2023-03-28 18:11                                                                                             ` Frantisek Borsik
2023-03-28 18:46                                                                                               ` rjmcmahon
2023-03-28 20:37                                                                                                 ` David Lang
2023-03-28 21:31                                                                                                   ` rjmcmahon
2023-03-28 22:18                                                                                                     ` dan
2023-03-28 22:42                                                                                                       ` rjmcmahon
2023-03-29  8:28                                                                                             ` Sebastian Moeller
     [not found]                                                                                               ` <a2857ec4-a6ea-e9eb-cf99-17ef7ea08ef2@indexexchange.com>
     [not found]                                                                                                 ` <716ECAAD-E2EE-4647-9E73-D60BF8BF9C1E@searls.com>
2023-03-29 13:40                                                                                                   ` [LibreQoS] Enabling a production model Dave Taht
2023-03-29 14:54                                                                                                     ` dan
2023-03-29 16:53                                                                                                       ` Jeremy Austin
2023-03-29 18:33                                                                                                         ` [LibreQoS] [Starlink] " Sebastian Moeller
2023-03-29 17:13                                                                                                       ` [LibreQoS] [Bloat] " David Lang
2023-03-29 17:34                                                                                                         ` dan
2023-03-29 20:03                                                                                                           ` David Lang
2023-04-02 12:00                                                                                                           ` [LibreQoS] [Starlink] " Sebastian Moeller
2023-03-29 17:46                                                                                                         ` Rich Brown
2023-03-29 19:02                                                                                                           ` tom
2023-03-29 19:08                                                                                                             ` Dave Taht
2023-03-29 19:31                                                                                                               ` tom
2023-03-29 19:11                                                                                                           ` Dave Collier-Brown
2023-04-02 11:39                                                                                                             ` [LibreQoS] [Bloat] [Starlink] " Sebastian Moeller
2023-03-29 13:46                                                                                               ` [LibreQoS] [Starlink] [Bloat] On fiber as critical infrastructure w/Comcast chat Frantisek Borsik
2023-03-29 14:57                                                                                                 ` Dave Taht
2023-03-29 19:23                                                                                                   ` Sebastian Moeller
2023-03-29 19:02                                                                                               ` rjmcmahon
2023-03-29 19:37                                                                                                 ` dan
2023-03-25 20:27                                                                               ` [LibreQoS] " rjmcmahon
2023-03-17 23:15                                             ` [LibreQoS] [Bloat] [Starlink] [Rpm] On FiWi David Lang
2023-03-13 19:33                           ` [LibreQoS] [Starlink] [Rpm] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA dan
2023-03-13 19:52                             ` Jeremy Austin
2023-03-13 21:00                               ` Sebastian Moeller
2023-03-13 21:27                                 ` dan
2023-03-14  9:11                                   ` Sebastian Moeller
2023-03-13 20:45                             ` Sebastian Moeller
2023-03-13 21:02                               ` [LibreQoS] When do you drop? Always! Dave Taht
2023-03-13 16:04                 ` [LibreQoS] UnderBloat on fiber and wisps Dave Taht
2023-03-13 16:09                   ` Sebastian Moeller
2023-01-09 20:49         ` [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA Dave Taht
2023-01-09 19:13       ` [LibreQoS] [Rpm] " rjmcmahon
2023-01-09 19:47         ` [LibreQoS] [Starlink] [Rpm] " Sebastian Moeller
2023-01-11 18:32           ` Rodney W. Grimes
2023-01-11 20:01             ` Sebastian Moeller
2023-01-11 21:46               ` Dick Roy
2023-01-12  8:22                 ` Sebastian Moeller
2023-01-12 18:02                   ` rjmcmahon
2023-01-12 21:34                     ` Dick Roy
2023-01-12 20:39                   ` Dick Roy
2023-01-13  7:33                     ` Sebastian Moeller
2023-01-13  8:26                       ` Dick Roy
2023-01-13  7:40                     ` rjmcmahon
2023-01-13  8:10                       ` Dick Roy
2023-01-15 23:09                         ` rjmcmahon
2023-01-11 20:09             ` rjmcmahon
2023-01-12  8:14               ` Sebastian Moeller
2023-01-12 17:49                 ` Robert McMahon
2023-01-12 21:57                   ` Dick Roy
2023-01-13  7:44                     ` Sebastian Moeller
2023-01-13  8:01                       ` Dick Roy
2023-01-09 20:20         ` [LibreQoS] [Rpm] [Starlink] " Dave Taht
2023-01-09 20:46           ` rjmcmahon
2023-01-09 20:59             ` Dave Taht
2023-01-09 21:06               ` rjmcmahon
2023-01-09 21:18                 ` rjmcmahon
2023-01-09 21:02             ` [LibreQoS] [Starlink] [Rpm] " Dick Roy
2023-01-10 17:36         ` [LibreQoS] [Rpm] [Starlink] " David P. Reed