[LibreQoS] [Rpm] [Starlink] On FiWi

rjmcmahon rjmcmahon at rjmcmahon.com
Tue Mar 21 15:58:17 EDT 2023


I was around when BGP & other critical junctures 
(https://en.wikipedia.org/wiki/Critical_juncture_theory) shaped the 
commercial internet. Here's a short write-up from another thread with 
some thoughts. (Note: there are no queues in the Schramm Model: 
https://en.wikipedia.org/wiki/Schramm%27s_model_of_communication )

On why we're here.

I think Stuart's point about not having the correct framing is spot on. 
I also think part of that may come from the internet's origin story 
so to speak. In the early days of the commercial internet, ISPs formed 
by buying MODEM banks from suppliers, connecting them to the telephone 
company central offices (thanks, Strowger!), and then leasing T1 lines 
from the same telco to connect the two. Products like the Cisco Access 
Gateway were used for the MODEM side. The roughly 4,000 independent 
ISPs that formed in the U.S. took advantage of statistical multiplexing 
of IP packets to optimize the PSTN's time-division multiplexing (TDM) 
design. That design had a lot of spare capacity because of the Mother's 
Day problem - the network had to carry the peak volume of calls. It was 
always odd to me that the telephone companies basically contracted out 
the statistical-to-TDM coupling of networks rather than doing it 
themselves. This was rectified with broadband, and nearly all the 
independent ISPs went out of business.
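
To make the statistical multiplexing advantage concrete, here's a 
small, illustrative Python sketch. The numbers are assumptions on my 
part (a T1's ~1.5 Mb/s payload, 64 kb/s per user, a guessed 10% 
activity factor), not historical data: a TDM design carves the T1 into 
24 dedicated channels, while packet multiplexing lets far more bursty 
users share the same link with a small probability of overload.

# Illustrative only: why per-packet statistical multiplexing beats rigid
# TDM for bursty users. Rates and activity factor are assumed values.
from math import comb

LINK_KBPS = 1536        # usable payload of a T1
USER_KBPS = 64          # per-user rate when actually sending
ACTIVITY = 0.10         # assumed fraction of time a user is active

def p_overload(n_users: int) -> float:
    """P(more active users than the link can carry), simple binomial model."""
    slots = LINK_KBPS // USER_KBPS          # 24 concurrent users fit
    return sum(comb(n_users, k) * ACTIVITY**k * (1 - ACTIVITY)**(n_users - k)
               for k in range(slots + 1, n_users + 1))

for n in (24, 100, 150, 200):
    print(f"{n:3d} users sharing the T1: P(overload) = {p_overload(n):.4f}")

That multiplexing gain is what the independent ISPs were selling back 
against the telcos' own TDM plant.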

IP statistical multiplexing was great except for one thing. The 
attached computers were faster than their network I/O, so TCP had to do 
things like congestion control to avoid network collapse, driven by 
congestion signals (and a very imperfect control loop). Basically, that 
extra TDM capacity for voice calls was consumed very quickly. This set 
in motion the idea that network channel capacity is a proxy for 
computer speed, which is basically accurate when networks are 
underprovisioned and congested. Van Jacobson's work was almost always 
about congestion on what today are bandwidth-constrained networks.
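
For anyone who hasn't looked at that control loop lately, here is a 
minimal Python sketch of the additive-increase/multiplicative-decrease 
(AIMD) behavior classic TCP converges on. The constants and the loss 
pattern are made-up illustrations, not anything Van Jacobson shipped:

# Minimal AIMD sketch: grow the congestion window by one segment per RTT,
# halve it on a congestion signal. Real TCP adds slow start, timeouts, etc.
def aimd_step(cwnd: float, congestion_signal: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """Return the next congestion window, in segments."""
    if congestion_signal:               # loss or ECN mark: back off
        return max(1.0, cwnd * decrease)
    return cwnd + increase              # otherwise probe for more capacity

cwnd = 1.0
pattern = [False] * 8 + [True] + [False] * 8 + [True]   # two contrived losses
for rtt, lost in enumerate(pattern):
    cwnd = aimd_step(cwnd, lost)
    print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} segments")

The sawtooth it prints is the point: the sender only discovers capacity 
by overshooting it, which is part of why channel capacity came to look 
like the limiting resource.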

This also started a bit of a culture war, colloquially known as 
Bellheads vs. Netheads. The human engineers took sides, more or less. 
The netheads mostly kept increasing capacity; the market demand curve 
for computer connections drove this. It's come to a head, though, in 
that netheads almost always overprovisioned, much like solving the 
Mother's Day problem. (This is different from the electric build-out, 
where the goal is to drive peak and average loads to converge in order 
to keep generators running efficiently at a constant speed.)

Many were at first stuck on the concept of bandwidth scarcity, per 
those origins. But then came bandwidth abundance, and many haven't 
adjusted. That's mental block number one. Mental block number two 
occurs when one sees all that bandwidth and says, let's use it all 
since it's going to be scarce, like a Great Depression-era person 
hoarding basic items.

A digression: this isn't that much different from the early days 
before Einstein. Einstein changed thinking by realizing that the speed 
of causality is defined, or limited, by the speed of massless 
particles, i.e. energy or photons. We all come from energy in one way 
or another. So of course it makes sense that our causality system, 
e.g. aging, is determined by that speed. That speed had to be the same 
in every inertial frame for Maxwell's equations to hold true - which 
Einstein took as given. A leap for us comes when we realize that the 
speed of causality, i.e. time, is fundamentally the speed of energy. 
It's true for all clocks, objects, etc. - even computers.

So when we engineer systems that queue information, we don't slow down 
energy, we slow down information. Computers are mass information 
tools, so slowing down information slows down distributed compute. As 
Stuart says, "It's the latency, stupid." It's physics too.

I was trying to explain to a dark fiber provider that I wanted 100Gb/s 
SFPs to a residential building in Boston. They said nobody needs 
100Gb/s, and that's correct from a link capacity perspective. But the 
economics & energy required for the lowest latency per bit delivered 
actually favor 100Gb/s SERDES attached to lasers attached to fiber.
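
As a back-of-the-envelope illustration of that latency-per-bit framing 
(my numbers, not the provider's economics), here's the serialization 
time of one full-size frame at a few line rates:

# Illustrative: per-bit time and per-frame serialization delay by line rate.
FRAME_BITS = 1500 * 8           # one full-size Ethernet frame

for rate_gbps in (1, 10, 100):
    bps = rate_gbps * 1e9
    per_bit_ns = 1e9 / bps                 # time on the wire per bit
    frame_us = FRAME_BITS / bps * 1e6      # serialization delay per frame
    print(f"{rate_gbps:3d} Gb/s: {per_bit_ns:5.2f} ns/bit, "
          f"{frame_us:7.2f} us per 1500-byte frame")

Going from 1 Gb/s to 100 Gb/s takes a 1500-byte frame from 12 
microseconds on the wire down to 120 nanoseconds; nobody needs the 
capacity, but everybody benefits from the per-bit latency.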

What we really want is low latency at the lowest energy possible, and 
also to be unleashed from cables (as we're not dogs.) Hence FiWi.

Bob

> I do believe that we all want to get the best - latency and speed,
> hopefully, in this particular order :-)
> The problem was that from the very beginning of the Internet (yeah, I
> was still not here, on this planet, when it all started), everything
> was optimised for speed, bandwidth and other numbers, but not so much
> for bufferbloat in general.
> Some of the things that go into the need for speed work directly
> against fixing latency... and it was not set up for that.
> Gamers and Covid (work from home, the need for the enterprise network
> but in homes...) bring it into the conversation, thankfully, and now we
> will deal with it.
> 
> Also, there is another thing I see and it's a negative sentiment
> against anything business (monetisation of, say - lower latency
> solutions) in general. If it comes from the general geeky/open
> source/etc folks, I can understand it a bit. But it comes also from
> the business people - assuming some of You work in big corporations
> or run ISPs. I'm all against cronyism, but to throw out the baby with
> the bathwater - to say that doing business (i.e. getting paid for
> delivering something that is missing/fixing something that is
> implemented insufficiently) is wrong, to look at it with disdain, is
> asinine.
> 
> This has the connection with the general "Net Neutrality" (NN)
> sentiment. I have 2 suggestions for reading from the other side of the
> aisle, on this topic: https://www.martingeddes.com/1261-2 [1] (Martin
> was censored by all major social media back then, during the days of
> NN fight in the FCC and elsewhere.) Second thing is written by one and
> only Dave Taht:
> https://blog.cerowrt.org/post/net_neutrality_customers/
> 
> To conclude, we need to find a way to benchmark and/or communicate
> (translate, if You will) the whole variety of network quality
> statistics/metrics (which are complex) - QoE, QoS, latency, jitter,
> bufferbloat... - into something that is meaningful for the end user.
> See this short proposition of the Quality of Outcome by Domos:
> https://www.youtube.com/watch?app=desktop&v=MRmcWyIVXvg&t=4185s
> There is definitely a lot of work ahead on this - both on finding the
> right benchmark and on actually measuring it - but it's a step in the
> right direction.
> 
> Looking forward to seeing Your take on that proposed Quality of
> Outcome. Thanks a lot.
> 
> All the best,
> 
> Frank
> 
> Frantisek (Frank) Borsik
> 
> https://www.linkedin.com/in/frantisekborsik
> 
> Signal, Telegram, WhatsApp: +421919416714
> 
> iMessage, mobile: +420775230885
> 
> Skype: casioa5302ca
> 
> frantisek.borsik at gmail.com
> 
> On Tue, Mar 21, 2023 at 7:08 PM rjmcmahon via Rpm
> <rpm at lists.bufferbloat.net> wrote:
> 
>> Also, I want my network to be the color clear because I value
>> transparency, honesty, and clarity.
>> 
>> https://carbuzz.com/news/car-colors-are-more-important-to-buyers-than-you-think
>> 
>> "There are many factors to consider when buying a new car, from
>> price
>> and comfort to safety equipment. For many people, color is another
>> important factor since it reflects their personality."
>> 
>> "In a study by Automotive Color Preferences 2021 Consumer Survey,
>> 4,000
>> people aged 25 to 60 in four of the largest car markets in the world
>> 
>> (China, Germany, Mexico and the US) were asked about their car color
>> 
>> preferences. Out of these, 88 percent said that color is a key
>> deciding
>> factor when buying a car."
>> 
>> Bob
>>> I think we may all be still stuck on numbers. Since infinity is
>>> taken, the new marketing number is "infinity & beyond" per Buzz
>>> Lightyear
>>> 
>>> Here's what I want, I'm sure others have ideas too:
>>> 
>>> o) We all deserve COPPA. Get the advertiser & their cohorts to stop
>>> mining my data & communications - limit or prohibit access to my
>>> information by those who continue to violate privacy rights
>>> o) An unlimited storage offering with the lowest possible latency
>>> paid for annually. That equipment ends up as close as possible to my
>>> main home per speed of light limits.
>>> o) Security of my network including 24x7x365 monitoring for breaches
>>> and for performance
>>> o) Access to any cloud software app. Google & Apple are getting
>>> something like 30% for every app on a phone. Seems like a last-mile
>>> provider should get a revenue share for hosting apps that aren't
>>> being downloaded. Blockbuster did this for DVDs before streaming
>>> took over. Revenue shares done properly, while imperfect, can work.
>>> o) A life-support capable, future proof, componentized, leash-free,
>>> in-home network that is dual-homed over the last mile for redundancy
>>> o) Per room FiWi and sensors that can be replaced and upgraded by me
>>> ordering and swapping the parts without an ISP getting all my
>>> neighbors' consensus & buy in
>>> o) VPN capabilities & offerings to the content rights owners'
>>> intellectual property for when the peering agreements fall apart
>>> o) Video conferencing that works 24x7x365 on all devices
>>> o) A single & robust shut-off circuit
>>> 
>>> Bob
>>> 
>>> PS. I think the sweet spot may turn out to be 100Gb/s when
>>> considering climate impact. Type 2 emissions are a big deal so we
>>> need to deliver the fastest causality possible (incl. no queueing)
>>> at the lowest energy consumption engineers can achieve.
>>> 
> 
> 
> Links:
> ------
> [1] https://www.martingeddes.com/1261-2

