* Re: [Starlink] It’s the Latency, FCC
[not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>
@ 2024-03-15 18:32 ` Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
2024-04-30 0:39 ` [Starlink] It’s " David Lang
0 siblings, 2 replies; 42+ messages in thread
From: Colin Higbie @ 2024-03-15 18:32 UTC (permalink / raw)
To: starlink
> I have now been trying to break the common conflation that download "speed"
> means anything at all for day to day, minute to minute, second to second, use,
> once you crack 10mbit, now, for over 14 years. Am I succeeding? I lost the 25/10
> battle, and keep pointing at really terrible latency under load and wifi weirdnesses
> for many existing 100/20 services today.
While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
Latency: below 50ms under load always feels good except for some intensive gaming
(I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
Cheers,
Colin
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
@ 2024-03-15 18:41 ` Colin_Higbie
2024-03-15 19:53 ` Spencer Sevilla
2024-04-30 0:39 ` [Starlink] It’s " David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-03-15 18:41 UTC (permalink / raw)
To: starlink
> I have now been trying to break the common conflation that download "speed"
> means anything at all for day to day, minute to minute, second to
> second, use, once you crack 10mbit, now, for over 14 years. Am I
> succeeding? I lost the 25/10 battle, and keep pointing at really
> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
Cheers,
Colin
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:41 ` Colin_Higbie
@ 2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
2024-03-15 23:07 ` David Lang
0 siblings, 2 replies; 42+ messages in thread
From: Spencer Sevilla @ 2024-03-15 19:53 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 19:53 ` Spencer Sevilla
@ 2024-03-15 20:31 ` Colin_Higbie
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-15 23:07 ` David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-03-15 20:31 UTC (permalink / raw)
To: Dave Taht via Starlink
Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
-----Original Message-----
From: Spencer Sevilla
Sent: Friday, March 15, 2024 3:54 PM
To: Colin_Higbie
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] It’s the Latency, FCC
Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some
> intensive gaming (I don't see any benefit to getting loaded latency
> further below ~20ms for typical applications, with an exception for
> cloud-based gaming that benefits with lower latency all the way down
> to about 5ms for young, really fast players, the rest of us won't be
> able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video
> streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
> depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
> higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
@ 2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
1 sibling, 2 replies; 42+ messages in thread
From: David Lang @ 2024-03-15 23:07 UTC (permalink / raw)
To: Spencer Sevilla; +Cc: Colin_Higbie, Dave Taht via Starlink
[-- Attachment #1: Type: TEXT/PLAIN, Size: 6025 bytes --]
one person's 'wasteful resolution' is another person's 'large enhancement'
going from 1080p to 4k video is not being wasteful, it's opting to use the
bandwidth in a different way.
saying that it's wasteful for someone to choose to do something is saying that
you know better what their priorities should be.
I agree that increasing resources allow programmers to be lazier and write apps
that are bigger, but they are also writing them in less time.
What right do you have to say that the programmer's time is less important than
the ram/bandwidth used?
I agree that it would be nice to have more people write better code, but
everything, including this, has trade-offs.
David Lang
On Fri, 15 Mar 2024, Spencer Sevilla via Starlink wrote:
> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>
> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>
> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>
>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 20:31 ` Colin_Higbie
@ 2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
0 siblings, 2 replies; 42+ messages in thread
From: Alexandre Petrescu @ 2024-03-16 17:18 UTC (permalink / raw)
To: starlink
Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>
> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
I think the 4K-latency discussion is a bit difficult, regardless of how
great the codecs are.
For one, 4K can be considered outdated for those who look forward to 8K
and why not 16K; so we should forget 4K. 8K is delivered from space
already by a japanese provider, but not on IP. So, if we discuss TV
resolutions we should look at these (8K, 16K, and why not 3D 16K for
ever more strength testing).
Second, 4K etc. are for TV. In TV the latency is rarely if ever an
issue. There are some rare cases where latency is very important in TV
(I could think of betting in sports, time synch of clocks) but they dont
look at such low latency as in our typical visioconference or remote
surgery or group music playing use-cases on Internet starlink.
So, I dont know how much 4K, 8K, 16K might be imposing any new latency
requirement on starlink.
Alex
>
>
> -----Original Message-----
> From: Spencer Sevilla
> Sent: Friday, March 15, 2024 3:54 PM
> To: Colin_Higbie
> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>
> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>
> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>
>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some
>> intensive gaming (I don't see any benefit to getting loaded latency
>> further below ~20ms for typical applications, with an exception for
>> cloud-based gaming that benefits with lower latency all the way down
>> to about 5ms for young, really fast players, the rest of us won't be
>> able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>> streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>> depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>> higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:18 ` Alexandre Petrescu
@ 2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
1 sibling, 0 replies; 42+ messages in thread
From: Alexandre Petrescu @ 2024-03-16 17:21 UTC (permalink / raw)
To: starlink
I retract the message, sorry, it is true that some teleoperation and
visioconf also use 4K. So the latency is important there too.
A visioconf with 8K and 3D 16K might need latency reqs too.
Le 16/03/2024 à 18:18, Alexandre Petrescu via Starlink a écrit :
>
> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>> Spencer, great point. We certainly see that with RAM, CPU, and
>> graphics power that the software just grows to fill up the space. I
>> do think that there are still enough users with bandwidth constraints
>> (millions of users limited to DSL and 7Mbps DL speeds) that it
>> provides some pressure against streaming and other services requiring
>> huge swaths of data for basic functions, but, to your point, if there
>> were a mandate that everyone would have 100Mbps connection, I agree
>> that would then quickly become saturated so everyone would need more.
>>
>> Fortunately, the video compression codecs have improved dramatically
>> over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9
>> and H.265. There's still room for further improvements, but I think
>> we're probably getting to a point of diminishing returns on further
>> compression improvements. Even with further improvements, I don't
>> think we'll see bandwidth needs drop so much as improved quality at
>> the same bandwidth, but this does offset the natural
>> bloat-to-fill-available-capacity movement we see.
>
> I think the 4K-latency discussion is a bit difficult, regardless of
> how great the codecs are.
>
> For one, 4K can be considered outdated for those who look forward to
> 8K and why not 16K; so we should forget 4K. 8K is delivered from
> space already by a japanese provider, but not on IP. So, if we
> discuss TV resolutions we should look at these (8K, 16K, and why not
> 3D 16K for ever more strength testing).
>
> Second, 4K etc. are for TV. In TV the latency is rarely if ever an
> issue. There are some rare cases where latency is very important in
> TV (I could think of betting in sports, time synch of clocks) but they
> dont look at such low latency as in our typical visioconference or
> remote surgery or group music playing use-cases on Internet starlink.
>
> So, I dont know how much 4K, 8K, 16K might be imposing any new latency
> requirement on starlink.
>
> Alex
>
>>
>>
>> -----Original Message-----
>> From: Spencer Sevilla
>> Sent: Friday, March 15, 2024 3:54 PM
>> To: Colin_Higbie
>> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>> Your comment about 4k HDR TVs got me thinking about the bandwidth
>> “arms race” between infrastructure and its clients. It’s a particular
>> pet peeve of mine that as any resource (bandwidth in this case, but
>> the same can be said for memory) becomes more plentiful, software
>> engineers respond by wasting it until it’s scarce enough to require
>> optimization again. Feels like an awkward kind of malthusian
>> inflation that ends up forcing us to buy newer/faster/better devices
>> to perform the same basic functions, which haven’t changed almost at
>> all.
>>
>> I completely agree that no one “needs” 4K UHDR, but when we say this
>> I think we generally mean as opposed to a slightly lower codec, like
>> regular HDR or 1080p. In practice, I’d be willing to bet that there’s
>> at least one poorly programmed TV out there that doesn’t downgrade
>> well or at all, so the tradeoff becomes "4K UHDR or endless
>> stuttering/buffering.” Under this (totally unnecessarily forced upon
>> us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise
>> arms raced ourselves into a TV that can’t really stream anything. A
>> technical downgrade from literally the 1960s.
>>
>> See also: The endless march of “smart appliances” and TVs/gaming
>> systems that require endless humongous software updates. My stove
>> requires natural gas and 120VAC, and I like it that way. Other stoves
>> require… how many Mbps to function regularly? Other food for thought,
>> I wonder how increasing minimum broadband speed requirements across
>> the country will encourage or discourage this behavior among network
>> engineers. I sincerely don’t look forward to a future in which we all
>> require 10Gbps to the house but can’t do much with it cause it’s all
>> taken up by lightbulb software updates every evening /rant.
>>
>>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink
>>> <starlink@lists.bufferbloat.net> wrote:
>>>
>>>> I have now been trying to break the common conflation that download
>>>> "speed"
>>>> means anything at all for day to day, minute to minute, second to
>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>> terrible latency under load and wifi weirdnesses for many existing
>>>> 100/20 services today.
>>> While I completely agree that latency has bigger impact on how
>>> responsive the Internet feels to use, I do think that 10Mbit is too
>>> low for some standard applications regardless of latency: with the
>>> more recent availability of 4K and higher streaming, that does
>>> require a higher minimum bandwidth to work at all. One could argue
>>> that no one NEEDS 4K streaming, but many families would view this as
>>> an important part of what they do with their Internet (Starlink
>>> makes this reliably possible at our farmhouse). 4K HDR-supporting
>>> TV's are among the most popular TVs being purchased in the U.S.
>>> today. Netflix, Amazon, Max, Disney and other streaming services
>>> provide a substantial portion of 4K HDR content.
>>>
>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming.
>>> 100/20 would provide plenty of bandwidth for multiple concurrent 4K
>>> users or a 1-2 8K streams.
>>>
>>> For me, not claiming any special expertise on market needs, just my
>>> own personal assessment on what typical families will need and care
>>> about:
>>>
>>> Latency: below 50ms under load always feels good except for some
>>> intensive gaming (I don't see any benefit to getting loaded latency
>>> further below ~20ms for typical applications, with an exception for
>>> cloud-based gaming that benefits with lower latency all the way down
>>> to about 5ms for young, really fast players, the rest of us won't be
>>> able to tell the difference)
>>>
>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>> streaming
>>>
>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>> depending on # of streams or if wanting to be ready for 8k
>>>
>>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>>> higher only needed for multiple concurrent outbound streams
>>>
>>> So, for example (and ignoring upload for this), I would rather have
>>> latency at 50ms (under load) and DL bandwidth of 25Mbps than latency
>>> of 1ms with a max bandwidth of 10Mbps, because the super-low latency
>>> doesn't solve the problem with insufficient bandwidth to watch 4K
>>> HDR content. But, I'd also rather have latency of 20ms with 100Mbps
>>> DL, then latency that exceeds 100ms under load with 1Gbps DL
>>> bandwidth. I think the important thing is to reach "good enough" on
>>> both, not just excel at one while falling short of "good enough" on
>>> the other.
>>>
>>> Note that Starlink handles all of this well, including kids watching
>>> YouTube while my wife and I watch 4K UHD Netflix, except the upload
>>> speed occasionally tops at under 3Mbps for me, causing quality
>>> degradation for outbound video calls (or used to, it seems to have
>>> gotten better in recent months – no problems since sometime in 2023).
>>>
>>> Cheers,
>>> Colin
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
@ 2024-03-16 17:36 ` Sebastian Moeller
2024-03-16 22:51 ` David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Sebastian Moeller @ 2024-03-16 17:36 UTC (permalink / raw)
To: Alexandre Petrescu; +Cc: Dave Taht via Starlink
Hi Alex...
> On 16. Mar 2024, at 18:18, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>
> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>>
>> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
>
> I think the 4K-latency discussion is a bit difficult, regardless of how great the codecs are.
>
> For one, 4K can be considered outdated for those who look forward to 8K and why not 16K; so we should forget 4K.
[SM] Mmmh, numerically that might make sense, however increasing the resolution of video material brings diminishing returns in perceived quality (the human optical system has limits...).... I remember well how the steps from QVGA, to VGA/SD to HD (720) to FullHD (1080) each resulted in an easily noticeable improvement in quality. However now I have a hard time seeing an improvement (heck even just noticing) if I see fullHD of 4K material on our 43" screen from a normal distance (I need to do immediate A?B comparisons from short distance)....
I am certainly not super sensitive/picky, but I guess others will reach the same point maybe after 4K or after 8K. My point is the potential for growth in resolution is limited by psychophysics (ultimately driven by the visual arc covered by individual photoreceptors in the fovea). And I am not sure whether for normal screen sizes and distances we do not already have past that point at 4K....
> 8K is delivered from space already by a japanese provider, but not on IP. So, if we discuss TV resolutions we should look at these (8K, 16K, and why not 3D 16K for ever more strength testing).
[SM] This might work as a gimmick for marketing, but if 16K does not deliver clearly superior experience why bother putting in the extra effort and energy (and storage size and network capacity) to actually deliver something like that?
>
> Second, 4K etc. are for TV. In TV the latency is rarely if ever an issue. There are some rare cases where latency is very important in TV (I could think of betting in sports, time synch of clocks) but they dont look at such low latency as in our typical visioconference or remote surgery
[SM] Can we please bury this example please, "remote surgery" over the best effort internet, is really really a stupid idea, or something that should only ever be attempted as the very last effort. As a society we already failed if we need to rely somthing like that.
> or group music playing use-cases
[SM] That IMHO is a great example for a realistic low latency use-case (exactly because the failure mode is not as catastrophic as in tele surgery, so this seems acceptable for a best effort network).
> on Internet starlink.
>
> So, I dont know how much 4K, 8K, 16K might be imposing any new latency requirement on starlink.
>
> Alex
>
>>
>>
>> -----Original Message-----
>> From: Spencer Sevilla
>> Sent: Friday, March 15, 2024 3:54 PM
>> To: Colin_Higbie
>> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>>
>> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>>
>> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>>
>>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>>> I have now been trying to break the common conflation that download "speed"
>>>> means anything at all for day to day, minute to minute, second to
>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>>
>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>>
>>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>>
>>> Latency: below 50ms under load always feels good except for some
>>> intensive gaming (I don't see any benefit to getting loaded latency
>>> further below ~20ms for typical applications, with an exception for
>>> cloud-based gaming that benefits with lower latency all the way down
>>> to about 5ms for young, really fast players, the rest of us won't be
>>> able to tell the difference)
>>>
>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>> streaming
>>>
>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>> depending on # of streams or if wanting to be ready for 8k
>>>
>>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>>> higher only needed for multiple concurrent outbound streams
>>>
>>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>>
>>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>>
>>> Cheers,
>>> Colin
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-15 23:07 ` David Lang
@ 2024-03-16 18:45 ` Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 23:05 ` David Lang
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
1 sibling, 2 replies; 42+ messages in thread
From: Colin_Higbie @ 2024-03-16 18:45 UTC (permalink / raw)
To: David Lang, Dave Taht via Starlink
Beautifully said, David Lang. I completely agree.
At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
I remember programming computer games in the 1980s and the typical RAM users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out and use 48kB and stop wasting time shoehorning my code or loading-in segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as the latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms.
In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use-cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.
That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't better than 1,000 for web browsing, just that the value for each further incremental reduction in latency drops significantly once you get to that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before maybe it was only "good enough" for web browsing (think geosynchronous satellites).
In other words, more important than just chasing ever lower latency, it's important to provide SUFFICIENTLY LOW latency for users to perform their intended applications. Getting even lower is still great for opening things up to new applications we never considered before, just like faster CPU's, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications.
Bandwidth scales a little differently. There's still a "good enough" level driven by time for a web page to load of about 5s (as web pages become ever more complex and dynamic, this means that bandwidth needs increase), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's also a linear scaling to the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know how latency is affected by users, but I know if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams.
So all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. It can be easily calculated per user based on expected use cases.
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Friday, March 15, 2024 7:08 PM
To: Spencer Sevilla <spencer.builds.networks@gmail.com>
Cc: Colin_Higbie <CHigbie1@Higbie.name>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Itʼs the Latency, FCC
one person's 'wasteful resolution' is another person's 'large enhancement'
going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way.
saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be.
I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time.
What right do you have to say that the programmer's time is less important than the ram/bandwidth used?
I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs.
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It?s the Latency, FCC
2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-03-16 18:51 ` Gert Doering
2024-03-16 23:08 ` David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Gert Doering @ 2024-03-16 18:51 UTC (permalink / raw)
To: David Lang; +Cc: Spencer Sevilla, Dave Taht via Starlink, Colin_Higbie
Hi,
On Fri, Mar 15, 2024 at 04:07:54PM -0700, David Lang via Starlink wrote:
> What right do you have to say that the programmer's time is less important
> than the ram/bandwidth used?
A single computer programmer saving some two hours in not optimizing code,
costing 1 extra minute for each of a million of users.
Bad trade-off in lifetime well-spent.
Gert Doering
-- NetMaster
--
have you enabled IPv6 on something today...?
SpaceNet AG Vorstand: Sebastian v. Bomhard, Michael Emmer,
Ingo Lalla, Karin Schuler
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444 USt-IdNr.: DE813185279
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 19:26 ` Colin_Higbie
2024-03-16 23:05 ` David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Sebastian Moeller @ 2024-03-16 19:09 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, Dave Taht via Starlink
> On 16. Mar 2024, at 19:45, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Beautifully said, David Lang. I completely agree.
>
> At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
>
> I remember programming computer games in the 1980s and the typical RAM users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out and use 48kB and stop wasting time shoehorning my code or loading-in segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as the latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms.
>
> In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use-cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.
>
> That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't better than 1,000 for web browsing, just that the value for each further incremental reduction in latency drops significantly once you get to that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before maybe it was only "good enough" for web browsing (think geosynchronous satellites).
>
> In other words, more important than just chasing ever lower latency, it's important to provide SUFFICIENTLY LOW latency for users to perform their intended applications. Getting even lower is still great for opening things up to new applications we never considered before, just like faster CPU's, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications.
>
> Bandwidth scales a little differently. There's still a "good enough" level driven by time for a web page to load of about 5s (as web pages become ever more complex and dynamic, this means that bandwidth needs increase), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's also a linear scaling to the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know how latency is affected by users, but I know if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams.
>
> So all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. It can be easily calculated per user based on expected use cases.
Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently, instead of a hard cliff there tends to be a slow increasing degradation...
And latency over the internet is never guaranteed, just as network paths outside a single AS rarely are guaranteed...
Now for different applications there are different amounts of delay that users find acceptable, for reaction time gates games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and and latency in the immediate access network will eat into this budget irrevocably ... and that e.g. restricts the "conus" of the world that can be reached/communicated within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
Regards
Sebastian
>
> Cheers,
> Colin
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Friday, March 15, 2024 7:08 PM
> To: Spencer Sevilla <spencer.builds.networks@gmail.com>
> Cc: Colin_Higbie <CHigbie1@Higbie.name>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
> one person's 'wasteful resolution' is another person's 'large enhancement'
>
> going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way.
>
> saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be.
>
> I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time.
>
> What right do you have to say that the programmer's time is less important than the ram/bandwidth used?
>
> I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs.
>
> David Lang
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 19:09 ` Sebastian Moeller
@ 2024-03-16 19:26 ` Colin_Higbie
2024-03-16 19:45 ` Sebastian Moeller
0 siblings, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-03-16 19:26 UTC (permalink / raw)
To: Sebastian Moeller, Dave Taht via Starlink
Sebastian,
Not sure if we're saying the same thing here or not. While I would agree with the statement that all else being equal, lower latency is always better than higher latency, there are definitely latency (and bandwidth) requirements, where if the latency is higher (or the bandwidth lower) than those requirements, the application becomes non-viable. That's what I mean when I say it falls short of being "good enough."
For example, you cannot have a pleasant phone call with someone if latency exceeds 1s. Yes, the call may go through, but it's a miserable UX. For VoIP, I would suggest the latency ceiling, even under load, is 100ms. That's a bit arbitrary and I'd accept any number roughly in that ballpark. If a provider's latency gets worse than that for more than a few percent of packets, then that provider should not be able to claim that their Internet supports VoIP.
If the goal is to ensure that Internet providers, including Starlink, are going to provide Internet to meet the needs of users, it is essential to understand the good-enough levels for the expected use cases of those users.
And we can do that based on the most common end-user applications:
Web browsing
VoIP
Video Conferencing
HD Streaming
4K HDR Streaming
Typical Gaming
Competitive Gaming
And maybe throw in to help users: time to DL and UL a 1GB file
Similarly, if we're going to evaluate the merits of government policy for defining latency and bandwidth requirements to qualify for earning taxpayer support, that comes down essentially to understanding those good-enough levels.
Cheers,
Colin
-----Original Message-----
From: Sebastian Moeller <moeller0@gmx.de>
Sent: Saturday, March 16, 2024 3:10 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: David Lang <david@lang.hm>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Itʼs the Latency, FCC
>...
Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently, instead of a hard cliff there tends to be a slow increasing degradation...
And latency over the internet is never guaranteed, just as network paths outside a single AS rarely are guaranteed...
Now for different applications there are different amounts of delay that users find acceptable, for reaction time gates games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and and latency in the immediate access network will eat into this budget irrevocably ... and that e.g. restricts the "conus" of the world that can be reached/communicated within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
Regards
Sebastian
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 19:26 ` Colin_Higbie
@ 2024-03-16 19:45 ` Sebastian Moeller
0 siblings, 0 replies; 42+ messages in thread
From: Sebastian Moeller @ 2024-03-16 19:45 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
Hi Colin,
> On 16. Mar 2024, at 20:26, Colin_Higbie <CHigbie1@Higbie.name> wrote:
>
> Sebastian,
>
> Not sure if we're saying the same thing here or not. While I would agree with the statement that all else being equal, lower latency is always better than higher latency, there are definitely latency (and bandwidth) requirements, where if the latency is higher (or the bandwidth lower) than those requirements, the application becomes non-viable. That's what I mean when I say it falls short of being "good enough."
>
> For example, you cannot have a pleasant phone call with someone if latency exceeds 1s. Yes, the call may go through, but it's a miserable UX.
[SM] That is my point, miserable but still funktional, while running a 100Kbps constant bitrate VoIP stream over a 50 Kbps link where the loss will make thinks completely imperceivable...
> For VoIP, I would suggest the latency ceiling, even under load, is 100ms. That's a bit arbitrary and I'd accept any number roughly in that ballpark. If a provider's latency gets worse than that for more than a few percent of packets, then that provider should not be able to claim that their Internet supports VoIP.
[SM] Interestingly the ITU claims that one way mouth-to-ear delay up to 200ms (aka 400ms RTT) results in very satisfied and up to 300ms OWD in satisfied telephony customers (ITU-T Rec. G.114 (05/2003)). That is considerably above your 100ms RTT. Now, I am still trying to find the actual psychophysics studies the ITU used to come to that conclusion (as I do not believe the numbers are showing what they are claimed to show), but policy makers still look at these numbers an take them as valid references.
> If the goal is to ensure that Internet providers, including Starlink, are going to provide Internet to meet the needs of users, it is essential to understand the good-enough levels for the expected use cases of those users.
>
> And we can do that based on the most common end-user applications:
>
> Web browsing
> VoIP
> Video Conferencing
> HD Streaming
> 4K HDR Streaming
> Typical Gaming
> Competitive Gaming
> And maybe throw in to help users: time to DL and UL a 1GB file
[SM] Only if we think of latency as a budget, if I can competitively play with a latency up to 100ms, any millisecond of delay I am spending on the access link is not going to restrict my "cone" of players I can match radius with by approximately 100Km...
> Similarly, if we're going to evaluate the merits of government policy for defining latency and bandwidth requirements to qualify for earning taxpayer support, that comes down essentially to understanding those good-enough levels.
[SM] Here is the rub, for 100 Kbps VoIP it is pretty simple to understand that it needs capacity >= 100 Kbps, but if competitive gaming needs an RTT <= 100 ms, what is an ecceptable split between access link and distance?
>
> Cheers,
> Colin
>
> -----Original Message-----
> From: Sebastian Moeller <moeller0@gmx.de>
> Sent: Saturday, March 16, 2024 3:10 PM
> To: Colin_Higbie <CHigbie1@Higbie.name>
> Cc: David Lang <david@lang.hm>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
>> ...
>
> Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently, instead of a hard cliff there tends to be a slow increasing degradation...
> And latency over the internet is never guaranteed, just as network paths outside a single AS rarely are guaranteed...
> Now for different applications there are different amounts of delay that users find acceptable, for reaction time gates games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and and latency in the immediate access network will eat into this budget irrevocably ... and that e.g. restricts the "conus" of the world that can be reached/communicated within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
>
> Regards
> Sebastian
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:36 ` Sebastian Moeller
@ 2024-03-16 22:51 ` David Lang
0 siblings, 0 replies; 42+ messages in thread
From: David Lang @ 2024-03-16 22:51 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Alexandre Petrescu, Dave Taht via Starlink
[-- Attachment #1: Type: TEXT/PLAIN, Size: 2613 bytes --]
On Sat, 16 Mar 2024, Sebastian Moeller via Starlink wrote:
> Hi Alex...
>
>> On 16. Mar 2024, at 18:18, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>
>> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>>> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>>>
>>> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
>>
>> I think the 4K-latency discussion is a bit difficult, regardless of how great the codecs are.
>>
>> For one, 4K can be considered outdated for those who look forward to 8K and why not 16K; so we should forget 4K.
>
> [SM] Mmmh, numerically that might make sense, however increasing the resolution of video material brings diminishing returns in perceived quality (the human optical system has limits...).... I remember well how the steps from QVGA, to VGA/SD to HD (720) to FullHD (1080) each resulted in an easily noticeable improvement in quality. However now I have a hard time seeing an improvement (heck even just noticing) if I see fullHD of 4K material on our 43" screen from a normal distance (I need to do immediate A?B comparisons from short distance)....
> I am certainly not super sensitive/picky, but I guess others will reach the same point maybe after 4K or after 8K. My point is the potential for growth in resolution is limited by psychophysics (ultimately driven by the visual arc covered by individual photoreceptors in the fovea). And I am not sure whether for normal screen sizes and distances we do not already have past that point at 4K....
true, but go to a 70" screen, or use it for a computer display instead of a TV
and you notice it much easier.
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
@ 2024-03-16 23:05 ` David Lang
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
1 sibling, 1 reply; 42+ messages in thread
From: David Lang @ 2024-03-16 23:05 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
On Sat, 16 Mar 2024, Colin_Higbie wrote:
> At the same time, I do think if you give people tools where latency is rarely
> an issue (say a 10x improvement, so perception of 1/10 the latency),
> developers will be less efficient UNTIL that inefficiency begins to yield poor
> UX. For example, if I know I can rely on latency being 10ms and users don't
> care until total lag exceeds 500ms, I might design something that uses a lot
> of back-and-forth between client and server. As long as there are fewer than
> 50 iterations (500 / 10), users will be happy. But if I need to do 100
> iterations to get the result, then I'll do some bundling of the operations to
> keep the total observable lag at or below that 500ms.
I don't think developers think about latency at all (as a general rule)
they develop and test over their local lan, and assume it will 'just work' over
the Internet.
> In terms of user experience (UX), I think of there as being "good enough"
> plateaus based on different use-cases. For example, when web browsing, even
> 1,000ms of latency is barely noticeable. So any web browser application that
> comes in under 1,000ms will be "good enough." For VoIP, the "good enough"
> figure is probably more like 100ms. For video conferencing, maybe it's 80ms
> (the ability to see the person's facial expression likely increases the
> expectation of reactions and reduces the tolerance for lag). For some forms of
> cloud gaming, the "good enough" figure may be as low as 5ms.
1 second for the page to load is acceptable (ot nice), but one second delay in
reacting to a clip is unacceptable.
As I understand it, below 100ms is considered 'instantanious response' for most
people.
> That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't
> better than 1,000 for web browsing, just that the value for each further
> incremental reduction in latency drops significantly once you get to that
> good-enough point. However, those further improvements may open entirely new
> applications, such as enabling VoIP where before maybe it was only "good
> enough" for web browsing (think geosynchronous satellites).
the problem is that latency stacks, you click on the web page, you do a dns
lookup for the page, then a http request for the page contents, which triggers a
http request for a css page, and possibly multiple dns/http requests for
libraries
so a 100ms latency on the network can result in multiple second page load times
for the user (even if all of the content ends up being cached already)
<snip a bunch of good discussion>
> So all taken together, there can be fairly straightforward descriptions of
> latency and bandwidth based on expected usage. These are not mysterious
> attributes. It can be easily calculated per user based on expected use cases.
however, the lag between new uses showing up and changes to the network driven
by those new uses is multiple years long, so the network operators and engineers
need to be proactive, not reactive.
don't wait until the users are complaining before upgrading bandwidth/latency
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It?s the Latency, FCC
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
@ 2024-03-16 23:08 ` David Lang
0 siblings, 0 replies; 42+ messages in thread
From: David Lang @ 2024-03-16 23:08 UTC (permalink / raw)
To: Gert Doering; +Cc: Spencer Sevilla, Dave Taht via Starlink, Colin_Higbie
if that one programmer's code is used by millions of users, you are correct, but
if it's used by dozens of users, not so much.
David Lang
On Sat, 16 Mar 2024, Gert Doering wrote:
> Hi,
>
> On Fri, Mar 15, 2024 at 04:07:54PM -0700, David Lang via Starlink wrote:
>> What right do you have to say that the programmer's time is less important
>> than the ram/bandwidth used?
>
> A single computer programmer saving some two hours in not optimizing code,
> costing 1 extra minute for each of a million of users.
>
> Bad trade-off in lifetime well-spent.
>
> Gert Doering
> -- NetMaster
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 23:05 ` David Lang
@ 2024-03-17 15:47 ` Colin_Higbie
2024-03-17 16:17 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown
0 siblings, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-03-17 15:47 UTC (permalink / raw)
To: David Lang, Dave Taht via Starlink
David,
Just on that one point that you "don't think developers think about latency at all," what developers (en masse, and as managed by their employers) care about is the user experience. If they don't think latency is an important part of the UX, then indeed they won't think about it. However, if latency is vital to the UX, such as in gaming or voice and video calling, it will be a focus.
Standard QA will include use cases that they believe reflect the majority of their users. We have done testing with artificially high latencies to simulate geosynchronous satellite users, back when they represented a notable portion of our userbase. They no longer do (thanks to services like Starlink and recent proliferation of FTTH and even continued spreading of slower cable and DSL availability into more rural areas), so we no longer include those high latencies in our testing. This does indeed mean that our services will probably become less tolerant of higher latencies (and if we still have any geosynchronous satellite customers, they may resent this possible degradation in service). Some could call this lazy on our part, but it's just doing what's cost effective for most of our users.
I'm estimating, but I think probably about 3 sigma of our users have typical latency (unloaded) of under 120ms. You or others on this list probably know better than I what fraction of our users will suffer severe enough bufferbloat to push a perceptible % of their transactions beyond 200ms.
Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
This is rational and what we should expect and want application and solution developers to do. We would not want developers to spend time, and thereby increase costs, focusing on areas that are not particularly important to their users and customers.
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Saturday, March 16, 2024 7:06 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: RE: [Starlink] It’s the Latency, FCC
On Sat, 16 Mar 2024, Colin_Higbie wrote:
> At the same time, I do think if you give people tools where latency is
> rarely an issue (say a 10x improvement, so perception of 1/10 the
> latency), developers will be less efficient UNTIL that inefficiency
> begins to yield poor UX. For example, if I know I can rely on latency
> being 10ms and users don't care until total lag exceeds 500ms, I might
> design something that uses a lot of back-and-forth between client and
> server. As long as there are fewer than
> 50 iterations (500 / 10), users will be happy. But if I need to do 100
> iterations to get the result, then I'll do some bundling of the
> operations to keep the total observable lag at or below that 500ms.
I don't think developers think about latency at all (as a general rule)
^ permalink raw reply [flat|nested] 42+ messages in thread
* [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
@ 2024-03-17 16:17 ` Dave Collier-Brown
0 siblings, 0 replies; 42+ messages in thread
From: Dave Collier-Brown @ 2024-03-17 16:17 UTC (permalink / raw)
To: starlink
On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:
> Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
Does that suggest that you should have an easy way to see if you're
unexpectedly delivering a slow service? A tool that reports your RTT to
customers and an alert on it being high for a significant period might
be something all ISPs want, even ones like mine, who just want it to be
able to tell a customer "you don't have a network problem" (;-))
And the FCC might find the data illuminating
--dave
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dave.collier-brown@indexexchange.com | -- Mark Twain
CONFIDENTIALITY NOTICE AND DISCLAIMER : This telecommunication, including any and all attachments, contains confidential information intended only for the person(s) to whom it is addressed. Any dissemination, distribution, copying or disclosure is strictly prohibited and is not a waiver of confidentiality. If you have received this telecommunication in error, please notify the sender immediately by return electronic mail and delete the message from your inbox and deleted items folders. This telecommunication does not constitute an express or implied agreement to conduct transactions by electronic means, nor does it constitute a contract offer, a contract amendment or an acceptance of a contract offer. Contract terms contained in this telecommunication are subject to legal review and the completion of formal documentation and are not binding until same is confirmed in writing and has been signed by an authorized signatory.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
@ 2024-04-30 0:39 ` David Lang
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
1 sibling, 1 reply; 42+ messages in thread
From: David Lang @ 2024-04-30 0:39 UTC (permalink / raw)
To: Colin Higbie; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 3755 bytes --]
hmm, before my DSL got disconnected (the carrier decided they didn't want to
support it any more), I could stream 4k at 8Mb down if there wasn't too much
other activity on the network (doing so at 2x speed was a problem)
David Lang
On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
> Date: Fri, 15 Mar 2024 18:32:36 +0000
> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
> Reply-To: Colin Higbie <colin.higbie@scribl.com>
> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to second, use,
>> once you crack 10mbit, now, for over 14 years. Am I succeeding? I lost the 25/10
>> battle, and keep pointing at really terrible latency under load and wifi weirdnesses
>> for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some intensive gaming
> (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-04-30 0:39 ` [Starlink] It’s " David Lang
@ 2024-04-30 1:30 ` Colin_Higbie
2024-04-30 2:16 ` David Lang
0 siblings, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-04-30 1:30 UTC (permalink / raw)
To: David Lang; +Cc: starlink
Was that 4K HDR (not SDR) using the standard protocols that streaming services use (Netflix, Amazon Prime, Disney+, etc.) or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then means it's SDR. Note that if YouTube, if left to the default of Auto for streaming resolution it will also automatically drop the quality to something that fits within the bandwidth and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually have distinct per-pixel image detail, meaning it compresses down to a smaller image with no real additional loss in picture quality, but only because it's really a 4K UHD stream to begin with.
Note that 4K video compression codecs are lossy, so the lower quality the initial image, the lower the bandwidth needed to convey the stream w/o additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on Newy Year's Eve or at the Super Bowl make for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving fast panning full dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
I'm dubious that 8Mbps can handle that except for some of the simplest video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
It's obviously in Netflix and the other streaming services' interest to be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Monday, April 29, 2024 8:40 PM
To: Colin Higbie <colin.higbie@scribl.com>
Cc: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] Itʼs the Latency, FCC
hmm, before my DSL got disconnected (the carrier decided they didn't want to support it any more), I could stream 4k at 8Mb down if there wasn't too much other activity on the network (doing so at 2x speed was a problem)
David Lang
On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
> Date: Fri, 15 Mar 2024 18:32:36 +0000
> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
> Reply-To: Colin Higbie <colin.higbie@scribl.com>
> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some
> intensive gaming (I don't see any benefit to getting loaded latency
> further below ~20ms for typical applications, with an exception for
> cloud-based gaming that benefits with lower latency all the way down
> to about 5ms for young, really fast players, the rest of us won't be
> able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video
> streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
> depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
> higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-04-30 2:16 ` David Lang
0 siblings, 0 replies; 42+ messages in thread
From: David Lang @ 2024-04-30 2:16 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
[-- Attachment #1: Type: text/plain, Size: 6673 bytes --]
Amazon, youtube set explicitly to 4k (I didn't say HDR)
David Lang
On Tue, 30 Apr 2024, Colin_Higbie wrote:
> Date: Tue, 30 Apr 2024 01:30:21 +0000
> From: Colin_Higbie <CHigbie1@Higbie.name>
> To: David Lang <david@lang.hm>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: RE: [Starlink] Itʼs the Latency, FCC
>
> Was that 4K HDR (not SDR) using the standard protocols that streaming services use (Netflix, Amazon Prime, Disney+, etc.) or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then means it's SDR. Note that if YouTube, if left to the default of Auto for streaming resolution it will also automatically drop the quality to something that fits within the bandwidth and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually have distinct per-pixel image detail, meaning it compresses down to a smaller image with no real additional loss in picture quality, but only because it's really a 4K UHD stream to begin with.
>
> Note that 4K video compression codecs are lossy, so the lower quality the initial image, the lower the bandwidth needed to convey the stream w/o additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on Newy Year's Eve or at the Super Bowl make for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving fast panning full dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
>
> I'm dubious that 8Mbps can handle that except for some of the simplest video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
>
> It's obviously in Netflix and the other streaming services' interest to be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
>
> Cheers,
> Colin
>
>
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Monday, April 29, 2024 8:40 PM
> To: Colin Higbie <colin.higbie@scribl.com>
> Cc: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
> hmm, before my DSL got disconnected (the carrier decided they didn't want to support it any more), I could stream 4k at 8Mb down if there wasn't too much other activity on the network (doing so at 2x speed was a problem)
>
> David Lang
>
>
> On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
>
>> Date: Fri, 15 Mar 2024 18:32:36 +0000
>> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
>> Reply-To: Colin Higbie <colin.higbie@scribl.com>
>> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some
>> intensive gaming (I don't see any benefit to getting loaded latency
>> further below ~20ms for typical applications, with an exception for
>> cloud-based gaming that benefits with lower latency all the way down
>> to about 5ms for young, really fast players, the rest of us won't be
>> able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>> streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>> depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>> higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 1:30 ` [Starlink] Itʼs " Eugene Y Chang
2024-05-01 1:52 ` Jim Forster
@ 2024-05-02 19:17 ` Michael Richardson
1 sibling, 0 replies; 42+ messages in thread
From: Michael Richardson @ 2024-05-02 19:17 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1: Type: text/plain, Size: 609 bytes --]
Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
> Shouldn’t we create a demo to show the solution? To show is more
> effective than to debate. It is impossible to explain to some people.
There were demos of fq_codel doing a visible *to the eye* improvement on
straight browsing at a number of IETF Hackathons. The kit involved was an
entire CMTS+CM head-end. The improvements were due to DNS not having to wait.
--
Michael Richardson <mcr+IETF@sandelman.ca> . o O ( IPv6 IøT consulting )
Sandelman Software Works Inc, Ottawa and Worldwide
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 515 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 21:27 ` Sebastian Moeller
@ 2024-05-01 22:19 ` Eugene Y Chang
0 siblings, 0 replies; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 22:19 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Eugene Y Chang, David Lang, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1.1: Type: text/plain, Size: 11252 bytes --]
Of course. For the gamers, the focus is managing latency. They have control of everything else.
With our high latency and wide range of values, the eSports teams train on campus. It will be interesting to see how much improvements there can be for teams to be able to training from their homes.
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
IEEE Communications Society & Signal Processing Society,
Hawaii Chapter Chair
IEEE Life Member Affinity Group Hawaii Chair
IEEE Entrepreneurship, Mentor
eugene.chang@ieee.org
m 781-799-0233 (in Honolulu)
> On May 1, 2024, at 11:27 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
>
>> On 1. May 2024, at 23:12, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> Thank you David.
>>
>> Now, shifting the focus a bit. Would a gamer experience some improvement if they made a change in their router?
>
> [SM] It depends... mostly what the root cause of the gaming issues are... fq_codel/cake can only fix issues related to bottleneck queuing and isolation of different flows (so big transfers do not interfere with low rate low latency flows). It will not magically make you a better gamer or fix upstream network issues like bad peering/transit of your ISP or overloaded game servers...
>
>> What needs to be done for a gamer to get tangible improvement?
>
> [SM] Keep static latency low ish, more importantly keep dynamic latency variation/jitter low, and that essentially requires to isolate gaming flows from the effect of concurrent bulk flows...
>
> Regards
> Sebastian
>
>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Life Senior Member
>> IEEE Communications Society & Signal Processing Society,
>> Hawaii Chapter Chair
>> IEEE Life Member Affinity Group Hawaii Chair
>> IEEE Entrepreneurship, Mentor
>> eugene.chang@ieee.org
>> m 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On May 1, 2024, at 9:18 AM, David Lang <david@lang.hm> wrote:
>>>
>>> On Wed, 1 May 2024, Eugene Y Chang wrote:
>>>
>>>> Thanks David,
>>>>
>>>>
>>>>> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
>>>>>
>>>>> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>>>>>
>>>>>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>>>>>>
>>>>>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
>>>>>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
>>>>>
>>>>> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
>>>>
>>>> Would a user feel the improvement with a 10 minute session:
>>>> shopping on Amazon?
>>>> using Salesforce?
>>>> working with a shared Google doc?
>>>
>>> When it's only a single user, they are unlikely to notice any difference.
>>>
>>> But if you have one person on zoom, a second downloading something, and a third on Amazon, it doesn't take much to notice a difference.
>>>
>>>>> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
>>>>
>>>> Are you saying some of the benefits can be realized with just upgrading the subscriber’s router? This makes adoption harder because the subscriber will lose the ISP’s support for any connectivity issues. If a demo impresses the subscribers, the ISP still needs to embrace this change; otherwise the ISP will wash their hands of any subscriber problems.
>>>
>>> Yes, just upgrading the subscriber's device with cake and configuring it appropriately largely solves the problem (at the cost of sacraficing bandwith because cake isn't working directly on the data flowing from the ISP to the client, and so it has to work indirectly to get the Internet server to slow down instead and that's a laggy, imperect work-around. If the ISPs router does active queue management with fq_codel, then you don't have to do this.
>>>
>>> This is how we know this works, many of use have been doing this for years (see the bufferbloat mailing list and it's archives_
>>>
>>>>> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
>>>>
>>>> This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
>>>
>>> fq_codel on the ISPs router for downlink, and on the subscribers router for uplink.
>>>
>>> putting cake on the router on the subscriber's end and tuning it appropriately can achieve most of the benefit, but is more work to configure.
>>>
>>>>>
>>>>> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
>>>>
>>>> Is this just an alternative to having the change at the CPE?
>>>> Yes this is harder for routers in the network.
>>>
>>> simple fq_codel on both ends of the bottleneck connection works quite well without any configuration. Cake adds some additional fairness capabilities and has a mode to work around the router on the other end of the bottleneck not doing active queue management
>>>
>>>>> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
>>>>
>>>> This is good to know for the engineers. This adds confusion with the subscribers.
>>>>
>>>>>
>>>>> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
>>>>
>>>> Can we get these ISPs to publically report how they have achieved great latency reduction?
>>>> We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
>>>> Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
>>>
>>> several of them have done so, I think someone else posted a report from one in this thread.
>>>
>>>>> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
>>>>
>>>> Is the upstream vendor a network provider or a computing center?
>>>> Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
>>>
>>> I'm talking about the manufacturer of the routers that the ISPs deploy at the last hop before getting to the subscriber, and the router on the subscriber end of the link (although most of those are running some variation of openWRT, so turning it on would not be significant work for the manufacturer)
>>>
>>>> We wouild have done our part at pushing the next round of adoption.
>>>
>>> Many of us have been pushing this for well over a decade. Getting Starlink's attention to address their bufferbloat issues is a major success.
>>>
>>> David Lang
>>>
>>>> Gene
>>>>
>>>>>
>>>>> David Lang
>>>>>
>>>>>>
>>>>>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>>>>>>
>>>>>> Note:
>>>>>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
>>>>>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>>>>>>
>>>>>> Should we have a conference call to discuss this?
>>>>>>
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Life Senior Member
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>>>>>>
>>>>>>> Gene, David,
>>>>>>> ‘m
>>>>>>> Agreed that the technical problem is largely solved with cake & codel.
>>>>>>>
>>>>>>> Also that demos are good. How to do one for this problem>
>>>>>>>
>>>>>>> — Jim
>>>>>>>
>>>>>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>>>>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>>>>>>
>>>>>>>> Shouldn’t we create a demo to show the solution?
>>>>>>>> To show is more effective than to debate. It is impossible to explain to some people.
>>>>>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>>>>>>> Is an effective demo too complicated to create?
>>>>>>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>>>>>>
>>>>>>>> Gene
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm<mailto:david@lang.hm>>> wrote:
>>>>>>>>>
>>>>>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>>>>>>
>>>>>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>>>>>>
>>>>>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>>>>>>
>>>>>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>>>>>>
>>>>>>>>>> How can we deliver graceful performance to both persons in a household?
>>>>>>>>>> Is seeking graceful performance too complicated to improve?
>>>>>>>>>> (I said “graceful” to allow technical flexibility.)
>>>>>>>>>
>>>>>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>>>>>>
>>>>>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
[-- Attachment #1.2: Type: text/html, Size: 24694 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 21:11 ` Frantisek Borsik
@ 2024-05-01 22:10 ` Eugene Y Chang
0 siblings, 0 replies; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 22:10 UTC (permalink / raw)
To: Frantisek Borsik
Cc: Eugene Y Chang, David Lang, Colin_Higbie, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 11162 bytes --]
Thank you all for the discussion.
Given everything that has been said my plan A will be to engage the eSports community and find some engineering students that will want to give this a test.
The business users will follow the ISPs and the state experts. Since they all learned their craft from the incumbent ISP, nothing will happen here (for now).
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
IEEE Communications Society & Signal Processing Society,
Hawaii Chapter Chair
IEEE Life Member Affinity Group Hawaii Chair
IEEE Entrepreneurship, Mentor
eugene.chang@ieee.org
m 781-799-0233 (in Honolulu)
> On May 1, 2024, at 11:11 AM, Frantisek Borsik <frantisek.borsik@gmail.com> wrote:
>
> Eugene, this is one of the ISP examples of using OpenWrt, CAKE & FQ-CoDel to fix not only his network, but also to refurbish an old device - when the vendor didn't give a flying F:
> https://blog.nafiux.com/posts/cnpilot_r190w_openwrt_bufferbloat_fqcodel_cake/ <https://blog.nafiux.com/posts/cnpilot_r190w_openwrt_bufferbloat_fqcodel_cake/>
>
> Here is also the list of OpenWrt supported HW: https://openwrt.org/supported_devices <https://openwrt.org/supported_devices>
> If you/ISP want to go mainstream, MikroTik will be a good option.
>
> This is a great place to start (not only for your ISP): https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_do_about_Bufferbloat/ <https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_do_about_Bufferbloat/>
>
> All the best,
>
> Frank
>
> Frantisek (Frank) Borsik
>
>
>
> https://www.linkedin.com/in/frantisekborsik <https://www.linkedin.com/in/frantisekborsik>
> Signal, Telegram, WhatsApp: +421919416714
>
> iMessage, mobile: +420775230885
>
> Skype: casioa5302ca
>
> frantisek.borsik@gmail.com <mailto:frantisek.borsik@gmail.com>
>
> On Wed, May 1, 2024 at 9:18 PM David Lang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> On Wed, 1 May 2024, Eugene Y Chang wrote:
>
> > Thanks David,
> >
> >
> >> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm <mailto:david@lang.hm>> wrote:
> >>
> >> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
> >>
> >>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
> >>>
> >>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
> >>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
> >>
> >> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
> >
> > Would a user feel the improvement with a 10 minute session:
> > shopping on Amazon?
> > using Salesforce?
> > working with a shared Google doc?
>
> When it's only a single user, they are unlikely to notice any difference.
>
> But if you have one person on zoom, a second downloading something, and a third
> on Amazon, it doesn't take much to notice a difference.
>
> >> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
> >
> > Are you saying some of the benefits can be realized with just upgrading the
> > subscriber’s router? This makes adoption harder because the subscriber will
> > lose the ISP’s support for any connectivity issues. If a demo impresses the
> > subscribers, the ISP still needs to embrace this change; otherwise the ISP
> > will wash their hands of any subscriber problems.
>
> Yes, just upgrading the subscriber's device with cake and configuring it
> appropriately largely solves the problem (at the cost of sacraficing bandwith
> because cake isn't working directly on the data flowing from the ISP to the
> client, and so it has to work indirectly to get the Internet server to slow down
> instead and that's a laggy, imperect work-around. If the ISPs router does active
> queue management with fq_codel, then you don't have to do this.
>
> This is how we know this works, many of use have been doing this for years (see
> the bufferbloat mailing list and it's archives_
>
> >> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
> >
> > This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
>
> fq_codel on the ISPs router for downlink, and on the subscribers router for
> uplink.
>
> putting cake on the router on the subscriber's end and tuning it appropriately
> can achieve most of the benefit, but is more work to configure.
>
> >>
> >> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
> >
> > Is this just an alternative to having the change at the CPE?
> > Yes this is harder for routers in the network.
>
> simple fq_codel on both ends of the bottleneck connection works quite well
> without any configuration. Cake adds some additional fairness capabilities and
> has a mode to work around the router on the other end of the bottleneck not
> doing active queue management
>
> >> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
> >
> > This is good to know for the engineers. This adds confusion with the subscribers.
> >
> >>
> >> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
> >
> > Can we get these ISPs to publically report how they have achieved great latency reduction?
> > We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
> > Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
>
> several of them have done so, I think someone else posted a report from one in
> this thread.
>
> >> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
> >
> > Is the upstream vendor a network provider or a computing center?
> > Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
>
> I'm talking about the manufacturer of the routers that the ISPs deploy at the
> last hop before getting to the subscriber, and the router on the subscriber end
> of the link (although most of those are running some variation of openWRT, so
> turning it on would not be significant work for the manufacturer)
>
> > We wouild have done our part at pushing the next round of adoption.
>
> Many of us have been pushing this for well over a decade. Getting Starlink's
> attention to address their bufferbloat issues is a major success.
>
> David Lang
>
> > Gene
> >
> >>
> >> David Lang
> >>
> >>>
> >>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
> >>>
> >>> Note:
> >>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
> >>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
> >>>
> >>> Should we have a conference call to discuss this?
> >>>
> >>>
> >>> Gene
> >>> ----------------------------------------------
> >>> Eugene Chang
> >>> IEEE Life Senior Member
> >>>
> >>>
> >>>
> >>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com <mailto:jim@connectivitycap.com>> wrote:
> >>>>
> >>>> Gene, David,
> >>>> ‘m
> >>>> Agreed that the technical problem is largely solved with cake & codel.
> >>>>
> >>>> Also that demos are good. How to do one for this problem>
> >>>>
> >>>> — Jim
> >>>>
> >>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
> >>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
> >>>>>
> >>>>> Shouldn’t we create a demo to show the solution?
> >>>>> To show is more effective than to debate. It is impossible to explain to some people.
> >>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
> >>>>> Is an effective demo too complicated to create?
> >>>>> I’d be glad to participate in defining a demo and publicity campaign.
> >>>>>
> >>>>> Gene
> >>>>>
> >>>>>
> >>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>> <mailto:david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>>>> wrote:
> >>>>>>
> >>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
> >>>>>>
> >>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
> >>>>>>>
> >>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
> >>>>>>>
> >>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
> >>>>>>>
> >>>>>>> How can we deliver graceful performance to both persons in a household?
> >>>>>>> Is seeking graceful performance too complicated to improve?
> >>>>>>> (I said “graceful” to allow technical flexibility.)
> >>>>>>
> >>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
> >>>>>>
> >>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
> >
> >_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
[-- Attachment #1.2: Type: text/html, Size: 19871 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 21:12 ` Eugene Y Chang
@ 2024-05-01 21:27 ` Sebastian Moeller
2024-05-01 22:19 ` Eugene Y Chang
0 siblings, 1 reply; 42+ messages in thread
From: Sebastian Moeller @ 2024-05-01 21:27 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink, Colin_Higbie
Hi Gene,
> On 1. May 2024, at 23:12, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Thank you David.
>
> Now, shifting the focus a bit. Would a gamer experience some improvement if they made a change in their router?
[SM] It depends... mostly what the root cause of the gaming issues are... fq_codel/cake can only fix issues related to bottleneck queuing and isolation of different flows (so big transfers do not interfere with low rate low latency flows). It will not magically make you a better gamer or fix upstream network issues like bad peering/transit of your ISP or overloaded game servers...
> What needs to be done for a gamer to get tangible improvement?
[SM] Keep static latency low ish, more importantly keep dynamic latency variation/jitter low, and that essentially requires to isolate gaming flows from the effect of concurrent bulk flows...
Regards
Sebastian
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Life Senior Member
> IEEE Communications Society & Signal Processing Society,
> Hawaii Chapter Chair
> IEEE Life Member Affinity Group Hawaii Chair
> IEEE Entrepreneurship, Mentor
> eugene.chang@ieee.org
> m 781-799-0233 (in Honolulu)
>
>
>
>> On May 1, 2024, at 9:18 AM, David Lang <david@lang.hm> wrote:
>>
>> On Wed, 1 May 2024, Eugene Y Chang wrote:
>>
>>> Thanks David,
>>>
>>>
>>>> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
>>>>
>>>> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>>>>
>>>>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>>>>>
>>>>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
>>>>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
>>>>
>>>> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
>>>
>>> Would a user feel the improvement with a 10 minute session:
>>> shopping on Amazon?
>>> using Salesforce?
>>> working with a shared Google doc?
>>
>> When it's only a single user, they are unlikely to notice any difference.
>>
>> But if you have one person on zoom, a second downloading something, and a third on Amazon, it doesn't take much to notice a difference.
>>
>>>> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
>>>
>>> Are you saying some of the benefits can be realized with just upgrading the subscriber’s router? This makes adoption harder because the subscriber will lose the ISP’s support for any connectivity issues. If a demo impresses the subscribers, the ISP still needs to embrace this change; otherwise the ISP will wash their hands of any subscriber problems.
>>
>> Yes, just upgrading the subscriber's device with cake and configuring it appropriately largely solves the problem (at the cost of sacraficing bandwith because cake isn't working directly on the data flowing from the ISP to the client, and so it has to work indirectly to get the Internet server to slow down instead and that's a laggy, imperect work-around. If the ISPs router does active queue management with fq_codel, then you don't have to do this.
>>
>> This is how we know this works, many of use have been doing this for years (see the bufferbloat mailing list and it's archives_
>>
>>>> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
>>>
>>> This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
>>
>> fq_codel on the ISPs router for downlink, and on the subscribers router for uplink.
>>
>> putting cake on the router on the subscriber's end and tuning it appropriately can achieve most of the benefit, but is more work to configure.
>>
>>>>
>>>> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
>>>
>>> Is this just an alternative to having the change at the CPE?
>>> Yes this is harder for routers in the network.
>>
>> simple fq_codel on both ends of the bottleneck connection works quite well without any configuration. Cake adds some additional fairness capabilities and has a mode to work around the router on the other end of the bottleneck not doing active queue management
>>
>>>> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
>>>
>>> This is good to know for the engineers. This adds confusion with the subscribers.
>>>
>>>>
>>>> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
>>>
>>> Can we get these ISPs to publically report how they have achieved great latency reduction?
>>> We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
>>> Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
>>
>> several of them have done so, I think someone else posted a report from one in this thread.
>>
>>>> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
>>>
>>> Is the upstream vendor a network provider or a computing center?
>>> Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
>>
>> I'm talking about the manufacturer of the routers that the ISPs deploy at the last hop before getting to the subscriber, and the router on the subscriber end of the link (although most of those are running some variation of openWRT, so turning it on would not be significant work for the manufacturer)
>>
>>> We wouild have done our part at pushing the next round of adoption.
>>
>> Many of us have been pushing this for well over a decade. Getting Starlink's attention to address their bufferbloat issues is a major success.
>>
>> David Lang
>>
>>> Gene
>>>
>>>>
>>>> David Lang
>>>>
>>>>>
>>>>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>>>>>
>>>>> Note:
>>>>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
>>>>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>>>>>
>>>>> Should we have a conference call to discuss this?
>>>>>
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Life Senior Member
>>>>>
>>>>>
>>>>>
>>>>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>>>>>
>>>>>> Gene, David,
>>>>>> ‘m
>>>>>> Agreed that the technical problem is largely solved with cake & codel.
>>>>>>
>>>>>> Also that demos are good. How to do one for this problem>
>>>>>>
>>>>>> — Jim
>>>>>>
>>>>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>>>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>>>>>
>>>>>>> Shouldn’t we create a demo to show the solution?
>>>>>>> To show is more effective than to debate. It is impossible to explain to some people.
>>>>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>>>>>> Is an effective demo too complicated to create?
>>>>>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>>>>>
>>>>>>> Gene
>>>>>>>
>>>>>>>
>>>>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm<mailto:david@lang.hm>>> wrote:
>>>>>>>>
>>>>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>>>>>
>>>>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>>>>>
>>>>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>>>>>
>>>>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>>>>>
>>>>>>>>> How can we deliver graceful performance to both persons in a household?
>>>>>>>>> Is seeking graceful performance too complicated to improve?
>>>>>>>>> (I said “graceful” to allow technical flexibility.)
>>>>>>>>
>>>>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>>>>>
>>>>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 19:18 ` David Lang
2024-05-01 21:11 ` Frantisek Borsik
@ 2024-05-01 21:12 ` Eugene Y Chang
2024-05-01 21:27 ` Sebastian Moeller
1 sibling, 1 reply; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 21:12 UTC (permalink / raw)
To: David Lang
Cc: Eugene Y Chang, Jim Forster, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1.1: Type: text/plain, Size: 9265 bytes --]
Thank you David.
Now, shifting the focus a bit. Would a gamer experience some improvement if they made a change in their router?
What needs to be done for a gamer to get tangible improvement?
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
IEEE Communications Society & Signal Processing Society,
Hawaii Chapter Chair
IEEE Life Member Affinity Group Hawaii Chair
IEEE Entrepreneurship, Mentor
eugene.chang@ieee.org
m 781-799-0233 (in Honolulu)
> On May 1, 2024, at 9:18 AM, David Lang <david@lang.hm> wrote:
>
> On Wed, 1 May 2024, Eugene Y Chang wrote:
>
>> Thanks David,
>>
>>
>>> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
>>>
>>> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>>>
>>>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>>>>
>>>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
>>>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
>>>
>>> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
>>
>> Would a user feel the improvement with a 10 minute session:
>> shopping on Amazon?
>> using Salesforce?
>> working with a shared Google doc?
>
> When it's only a single user, they are unlikely to notice any difference.
>
> But if you have one person on zoom, a second downloading something, and a third on Amazon, it doesn't take much to notice a difference.
>
>>> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
>>
>> Are you saying some of the benefits can be realized with just upgrading the subscriber’s router? This makes adoption harder because the subscriber will lose the ISP’s support for any connectivity issues. If a demo impresses the subscribers, the ISP still needs to embrace this change; otherwise the ISP will wash their hands of any subscriber problems.
>
> Yes, just upgrading the subscriber's device with cake and configuring it appropriately largely solves the problem (at the cost of sacraficing bandwith because cake isn't working directly on the data flowing from the ISP to the client, and so it has to work indirectly to get the Internet server to slow down instead and that's a laggy, imperect work-around. If the ISPs router does active queue management with fq_codel, then you don't have to do this.
>
> This is how we know this works, many of use have been doing this for years (see the bufferbloat mailing list and it's archives_
>
>>> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
>>
>> This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
>
> fq_codel on the ISPs router for downlink, and on the subscribers router for uplink.
>
> putting cake on the router on the subscriber's end and tuning it appropriately can achieve most of the benefit, but is more work to configure.
>
>>>
>>> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
>>
>> Is this just an alternative to having the change at the CPE?
>> Yes this is harder for routers in the network.
>
> simple fq_codel on both ends of the bottleneck connection works quite well without any configuration. Cake adds some additional fairness capabilities and has a mode to work around the router on the other end of the bottleneck not doing active queue management
>
>>> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
>>
>> This is good to know for the engineers. This adds confusion with the subscribers.
>>
>>>
>>> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
>>
>> Can we get these ISPs to publically report how they have achieved great latency reduction?
>> We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
>> Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
>
> several of them have done so, I think someone else posted a report from one in this thread.
>
>>> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
>>
>> Is the upstream vendor a network provider or a computing center?
>> Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
>
> I'm talking about the manufacturer of the routers that the ISPs deploy at the last hop before getting to the subscriber, and the router on the subscriber end of the link (although most of those are running some variation of openWRT, so turning it on would not be significant work for the manufacturer)
>
>> We wouild have done our part at pushing the next round of adoption.
>
> Many of us have been pushing this for well over a decade. Getting Starlink's attention to address their bufferbloat issues is a major success.
>
> David Lang
>
>> Gene
>>
>>>
>>> David Lang
>>>
>>>>
>>>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>>>>
>>>> Note:
>>>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
>>>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>>>>
>>>> Should we have a conference call to discuss this?
>>>>
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Life Senior Member
>>>>
>>>>
>>>>
>>>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>>>>
>>>>> Gene, David,
>>>>> ‘m
>>>>> Agreed that the technical problem is largely solved with cake & codel.
>>>>>
>>>>> Also that demos are good. How to do one for this problem>
>>>>>
>>>>> — Jim
>>>>>
>>>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>>>>
>>>>>> Shouldn’t we create a demo to show the solution?
>>>>>> To show is more effective than to debate. It is impossible to explain to some people.
>>>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>>>>> Is an effective demo too complicated to create?
>>>>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>>>>
>>>>>> Gene
>>>>>>
>>>>>>
>>>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>> <mailto:david@lang.hm <mailto:david@lang.hm><mailto:david@lang.hm <mailto:david@lang.hm>>>> wrote:
>>>>>>>
>>>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>>>>
>>>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>>>>
>>>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>>>>
>>>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>>>>
>>>>>>>> How can we deliver graceful performance to both persons in a household?
>>>>>>>> Is seeking graceful performance too complicated to improve?
>>>>>>>> (I said “graceful” to allow technical flexibility.)
>>>>>>>
>>>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>>>>
>>>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
[-- Attachment #1.2: Type: text/html, Size: 32480 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 19:18 ` David Lang
@ 2024-05-01 21:11 ` Frantisek Borsik
2024-05-01 22:10 ` Eugene Y Chang
2024-05-01 21:12 ` Eugene Y Chang
1 sibling, 1 reply; 42+ messages in thread
From: Frantisek Borsik @ 2024-05-01 21:11 UTC (permalink / raw)
To: David Lang; +Cc: Eugene Y Chang, Colin_Higbie, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 9977 bytes --]
Eugene, this is one of the ISP examples of using OpenWrt, CAKE & FQ-CoDel
to fix not only his network, but also to refurbish an old device - when the
vendor didn't give a flying F:
https://blog.nafiux.com/posts/cnpilot_r190w_openwrt_bufferbloat_fqcodel_cake/
Here is also the list of OpenWrt supported HW:
https://openwrt.org/supported_devices
If you/ISP want to go mainstream, MikroTik will be a good option.
This is a great place to start (not only for your ISP):
https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_do_about_Bufferbloat/
All the best,
Frank
Frantisek (Frank) Borsik
https://www.linkedin.com/in/frantisekborsik
Signal, Telegram, WhatsApp: +421919416714
iMessage, mobile: +420775230885
Skype: casioa5302ca
frantisek.borsik@gmail.com
On Wed, May 1, 2024 at 9:18 PM David Lang via Starlink <
starlink@lists.bufferbloat.net> wrote:
> On Wed, 1 May 2024, Eugene Y Chang wrote:
>
> > Thanks David,
> >
> >
> >> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
> >>
> >> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
> >>
> >>> I’m not completely up to speed on the gory details. Please humor me. I
> am pretty good on the technical marketing magic.
> >>>
> >>> What is the minimum configuration of an ISP infrastructure where we
> can show an A/B (before and after) test?
> >>> It can be a simplified scenario. The simpler, the better. We can talk
> through the issues of how minimal is adequate. Of course and ISP engineer
> will argue against simplicity.
> >>
> >> I did not see a very big improvement on a 4/.5 dsl link, but there was
> improvement.
> >
> > Would a user feel the improvement with a 10 minute session:
> > shopping on Amazon?
> > using Salesforce?
> > working with a shared Google doc?
>
> When it's only a single user, they are unlikely to notice any difference.
>
> But if you have one person on zoom, a second downloading something, and a
> third
> on Amazon, it doesn't take much to notice a difference.
>
> >> if you put openwrt on the customer router and configure cake with the
> targeted bandwith at ~80% of line speed, you will usually see a drastic
> improvement for just about any connection.
> >
> > Are you saying some of the benefits can be realized with just upgrading
> the
> > subscriber’s router? This makes adoption harder because the subscriber
> will
> > lose the ISP’s support for any connectivity issues. If a demo impresses
> the
> > subscribers, the ISP still needs to embrace this change; otherwise the
> ISP
> > will wash their hands of any subscriber problems.
>
> Yes, just upgrading the subscriber's device with cake and configuring it
> appropriately largely solves the problem (at the cost of sacraficing
> bandwith
> because cake isn't working directly on the data flowing from the ISP to
> the
> client, and so it has to work indirectly to get the Internet server to
> slow down
> instead and that's a laggy, imperect work-around. If the ISPs router does
> active
> queue management with fq_codel, then you don't have to do this.
>
> This is how we know this works, many of use have been doing this for years
> (see
> the bufferbloat mailing list and it's archives_
>
> >> If you can put fq_codel on both ends of the link, you can usually skip
> capping the bandwidth.
> >
> > This is good if this means the benefits can be achieved with just the
> CPE. This also limits the changes to subscribers that care.
>
> fq_codel on the ISPs router for downlink, and on the subscribers router
> for
> uplink.
>
> putting cake on the router on the subscriber's end and tuning it
> appropriately
> can achieve most of the benefit, but is more work to configure.
>
> >>
> >> unfortunantly, it's not possible to just add this to the ISPs existing
> hardware without having the source for the firmware there (and if they have
> their queues in ASICs it's impossible to change them.
> >
> > Is this just an alternative to having the change at the CPE?
> > Yes this is harder for routers in the network.
>
> simple fq_codel on both ends of the bottleneck connection works quite well
> without any configuration. Cake adds some additional fairness capabilities
> and
> has a mode to work around the router on the other end of the bottleneck
> not
> doing active queue management
>
> >> If you can point at the dramatic decrease in latency, with no bandwidth
> losses, that Starlink has achieved on existing hardware, that may help.
> >
> > This is good to know for the engineers. This adds confusion with the
> subscribers.
> >
> >>
> >> There are a number of ISPs around the world that have implemented
> active queue management and report very good results from doing so.
> >
> > Can we get these ISPs to publically report how they have achieved great
> latency reduction?
> > We can help them get credit for caring about their subscribers. It
> would/could be a (short term) competitive advantage.
> > Of course their competitors will (might) adopt these changes and
> eliminate the advantage, BUT the subscribers will retain glow of the
> initial marketing for a much longer time.
>
> several of them have done so, I think someone else posted a report from
> one in
> this thread.
>
> >> But showing that their existing hardware can do it when their upstream
> vendor doesn't support it is going to be hard.
> >
> > Is the upstream vendor a network provider or a computing center?
> > Getting good latency from the subscriber, through the access network to
> the edge computing center and CDNs would be great. The CDNs would harvest
> the benefits. The other computing configurations would have make the change
> to be competitive.
>
> I'm talking about the manufacturer of the routers that the ISPs deploy at
> the
> last hop before getting to the subscriber, and the router on the
> subscriber end
> of the link (although most of those are running some variation of openWRT,
> so
> turning it on would not be significant work for the manufacturer)
>
> > We wouild have done our part at pushing the next round of adoption.
>
> Many of us have been pushing this for well over a decade. Getting
> Starlink's
> attention to address their bufferbloat issues is a major success.
>
> David Lang
>
> > Gene
> >
> >>
> >> David Lang
> >>
> >>>
> >>> We will want to show the human visible impact and not debate good or
> not so good measurements. If we get the business and community subscribers
> on our side, we win.
> >>>
> >>> Note:
> >>> Stage 1 is to show we have a pure software fix (that can work on their
> hardware). The fix is “so dramatic” that subscribers can experience it
> without debating measurements.
> >>> Stage 2 discusses why the ISP should demand that their equipment
> vendors add this software. (The software could already be available, but
> the ISP doesn’t think it is worth the trouble to enable it.) Nothing will
> happen unless we stay engaged. We need to keep the subscribers engaged, too.
> >>>
> >>> Should we have a conference call to discuss this?
> >>>
> >>>
> >>> Gene
> >>> ----------------------------------------------
> >>> Eugene Chang
> >>> IEEE Life Senior Member
> >>>
> >>>
> >>>
> >>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com>
> wrote:
> >>>>
> >>>> Gene, David,
> >>>> ‘m
> >>>> Agreed that the technical problem is largely solved with cake & codel.
> >>>>
> >>>> Also that demos are good. How to do one for this problem>
> >>>>
> >>>> — Jim
> >>>>
> >>>>> The bandwidth mantra has been used for so long that a technical
> discussion cannot unseat the mantra.
> >>>>> Some technical parties use the mantra to sell more, faster,
> ineffective service. Gullible customers accept that they would be happy if
> they could afford even more speed.
> >>>>>
> >>>>> Shouldn’t we create a demo to show the solution?
> >>>>> To show is more effective than to debate. It is impossible to
> explain to some people.
> >>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
> >>>>> Is an effective demo too complicated to create?
> >>>>> I’d be glad to participate in defining a demo and publicity campaign.
> >>>>>
> >>>>> Gene
> >>>>>
> >>>>>
> >>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:
> david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>>> wrote:
> >>>>>>
> >>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
> >>>>>>
> >>>>>>> I am always surprised how complicated these discussions become.
> (Surprised mostly because I forgot the kind of issues this community care
> about.) The discussion doesn’t shed light on the following scenarios.
> >>>>>>>
> >>>>>>> While watching stream content, activating controls needed to
> switch content sometimes (often?) have long pauses. I attribute that to
> buffer bloat and high latency.
> >>>>>>>
> >>>>>>> With a happy household user watching streaming media, a second
> user could have terrible shopping experience with Amazon. The interactive
> response could be (is often) horrible. (Personally, I would be doing email
> and working on a shared doc. The Amazon analogy probably applies to more
> people.)
> >>>>>>>
> >>>>>>> How can we deliver graceful performance to both persons in a
> household?
> >>>>>>> Is seeking graceful performance too complicated to improve?
> >>>>>>> (I said “graceful” to allow technical flexibility.)
> >>>>>>
> >>>>>> it's largely a solved problem from a technical point of view.
> fq_codel and cake solve this.
> >>>>>>
> >>>>>> The solution is just not deployed widely, instead people argue that
> more bandwidth is needed instead.
> >
> >_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 13495 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 18:51 ` Eugene Y Chang
@ 2024-05-01 19:18 ` David Lang
2024-05-01 21:11 ` Frantisek Borsik
2024-05-01 21:12 ` Eugene Y Chang
0 siblings, 2 replies; 42+ messages in thread
From: David Lang @ 2024-05-01 19:18 UTC (permalink / raw)
To: Eugene Y Chang
Cc: David Lang, Jim Forster, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1: Type: text/plain, Size: 8279 bytes --]
On Wed, 1 May 2024, Eugene Y Chang wrote:
> Thanks David,
>
>
>> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
>>
>> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>>
>>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>>>
>>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
>>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
>>
>> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
>
> Would a user feel the improvement with a 10 minute session:
> shopping on Amazon?
> using Salesforce?
> working with a shared Google doc?
When it's only a single user, they are unlikely to notice any difference.
But if you have one person on zoom, a second downloading something, and a third
on Amazon, it doesn't take much to notice a difference.
>> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
>
> Are you saying some of the benefits can be realized with just upgrading the
> subscriber’s router? This makes adoption harder because the subscriber will
> lose the ISP’s support for any connectivity issues. If a demo impresses the
> subscribers, the ISP still needs to embrace this change; otherwise the ISP
> will wash their hands of any subscriber problems.
Yes, just upgrading the subscriber's device with cake and configuring it
appropriately largely solves the problem (at the cost of sacraficing bandwith
because cake isn't working directly on the data flowing from the ISP to the
client, and so it has to work indirectly to get the Internet server to slow down
instead and that's a laggy, imperect work-around. If the ISPs router does active
queue management with fq_codel, then you don't have to do this.
This is how we know this works, many of use have been doing this for years (see
the bufferbloat mailing list and it's archives_
>> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
>
> This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
fq_codel on the ISPs router for downlink, and on the subscribers router for
uplink.
putting cake on the router on the subscriber's end and tuning it appropriately
can achieve most of the benefit, but is more work to configure.
>>
>> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
>
> Is this just an alternative to having the change at the CPE?
> Yes this is harder for routers in the network.
simple fq_codel on both ends of the bottleneck connection works quite well
without any configuration. Cake adds some additional fairness capabilities and
has a mode to work around the router on the other end of the bottleneck not
doing active queue management
>> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
>
> This is good to know for the engineers. This adds confusion with the subscribers.
>
>>
>> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
>
> Can we get these ISPs to publically report how they have achieved great latency reduction?
> We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
> Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
several of them have done so, I think someone else posted a report from one in
this thread.
>> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
>
> Is the upstream vendor a network provider or a computing center?
> Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
I'm talking about the manufacturer of the routers that the ISPs deploy at the
last hop before getting to the subscriber, and the router on the subscriber end
of the link (although most of those are running some variation of openWRT, so
turning it on would not be significant work for the manufacturer)
> We wouild have done our part at pushing the next round of adoption.
Many of us have been pushing this for well over a decade. Getting Starlink's
attention to address their bufferbloat issues is a major success.
David Lang
> Gene
>
>>
>> David Lang
>>
>>>
>>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>>>
>>> Note:
>>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
>>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>>>
>>> Should we have a conference call to discuss this?
>>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Life Senior Member
>>>
>>>
>>>
>>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>>>
>>>> Gene, David,
>>>> ‘m
>>>> Agreed that the technical problem is largely solved with cake & codel.
>>>>
>>>> Also that demos are good. How to do one for this problem>
>>>>
>>>> — Jim
>>>>
>>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>>>
>>>>> Shouldn’t we create a demo to show the solution?
>>>>> To show is more effective than to debate. It is impossible to explain to some people.
>>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>>>> Is an effective demo too complicated to create?
>>>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>>>
>>>>> Gene
>>>>>
>>>>>
>>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>>> wrote:
>>>>>>
>>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>>>
>>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>>>
>>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>>>
>>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>>>
>>>>>>> How can we deliver graceful performance to both persons in a household?
>>>>>>> Is seeking graceful performance too complicated to improve?
>>>>>>> (I said “graceful” to allow technical flexibility.)
>>>>>>
>>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>>>
>>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 4:12 ` David Lang
2024-05-01 10:15 ` Frantisek Borsik
@ 2024-05-01 18:51 ` Eugene Y Chang
2024-05-01 19:18 ` David Lang
1 sibling, 1 reply; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 18:51 UTC (permalink / raw)
To: David Lang
Cc: Eugene Y Chang, Jim Forster, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1.1: Type: text/plain, Size: 6408 bytes --]
Thanks David,
> On Apr 30, 2024, at 6:12 PM, David Lang <david@lang.hm> wrote:
>
> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>
>> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>>
>> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
>> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
>
> I did not see a very big improvement on a 4/.5 dsl link, but there was improvement.
Would a user feel the improvement with a 10 minute session:
shopping on Amazon?
using Salesforce?
working with a shared Google doc?
>
> if you put openwrt on the customer router and configure cake with the targeted bandwith at ~80% of line speed, you will usually see a drastic improvement for just about any connection.
Are you saying some of the benefits can be realized with just upgrading the subscriber’s router? This makes adoption harder because the subscriber will lose the ISP’s support for any connectivity issues. If a demo impresses the subscribers, the ISP still needs to embrace this change; otherwise the ISP will wash their hands of any subscriber problems.
>
> If you can put fq_codel on both ends of the link, you can usually skip capping the bandwidth.
This is good if this means the benefits can be achieved with just the CPE. This also limits the changes to subscribers that care.
>
> unfortunantly, it's not possible to just add this to the ISPs existing hardware without having the source for the firmware there (and if they have their queues in ASICs it's impossible to change them.
Is this just an alternative to having the change at the CPE?
Yes this is harder for routers in the network.
>
> If you can point at the dramatic decrease in latency, with no bandwidth losses, that Starlink has achieved on existing hardware, that may help.
This is good to know for the engineers. This adds confusion with the subscribers.
>
> There are a number of ISPs around the world that have implemented active queue management and report very good results from doing so.
Can we get these ISPs to publically report how they have achieved great latency reduction?
We can help them get credit for caring about their subscribers. It would/could be a (short term) competitive advantage.
Of course their competitors will (might) adopt these changes and eliminate the advantage, BUT the subscribers will retain glow of the initial marketing for a much longer time.
>
> But showing that their existing hardware can do it when their upstream vendor doesn't support it is going to be hard.
Is the upstream vendor a network provider or a computing center?
Getting good latency from the subscriber, through the access network to the edge computing center and CDNs would be great. The CDNs would harvest the benefits. The other computing configurations would have make the change to be competitive.
We wouild have done our part at pushing the next round of adoption.
Gene
>
> David Lang
>
>>
>> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>>
>> Note:
>> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
>> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>>
>> Should we have a conference call to discuss this?
>>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Life Senior Member
>>
>>
>>
>>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>>
>>> Gene, David,
>>> ‘m
>>> Agreed that the technical problem is largely solved with cake & codel.
>>>
>>> Also that demos are good. How to do one for this problem>
>>>
>>> — Jim
>>>
>>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>>
>>>> Shouldn’t we create a demo to show the solution?
>>>> To show is more effective than to debate. It is impossible to explain to some people.
>>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>>> Is an effective demo too complicated to create?
>>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>>
>>>> Gene
>>>>
>>>>
>>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm> <mailto:david@lang.hm <mailto:david@lang.hm>>> wrote:
>>>>>
>>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>>
>>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>>
>>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>>
>>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>>
>>>>>> How can we deliver graceful performance to both persons in a household?
>>>>>> Is seeking graceful performance too complicated to improve?
>>>>>> (I said “graceful” to allow technical flexibility.)
>>>>>
>>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>>
>>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
[-- Attachment #1.2: Type: text/html, Size: 22017 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 7:40 ` David Lang
@ 2024-05-01 15:13 ` Colin_Higbie
0 siblings, 0 replies; 42+ messages in thread
From: Colin_Higbie @ 2024-05-01 15:13 UTC (permalink / raw)
To: David Lang; +Cc: starlink
David,
I'm not thinking about an urban rollout. My default perspective is rural. The closest house to my farm is about a half mile away, only 330 people in our whole town, which is geographically large. This is what drove my need for Starlink in the first place – I had previously been paying $330/mo for a bunch of DSL lines and 2 T-1s aggregated via an SD-WAN solution. Starlink gave me much more download bandwidth and a hair more on upload, lower latency, vastly improved reliability, and cut my costs by almost 3/4 (72.7%).
Then, in a surprise move, our power company rolled out a fiber network to its rural customers, which is even better on bandwidth at 1Gbps both up and down and provides comparable latency. I can say as a user that at comparable latency, the UX boost with 1Gbps U and D compared with Starlink's connection is dramatic for work. Large file uploads and downloads are nearly instant, significantly increasing productivity. I can also now video conference without worrying about disruption on the sending signal due to family members being on the Internet at the same time. I have also changed the settings on family gaming and PC systems so they can watch YouTube at full resolution, where with Starlink, to avoid congestion on bandwidth (not bufferbloat) if everyone happened to be using the Internet at the same time, I had locked everyone else down to 480p or 720p streams.
My goal in saying that it's better to do a slower rollout if needed to provide at least 25Mbps is to maximize end user experience and be efficient with constructions costs. This is my perspective because it's the perspective ISPs will have and therefore the necessary mindset to influence them. It's the perspective I have, and everyone who runs a business has, when people approach us telling us how to run our businesses. When you charge them waving data like an academic, an approach you appear to use in many of these emails (though to be fair, maybe you're different with this mailing list than you would be during a pitch to government or industry), you only alienate the audience and reduce the likelihood of anything getting done.
In rural areas in the U.S., the long term harm to rushing out low-bandwidth solutions is significant. It would be better for them to have nothing new for another year or two and then get a 25+ Mbps connection that get a 10Mbps connection now, then get no upgrades for another 10-15 years, which is the likely outcome for many. Keep in mind that in the U.S., nearly all residents already have at least dial-up access for email and other trickle-in connections and most have some form of DSL, even if sub-1Mbps. Of course, now there is also Starlink, though w/Starlink, cost can be a barrier for some.
However, and perhaps this is what you meant, I am admittedly thinking about this as a U.S. citizen. I would acknowledge that in other parts of the world where it's a not a matter of just waiting an extra couple of years to get an upgrade from dial-up or DSL, the situation may be different. Infrastructure costs at 25Mbps could be prohibitive in those markets, where a single feed to a village could be a significant upgrade from their current state of no Internet access for dozens or hundreds of miles. I accept my pushing for a recognition of 25Mbps floor for the top speed offered refers to 1st world markets where we have the luxury of being able to do it right in the first place to save money in the long run.
- Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Wednesday, May 1, 2024 3:41 AM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: David Lang <david@lang.hm>; starlink@lists.bufferbloat.net
Subject: RE: [Starlink] Itʼs the Latency, FCC
you are thinking about an urban area buildout, and in the situation your describe, I would agree with you.
I'm thinking of rural areas where houses are well separated (low single digit houses per mile, or going to miles per house). Although now that Starlink is an option, it may not be as bad to not have other options.
David Lang
On Wed, 1 May 2024, Colin_Higbie wrote:
> David,
>
> Yes, sure, if there's a choice between Internet access at 10Mbps and no Internet at all forever, 10Mbps is clearly better than nothing. But that's unlikely to be a realistic choice. A more realistic version of that is: budgeting lets us roll out at a rate of 1,000 homes per week at 25Mbps capacity or 1,500 homes per week if we can drop to 10Mbps. In that scenario, I would say that the slower rollout at the higher bandwidth is better, even though that delays some people getting access to Internet, because of the longer-term effect of having an immediately obsolete max connection speed.
>
> I have no objection to oversubscribing, providing it is done based an actual statistical analysis of usage and provided on a good-faith basis (i.e., a belief based on the data that the total capacity will support all users at some significant % of the expected bandwidth at something like 99% or 99.9% of the time). In my opinion, it is not reasonable to require an ISP to provide 100% of its users the full bandwidth they pay for 100% of the time if all users were to max out at the same time (something that never happens in the real world). That drives up costs with negligible benefit.
>
> I apologize if I've not been sufficiently clear on the 25Mbps minimum. I believe I have, but perhaps I'm mistaken. I'm arguing that any ISP building new capabilities to provision new users or enhancing its existing services for existing users should establish a 25Mbps minimum top speed. It's fine if they also offer cheaper slower speeds (not every user will care about getting 25Mbps or want to pay for it). So, every user in this market should be able to get at least 25Mbps, but it's fine that not all will. The important facet to this is that the cabling and infrastructure be able to support at least 25Mbps connections for those users willing to pay for it.
>
> I don't have the same requirement on latency (because optimal latencies are usually good enough and implementing cake for latency under load is generally a low-cost or no-cost solution), but I would support if experts from this group did have a similar max latency target and would support that this max be measured under load.
>
> Apologies for adding yet another metaphor, but I view these requirements as similar to codes on minimum of 15amp or 20amp in-wall wiring in all new and upgrade work performed by electricians. This doesn't affect existing wiring, which is grandfathered, but it ensures no new construction is already obsolete as it's being done.
>
> Cheers,
> Colin
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 4:12 ` David Lang
@ 2024-05-01 10:15 ` Frantisek Borsik
2024-05-01 18:51 ` Eugene Y Chang
1 sibling, 0 replies; 42+ messages in thread
From: Frantisek Borsik @ 2024-05-01 10:15 UTC (permalink / raw)
To: David Lang; +Cc: Eugene Y Chang, Colin_Higbie, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 5509 bytes --]
This was a nice story of overhauling good ole Cambium Networks HW
with OpenWrt, FQ-CoDel & CAKE:
https://blog.nafiux.com/posts/cnpilot_r190w_openwrt_bufferbloat_fqcodel_cake/
All the best,
Frank
Frantisek (Frank) Borsik
https://www.linkedin.com/in/frantisekborsik
Signal, Telegram, WhatsApp: +421919416714
iMessage, mobile: +420775230885
Skype: casioa5302ca
frantisek.borsik@gmail.com
On Wed, May 1, 2024 at 6:12 AM David Lang via Starlink <
starlink@lists.bufferbloat.net> wrote:
> On Tue, 30 Apr 2024, Eugene Y Chang wrote:
>
> > I’m not completely up to speed on the gory details. Please humor me. I
> am pretty good on the technical marketing magic.
> >
> > What is the minimum configuration of an ISP infrastructure where we can
> show an A/B (before and after) test?
> > It can be a simplified scenario. The simpler, the better. We can talk
> through the issues of how minimal is adequate. Of course and ISP engineer
> will argue against simplicity.
>
> I did not see a very big improvement on a 4/.5 dsl link, but there was
> improvement.
>
> if you put openwrt on the customer router and configure cake with the
> targeted
> bandwith at ~80% of line speed, you will usually see a drastic improvement
> for
> just about any connection.
>
> If you can put fq_codel on both ends of the link, you can usually skip
> capping
> the bandwidth.
>
> unfortunantly, it's not possible to just add this to the ISPs existing
> hardware
> without having the source for the firmware there (and if they have their
> queues
> in ASICs it's impossible to change them.
>
> If you can point at the dramatic decrease in latency, with no bandwidth
> losses,
> that Starlink has achieved on existing hardware, that may help.
>
> There are a number of ISPs around the world that have implemented active
> queue
> management and report very good results from doing so.
>
> But showing that their existing hardware can do it when their upstream
> vendor
> doesn't support it is going to be hard.
>
> David Lang
>
> >
> > We will want to show the human visible impact and not debate good or not
> so good measurements. If we get the business and community subscribers on
> our side, we win.
> >
> > Note:
> > Stage 1 is to show we have a pure software fix (that can work on their
> hardware). The fix is “so dramatic” that subscribers can experience it
> without debating measurements.
> > Stage 2 discusses why the ISP should demand that their equipment vendors
> add this software. (The software could already be available, but the ISP
> doesn’t think it is worth the trouble to enable it.) Nothing will happen
> unless we stay engaged. We need to keep the subscribers engaged, too.
> >
> > Should we have a conference call to discuss this?
> >
> >
> > Gene
> > ----------------------------------------------
> > Eugene Chang
> > IEEE Life Senior Member
> >
> >
> >
> >> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com>
> wrote:
> >>
> >> Gene, David,
> >> ‘m
> >> Agreed that the technical problem is largely solved with cake & codel.
> >>
> >> Also that demos are good. How to do one for this problem>
> >>
> >> — Jim
> >>
> >>> The bandwidth mantra has been used for so long that a technical
> discussion cannot unseat the mantra.
> >>> Some technical parties use the mantra to sell more, faster,
> ineffective service. Gullible customers accept that they would be happy if
> they could afford even more speed.
> >>>
> >>> Shouldn’t we create a demo to show the solution?
> >>> To show is more effective than to debate. It is impossible to explain
> to some people.
> >>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
> >>> Is an effective demo too complicated to create?
> >>> I’d be glad to participate in defining a demo and publicity campaign.
> >>>
> >>> Gene
> >>>
> >>>
> >>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:
> david@lang.hm>> wrote:
> >>>>
> >>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
> >>>>
> >>>>> I am always surprised how complicated these discussions become.
> (Surprised mostly because I forgot the kind of issues this community care
> about.) The discussion doesn’t shed light on the following scenarios.
> >>>>>
> >>>>> While watching stream content, activating controls needed to switch
> content sometimes (often?) have long pauses. I attribute that to buffer
> bloat and high latency.
> >>>>>
> >>>>> With a happy household user watching streaming media, a second user
> could have terrible shopping experience with Amazon. The interactive
> response could be (is often) horrible. (Personally, I would be doing email
> and working on a shared doc. The Amazon analogy probably applies to more
> people.)
> >>>>>
> >>>>> How can we deliver graceful performance to both persons in a
> household?
> >>>>> Is seeking graceful performance too complicated to improve?
> >>>>> (I said “graceful” to allow technical flexibility.)
> >>>>
> >>>> it's largely a solved problem from a technical point of view.
> fq_codel and cake solve this.
> >>>>
> >>>> The solution is just not deployed widely, instead people argue that
> more bandwidth is needed instead.
> >>
> >
> >_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 7989 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 4:16 ` Colin_Higbie
@ 2024-05-01 7:40 ` David Lang
2024-05-01 15:13 ` Colin_Higbie
0 siblings, 1 reply; 42+ messages in thread
From: David Lang @ 2024-05-01 7:40 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
[-- Attachment #1: Type: text/plain, Size: 5954 bytes --]
you are thinking about an urban area buildout, and in the situation your
describe, I would agree with you.
I'm thinking of rural areas where houses are well separated (low single digit
houses per mile, or going to miles per house). Although now that Starlink is an
option, it may not be as bad to not have other options.
David Lang
On Wed, 1 May 2024, Colin_Higbie wrote:
> David,
>
> Yes, sure, if there's a choice between Internet access at 10Mbps and no Internet at all forever, 10Mbps is clearly better than nothing. But that's unlikely to be a realistic choice. A more realistic version of that is: budgeting lets us roll out at a rate of 1,000 homes per week at 25Mbps capacity or 1,500 homes per week if we can drop to 10Mbps. In that scenario, I would say that the slower rollout at the higher bandwidth is better, even though that delays some people getting access to Internet, because of the longer-term effect of having an immediately obsolete max connection speed.
>
> I have no objection to oversubscribing, providing it is done based an actual statistical analysis of usage and provided on a good-faith basis (i.e., a belief based on the data that the total capacity will support all users at some significant % of the expected bandwidth at something like 99% or 99.9% of the time). In my opinion, it is not reasonable to require an ISP to provide 100% of its users the full bandwidth they pay for 100% of the time if all users were to max out at the same time (something that never happens in the real world). That drives up costs with negligible benefit.
>
> I apologize if I've not been sufficiently clear on the 25Mbps minimum. I believe I have, but perhaps I'm mistaken. I'm arguing that any ISP building new capabilities to provision new users or enhancing its existing services for existing users should establish a 25Mbps minimum top speed. It's fine if they also offer cheaper slower speeds (not every user will care about getting 25Mbps or want to pay for it). So, every user in this market should be able to get at least 25Mbps, but it's fine that not all will. The important facet to this is that the cabling and infrastructure be able to support at least 25Mbps connections for those users willing to pay for it.
>
> I don't have the same requirement on latency (because optimal latencies are usually good enough and implementing cake for latency under load is generally a low-cost or no-cost solution), but I would support if experts from this group did have a similar max latency target and would support that this max be measured under load.
>
> Apologies for adding yet another metaphor, but I view these requirements as similar to codes on minimum of 15amp or 20amp in-wall wiring in all new and upgrade work performed by electricians. This doesn't affect existing wiring, which is grandfathered, but it ensures no new construction is already obsolete as it's being done.
>
> Cheers,
> Colin
>
>
>
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Tuesday, April 30, 2024 11:51 PM
> To: Colin_Higbie <CHigbie1@Higbie.name>
> Cc: David Lang <david@lang.hm>; starlink@lists.bufferbloat.net
> Subject: RE: [Starlink] Itʼs the Latency, FCC
>
> On Wed, 1 May 2024, Colin_Higbie wrote:
>
>> David,
>>
>> You wrote, "I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')"
>>
>> If you're simply talking about approaching an existing ISP with existing services and telling them, "Please implement cake and codel to reduce latency problems at load," then I'm with you. That's a clear win because you're fixing a latency problem without creating any new problems. Good.
>>
>> The importance of the 25Mbps minimum arises with NEW services, new construction. Specifically, where an ISP is looking to expand their geographic footprint or seeking funding to provide improvements or a new ISP is looking to enter a market, it is DESTRUCTIVE for them to roll out a new service that can't support at least 25Mbps service. This is because a new service rollout will generally not be upgraded in terms of bandwidth capacity for a period of years following the initial deployment. As stated before, it's fine if they also OFFER plans with lower top speeds because not everyone needs 25Mbps, but they must at least OFFER a minimum of a 25Mbps plan. You do more harm to Internet infrastructure and further the Internet divide if you encourage good latency for new constructions at sub-25Mbps bandwidth.
>>
>> If members of this group are touting themselves as experts and advising ISPs, then you must include the 25Mbps bandwidth as the floor for at least the top tier of service.
>
> I would rather there be an ISP serving an area with 10Mb than no ISP serving the area (no matter what the latency)
>
> for wireless Internet, it may not be possible to provide 25Mb of service to some locations, so your argument then means those people get nothing.
>
> I'm also seeing the policy folks in DC pushing for 25Mb to be the minimum for the slowest offering.
>
> So when I see people posting what I paraphrase as "if the service is slower than 25Mb, that service should not exist", I argue. I apologize if that's not what you are arguing, but up until this post (where you say "25Mb for the top tier of
> service") that seemed to be what you were saying.
>
> and then there's also the 'what does it mean to say 25Mb of service'. does that mean that the ISP upstream must have 25Mb for every subscriber? or can they oversubscribe to the point that if everyone were trying to use the service, they each get 1Mb? how do you define how much oversubscription is allowed? how do you justify where you draw the line (especially if you are arguing that anything under 25Mb is unusable)
>
> David Lang
>
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 3:51 ` David Lang
@ 2024-05-01 4:16 ` Colin_Higbie
2024-05-01 7:40 ` David Lang
0 siblings, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-05-01 4:16 UTC (permalink / raw)
To: David Lang; +Cc: starlink
David,
Yes, sure, if there's a choice between Internet access at 10Mbps and no Internet at all forever, 10Mbps is clearly better than nothing. But that's unlikely to be a realistic choice. A more realistic version of that is: budgeting lets us roll out at a rate of 1,000 homes per week at 25Mbps capacity or 1,500 homes per week if we can drop to 10Mbps. In that scenario, I would say that the slower rollout at the higher bandwidth is better, even though that delays some people getting access to Internet, because of the longer-term effect of having an immediately obsolete max connection speed.
I have no objection to oversubscribing, providing it is done based an actual statistical analysis of usage and provided on a good-faith basis (i.e., a belief based on the data that the total capacity will support all users at some significant % of the expected bandwidth at something like 99% or 99.9% of the time). In my opinion, it is not reasonable to require an ISP to provide 100% of its users the full bandwidth they pay for 100% of the time if all users were to max out at the same time (something that never happens in the real world). That drives up costs with negligible benefit.
I apologize if I've not been sufficiently clear on the 25Mbps minimum. I believe I have, but perhaps I'm mistaken. I'm arguing that any ISP building new capabilities to provision new users or enhancing its existing services for existing users should establish a 25Mbps minimum top speed. It's fine if they also offer cheaper slower speeds (not every user will care about getting 25Mbps or want to pay for it). So, every user in this market should be able to get at least 25Mbps, but it's fine that not all will. The important facet to this is that the cabling and infrastructure be able to support at least 25Mbps connections for those users willing to pay for it.
I don't have the same requirement on latency (because optimal latencies are usually good enough and implementing cake for latency under load is generally a low-cost or no-cost solution), but I would support if experts from this group did have a similar max latency target and would support that this max be measured under load.
Apologies for adding yet another metaphor, but I view these requirements as similar to codes on minimum of 15amp or 20amp in-wall wiring in all new and upgrade work performed by electricians. This doesn't affect existing wiring, which is grandfathered, but it ensures no new construction is already obsolete as it's being done.
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Tuesday, April 30, 2024 11:51 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: David Lang <david@lang.hm>; starlink@lists.bufferbloat.net
Subject: RE: [Starlink] Itʼs the Latency, FCC
On Wed, 1 May 2024, Colin_Higbie wrote:
> David,
>
> You wrote, "I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')"
>
> If you're simply talking about approaching an existing ISP with existing services and telling them, "Please implement cake and codel to reduce latency problems at load," then I'm with you. That's a clear win because you're fixing a latency problem without creating any new problems. Good.
>
> The importance of the 25Mbps minimum arises with NEW services, new construction. Specifically, where an ISP is looking to expand their geographic footprint or seeking funding to provide improvements or a new ISP is looking to enter a market, it is DESTRUCTIVE for them to roll out a new service that can't support at least 25Mbps service. This is because a new service rollout will generally not be upgraded in terms of bandwidth capacity for a period of years following the initial deployment. As stated before, it's fine if they also OFFER plans with lower top speeds because not everyone needs 25Mbps, but they must at least OFFER a minimum of a 25Mbps plan. You do more harm to Internet infrastructure and further the Internet divide if you encourage good latency for new constructions at sub-25Mbps bandwidth.
>
> If members of this group are touting themselves as experts and advising ISPs, then you must include the 25Mbps bandwidth as the floor for at least the top tier of service.
I would rather there be an ISP serving an area with 10Mb than no ISP serving the area (no matter what the latency)
for wireless Internet, it may not be possible to provide 25Mb of service to some locations, so your argument then means those people get nothing.
I'm also seeing the policy folks in DC pushing for 25Mb to be the minimum for the slowest offering.
So when I see people posting what I paraphrase as "if the service is slower than 25Mb, that service should not exist", I argue. I apologize if that's not what you are arguing, but up until this post (where you say "25Mb for the top tier of
service") that seemed to be what you were saying.
and then there's also the 'what does it mean to say 25Mb of service'. does that mean that the ISP upstream must have 25Mb for every subscriber? or can they oversubscribe to the point that if everyone were trying to use the service, they each get 1Mb? how do you define how much oversubscription is allowed? how do you justify where you draw the line (especially if you are arguing that anything under 25Mb is unusable)
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 3:59 ` Eugene Y Chang
@ 2024-05-01 4:12 ` David Lang
2024-05-01 10:15 ` Frantisek Borsik
2024-05-01 18:51 ` Eugene Y Chang
0 siblings, 2 replies; 42+ messages in thread
From: David Lang @ 2024-05-01 4:12 UTC (permalink / raw)
To: Eugene Y Chang
Cc: Jim Forster, David Lang, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1: Type: text/plain, Size: 4461 bytes --]
On Tue, 30 Apr 2024, Eugene Y Chang wrote:
> I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
>
> What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
> It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
I did not see a very big improvement on a 4/.5 dsl link, but there was
improvement.
if you put openwrt on the customer router and configure cake with the targeted
bandwith at ~80% of line speed, you will usually see a drastic improvement for
just about any connection.
If you can put fq_codel on both ends of the link, you can usually skip capping
the bandwidth.
unfortunantly, it's not possible to just add this to the ISPs existing hardware
without having the source for the firmware there (and if they have their queues
in ASICs it's impossible to change them.
If you can point at the dramatic decrease in latency, with no bandwidth losses,
that Starlink has achieved on existing hardware, that may help.
There are a number of ISPs around the world that have implemented active queue
management and report very good results from doing so.
But showing that their existing hardware can do it when their upstream vendor
doesn't support it is going to be hard.
David Lang
>
> We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
>
> Note:
> Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
> Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
>
> Should we have a conference call to discuss this?
>
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Life Senior Member
>
>
>
>> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>>
>> Gene, David,
>> ‘m
>> Agreed that the technical problem is largely solved with cake & codel.
>>
>> Also that demos are good. How to do one for this problem>
>>
>> — Jim
>>
>>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>>
>>> Shouldn’t we create a demo to show the solution?
>>> To show is more effective than to debate. It is impossible to explain to some people.
>>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>>> Is an effective demo too complicated to create?
>>> I’d be glad to participate in defining a demo and publicity campaign.
>>>
>>> Gene
>>>
>>>
>>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm>> wrote:
>>>>
>>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>>
>>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>>
>>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>>
>>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>>
>>>>> How can we deliver graceful performance to both persons in a household?
>>>>> Is seeking graceful performance too complicated to improve?
>>>>> (I said “graceful” to allow technical flexibility.)
>>>>
>>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>>
>>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>>
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 1:52 ` Jim Forster
@ 2024-05-01 3:59 ` Eugene Y Chang
2024-05-01 4:12 ` David Lang
0 siblings, 1 reply; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 3:59 UTC (permalink / raw)
To: Jim Forster
Cc: Eugene Y Chang, David Lang, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1.1: Type: text/plain, Size: 3431 bytes --]
I’m not completely up to speed on the gory details. Please humor me. I am pretty good on the technical marketing magic.
What is the minimum configuration of an ISP infrastructure where we can show an A/B (before and after) test?
It can be a simplified scenario. The simpler, the better. We can talk through the issues of how minimal is adequate. Of course and ISP engineer will argue against simplicity.
We will want to show the human visible impact and not debate good or not so good measurements. If we get the business and community subscribers on our side, we win.
Note:
Stage 1 is to show we have a pure software fix (that can work on their hardware). The fix is “so dramatic” that subscribers can experience it without debating measurements.
Stage 2 discusses why the ISP should demand that their equipment vendors add this software. (The software could already be available, but the ISP doesn’t think it is worth the trouble to enable it.) Nothing will happen unless we stay engaged. We need to keep the subscribers engaged, too.
Should we have a conference call to discuss this?
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
> On Apr 30, 2024, at 3:52 PM, Jim Forster <jim@connectivitycap.com> wrote:
>
> Gene, David,
> ‘m
> Agreed that the technical problem is largely solved with cake & codel.
>
> Also that demos are good. How to do one for this problem>
>
> — Jim
>
>> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
>> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>>
>> Shouldn’t we create a demo to show the solution?
>> To show is more effective than to debate. It is impossible to explain to some people.
>> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
>> Is an effective demo too complicated to create?
>> I’d be glad to participate in defining a demo and publicity campaign.
>>
>> Gene
>>
>>
>>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm>> wrote:
>>>
>>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>>
>>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>>
>>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>>
>>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>>
>>>> How can we deliver graceful performance to both persons in a household?
>>>> Is seeking graceful performance too complicated to improve?
>>>> (I said “graceful” to allow technical flexibility.)
>>>
>>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>>
>>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>
[-- Attachment #1.2: Type: text/html, Size: 11530 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 2:46 ` [Starlink] Itʼs " Colin_Higbie
2024-05-01 3:18 ` David Lang
@ 2024-05-01 3:54 ` James Forster
1 sibling, 0 replies; 42+ messages in thread
From: James Forster @ 2024-05-01 3:54 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
I agree.
> There is no conflict between the need to support 25Mbps Internet and this group's goal of reducing latency at load. On the other hand, you lose credibility and won't be taken seriously by your target audience if you disregard the importance of the need for every ISP rolling out a new plan to at least offer a 25Mbps bandwidth level of support.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 3:38 ` Colin_Higbie
@ 2024-05-01 3:51 ` David Lang
2024-05-01 4:16 ` Colin_Higbie
0 siblings, 1 reply; 42+ messages in thread
From: David Lang @ 2024-05-01 3:51 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
[-- Attachment #1: Type: text/plain, Size: 4641 bytes --]
On Wed, 1 May 2024, Colin_Higbie wrote:
> David,
>
> You wrote, "I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')"
>
> If you're simply talking about approaching an existing ISP with existing services and telling them, "Please implement cake and codel to reduce latency problems at load," then I'm with you. That's a clear win because you're fixing a latency problem without creating any new problems. Good.
>
> The importance of the 25Mbps minimum arises with NEW services, new construction. Specifically, where an ISP is looking to expand their geographic footprint or seeking funding to provide improvements or a new ISP is looking to enter a market, it is DESTRUCTIVE for them to roll out a new service that can't support at least 25Mbps service. This is because a new service rollout will generally not be upgraded in terms of bandwidth capacity for a period of years following the initial deployment. As stated before, it's fine if they also OFFER plans with lower top speeds because not everyone needs 25Mbps, but they must at least OFFER a minimum of a 25Mbps plan. You do more harm to Internet infrastructure and further the Internet divide if you encourage good latency for new constructions at sub-25Mbps bandwidth.
>
> If members of this group are touting themselves as experts and advising ISPs, then you must include the 25Mbps bandwidth as the floor for at least the top tier of service.
I would rather there be an ISP serving an area with 10Mb than no ISP serving the
area (no matter what the latency)
for wireless Internet, it may not be possible to provide 25Mb of service to some
locations, so your argument then means those people get nothing.
I'm also seeing the policy folks in DC pushing for 25Mb to be the minimum for
the slowest offering.
So when I see people posting what I paraphrase as "if the service is slower than
25Mb, that service should not exist", I argue. I apologize if that's not what
you are arguing, but up until this post (where you say "25Mb for the top tier of
service") that seemed to be what you were saying.
and then there's also the 'what does it mean to say 25Mb of service'. does that
mean that the ISP upstream must have 25Mb for every subscriber? or can they
oversubscribe to the point that if everyone were trying to use the service, they
each get 1Mb? how do you define how much oversubscription is allowed? how do you
justify where you draw the line (especially if you are arguing that anything
under 25Mb is unusable)
David Lang
>
> I don't mean to suggest 25Mbps at 1,000ms latency is better than 20Mbps at 30ms latency, but rather that assuming a reasonable latency, getting to AT LEAST 25Mbps bandwidth is important. I yield to the wisdom of this group on an equivalent max reasonable latency. In my experience anything sub 100ms would be an acceptable max latency, but I would accept if you told me the upper limit on new rollout should require nothing above 50ms or 60ms.
>
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Tuesday, April 30, 2024 11:19 PM
> To: Colin_Higbie <CHigbie1@Higbie.name>
> Cc: David Lang <david@lang.hm>; starlink@lists.bufferbloat.net
> Subject: RE: [Starlink] Itʼs the Latency, FCC
>
> On Wed, 1 May 2024, Colin_Higbie wrote:
>
>> This is a largely black and white issue: there are a significant # of
>> users who need 4K streaming support. Period. This is a market
>> standard, like 91 octane gas, 802.11ax Wi-Fi, skim (0%) milk, 50 SPF sunblock, and 5G phones.
>> The fact that not everyone uses one of those market-established
>> standards does not mean that each is not an important standard with a
>> sizable market cohort that merits support. 25Mbps for 4K HDR streaming
>> is one such standard. That's not my opinion. That's a
>> market-established fact and the only reason I posted here – to ensure
>> this group has that information so that you can be more effective in presenting your latency arguments and solutions to the ISPs.
>
> But just because many people want those things doesn't mean that 87 octane gas, SPF 20 sunblock, 2% milk, 4G phones, etc should be eliminated.
>
> I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')
>
> David Lang
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 3:18 ` David Lang
@ 2024-05-01 3:38 ` Colin_Higbie
2024-05-01 3:51 ` David Lang
0 siblings, 1 reply; 42+ messages in thread
From: Colin_Higbie @ 2024-05-01 3:38 UTC (permalink / raw)
To: David Lang; +Cc: starlink
David,
You wrote, "I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')"
If you're simply talking about approaching an existing ISP with existing services and telling them, "Please implement cake and codel to reduce latency problems at load," then I'm with you. That's a clear win because you're fixing a latency problem without creating any new problems. Good.
The importance of the 25Mbps minimum arises with NEW services, new construction. Specifically, where an ISP is looking to expand their geographic footprint or seeking funding to provide improvements or a new ISP is looking to enter a market, it is DESTRUCTIVE for them to roll out a new service that can't support at least 25Mbps service. This is because a new service rollout will generally not be upgraded in terms of bandwidth capacity for a period of years following the initial deployment. As stated before, it's fine if they also OFFER plans with lower top speeds because not everyone needs 25Mbps, but they must at least OFFER a minimum of a 25Mbps plan. You do more harm to Internet infrastructure and further the Internet divide if you encourage good latency for new constructions at sub-25Mbps bandwidth.
If members of this group are touting themselves as experts and advising ISPs, then you must include the 25Mbps bandwidth as the floor for at least the top tier of service.
I don't mean to suggest 25Mbps at 1,000ms latency is better than 20Mbps at 30ms latency, but rather that assuming a reasonable latency, getting to AT LEAST 25Mbps bandwidth is important. I yield to the wisdom of this group on an equivalent max reasonable latency. In my experience anything sub 100ms would be an acceptable max latency, but I would accept if you told me the upper limit on new rollout should require nothing above 50ms or 60ms.
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Tuesday, April 30, 2024 11:19 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: David Lang <david@lang.hm>; starlink@lists.bufferbloat.net
Subject: RE: [Starlink] Itʼs the Latency, FCC
On Wed, 1 May 2024, Colin_Higbie wrote:
> This is a largely black and white issue: there are a significant # of
> users who need 4K streaming support. Period. This is a market
> standard, like 91 octane gas, 802.11ax Wi-Fi, skim (0%) milk, 50 SPF sunblock, and 5G phones.
> The fact that not everyone uses one of those market-established
> standards does not mean that each is not an important standard with a
> sizable market cohort that merits support. 25Mbps for 4K HDR streaming
> is one such standard. That's not my opinion. That's a
> market-established fact and the only reason I posted here – to ensure
> this group has that information so that you can be more effective in presenting your latency arguments and solutions to the ISPs.
But just because many people want those things doesn't mean that 87 octane gas, SPF 20 sunblock, 2% milk, 4G phones, etc should be eliminated.
I in no way advocate for the elimination of 25Mb connectivity. What I am arguing against is defining that as the minimum acceptable connectivity. i.e. pretending that anything less than that may as well not exist (ot at the very least should not be defined as 'broadband')
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 2:46 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-05-01 3:18 ` David Lang
2024-05-01 3:38 ` Colin_Higbie
2024-05-01 3:54 ` James Forster
1 sibling, 1 reply; 42+ messages in thread
From: David Lang @ 2024-05-01 3:18 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
[-- Attachment #1: Type: text/plain, Size: 1187 bytes --]
On Wed, 1 May 2024, Colin_Higbie wrote:
> This is a largely black and white issue: there are a significant # of users
> who need 4K streaming support. Period. This is a market standard, like 91
> octane gas, 802.11ax Wi-Fi, skim (0%) milk, 50 SPF sunblock, and 5G phones.
> The fact that not everyone uses one of those market-established standards does
> not mean that each is not an important standard with a sizable market cohort
> that merits support. 25Mbps for 4K HDR streaming is one such standard. That's
> not my opinion. That's a market-established fact and the only reason I posted
> here – to ensure this group has that information so that you can be more
> effective in presenting your latency arguments and solutions to the ISPs.
But just because many people want those things doesn't mean that 87 octane gas,
SPF 20 sunblock, 2% milk, 4G phones, etc should be eliminated.
I in no way advocate for the elimination of 25Mb connectivity. What I am arguing
against is defining that as the minimum acceptable connectivity. i.e. pretending
that anything less than that may as well not exist (ot at the very least should
not be defined as 'broadband')
David Lang
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 0:51 ` David Lang
@ 2024-05-01 2:46 ` Colin_Higbie
2024-05-01 3:18 ` David Lang
2024-05-01 3:54 ` James Forster
0 siblings, 2 replies; 42+ messages in thread
From: Colin_Higbie @ 2024-05-01 2:46 UTC (permalink / raw)
To: David Lang; +Cc: starlink
David,
Yes, poor word choice on my part to say "nearly all TVs sold today are 4K TVs." I was thinking of 4K sets in contrast to the tiny % of 8K sets when I wrote that to make the point that 8K is not about to become a market standard like 4K. Better to have said, "The market for 8K tv sets is tiny." You are correct that there are still many 1080p sets also sold at the low end.
But this misstep on my part is immaterial to the core point. I'm not sure if you are just (fairly and correctly) calling me out for that mistake or if you actually disagree that ISPs should invest (if needed) to offer at least 25Mbps service to support 4K HDR streaming, in which case you fundamentally misunderstand the market for Internet usage.
This is a largely black and white issue: there are a significant # of users who need 4K streaming support. Period. This is a market standard, like 91 octane gas, 802.11ax Wi-Fi, skim (0%) milk, 50 SPF sunblock, and 5G phones. The fact that not everyone uses one of those market-established standards does not mean that each is not an important standard with a sizable market cohort that merits support. 25Mbps for 4K HDR streaming is one such standard. That's not my opinion. That's a market-established fact and the only reason I posted here – to ensure this group has that information so that you can be more effective in presenting your latency arguments and solutions to the ISPs.
There is no conflict between the need to support 25Mbps Internet and this group's goal of reducing latency at load. On the other hand, you lose credibility and won't be taken seriously by your target audience if you disregard the importance of the need for every ISP rolling out a new plan to at least offer a 25Mbps bandwidth level of support. Arguing with me that not everyone cares about 4K is like arguing with the supermarket that they shouldn't carry skim milk or your local gas station that they should drop 91 (or 93) octane or your local Best Buy that they don't need to waste shelf space with Wi-Fi 6e or 7 routers because not everyone uses those. It's purely counterproductive and prevents your others points from being heard.
On gaming, and I suspect you know this, there are different use cases of Internet for gaming. There is competitive gaming, where bandwidth is largely meaningless (provided it is at a sufficient level of at least a few Mbps) because the games are played locally with only modest data exchanges to share location and status information. For those games, latency is king. In the extreme cases there, even 10ms latency may be pushing the limits of what gamers can tolerate, but sub 60ms or 50ms is usually sufficient (strategy and turn-based games are fine with much higher latencies, first-person shooters or driving games require lower latencies).
However, there is also a growing, but still relatively niche, game streaming market. This includes services like Xbox Cloud Gaming, GeForce Now, and Amazon Luna. GeForce Now requires 45Mbps minimum connection for 4K gaming (same latency notes as above depending on the game genre). The higher bandwidth here compared to 25Mbps for video streaming services is due to the higher framerate and less lossy compression used for to preserve the HUD which must be perfectly sharp for gaming. However, note that I would not say that this is a needed standard like support for 4K HDR streaming. Unlike 4K video streaming which is mass market, 4K game streaming is extremely niche. Xbox Cloud Gaming doesn't even offer 4K streaming at present (but they likely will add it within the next year or two and they do support 60fps at 1080p, which is double the framerate of video streaming that only runs at 30fps).
You also commented on upload bandwidth. If uploading large files, obviously higher is better, but there's no clear market requirement on this. Generally (but not always), with asymmetrical connections, a higher download bandwidth corresponds to a higher upload bandwidth. That's why I said that higher bandwidth does have advantages for remote work. I am not asserting that all remote workers need that, but clearly higher bandwidth and lower latency are both better than the alternatives.
For my remote work (and 4K video streaming), Starlink's download bandwidth (70-200Mbps) was always plenty, more than I need. However, Starlink's upload bandwidth (originally 2-10Mbps) used to frequently fall short of my needs for video conferencing while uploading media files. Improvements over the past year had largely resolved that where I saw stable upload bandwidth between 12-20Mbps.
I confess that I have shifted to only keeping Starlink for emergency backup because we recently gained access to fiber with a symmetric 1Gbps upload and download and negligible bufferbloat (a real surprise when our power company deployed that out here in the country in rural NH where we can't even get cable TV, yeah the power company, not the telephone or cable TV provider). They also offer up to 2.5Gbps in both directions and while my LAN hardware supports 2.5Gbps throughout, it's not worth paying anything extra to go beyond 1Gbps for me.
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Tuesday, April 30, 2024 8:52 PM
To: Colin Higbie <colin.higbie@scribl.com>
Cc: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] Itʼs the Latency, FCC
On Tue, 30 Apr 2024, Colin Higbie via Starlink wrote:
> Great points. I mostly agree and especially take your point on lower latency increasing the effective "range" of remote sites/services that become accessible based on their own added latency due to distance. That's a great point I was not considering. As an American, I tend to think of everything as "close enough" that no sites have latency problems, but you are clearly correct that's not fair to project as a universal perspective, especially to users in other countries who have to reach servers in the U.S. (and I'm sure some users in the U.S. also need to reach servers in other countries too, just not very often in my experience).
>
> However, two points of slight disagreement, or at least desire to get
> in the last word ;-)
> the fact remains that 4K services are a standard and a growing one as
> nearly all TVs sold today are 4K TVs.
really? then why was walmart still have almost 200 tv models at 1080p or below
> 2. Using a Remote Desktop can be a great solution to solve some of the
> bandwidth problems if the files can start and end on the remote systems.
> However, even then it's a very specific solution and not always an option.
> Just to use myself as an example, I do media work with a team of
> people. We are geographically dispersed. There are common servers for
> storing the data (OneDrive and Amazon S3 and RDS and our own custom
> Linux systems to run various media adjustments automatically), but they are not local to anyone.
> The bulk of the work must be done locally. I suppose you could work on
> a media file via a remote desktop function with sufficient bandwidth
> to provide real-time 4K or 5K streaming and super low latency (often
> need to make changes in fraction-of-a-second increments, hit
> pause/play at exactly the right moment, etc.). But even if you had the
> high speeds already to do that remotely, you would still need to
> upload the file in the first place from wherever you are with the
> microphones and cameras. Raw video files, which are still uncompressed
> or, at best, compressed using lossless compression, are HUGE (many
> GBs). Even raw straight audio files are typically in the hundreds of
> MBs and sometimes a few GBs. Further, if the mics and cameras are
> connected directly to the computer, there are many real-time changes that can be made DURING the recording, which would be impossible on a remote system.
hardly your typical remote worker.
We are not saying that nobody needs higher bandwidth, just that pointing at things like this and saying that the minimum connection should support them is not reasonable.
> Same applies for anyone who wants to post to YouTube. Many will to do
> most of their video editing locally before uploading to YouTube. Those
> that do their editing in the cloud still have to upload it to the
> cloud. An active YouTuber might upload multiple many-GB video files every day.
you do realize that a lot of '25Mb connections' only upload at 10Mb or slower, right? I pay for a 600Mb cablemodem and that only gives me 30Mb upload?
> Similarly, for gaming, yes, with high enough bandwidth and low-enough
> latency, you could pay a monthly fee for a game service and get decent
> graphics and latency, but it's still not as good as a high-end system
> locally (or if you just want to avoid the monthly fee). There's always
> a mushiness to the latency and generally some compression artifacts in
> the graphics that are not there when the screen is being rendered
> locally with no compression using a multi-Gbps connection between system and monitor/TV.
so let gamers pay for higher bandwidth if they need it? but is the problem really bandwidth, or is it latency being papered over with bandwidth?
> In another case, we built an electronic medical records system for physicians.
> The performance using remote desktop to a virtual machine even on the
> same LAN was too slow. It wasn't unusably slow, but it was slow enough
> that users who had opted that route to save money routinely upgraded
> to more performant Windows tablets for the performance boost of the
> native UI. To be fair, this was back at 820.11n, when Wi-Fi LAN speeds
> topped out at about 40Mbps from most locations in the offices (higher
> if really close to the AP, but those ideal conditions were rare). When
> the tablets the doctors used ran the UI natively, not as dumb
> terminals, performance UX and customer feedback was far better. When
> using a pen/stylus, even a 20ms lag is enough to make the writing feel
> wrong. A few tens of ms to redraw the screen when trying to shuttle
> through pages of notes in a few seconds to skim for a specific piece of data is painfully slow.
again, not a minimal home user experience.
David Lang
> Cheers,
> Colin
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 1:30 ` [Starlink] Itʼs " Eugene Y Chang
@ 2024-05-01 1:52 ` Jim Forster
2024-05-01 3:59 ` Eugene Y Chang
2024-05-02 19:17 ` Michael Richardson
1 sibling, 1 reply; 42+ messages in thread
From: Jim Forster @ 2024-05-01 1:52 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink, Colin_Higbie
[-- Attachment #1: Type: text/plain, Size: 2084 bytes --]
Gene, David,
Agreed that the technical problem is largely solved with cake & codel.
Also that demos are good. How to do one for this problem>
— Jim
> The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
> Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
>
> Shouldn’t we create a demo to show the solution?
> To show is more effective than to debate. It is impossible to explain to some people.
> Has anyone tried to create a demo (to unseat the bandwidth mantra)?
> Is an effective demo too complicated to create?
> I’d be glad to participate in defining a demo and publicity campaign.
>
> Gene
>
>
>> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm <mailto:david@lang.hm>> wrote:
>>
>> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>>
>>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>>
>>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>>
>>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>>
>>> How can we deliver graceful performance to both persons in a household?
>>> Is seeking graceful performance too complicated to improve?
>>> (I said “graceful” to allow technical flexibility.)
>>
>> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>>
>> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
[-- Attachment #2: Type: text/html, Size: 6197 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-05-01 0:36 ` David Lang
@ 2024-05-01 1:30 ` Eugene Y Chang
2024-05-01 1:52 ` Jim Forster
2024-05-02 19:17 ` Michael Richardson
0 siblings, 2 replies; 42+ messages in thread
From: Eugene Y Chang @ 2024-05-01 1:30 UTC (permalink / raw)
To: David Lang; +Cc: Eugene Y Chang, Colin_Higbie, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 38793 bytes --]
David,
The bandwidth mantra has been used for so long that a technical discussion cannot unseat the mantra.
Some technical parties use the mantra to sell more, faster, ineffective service. Gullible customers accept that they would be happy if they could afford even more speed.
Shouldn’t we create a demo to show the solution?
To show is more effective than to debate. It is impossible to explain to some people.
Has anyone tried to create a demo (to unseat the bandwidth mantra)?
Is an effective demo too complicated to create?
I’d be glad to participate in defining a demo and publicity campaign.
Gene
----------------------------------------------
Eugene Chang
IEEE Life Senior Member
> On Apr 30, 2024, at 2:36 PM, David Lang <david@lang.hm> wrote:
>
> On Tue, 30 Apr 2024, Eugene Y Chang via Starlink wrote:
>
>> I am always surprised how complicated these discussions become. (Surprised mostly because I forgot the kind of issues this community care about.) The discussion doesn’t shed light on the following scenarios.
>>
>> While watching stream content, activating controls needed to switch content sometimes (often?) have long pauses. I attribute that to buffer bloat and high latency.
>>
>> With a happy household user watching streaming media, a second user could have terrible shopping experience with Amazon. The interactive response could be (is often) horrible. (Personally, I would be doing email and working on a shared doc. The Amazon analogy probably applies to more people.)
>>
>> How can we deliver graceful performance to both persons in a household?
>> Is seeking graceful performance too complicated to improve?
>> (I said “graceful” to allow technical flexibility.)
>
> it's largely a solved problem from a technical point of view. fq_codel and cake solve this.
>
> The solution is just not deployed widely, instead people argue that more bandwidth is needed instead.
>
> David Lang
>
>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>>
>>
>>> On Apr 30, 2024, at 8:05 AM, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> [SM] How that? Capacity and latency are largely independent... think a semi truck full of harddisks from NYC to LA has decent capacity/'bandwidth' but lousy latency...
>>>
>>>
>>> Sebastian, nothing but agreement with you that capacity and latency are largely independent (my old dial-up modem connections 25 years ago at ~50kbps had much lower latencies than my original geostationary satellite connections with higher bandwidth). I also agree that both are important in their own ways. I had originally responded (this thread seems to have come back to life from a few months ago) to a point about 10Mbps capacity being sufficient, and that as long as a user has a 10Mbps connection, latency improvements would provide more benefit to most users at that point than further bandwidth increases. I responded that the minimum "sufficient" metric should be higher than 10Mpbs, probably at 25Mbps to support 4K HDR, which is the streaming standard today and likely will be for the foreseeable future.
>>>
>>> I have not seen any responses that provided a sound argument against that conclusion. A lot of responses like "but 8K is coming" (it's not, only experimental YouTube videos showcase these resolutions to the general public, no studio is making 8K content and no streaming service offers anything in 8K or higher) and "I don't need to watch 4K, 1080p is sufficient for me, so it should be for everyone else too" (personal preference should never be a substitute for market data). Neither of those arguments refutes objective industry standards: 25Mbps is the minimum required bandwidth for multiple of the biggest streaming services.
>>>
>>> None of this intends to suggest that we should ease off pressure on ISPs to provide low latency connections that don't falter under load. Just want to be sure we all recognize that the floor bandwidth should be set no lower than 25Mbps.
>>>
>>> However, I would say that depending on usage, for a typical family use, where 25Mbps is "sufficient" for any single stream, even 50ms latency (not great, but much better than a system will have with bad bufferbloat problems that can easily fall to the hundreds of milliseconds) is also "sufficient" for all but specialized applications or competitive gaming. I would also say that if you already have latency at or below 20ms, further gains on latency will be imperceptible to almost all users, where bandwidth increases will at least allow for more simultaneous connections, even if any given stream doesn't really benefit much beyond about 25Mbps.
>>>
>>> I would also say that for working remotely, for those of us who work with large audio or video files, the ability to transfer multi-hundred MB files from a 1Gbps connection in several seconds instead of several minutes for a 25Mbps connection is a meaningful boost to work effectiveness and productivity, where a latency reduction from 50ms to 10ms wouldn't really yield any material changes to our work.
>>>
>>> Is 100Mbps and 10ms latency better than 25Mbps and 50ms latency? Of course. Moving to ever more capacity and lower latencies is a good thing on both fronts, but where hardware and engineering costs tend to scale non-linearly as you start pushing against current limits, "sufficiency" is an important metric to keep in mind. Cost matters.
>>>
>>> Cheers,
>>> Colin
>>>
>>>
>>> -----Original Message-----
>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> On Behalf Of starlink-request@lists.bufferbloat.net
>>> Sent: Tuesday, April 30, 2024 10:41 AM
>>> To: starlink@lists.bufferbloat.net
>>> Subject: Starlink Digest, Vol 37, Issue 11
>>>
>>>
>>> ----------------------------------------------------------------------
>>>
>>> Message: 1
>>> Date: Tue, 30 Apr 2024 16:32:51 +0200
>>> From: Sebastian Moeller <moeller0@gmx.de>
>>> To: Alexandre Petrescu <alexandre.petrescu@gmail.com>
>>> Cc: Hesham ElBakoury via Starlink <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>> Message-ID: <D3B2FA53-589F-4F35-958C-4679EC4414D9@gmx.de>
>>> Content-Type: text/plain; charset=utf-8
>>>
>>> Hi Alexandre,
>>>
>>>
>>>
>>>> On 30. Apr 2024, at 16:25, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>
>>>> Colin,
>>>> 8K usefulness over 4K: the higher the resolution the more it will be possible to zoom in into paused images. It is one of the advantages. People dont do that a lot these days but why not in the future.
>>>
>>> [SM] Because that is how in the past we envisioned the future, see here h++ps://www.youtube.com/watch?v=hHwjceFcF2Q 'enhance'...
>>>
>>>> Spotify lower quality than CD and still usable: one would check not Spotify, but other services for audiophiles; some of these use 'DSD' formats which go way beyond the so called high-def audio of 384khz sampling freqs. They dont 'stream' but download. It is these higher-than-384khz sampling rates equivalent (e.g. DSD1024 is the equivalent of, I think of something like 10 times CD quality, I think). If Spotify is the king of streamers, in the future other companies might become the kings of something else than 'streaming', a name yet to be invented.
>>>> For each of them, it is true, normal use will not expose any more advantage than the previous version (no advantage of 8K over 4K, no advantage of 88KHz DVD audio over CD, etc) - yet the progress is ongoing on and on, and nobody comes back to CD or to DVD audio or to SD (standard definition video).
>>>> Finally, 8K and DSD per se are requirements of just bandwidth. The need of latency should be exposed there, and that is not straightforward. But higher bandwidths will come with lower latencies anyways.
>>>
>>> [SM] How that? Capacity and latency are largely independent... think a semi truck full of harddisks from NYC to LA has decent capacity/'bandwidth' but lousy latency...
>>>
>>>
>>>> The quest of latency requirements might be, in fact, a quest to see how one could use that low latency technology that is possible and available anyways.
>>>> Alex
>>>> Le 30/04/2024 à 16:00, Colin_Higbie via Starlink a écrit :
>>>>> David Fernández, those bitrates are safe numbers, but many streams could get by with less at those resolutions. H.265 compression is at a variable bit rate with simpler scenes requiring less bandwidth. Note that 4K with HDR (30 bits per pixel rather than 24) consistently also fits within 25Mbps.
>>>>>
>>>>> David Lang, HDR is a requirement for 4K programming. That is not to say that all 4K streams are in HDR, but in setting a required bandwidth, because 4K signals can include HDR, the required bandwidth must accommodate and allow for HDR. That said, I believe all modern 4K programming on Netflix and Amazon Prime is HDR. Note David Fernández' point that Spain independently reached the same conclusion as the US streaming services of 25Mbps requirement for 4K.
>>>>>
>>>>> Visually, to a person watching and assuming an OLED (or microLED) display capable of showing the full color and contrast gamut of HDR (LCD can't really do it justice, even with miniLED backlighting), the move to HDR from SDR is more meaningful in most situations than the move from 1080p to 4K. I don't believe going to further resolutions, scenes beyond 4K (e.g., 8K), will add anything meaningful to a movie or television viewer over 4K. Video games could benefit from the added resolution, but lens aberration in cameras along with focal length and limited depth of field render blurriness of even a sharp picture greater than the pixel size in most scenes beyond about 4K - 5.5K. Video games don’t suffer this problem because those scenes are rendered, eliminating problems from camera lenses. So video games may still benefit from 8K resolution, but streaming programming won’t.
>>>>>
>>>>> There is precedent for this in the audio streaming world: audio streaming bitrates have retracted from prior peaks. Even though 48kHz and higher bitrate audio available on DVD is superior to the audio quality of 44.1kHz CDs, Spotify and Apple and most other streaming services stream music at LOWER quality than CD. It’s good enough for most people to not notice the difference. I don’t see much push in the foreseeable future for programming beyond UHD (4K + HDR). That’s not to say never, but there’s no real benefit to it with current camera tech and screen sizes.
>>>>>
>>>>> Conclusion: for video streaming needs over the next decade or so, 25Mbps should be appropriate. As David Fernández rightly points out, H.266 and other future protocols will improve compression capabilities and reduce bandwidth needs at any given resolution and color bit depth, adding a bit more headroom for small improvements.
>>>>>
>>>>> Cheers,
>>>>> Colin
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> On Behalf Of
>>>>> starlink-request@lists.bufferbloat.net
>>>>> Sent: Tuesday, April 30, 2024 9:31 AM
>>>>> To: starlink@lists.bufferbloat.net
>>>>> Subject: Starlink Digest, Vol 37, Issue 9
>>>>>
>>>>>
>>>>>
>>>>> Message: 2
>>>>> Date: Tue, 30 Apr 2024 11:54:20 +0200
>>>>> From: David Fernández <davidfdzp@gmail.com>
>>>>> To: starlink <starlink@lists.bufferbloat.net>
>>>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>>>> Message-ID:
>>>>> <CAC=tZ0rrmWJUNLvGupw6K8ogADcYLq-eyW7Bjb209oNDWGfVSA@mail.gmail.com>
>>>>> Content-Type: text/plain; charset="utf-8"
>>>>>
>>>>> Last February, TV broadcasting in Spain left behind SD definitively and moved to HD as standard quality, also starting to regularly broadcast a channel with 4K quality.
>>>>>
>>>>> A 4K video (2160p) at 30 frames per second, handled with the HEVC compression codec (H.265), and using 24 bits per pixel, requires 25 Mbit/s.
>>>>>
>>>>> Full HD video (1080p) requires 10 Mbit/s.
>>>>>
>>>>> For lots of 4K video encoded at < 20 Mbit/s, it may be hard to distinguish it visually from the HD version of the same video (this was also confirmed by SBTVD Forum Tests).
>>>>>
>>>>> Then, 8K will come, eventually, requiring a minimum of ~32 Mbit/s:
>>>>> https://dvb.org/news/new-generation-of-terrestrial-services-taking-sh
>>>>> ape-in-europe
>>>>>
>>>>> The latest codec VVC (H.266) may reduce the required data rates by at least 27%, at the expense of more computing power required, but somehow it is claimed it will be more energy efficient.
>>>>> https://dvb.org/news/dvb-prepares-the-way-for-advanced-4k-and-8k-broa
>>>>> dcast-and-broadband-television
>>>>>
>>>>> Regards,
>>>>>
>>>>> David
>>>>>
>>>>> Date: Mon, 29 Apr 2024 19:16:27 -0700 (PDT)
>>>>> From: David Lang <david@lang.hm>
>>>>> To: Colin_Higbie <CHigbie1@Higbie.name>
>>>>> Cc: David Lang <david@lang.hm>, "starlink@lists.bufferbloat.net"
>>>>> <starlink@lists.bufferbloat.net>
>>>>> Subject: Re: [Starlink] Itʼs the Latency, FCC
>>>>> Message-ID: <srss5qrq-7973-5q87-823p-30pn7o308608@ynat.uz>
>>>>> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>>>>>
>>>>> Amazon, youtube set explicitly to 4k (I didn't say HDR)
>>>>>
>>>>> David Lang
>>>>>
>>>>> On Tue, 30 Apr 2024, Colin_Higbie wrote:
>>>>>
>>>>>
>>>>>> Date: Tue, 30 Apr 2024 01:30:21 +0000
>>>>>> From: Colin_Higbie <CHigbie1@Higbie.name>
>>>>>> To: David Lang <david@lang.hm>
>>>>>> Cc: "starlink@lists.bufferbloat.net"
>>>>>> <starlink@lists.bufferbloat.net>
>>>>>> Subject: RE: [Starlink] Itʼs the Latency, FCC
>>>>>>
>>>>>> Was that 4K HDR (not SDR) using the standard protocols that
>>>>>> streaming
>>>>>>
>>>>> services use (Netflix, Amazon Prime, Disney+, etc.) or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then means it's SDR.
>>>>> Note that if YouTube, if left to the default of Auto for streaming resolution it will also automatically drop the quality to something that fits within the bandwidth and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually have distinct per-pixel image detail, meaning it compresses down to a smaller image with no real additional loss in picture quality, but only because it's really a 4K UHD stream to begin with.
>>>>>
>>>>>> Note that 4K video compression codecs are lossy, so the lower
>>>>>> quality the
>>>>>>
>>>>> initial image, the lower the bandwidth needed to convey the stream w/o additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on Newy Year's Eve or at the Super Bowl make for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving fast panning full dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
>>>>>
>>>>>> I'm dubious that 8Mbps can handle that except for some of the
>>>>>> simplest
>>>>>>
>>>>> video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
>>>>>
>>>>>> It's obviously in Netflix and the other streaming services' interest
>>>>>> to
>>>>>>
>>>>> be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
>>>>>
>>>>>> Cheers,
>>>>>> Colin
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: David Lang <david@lang.hm>
>>>>>> Sent: Monday, April 29, 2024 8:40 PM
>>>>>> To: Colin Higbie <colin.higbie@scribl.com>
>>>>>> Cc: starlink@lists.bufferbloat.net
>>>>>> Subject: Re: [Starlink] Itʼs the Latency, FCC
>>>>>>
>>>>>> hmm, before my DSL got disconnected (the carrier decided they didn't
>>>>>> want
>>>>>>
>>>>> to support it any more), I could stream 4k at 8Mb down if there
>>>>> wasn't too much other activity on the network (doing so at 2x speed
>>>>> was a problem)
>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>
>>>>>> On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
>>>>>>
>>>>>>
>>>>>>> Date: Fri, 15 Mar 2024 18:32:36 +0000
>>>>>>> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
>>>>>>> Reply-To: Colin Higbie <colin.higbie@scribl.com>
>>>>>>> To: "starlink@lists.bufferbloat.net"
>>>>>>> <starlink@lists.bufferbloat.net>
>>>>>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>>>>>>
>>>>>>>
>>>>>>>> I have now been trying to break the common conflation that
>>>>>>>> download
>>>>>>>>
>>>>> "speed"
>>>>>
>>>>>>>> means anything at all for day to day, minute to minute, second to
>>>>>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>>>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>>>>>> terrible latency under load and wifi weirdnesses for many existing
>>>>>>>>
>>>>> 100/20 services today.
>>>>>
>>>>>>> While I completely agree that latency has bigger impact on how
>>>>>>>
>>>>> responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>>>>
>>>>>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming.
>>>>>>> 100/20
>>>>>>>
>>>>> would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>>>>
>>>>>>> For me, not claiming any special expertise on market needs, just my
>>>>>>> own
>>>>>>>
>>>>> personal assessment on what typical families will need and care about:
>>>>>
>>>>>>> Latency: below 50ms under load always feels good except for some
>>>>>>> intensive gaming (I don't see any benefit to getting loaded latency
>>>>>>> further below ~20ms for typical applications, with an exception for
>>>>>>> cloud-based gaming that benefits with lower latency all the way
>>>>>>> down to about 5ms for young, really fast players, the rest of us
>>>>>>> won't be able to tell the difference)
>>>>>>>
>>>>>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>>>>>> streaming
>>>>>>>
>>>>>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>>>>>> depending on # of streams or if wanting to be ready for 8k
>>>>>>>
>>>>>>> Upload Bandwidth: 10Mbps good enough for quality video
>>>>>>> conferencing, higher only needed for multiple concurrent outbound
>>>>>>> streams
>>>>>>>
>>>>>>> So, for example (and ignoring upload for this), I would rather have
>>>>>>>
>>>>> latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>>>>
>>>>>>> Note that Starlink handles all of this well, including kids
>>>>>>> watching
>>>>>>>
>>>>> YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>>>>
>>>>>>> Cheers,
>>>>>>> Colin
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>
>>>>>>
>>>>> -------------- next part -------------- An HTML attachment was
>>>>> scrubbed...
>>>>> URL:
>>>>> <https://lists.bufferbloat.net/pipermail/starlink/attachments/2024043
>>>>> 0/5572b78b/attachment-0001.html>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 2
>>> Date: Tue, 30 Apr 2024 16:40:58 +0200
>>> From: Alexandre Petrescu <alexandre.petrescu@gmail.com>
>>> To: Sebastian Moeller <moeller0@gmx.de>
>>> Cc: Hesham ElBakoury via Starlink <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>> Message-ID: <727b07d9-9dc3-43b7-8e17-50b6b7a4444a@gmail.com>
>>> Content-Type: text/plain; charset=UTF-8; format=flowed
>>>
>>>
>>> Le 30/04/2024 à 16:32, Sebastian Moeller a écrit :
>>>> Hi Alexandre,
>>>>
>>>>
>>>>
>>>>> On 30. Apr 2024, at 16:25, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>> Colin,
>>>>> 8K usefulness over 4K: the higher the resolution the more it will be possible to zoom in into paused images. It is one of the advantages. People dont do that a lot these days but why not in the future.
>>>> [SM] Because that is how in the past we envisioned the future, see here h++ps://www.youtube.com/watch?v=hHwjceFcF2Q 'enhance'...
>>>>
>>>>> Spotify lower quality than CD and still usable: one would check not Spotify, but other services for audiophiles; some of these use 'DSD' formats which go way beyond the so called high-def audio of 384khz sampling freqs. They dont 'stream' but download. It is these higher-than-384khz sampling rates equivalent (e.g. DSD1024 is the equivalent of, I think of something like 10 times CD quality, I think). If Spotify is the king of streamers, in the future other companies might become the kings of something else than 'streaming', a name yet to be invented.
>>>>> For each of them, it is true, normal use will not expose any more advantage than the previous version (no advantage of 8K over 4K, no advantage of 88KHz DVD audio over CD, etc) - yet the progress is ongoing on and on, and nobody comes back to CD or to DVD audio or to SD (standard definition video).
>>>>> Finally, 8K and DSD per se are requirements of just bandwidth. The need of latency should be exposed there, and that is not straightforward. But higher bandwidths will come with lower latencies anyways.
>>>> [SM] How that? Capacity and latency are largely independent... think a semi truck full of harddisks from NYC to LA has decent capacity/'bandwidth' but lousy latency...
>>>
>>> I agree with you: two distinct parameters, bandwidth and latency. But they evolve simultenously, relatively bound by a constant relationship. For any particular link technology (satcom is one) the bandwidth and latency are in a constant relationship. One grows, the other diminishes. There are exceptions too, in some details.
>>>
>>> (as for the truck full of harddisks, and jumbo jets full of DVDs - they are just concepts: striking good examples of how enormous bandwidths are possible, but still to see in practice; physicsts also talked about a train transported by a train transported by a train and so on, to overcome the speed of light: another striking example, but not in practice).
>>>
>>> Alex
>>>
>>>>
>>>>
>>>>> The quest of latency requirements might be, in fact, a quest to see how one could use that low latency technology that is possible and available anyways.
>>>>> Alex
>>>>> Le 30/04/2024 à 16:00, Colin_Higbie via Starlink a écrit :
>>>>>> David Fernández, those bitrates are safe numbers, but many streams could get by with less at those resolutions. H.265 compression is at a variable bit rate with simpler scenes requiring less bandwidth. Note that 4K with HDR (30 bits per pixel rather than 24) consistently also fits within 25Mbps.
>>>>>>
>>>>>> David Lang, HDR is a requirement for 4K programming. That is not to say that all 4K streams are in HDR, but in setting a required bandwidth, because 4K signals can include HDR, the required bandwidth must accommodate and allow for HDR. That said, I believe all modern 4K programming on Netflix and Amazon Prime is HDR. Note David Fernández' point that Spain independently reached the same conclusion as the US streaming services of 25Mbps requirement for 4K.
>>>>>>
>>>>>> Visually, to a person watching and assuming an OLED (or microLED) display capable of showing the full color and contrast gamut of HDR (LCD can't really do it justice, even with miniLED backlighting), the move to HDR from SDR is more meaningful in most situations than the move from 1080p to 4K. I don't believe going to further resolutions, scenes beyond 4K (e.g., 8K), will add anything meaningful to a movie or television viewer over 4K. Video games could benefit from the added resolution, but lens aberration in cameras along with focal length and limited depth of field render blurriness of even a sharp picture greater than the pixel size in most scenes beyond about 4K - 5.5K. Video games don’t suffer this problem because those scenes are rendered, eliminating problems from camera lenses. So video games may still benefit from 8K resolution, but streaming programming won’t.
>>>>>>
>>>>>> There is precedent for this in the audio streaming world: audio streaming bitrates have retracted from prior peaks. Even though 48kHz and higher bitrate audio available on DVD is superior to the audio quality of 44.1kHz CDs, Spotify and Apple and most other streaming services stream music at LOWER quality than CD. It’s good enough for most people to not notice the difference. I don’t see much push in the foreseeable future for programming beyond UHD (4K + HDR). That’s not to say never, but there’s no real benefit to it with current camera tech and screen sizes.
>>>>>>
>>>>>> Conclusion: for video streaming needs over the next decade or so, 25Mbps should be appropriate. As David Fernández rightly points out, H.266 and other future protocols will improve compression capabilities and reduce bandwidth needs at any given resolution and color bit depth, adding a bit more headroom for small improvements.
>>>>>>
>>>>>> Cheers,
>>>>>> Colin
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> On Behalf Of
>>>>>> starlink-request@lists.bufferbloat.net
>>>>>> Sent: Tuesday, April 30, 2024 9:31 AM
>>>>>> To: starlink@lists.bufferbloat.net
>>>>>> Subject: Starlink Digest, Vol 37, Issue 9
>>>>>>
>>>>>>
>>>>>>
>>>>>> Message: 2
>>>>>> Date: Tue, 30 Apr 2024 11:54:20 +0200
>>>>>> From: David Fernández <davidfdzp@gmail.com>
>>>>>> To: starlink <starlink@lists.bufferbloat.net>
>>>>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>>>>> Message-ID:
>>>>>> <CAC=tZ0rrmWJUNLvGupw6K8ogADcYLq-eyW7Bjb209oNDWGfVSA@mail.gmail.com>
>>>>>> Content-Type: text/plain; charset="utf-8"
>>>>>>
>>>>>> Last February, TV broadcasting in Spain left behind SD definitively and moved to HD as standard quality, also starting to regularly broadcast a channel with 4K quality.
>>>>>>
>>>>>> A 4K video (2160p) at 30 frames per second, handled with the HEVC compression codec (H.265), and using 24 bits per pixel, requires 25 Mbit/s.
>>>>>>
>>>>>> Full HD video (1080p) requires 10 Mbit/s.
>>>>>>
>>>>>> For lots of 4K video encoded at < 20 Mbit/s, it may be hard to distinguish it visually from the HD version of the same video (this was also confirmed by SBTVD Forum Tests).
>>>>>>
>>>>>> Then, 8K will come, eventually, requiring a minimum of ~32 Mbit/s:
>>>>>> https://dvb.org/news/new-generation-of-terrestrial-services-taking-s
>>>>>> hape-in-europe
>>>>>>
>>>>>> The latest codec VVC (H.266) may reduce the required data rates by at least 27%, at the expense of more computing power required, but somehow it is claimed it will be more energy efficient.
>>>>>> https://dvb.org/news/dvb-prepares-the-way-for-advanced-4k-and-8k-bro
>>>>>> adcast-and-broadband-television
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> David
>>>>>>
>>>>>> Date: Mon, 29 Apr 2024 19:16:27 -0700 (PDT)
>>>>>> From: David Lang <david@lang.hm>
>>>>>> To: Colin_Higbie <CHigbie1@Higbie.name>
>>>>>> Cc: David Lang <david@lang.hm>, "starlink@lists.bufferbloat.net"
>>>>>> <starlink@lists.bufferbloat.net>
>>>>>> Subject: Re: [Starlink] Itʼs the Latency, FCC
>>>>>> Message-ID: <srss5qrq-7973-5q87-823p-30pn7o308608@ynat.uz>
>>>>>> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>>>>>>
>>>>>> Amazon, youtube set explicitly to 4k (I didn't say HDR)
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>> On Tue, 30 Apr 2024, Colin_Higbie wrote:
>>>>>>
>>>>>>
>>>>>>> Date: Tue, 30 Apr 2024 01:30:21 +0000
>>>>>>> From: Colin_Higbie <CHigbie1@Higbie.name>
>>>>>>> To: David Lang <david@lang.hm>
>>>>>>> Cc: "starlink@lists.bufferbloat.net"
>>>>>>> <starlink@lists.bufferbloat.net>
>>>>>>> Subject: RE: [Starlink] Itʼs the Latency, FCC
>>>>>>>
>>>>>>> Was that 4K HDR (not SDR) using the standard protocols that
>>>>>>> streaming
>>>>>>>
>>>>>> services use (Netflix, Amazon Prime, Disney+, etc.) or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then means it's SDR.
>>>>>> Note that if YouTube, if left to the default of Auto for streaming resolution it will also automatically drop the quality to something that fits within the bandwidth and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually have distinct per-pixel image detail, meaning it compresses down to a smaller image with no real additional loss in picture quality, but only because it's really a 4K UHD stream to begin with.
>>>>>>
>>>>>>> Note that 4K video compression codecs are lossy, so the lower
>>>>>>> quality the
>>>>>>>
>>>>>> initial image, the lower the bandwidth needed to convey the stream w/o additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on Newy Year's Eve or at the Super Bowl make for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving fast panning full dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
>>>>>>
>>>>>>> I'm dubious that 8Mbps can handle that except for some of the
>>>>>>> simplest
>>>>>>>
>>>>>> video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
>>>>>>
>>>>>>> It's obviously in Netflix and the other streaming services'
>>>>>>> interest to
>>>>>>>
>>>>>> be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
>>>>>>
>>>>>>> Cheers,
>>>>>>> Colin
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: David Lang <david@lang.hm>
>>>>>>> Sent: Monday, April 29, 2024 8:40 PM
>>>>>>> To: Colin Higbie <colin.higbie@scribl.com>
>>>>>>> Cc: starlink@lists.bufferbloat.net
>>>>>>> Subject: Re: [Starlink] Itʼs the Latency, FCC
>>>>>>>
>>>>>>> hmm, before my DSL got disconnected (the carrier decided they
>>>>>>> didn't want
>>>>>>>
>>>>>> to support it any more), I could stream 4k at 8Mb down if there
>>>>>> wasn't too much other activity on the network (doing so at 2x speed
>>>>>> was a problem)
>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>>
>>>>>>> On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
>>>>>>>
>>>>>>>
>>>>>>>> Date: Fri, 15 Mar 2024 18:32:36 +0000
>>>>>>>> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
>>>>>>>> Reply-To: Colin Higbie <colin.higbie@scribl.com>
>>>>>>>> To: "starlink@lists.bufferbloat.net"
>>>>>>>> <starlink@lists.bufferbloat.net>
>>>>>>>> Subject: Re: [Starlink] It’s the Latency, FCC
>>>>>>>>
>>>>>>>>
>>>>>>>>> I have now been trying to break the common conflation that
>>>>>>>>> download
>>>>>>>>>
>>>>>> "speed"
>>>>>>
>>>>>>>>> means anything at all for day to day, minute to minute, second to
>>>>>>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>>>>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>>>>>>> terrible latency under load and wifi weirdnesses for many
>>>>>>>>> existing
>>>>>>>>>
>>>>>> 100/20 services today.
>>>>>>
>>>>>>>> While I completely agree that latency has bigger impact on how
>>>>>>>>
>>>>>> responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>>>>>
>>>>>>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming.
>>>>>>>> 100/20
>>>>>>>>
>>>>>> would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>>>>>
>>>>>>>> For me, not claiming any special expertise on market needs, just
>>>>>>>> my own
>>>>>>>>
>>>>>> personal assessment on what typical families will need and care about:
>>>>>>
>>>>>>>> Latency: below 50ms under load always feels good except for some
>>>>>>>> intensive gaming (I don't see any benefit to getting loaded
>>>>>>>> latency further below ~20ms for typical applications, with an
>>>>>>>> exception for cloud-based gaming that benefits with lower latency
>>>>>>>> all the way down to about 5ms for young, really fast players, the
>>>>>>>> rest of us won't be able to tell the difference)
>>>>>>>>
>>>>>>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>>>>>>> streaming
>>>>>>>>
>>>>>>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>>>>>>> depending on # of streams or if wanting to be ready for 8k
>>>>>>>>
>>>>>>>> Upload Bandwidth: 10Mbps good enough for quality video
>>>>>>>> conferencing, higher only needed for multiple concurrent outbound
>>>>>>>> streams
>>>>>>>>
>>>>>>>> So, for example (and ignoring upload for this), I would rather
>>>>>>>> have
>>>>>>>>
>>>>>> latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>>>>>
>>>>>>>> Note that Starlink handles all of this well, including kids
>>>>>>>> watching
>>>>>>>>
>>>>>> YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Colin
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>> -------------- next part -------------- An HTML attachment was
>>>>>> scrubbed...
>>>>>> URL:
>>>>>> <https://lists.bufferbloat.net/pipermail/starlink/attachments/202404
>>>>>> 30/5572b78b/attachment-0001.html>
>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>> ------------------------------
>>>
>>> Subject: Digest Footer
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>> ------------------------------
>>>
>>> End of Starlink Digest, Vol 37, Issue 11
>>> ****************************************
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #1.2: Type: text/html, Size: 49535 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 42+ messages in thread
end of thread, other threads:[~2024-05-02 19:17 UTC | newest]
Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
2024-03-16 22:51 ` David Lang
2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 19:26 ` Colin_Higbie
2024-03-16 19:45 ` Sebastian Moeller
2024-03-16 23:05 ` David Lang
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
2024-03-17 16:17 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
2024-03-16 23:08 ` David Lang
2024-04-30 0:39 ` [Starlink] It’s " David Lang
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
2024-04-30 2:16 ` David Lang
[not found] <mailman.2773.1714488060.1074.starlink@lists.bufferbloat.net>
2024-04-30 18:05 ` [Starlink] It’s " Colin_Higbie
2024-04-30 19:04 ` Eugene Y Chang
2024-05-01 0:36 ` David Lang
2024-05-01 1:30 ` [Starlink] Itʼs " Eugene Y Chang
2024-05-01 1:52 ` Jim Forster
2024-05-01 3:59 ` Eugene Y Chang
2024-05-01 4:12 ` David Lang
2024-05-01 10:15 ` Frantisek Borsik
2024-05-01 18:51 ` Eugene Y Chang
2024-05-01 19:18 ` David Lang
2024-05-01 21:11 ` Frantisek Borsik
2024-05-01 22:10 ` Eugene Y Chang
2024-05-01 21:12 ` Eugene Y Chang
2024-05-01 21:27 ` Sebastian Moeller
2024-05-01 22:19 ` Eugene Y Chang
2024-05-02 19:17 ` Michael Richardson
[not found] <mailman.2785.1714507537.1074.starlink@lists.bufferbloat.net>
2024-04-30 20:48 ` [Starlink] It’s " Colin Higbie
2024-05-01 0:51 ` David Lang
2024-05-01 2:46 ` [Starlink] Itʼs " Colin_Higbie
2024-05-01 3:18 ` David Lang
2024-05-01 3:38 ` Colin_Higbie
2024-05-01 3:51 ` David Lang
2024-05-01 4:16 ` Colin_Higbie
2024-05-01 7:40 ` David Lang
2024-05-01 15:13 ` Colin_Higbie
2024-05-01 3:54 ` James Forster
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox