* Re: [Starlink] It’s the Latency, FCC
[not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>
@ 2024-03-15 18:32 ` Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
2024-04-30 0:39 ` [Starlink] It’s " David Lang
0 siblings, 2 replies; 27+ messages in thread
From: Colin Higbie @ 2024-03-15 18:32 UTC (permalink / raw)
To: starlink
> I have now been trying to break the common conflation that download "speed"
> means anything at all for day to day, minute to minute, second to second, use,
> once you crack 10mbit, now, for over 14 years. Am I succeeding? I lost the 25/10
> battle, and keep pointing at really terrible latency under load and wifi weirdnesses
> for many existing 100/20 services today.
While I completely agree that latency has a bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: the more recent availability of 4K and higher-resolution streaming does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-capable TVs are among the most popular TVs being purchased in the U.S. today, and Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of their content in 4K HDR.
So, I agree that 25/10 is sufficient for up to 4K HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or 1-2 8K streams.
For me, not claiming any special expertise on market needs, just my own personal assessment of what typical families will need and care about:
Latency: below 50ms under load always feels good, except for some intensive gaming. (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming, which benefits from lower latency all the way down to about 5ms for young, really fast players; the rest of us won't be able to tell the difference.)
Download Bandwidth: 10Mbps is good enough if not doing UHD video streaming
Download Bandwidth: 25-100Mbps if doing UHD video streaming, depending on the number of streams or if wanting to be ready for 8K
Upload Bandwidth: 10Mbps is good enough for quality video conferencing; higher is only needed for multiple concurrent outbound streams
So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem of insufficient bandwidth to watch 4K HDR content. But I'd also rather have latency of 20ms with 100Mbps DL than latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
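To make that concrete, here is a minimal back-of-the-envelope sketch of the "good enough on both" test (the thresholds and bitrates are my own rough approximations from the list above, not authoritative figures):

# Python sketch: does a connection clear the "good enough" bar for both
# loaded latency and download bandwidth? Thresholds are rough estimates.
GOOD_ENOUGH = {
    # application: (max loaded latency in ms, min download in Mbps)
    "web browsing": (1000, 10),
    "video conferencing": (80, 7),
    "4K HDR stream": (1000, 25),
    "cloud gaming": (20, 25),
}

def good_enough(loaded_latency_ms, download_mbps, application):
    max_latency, min_bandwidth = GOOD_ENOUGH[application]
    return loaded_latency_ms <= max_latency and download_mbps >= min_bandwidth

print(good_enough(50, 25, "4K HDR stream"))    # True: both bars cleared
print(good_enough(1, 10, "4K HDR stream"))     # False: superb latency, too little bandwidth
print(good_enough(100, 1000, "cloud gaming"))  # False: superb bandwidth, too much latency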
Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except that the upload speed occasionally tops out at under 3Mbps for me, causing quality degradation for outbound video calls (or it used to; it seems to have gotten better in recent months, with no problems since sometime in 2023).
Cheers,
Colin
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
@ 2024-03-15 18:41 ` Colin_Higbie
2024-03-15 19:53 ` Spencer Sevilla
2024-04-30 0:39 ` [Starlink] It’s " David Lang
1 sibling, 1 reply; 27+ messages in thread
From: Colin_Higbie @ 2024-03-15 18:41 UTC (permalink / raw)
To: starlink
> I have now been trying to break the common conflation that download "speed"
> means anything at all for day to day, minute to minute, second to
> second, use, once you crack 10mbit, now, for over 14 years. Am I
> succeeding? I lost the 25/10 battle, and keep pointing at really
> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
While I completely agree that latency has a bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: the more recent availability of 4K and higher-resolution streaming does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-capable TVs are among the most popular TVs being purchased in the U.S. today, and Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of their content in 4K HDR.
So, I agree that 25/10 is sufficient for up to 4K HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or 1-2 8K streams.
For me, not claiming any special expertise on market needs, just my own personal assessment of what typical families will need and care about:
Latency: below 50ms under load always feels good, except for some intensive gaming. (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming, which benefits from lower latency all the way down to about 5ms for young, really fast players; the rest of us won't be able to tell the difference.)
Download Bandwidth: 10Mbps is good enough if not doing UHD video streaming
Download Bandwidth: 25-100Mbps if doing UHD video streaming, depending on the number of streams or if wanting to be ready for 8K
Upload Bandwidth: 10Mbps is good enough for quality video conferencing; higher is only needed for multiple concurrent outbound streams
So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem of insufficient bandwidth to watch 4K HDR content. But I'd also rather have latency of 20ms with 100Mbps DL than latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except that the upload speed occasionally tops out at under 3Mbps for me, causing quality degradation for outbound video calls (or it used to; it seems to have gotten better in recent months, with no problems since sometime in 2023).
Cheers,
Colin
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:41 ` Colin_Higbie
@ 2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
2024-03-15 23:07 ` David Lang
0 siblings, 2 replies; 27+ messages in thread
From: Spencer Sevilla @ 2024-03-15 19:53 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
Your comment about 4K HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. It feels like an awkward kind of Malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which have hardly changed at all.
I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower-quality format, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes “4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms-raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
See also: the endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? As other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it because it’s all taken up by lightbulb software updates every evening /rant.
> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 19:53 ` Spencer Sevilla
@ 2024-03-15 20:31 ` Colin_Higbie
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-15 23:07 ` David Lang
1 sibling, 1 reply; 27+ messages in thread
From: Colin_Higbie @ 2024-03-15 20:31 UTC (permalink / raw)
To: Dave Taht via Starlink
Spencer, great point. We certainly see with RAM, CPU, and graphics power that software just grows to fill the available space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) to provide some pressure against streaming and other services requiring huge amounts of data for basic functions, but, to your point, if there were a mandate that everyone have a 100Mbps connection, I agree that it would quickly become saturated and everyone would need more.
Fortunately, video compression codecs have improved dramatically over the past couple of decades, from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for improvement, but I think we're probably reaching a point of diminishing returns on compression. Even with further gains, I don't think we'll see bandwidth needs drop so much as quality improve at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity trend we see.
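As a rough illustration of those diminishing returns (the ~40% bitrate saving per codec generation is a commonly quoted ballpark, not a measurement, and the H.264-era 4K bitrate is my own assumption):

# Python sketch: if each codec generation cut the bitrate needed for the
# same perceived 4K quality by ~40% (assumption), the absolute savings
# shrink with every step.
bitrate_mbps = 40.0  # assumed bitrate for decent 4K in the H.264 era
for codec in ["H.264", "H.265/VP9", "AV1", "next-gen"]:
    print(f"{codec:10s} ~{bitrate_mbps:5.1f} Mbps")
    bitrate_mbps *= 0.6  # ~40% saving per generation (assumption)
# The first step saves ~16 Mbps; the step after AV1 saves only ~6 Mbps.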
-----Original Message-----
From: Spencer Sevilla
Sent: Friday, March 15, 2024 3:54 PM
To: Colin_Higbie
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] It’s the Latency, FCC
Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some
> intensive gaming (I don't see any benefit to getting loaded latency
> further below ~20ms for typical applications, with an exception for
> cloud-based gaming that benefits with lower latency all the way down
> to about 5ms for young, really fast players, the rest of us won't be
> able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video
> streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
> depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
> higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
@ 2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
1 sibling, 2 replies; 27+ messages in thread
From: David Lang @ 2024-03-15 23:07 UTC (permalink / raw)
To: Spencer Sevilla; +Cc: Colin_Higbie, Dave Taht via Starlink
one person's 'wasteful resolution' is another person's 'large enhancement'
going from 1080p to 4k video is not being wasteful, it's opting to use the
bandwidth in a different way.
saying that it's wasteful for someone to choose to do something is saying that
you know better what their priorities should be.
I agree that increasing resources allow programmers to be lazier and write apps
that are bigger, but they are also writing them in less time.
What right do you have to say that the programmer's time is less important than
the ram/bandwidth used?
I agree that it would be nice to have more people write better code, but
everything, including this, has trade-offs.
David Lang
On Fri, 15 Mar 2024, Spencer Sevilla via Starlink wrote:
> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>
> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>
> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>
>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 20:31 ` Colin_Higbie
@ 2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
0 siblings, 2 replies; 27+ messages in thread
From: Alexandre Petrescu @ 2024-03-16 17:18 UTC (permalink / raw)
To: starlink
Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>
> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
I think the 4K-latency discussion is a bit difficult, regardless of how
great the codecs are.
For one, 4K can be considered outdated for those who look forward to 8K,
or even 16K, so perhaps we should set 4K aside. 8K is already delivered
from space by a Japanese provider, but not over IP. So, if we discuss TV
resolutions we should look at these (8K, 16K, and why not 3D 16K for ever
more demanding stress testing).
Second, 4K etc. are for TV, and in TV latency is rarely if ever an issue.
There are some rare cases where latency is very important in TV (I could
think of betting on sports, or time sync of clocks), but those don't call
for latency as low as our typical videoconferencing, remote surgery or
group music playing use cases on Starlink Internet.
So I don't know how much 4K, 8K, or 16K might impose any new latency
requirements on Starlink.
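To sketch what I mean by latency rarely mattering for TV (my own simplification; the buffer size, session round trips and bitrates are illustrative assumptions):

# Python sketch: one-way streaming hides network latency behind a
# playback buffer, so RTT mostly affects startup, not playback.
def startup_delay_s(rtt_ms, buffer_s, stream_mbps, link_mbps):
    handshake = 3 * rtt_ms / 1000.0              # assume a few RTTs to start the session
    fill = buffer_s * stream_mbps / link_mbps    # time to fill the playback buffer
    return handshake + fill

# Assumed 80 Mbps 8K-class stream, 200 Mbps link, 10 s client buffer:
print(startup_delay_s(600, 10, 80, 200))  # ~5.8 s once, then latency is invisible
print(startup_delay_s(25, 10, 80, 200))   # ~4.1 s
# A videoconference, by contrast, pays the full RTT on every exchange and
# cannot buffer its way out of it.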
Alex
>
>
> -----Original Message-----
> From: Spencer Sevilla
> Sent: Friday, March 15, 2024 3:54 PM
> To: Colin_Higbie
> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>
> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>
> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>
>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some
>> intensive gaming (I don't see any benefit to getting loaded latency
>> further below ~20ms for typical applications, with an exception for
>> cloud-based gaming that benefits with lower latency all the way down
>> to about 5ms for young, really fast players, the rest of us won't be
>> able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>> streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>> depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>> higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:18 ` Alexandre Petrescu
@ 2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
1 sibling, 0 replies; 27+ messages in thread
From: Alexandre Petrescu @ 2024-03-16 17:21 UTC (permalink / raw)
To: starlink
I retract the message, sorry; it is true that some teleoperation and
videoconferencing also use 4K, so latency is important there too.
Videoconferencing at 8K, or in 3D at 16K, might have latency requirements
as well.
Le 16/03/2024 à 18:18, Alexandre Petrescu via Starlink a écrit :
>
> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>> Spencer, great point. We certainly see that with RAM, CPU, and
>> graphics power that the software just grows to fill up the space. I
>> do think that there are still enough users with bandwidth constraints
>> (millions of users limited to DSL and 7Mbps DL speeds) that it
>> provides some pressure against streaming and other services requiring
>> huge swaths of data for basic functions, but, to your point, if there
>> were a mandate that everyone would have 100Mbps connection, I agree
>> that would then quickly become saturated so everyone would need more.
>>
>> Fortunately, the video compression codecs have improved dramatically
>> over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9
>> and H.265. There's still room for further improvements, but I think
>> we're probably getting to a point of diminishing returns on further
>> compression improvements. Even with further improvements, I don't
>> think we'll see bandwidth needs drop so much as improved quality at
>> the same bandwidth, but this does offset the natural
>> bloat-to-fill-available-capacity movement we see.
>
> I think the 4K-latency discussion is a bit difficult, regardless of
> how great the codecs are.
>
> For one, 4K can be considered outdated for those who look forward to
> 8K and why not 16K; so we should forget 4K. 8K is delivered from
> space already by a japanese provider, but not on IP. So, if we
> discuss TV resolutions we should look at these (8K, 16K, and why not
> 3D 16K for ever more strength testing).
>
> Second, 4K etc. are for TV. In TV the latency is rarely if ever an
> issue. There are some rare cases where latency is very important in
> TV (I could think of betting in sports, time synch of clocks) but they
> dont look at such low latency as in our typical visioconference or
> remote surgery or group music playing use-cases on Internet starlink.
>
> So, I dont know how much 4K, 8K, 16K might be imposing any new latency
> requirement on starlink.
>
> Alex
>
>>
>>
>> -----Original Message-----
>> From: Spencer Sevilla
>> Sent: Friday, March 15, 2024 3:54 PM
>> To: Colin_Higbie
>> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>> Your comment about 4k HDR TVs got me thinking about the bandwidth
>> “arms race” between infrastructure and its clients. It’s a particular
>> pet peeve of mine that as any resource (bandwidth in this case, but
>> the same can be said for memory) becomes more plentiful, software
>> engineers respond by wasting it until it’s scarce enough to require
>> optimization again. Feels like an awkward kind of malthusian
>> inflation that ends up forcing us to buy newer/faster/better devices
>> to perform the same basic functions, which haven’t changed almost at
>> all.
>>
>> I completely agree that no one “needs” 4K UHDR, but when we say this
>> I think we generally mean as opposed to a slightly lower codec, like
>> regular HDR or 1080p. In practice, I’d be willing to bet that there’s
>> at least one poorly programmed TV out there that doesn’t downgrade
>> well or at all, so the tradeoff becomes "4K UHDR or endless
>> stuttering/buffering.” Under this (totally unnecessarily forced upon
>> us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise
>> arms raced ourselves into a TV that can’t really stream anything. A
>> technical downgrade from literally the 1960s.
>>
>> See also: The endless march of “smart appliances” and TVs/gaming
>> systems that require endless humongous software updates. My stove
>> requires natural gas and 120VAC, and I like it that way. Other stoves
>> require… how many Mbps to function regularly? Other food for thought,
>> I wonder how increasing minimum broadband speed requirements across
>> the country will encourage or discourage this behavior among network
>> engineers. I sincerely don’t look forward to a future in which we all
>> require 10Gbps to the house but can’t do much with it cause it’s all
>> taken up by lightbulb software updates every evening /rant.
>>
>>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink
>>> <starlink@lists.bufferbloat.net> wrote:
>>>
>>>> I have now been trying to break the common conflation that download
>>>> "speed"
>>>> means anything at all for day to day, minute to minute, second to
>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>> terrible latency under load and wifi weirdnesses for many existing
>>>> 100/20 services today.
>>> While I completely agree that latency has bigger impact on how
>>> responsive the Internet feels to use, I do think that 10Mbit is too
>>> low for some standard applications regardless of latency: with the
>>> more recent availability of 4K and higher streaming, that does
>>> require a higher minimum bandwidth to work at all. One could argue
>>> that no one NEEDS 4K streaming, but many families would view this as
>>> an important part of what they do with their Internet (Starlink
>>> makes this reliably possible at our farmhouse). 4K HDR-supporting
>>> TV's are among the most popular TVs being purchased in the U.S.
>>> today. Netflix, Amazon, Max, Disney and other streaming services
>>> provide a substantial portion of 4K HDR content.
>>>
>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming.
>>> 100/20 would provide plenty of bandwidth for multiple concurrent 4K
>>> users or a 1-2 8K streams.
>>>
>>> For me, not claiming any special expertise on market needs, just my
>>> own personal assessment on what typical families will need and care
>>> about:
>>>
>>> Latency: below 50ms under load always feels good except for some
>>> intensive gaming (I don't see any benefit to getting loaded latency
>>> further below ~20ms for typical applications, with an exception for
>>> cloud-based gaming that benefits with lower latency all the way down
>>> to about 5ms for young, really fast players, the rest of us won't be
>>> able to tell the difference)
>>>
>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>> streaming
>>>
>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>> depending on # of streams or if wanting to be ready for 8k
>>>
>>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>>> higher only needed for multiple concurrent outbound streams
>>>
>>> So, for example (and ignoring upload for this), I would rather have
>>> latency at 50ms (under load) and DL bandwidth of 25Mbps than latency
>>> of 1ms with a max bandwidth of 10Mbps, because the super-low latency
>>> doesn't solve the problem with insufficient bandwidth to watch 4K
>>> HDR content. But, I'd also rather have latency of 20ms with 100Mbps
>>> DL, then latency that exceeds 100ms under load with 1Gbps DL
>>> bandwidth. I think the important thing is to reach "good enough" on
>>> both, not just excel at one while falling short of "good enough" on
>>> the other.
>>>
>>> Note that Starlink handles all of this well, including kids watching
>>> YouTube while my wife and I watch 4K UHD Netflix, except the upload
>>> speed occasionally tops at under 3Mbps for me, causing quality
>>> degradation for outbound video calls (or used to, it seems to have
>>> gotten better in recent months – no problems since sometime in 2023).
>>>
>>> Cheers,
>>> Colin
>>>
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
@ 2024-03-16 17:36 ` Sebastian Moeller
2024-03-16 22:51 ` David Lang
1 sibling, 1 reply; 27+ messages in thread
From: Sebastian Moeller @ 2024-03-16 17:36 UTC (permalink / raw)
To: Alexandre Petrescu; +Cc: Dave Taht via Starlink
Hi Alex...
> On 16. Mar 2024, at 18:18, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>
> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>>
>> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
>
> I think the 4K-latency discussion is a bit difficult, regardless of how great the codecs are.
>
> For one, 4K can be considered outdated for those who look forward to 8K and why not 16K; so we should forget 4K.
[SM] Mmmh, numerically that might make sense; however, increasing the resolution of video material brings diminishing returns in perceived quality (the human optical system has limits...). I remember well how the steps from QVGA to VGA/SD to HD (720p) to FullHD (1080p) each resulted in an easily noticeable improvement in quality. However, now I have a hard time seeing an improvement (heck, even just noticing) whether I am watching FullHD or 4K material on our 43" screen from a normal distance (I need to do immediate A/B comparisons from a short distance)...
I am certainly not super sensitive/picky, but I guess others will reach the same point, maybe after 4K or after 8K. My point is that the potential for growth in resolution is limited by psychophysics (ultimately driven by the visual arc covered by individual photoreceptors in the fovea). And I am not sure whether, for normal screen sizes and distances, we have not already passed that point at 4K...
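[SM] A quick back-of-the-envelope sketch of that psychophysics point (assuming the usual ~1 arcminute of normal visual acuity per pixel; screen geometry only, nothing about codecs or content):

import math

# Python sketch: distance beyond which a single pixel subtends less than
# ~1 arcminute, i.e. extra resolution stops being resolvable.
def pixels_blur_beyond_m(diagonal_inch, horizontal_pixels, aspect=(16, 9)):
    w, h = aspect
    width_m = diagonal_inch * 0.0254 * w / math.hypot(w, h)
    pixel_pitch_m = width_m / horizontal_pixels
    return pixel_pitch_m / math.tan(math.radians(1 / 60))

for pixels, label in [(1920, "FullHD"), (3840, "4K"), (7680, "8K")]:
    print(f'{label}: beyond ~{pixels_blur_beyond_m(43, pixels):.1f} m on a 43" screen')
# Roughly: FullHD ~1.7 m, 4K ~0.9 m, 8K ~0.4 m, which matches the
# "normal viewing distance" experience above.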
> 8K is delivered from space already by a japanese provider, but not on IP. So, if we discuss TV resolutions we should look at these (8K, 16K, and why not 3D 16K for ever more strength testing).
[SM] This might work as a marketing gimmick, but if 16K does not deliver a clearly superior experience, why bother putting in the extra effort and energy (and storage size and network capacity) to actually deliver something like that?
>
> Second, 4K etc. are for TV. In TV the latency is rarely if ever an issue. There are some rare cases where latency is very important in TV (I could think of betting in sports, time synch of clocks) but they dont look at such low latency as in our typical visioconference or remote surgery
[SM] Can we please bury this example? "Remote surgery" over the best-effort Internet is a really bad idea, or something that should only ever be attempted as a very last resort. As a society we have already failed if we need to rely on something like that.
> or group music playing use-cases
[SM] That IMHO is a great example of a realistic low-latency use case (exactly because the failure mode is not as catastrophic as in telesurgery, so this seems acceptable for a best-effort network).
> on Internet starlink.
>
> So, I dont know how much 4K, 8K, 16K might be imposing any new latency requirement on starlink.
>
> Alex
>
>>
>>
>> -----Original Message-----
>> From: Spencer Sevilla
>> Sent: Friday, March 15, 2024 3:54 PM
>> To: Colin_Higbie
>> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>> Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all.
>>
>> I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s.
>>
>> See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant.
>>
>>> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>>> I have now been trying to break the common conflation that download "speed"
>>>> means anything at all for day to day, minute to minute, second to
>>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>>
>>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>>
>>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>>
>>> Latency: below 50ms under load always feels good except for some
>>> intensive gaming (I don't see any benefit to getting loaded latency
>>> further below ~20ms for typical applications, with an exception for
>>> cloud-based gaming that benefits with lower latency all the way down
>>> to about 5ms for young, really fast players, the rest of us won't be
>>> able to tell the difference)
>>>
>>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>>> streaming
>>>
>>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>>> depending on # of streams or if wanting to be ready for 8k
>>>
>>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>>> higher only needed for multiple concurrent outbound streams
>>>
>>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>>
>>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>>
>>> Cheers,
>>> Colin
>>>
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-15 23:07 ` David Lang
@ 2024-03-16 18:45 ` Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 23:05 ` David Lang
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
1 sibling, 2 replies; 27+ messages in thread
From: Colin_Higbie @ 2024-03-16 18:45 UTC (permalink / raw)
To: David Lang, Dave Taht via Starlink
Beautifully said, David Lang. I completely agree.
At the same time, I do think that if you give people tools where latency is rarely an issue (say a 10x improvement, so the perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
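A trivial sketch of that budget arithmetic (the 500ms tolerance and the 10ms/50ms RTTs are just the illustrative numbers above):

# Python sketch: how many sequential client-server round trips fit
# inside the lag a user will tolerate.
def max_round_trips(lag_budget_ms, rtt_ms):
    return int(lag_budget_ms // rtt_ms)

print(max_round_trips(500, 10))  # 50 exchanges fit at 10ms RTT
print(max_round_trips(500, 50))  # only 10 fit at 50ms, so operations must be bundled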
I remember programming computer games in the 1980s when the typical RAM that users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out, use 48kB, and stop wasting time shoehorning my code or loading segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms.
In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use-cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.
That's not to say that 20ms isn't better than 100ms for VoIP, or that 500ms isn't better than 1,000ms for web browsing, just that the value of each further incremental reduction in latency drops significantly once you get past that good-enough point. However, those further improvements may open up entirely new applications, such as enabling VoIP where before the connection was only "good enough" for web browsing (think geosynchronous satellites).
In other words, more important than just chasing ever lower latency is providing SUFFICIENTLY LOW latency for users to perform their intended applications. Getting even lower is still great for opening things up to new applications we never considered before, just as faster CPUs, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications.
Bandwidth scales a little differently. There are still "good enough" levels: enough for a web page to load in about 5s (as web pages become ever more complex and dynamic, this bandwidth need increases), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's a roughly linear scaling with the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know exactly how latency is affected by additional users, but I know that if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams.
So all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. It can be easily calculated per user based on expected use cases.
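As a minimal sketch of that per-household arithmetic (the per-application figures are the rough estimates above and the household mix is an assumption):

# Python sketch: download need scales roughly linearly with concurrent
# streams, while good loaded latency should stay roughly flat.
PER_STREAM_MBPS = {"4K stream": 20, "HD stream": 5, "video call": 7, "web/other": 5}

household = ["4K stream", "4K stream", "HD stream", "video call", "web/other"]
needed = sum(PER_STREAM_MBPS[use] for use in household)
print(f"~{needed} Mbps download at the 7pm peak")  # ~57 Mbps for this mix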
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Friday, March 15, 2024 7:08 PM
To: Spencer Sevilla <spencer.builds.networks@gmail.com>
Cc: Colin_Higbie <CHigbie1@Higbie.name>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Itʼs the Latency, FCC
one person's 'wasteful resolution' is another person's 'large enhancement'
going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way.
saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be.
I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time.
What right do you have to say that the programmer's time is less important than the ram/bandwidth used?
I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs.
David Lang
* Re: [Starlink] It?s the Latency, FCC
2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-03-16 18:51 ` Gert Doering
2024-03-16 23:08 ` David Lang
1 sibling, 1 reply; 27+ messages in thread
From: Gert Doering @ 2024-03-16 18:51 UTC (permalink / raw)
To: David Lang; +Cc: Spencer Sevilla, Dave Taht via Starlink, Colin_Higbie
Hi,
On Fri, Mar 15, 2024 at 04:07:54PM -0700, David Lang via Starlink wrote:
> What right do you have to say that the programmer's time is less important
> than the ram/bandwidth used?
A single computer programmer saving some two hours in not optimizing code,
costing 1 extra minute for each of a million users.
Bad trade-off in lifetime well-spent.
Gert Doering
-- NetMaster
--
have you enabled IPv6 on something today...?
SpaceNet AG Vorstand: Sebastian v. Bomhard, Michael Emmer,
Ingo Lalla, Karin Schuler
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444 USt-IdNr.: DE813185279
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 19:26 ` Colin_Higbie
2024-03-16 23:05 ` David Lang
1 sibling, 1 reply; 27+ messages in thread
From: Sebastian Moeller @ 2024-03-16 19:09 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, Dave Taht via Starlink
> On 16. Mar 2024, at 19:45, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Beautifully said, David Lang. I completely agree.
>
> At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
>
> I remember programming computer games in the 1980s and the typical RAM users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out and use 48kB and stop wasting time shoehorning my code or loading-in segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as the latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms.
>
> In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use-cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.
>
> That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't better than 1,000 for web browsing, just that the value for each further incremental reduction in latency drops significantly once you get to that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before maybe it was only "good enough" for web browsing (think geosynchronous satellites).
>
> In other words, more important than just chasing ever lower latency, it's important to provide SUFFICIENTLY LOW latency for users to perform their intended applications. Getting even lower is still great for opening things up to new applications we never considered before, just like faster CPU's, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications.
>
> Bandwidth scales a little differently. There's still a "good enough" level driven by time for a web page to load of about 5s (as web pages become ever more complex and dynamic, this means that bandwidth needs increase), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's also a linear scaling to the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know how latency is affected by users, but I know if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams.
>
> So all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. It can be easily calculated per user based on expected use cases.
Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently: instead of a hard cliff there tends to be a slowly increasing degradation...
And latency over the internet is never guaranteed, just as network paths outside a single AS are rarely guaranteed...
Now for different applications there are different amounts of delay that users find acceptable; for reaction-time-gated games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and latency in the immediate access network will eat into this budget irrevocably... and that e.g. restricts the "cone" of the world that can be reached/communicated with within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
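A rough sketch of that budget idea (my own arithmetic; it assumes light in fibre at roughly 200,000 km/s, so each millisecond of round-trip budget buys about 100 km of radius):

# Latency-budget sketch: how much geographic "reach" is left once the access
# link and any jitter buffers have taken their share of the budget.
KM_PER_MS_RTT = 100  # ~200,000 km/s in fibre, divided by two for the round trip

def reachable_radius_km(budget_ms, access_ms, jitter_buffer_ms=0, server_ms=0):
    remaining_ms = budget_ms - access_ms - jitter_buffer_ms - server_ms
    return max(remaining_ms, 0) * KM_PER_MS_RTT

# 100 ms gaming budget: a clean 5 ms access link vs. 60 ms of bufferbloat.
print(reachable_radius_km(100, access_ms=5))   # 9500 km of reach
print(reachable_radius_km(100, access_ms=60))  # 4000 km of reach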
Regards
Sebastian
>
> Cheers,
> Colin
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Friday, March 15, 2024 7:08 PM
> To: Spencer Sevilla <spencer.builds.networks@gmail.com>
> Cc: Colin_Higbie <CHigbie1@Higbie.name>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
> one person's 'wasteful resolution' is another person's 'large enhancement'
>
> going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way.
>
> saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be.
>
> I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time.
>
> What right do you have to say that the programmer's time is less important than the ram/bandwidth used?
>
> I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs.
>
> David Lang
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 19:09 ` Sebastian Moeller
@ 2024-03-16 19:26 ` Colin_Higbie
2024-03-16 19:45 ` Sebastian Moeller
0 siblings, 1 reply; 27+ messages in thread
From: Colin_Higbie @ 2024-03-16 19:26 UTC (permalink / raw)
To: Sebastian Moeller, Dave Taht via Starlink
Sebastian,
Not sure if we're saying the same thing here or not. While I would agree with the statement that all else being equal, lower latency is always better than higher latency, there are definitely latency (and bandwidth) requirements, where if the latency is higher (or the bandwidth lower) than those requirements, the application becomes non-viable. That's what I mean when I say it falls short of being "good enough."
For example, you cannot have a pleasant phone call with someone if latency exceeds 1s. Yes, the call may go through, but it's a miserable UX. For VoIP, I would suggest the latency ceiling, even under load, is 100ms. That's a bit arbitrary and I'd accept any number roughly in that ballpark. If a provider's latency gets worse than that for more than a few percent of packets, then that provider should not be able to claim that their Internet supports VoIP.
If the goal is to ensure that Internet providers, including Starlink, are going to provide Internet to meet the needs of users, it is essential to understand the good-enough levels for the expected use cases of those users.
And we can do that based on the most common end-user applications:
Web browsing
VoIP
Video Conferencing
HD Streaming
4K HDR Streaming
Typical Gaming
Competitive Gaming
And maybe throw in to help users: time to DL and UL a 1GB file
Similarly, if we're going to evaluate the merits of government policy for defining latency and bandwidth requirements to qualify for earning taxpayer support, that comes down essentially to understanding those good-enough levels.
Cheers,
Colin
-----Original Message-----
From: Sebastian Moeller <moeller0@gmx.de>
Sent: Saturday, March 16, 2024 3:10 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: David Lang <david@lang.hm>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Itʼs the Latency, FCC
>...
Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently, instead of a hard cliff there tends to be a slow increasing degradation...
And latency over the internet is never guaranteed, just as network paths outside a single AS rarely are guaranteed...
Now for different applications there are different amounts of delay that users find acceptable, for reaction time gates games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and and latency in the immediate access network will eat into this budget irrevocably ... and that e.g. restricts the "conus" of the world that can be reached/communicated within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
Regards
Sebastian
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 19:26 ` Colin_Higbie
@ 2024-03-16 19:45 ` Sebastian Moeller
0 siblings, 0 replies; 27+ messages in thread
From: Sebastian Moeller @ 2024-03-16 19:45 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
Hi Colin,
> On 16. Mar 2024, at 20:26, Colin_Higbie <CHigbie1@Higbie.name> wrote:
>
> Sebastian,
>
> Not sure if we're saying the same thing here or not. While I would agree with the statement that all else being equal, lower latency is always better than higher latency, there are definitely latency (and bandwidth) requirements, where if the latency is higher (or the bandwidth lower) than those requirements, the application becomes non-viable. That's what I mean when I say it falls short of being "good enough."
>
> For example, you cannot have a pleasant phone call with someone if latency exceeds 1s. Yes, the call may go through, but it's a miserable UX.
[SM] That is my point: miserable but still functional, while running a 100Kbps constant-bitrate VoIP stream over a 50 Kbps link means the loss will make things completely unintelligible...
> For VoIP, I would suggest the latency ceiling, even under load, is 100ms. That's a bit arbitrary and I'd accept any number roughly in that ballpark. If a provider's latency gets worse than that for more than a few percent of packets, then that provider should not be able to claim that their Internet supports VoIP.
[SM] Interestingly, the ITU claims that a one-way mouth-to-ear delay of up to 200ms (aka 400ms RTT) results in "very satisfied" telephony customers, and up to 300ms OWD in "satisfied" ones (ITU-T Rec. G.114 (05/2003)). That is considerably above your 100ms RTT. Now, I am still trying to find the actual psychophysics studies the ITU used to come to that conclusion (as I do not believe the numbers show what they are claimed to show), but policy makers still look at these numbers and take them as valid references.
> If the goal is to ensure that Internet providers, including Starlink, are going to provide Internet to meet the needs of users, it is essential to understand the good-enough levels for the expected use cases of those users.
>
> And we can do that based on the most common end-user applications:
>
> Web browsing
> VoIP
> Video Conferencing
> HD Streaming
> 4K HDR Streaming
> Typical Gaming
> Competitive Gaming
> And maybe throw in to help users: time to DL and UL a 1GB file
[SM] Only if we think of latency as a budget: if I can play competitively with a latency of up to 100ms, then any millisecond of delay I am spending on the access link is going to shrink the radius of the "cone" of players I can be matched with by approximately 100Km...
> Similarly, if we're going to evaluate the merits of government policy for defining latency and bandwidth requirements to qualify for earning taxpayer support, that comes down essentially to understanding those good-enough levels.
[SM] Here is the rub: for 100 Kbps VoIP it is pretty simple to understand that it needs capacity >= 100 Kbps, but if competitive gaming needs an RTT <= 100 ms, what is an acceptable split between the access link and distance?
>
> Cheers,
> Colin
>
> -----Original Message-----
> From: Sebastian Moeller <moeller0@gmx.de>
> Sent: Saturday, March 16, 2024 3:10 PM
> To: Colin_Higbie <CHigbie1@Higbie.name>
> Cc: David Lang <david@lang.hm>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
>> ...
>
> Well, for most applications there is an absolute lower capacity limit below which it does not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently, instead of a hard cliff there tends to be a slow increasing degradation...
> And latency over the internet is never guaranteed, just as network paths outside a single AS rarely are guaranteed...
> Now for different applications there are different amounts of delay that users find acceptable, for reaction time gates games this will be lower, for correspondence chess with one move per day this will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency variation buffers...), and and latency in the immediate access network will eat into this budget irrevocably ... and that e.g. restricts the "conus" of the world that can be reached/communicated within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough and hard to claim that any random latency number is too large.
>
> Regards
> Sebastian
>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 17:36 ` Sebastian Moeller
@ 2024-03-16 22:51 ` David Lang
0 siblings, 0 replies; 27+ messages in thread
From: David Lang @ 2024-03-16 22:51 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Alexandre Petrescu, Dave Taht via Starlink
[-- Attachment #1: Type: TEXT/PLAIN, Size: 2613 bytes --]
On Sat, 16 Mar 2024, Sebastian Moeller via Starlink wrote:
> Hi Alex...
>
>> On 16. Mar 2024, at 18:18, Alexandre Petrescu via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>
>> Le 15/03/2024 à 21:31, Colin_Higbie via Starlink a écrit :
>>> Spencer, great point. We certainly see that with RAM, CPU, and graphics power that the software just grows to fill up the space. I do think that there are still enough users with bandwidth constraints (millions of users limited to DSL and 7Mbps DL speeds) that it provides some pressure against streaming and other services requiring huge swaths of data for basic functions, but, to your point, if there were a mandate that everyone would have 100Mbps connection, I agree that would then quickly become saturated so everyone would need more.
>>>
>>> Fortunately, the video compression codecs have improved dramatically over the past couple of decades from MPEG-1 to MPEG-2 to H.264 to VP9 and H.265. There's still room for further improvements, but I think we're probably getting to a point of diminishing returns on further compression improvements. Even with further improvements, I don't think we'll see bandwidth needs drop so much as improved quality at the same bandwidth, but this does offset the natural bloat-to-fill-available-capacity movement we see.
>>
>> I think the 4K-latency discussion is a bit difficult, regardless of how great the codecs are.
>>
>> For one, 4K can be considered outdated for those who look forward to 8K and why not 16K; so we should forget 4K.
>
> [SM] Mmmh, numerically that might make sense, however increasing the resolution of video material brings diminishing returns in perceived quality (the human optical system has limits...).... I remember well how the steps from QVGA, to VGA/SD to HD (720) to FullHD (1080) each resulted in an easily noticeable improvement in quality. However now I have a hard time seeing an improvement (heck even just noticing) if I see fullHD of 4K material on our 43" screen from a normal distance (I need to do immediate A?B comparisons from short distance)....
> I am certainly not super sensitive/picky, but I guess others will reach the same point maybe after 4K or after 8K. My point is the potential for growth in resolution is limited by psychophysics (ultimately driven by the visual arc covered by individual photoreceptors in the fovea). And I am not sure whether for normal screen sizes and distances we do not already have past that point at 4K....
true, but go to a 70" screen, or use it for a computer display instead of a TV, and you notice it much more easily.
David Lang
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
@ 2024-03-16 23:05 ` David Lang
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
1 sibling, 1 reply; 27+ messages in thread
From: David Lang @ 2024-03-16 23:05 UTC (permalink / raw)
To: Colin_Higbie; +Cc: Dave Taht via Starlink
On Sat, 16 Mar 2024, Colin_Higbie wrote:
> At the same time, I do think if you give people tools where latency is rarely
> an issue (say a 10x improvement, so perception of 1/10 the latency),
> developers will be less efficient UNTIL that inefficiency begins to yield poor
> UX. For example, if I know I can rely on latency being 10ms and users don't
> care until total lag exceeds 500ms, I might design something that uses a lot
> of back-and-forth between client and server. As long as there are fewer than
> 50 iterations (500 / 10), users will be happy. But if I need to do 100
> iterations to get the result, then I'll do some bundling of the operations to
> keep the total observable lag at or below that 500ms.
I don't think developers think about latency at all (as a general rule). They develop and test over their local LAN, and assume it will 'just work' over the Internet.
> In terms of user experience (UX), I think of there as being "good enough"
> plateaus based on different use-cases. For example, when web browsing, even
> 1,000ms of latency is barely noticeable. So any web browser application that
> comes in under 1,000ms will be "good enough." For VoIP, the "good enough"
> figure is probably more like 100ms. For video conferencing, maybe it's 80ms
> (the ability to see the person's facial expression likely increases the
> expectation of reactions and reduces the tolerance for lag). For some forms of
> cloud gaming, the "good enough" figure may be as low as 5ms.
1 second for the page to load is acceptable (not nice), but a one-second delay in reacting to a click is unacceptable.
As I understand it, below 100ms is considered an 'instantaneous response' for most people.
> That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't
> better than 1,000 for web browsing, just that the value for each further
> incremental reduction in latency drops significantly once you get to that
> good-enough point. However, those further improvements may open entirely new
> applications, such as enabling VoIP where before maybe it was only "good
> enough" for web browsing (think geosynchronous satellites).
the problem is that latency stacks: you click on the web page, you do a DNS lookup for the page, then an HTTP request for the page contents, which triggers an HTTP request for a CSS page, and possibly multiple DNS/HTTP requests for libraries
so a 100ms latency on the network can result in multi-second page load times for the user (even if all of the content ends up being cached already)
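a simplified model of that stacking (real browsers parallelize and reuse connections, so treat the request counts below as illustrative worst-case assumptions):

# Serial page-load model: every dependency that must be discovered before it
# can be fetched pays DNS + TCP handshake + TLS handshake + HTTP round trips.
def serial_page_load_ms(rtt_ms, dns_lookups=3, sequential_requests=6):
    round_trips = dns_lookups + 3 * sequential_requests
    return round_trips * rtt_ms

for rtt in (20, 100, 600):
    print(f"RTT {rtt:>3} ms -> ~{serial_page_load_ms(rtt) / 1000:.1f} s to a usable page")
# 20 ms -> ~0.4 s, 100 ms -> ~2.1 s, 600 ms -> ~12.6 s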
<snip a bunch of good discussion>
> So all taken together, there can be fairly straightforward descriptions of
> latency and bandwidth based on expected usage. These are not mysterious
> attributes. It can be easily calculated per user based on expected use cases.
however, the lag between new uses showing up and changes to the network driven
by those new uses is multiple years long, so the network operators and engineers
need to be proactive, not reactive.
don't wait until the users are complaining before upgrading bandwidth/latency
David Lang
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 18:51 ` [Starlink] It’s the Latency, FCC Gert Doering
@ 2024-03-16 23:08 ` David Lang
0 siblings, 0 replies; 27+ messages in thread
From: David Lang @ 2024-03-16 23:08 UTC (permalink / raw)
To: Gert Doering; +Cc: Spencer Sevilla, Dave Taht via Starlink, Colin_Higbie
if that one programmer's code is used by millions of users, you are correct, but
if it's used by dozens of users, not so much.
David Lang
On Sat, 16 Mar 2024, Gert Doering wrote:
> Hi,
>
> On Fri, Mar 15, 2024 at 04:07:54PM -0700, David Lang via Starlink wrote:
>> What right do you have to say that the programmer's time is less important
>> than the ram/bandwidth used?
>
> A single computer programmer saving some two hours in not optimizing code,
> costing 1 extra minute for each of a million of users.
>
> Bad trade-off in lifetime well-spent.
>
> Gert Doering
> -- NetMaster
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-16 23:05 ` David Lang
@ 2024-03-17 15:47 ` Colin_Higbie
2024-03-17 16:17 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown
0 siblings, 1 reply; 27+ messages in thread
From: Colin_Higbie @ 2024-03-17 15:47 UTC (permalink / raw)
To: David Lang, Dave Taht via Starlink
David,
Just on that one point that you "don't think developers think about latency at all," what developers (en masse, and as managed by their employers) care about is the user experience. If they don't think latency is an important part of the UX, then indeed they won't think about it. However, if latency is vital to the UX, such as in gaming or voice and video calling, it will be a focus.
Standard QA will include use cases that they believe reflect the majority of their users. We have done testing with artificially high latencies to simulate geosynchronous satellite users, back when they represented a notable portion of our userbase. They no longer do (thanks to services like Starlink, the recent proliferation of FTTH, and even the continued spread of slower cable and DSL into more rural areas), so we no longer include those high latencies in our testing. This does indeed mean that our services will probably become less tolerant of higher latencies (and if we still have any geosynchronous satellite customers, they may resent this possible degradation in service). Some could call this lazy on our part, but it's just doing what's cost-effective for most of our users.
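For what it's worth, one cheap way to approximate that kind of testing without dedicated network gear is to push the client traffic through a small delay-adding proxy. A minimal sketch (the addresses and ports are placeholders, and this is not our actual test harness):

import asyncio

DELAY_S = 0.3  # one-way delay injected per direction, roughly a GEO-satellite-like RTT

async def pipe(reader, writer, delay):
    # Hold each chunk for `delay` seconds before forwarding it.
    try:
        while data := await reader.read(65536):
            await asyncio.sleep(delay)
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    # Forward everything to the real service, adding delay in both directions.
    upstream_reader, upstream_writer = await asyncio.open_connection("127.0.0.1", 8080)
    await asyncio.gather(
        pipe(client_reader, upstream_writer, DELAY_S),
        pipe(upstream_reader, client_writer, DELAY_S),
    )

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 9090)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())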
I'm estimating, but I think probably about 3 sigma (roughly 99.7%) of our users have a typical (unloaded) latency of under 120ms. You or others on this list probably know better than I do what fraction of our users will suffer bufferbloat severe enough to push a perceptible percentage of their transactions beyond 200ms.
Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
This is rational and what we should expect and want application and solution developers to do. We would not want developers to spend time, and thereby increase costs, focusing on areas that are not particularly important to their users and customers.
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Saturday, March 16, 2024 7:06 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: RE: [Starlink] It’s the Latency, FCC
On Sat, 16 Mar 2024, Colin_Higbie wrote:
> At the same time, I do think if you give people tools where latency is
> rarely an issue (say a 10x improvement, so perception of 1/10 the
> latency), developers will be less efficient UNTIL that inefficiency
> begins to yield poor UX. For example, if I know I can rely on latency
> being 10ms and users don't care until total lag exceeds 500ms, I might
> design something that uses a lot of back-and-forth between client and
> server. As long as there are fewer than
> 50 iterations (500 / 10), users will be happy. But if I need to do 100
> iterations to get the result, then I'll do some bundling of the
> operations to keep the total observable lag at or below that 500ms.
I don't think developers think about latency at all (as a general rule)
^ permalink raw reply [flat|nested] 27+ messages in thread
* [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
@ 2024-03-17 16:17 ` Dave Collier-Brown
0 siblings, 0 replies; 27+ messages in thread
From: Dave Collier-Brown @ 2024-03-17 16:17 UTC (permalink / raw)
To: starlink
On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:
> Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
Does that suggest that you should have an easy way to see if you're
unexpectedly delivering a slow service? A tool that reports your RTT to
customers and an alert on it being high for a significant period might
be something all ISPs want, even ones like mine, who just want it to be
able to tell a customer "you don't have a network problem" (;-))
And the FCC might find the data illuminating
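A minimal sketch of the kind of watchdog I have in mind (the target addresses and thresholds are made-up placeholders):

import socket, statistics, time

TARGETS = [("198.51.100.10", 443), ("198.51.100.20", 443)]  # hypothetical customer-facing hosts
THRESHOLD_MS = 100   # alert if the median RTT stays above this
WINDOW = 30          # samples kept per target (~5 minutes at 10 s intervals)

history = {t: [] for t in TARGETS}

def rtt_ms(host, port, timeout=2.0):
    """Approximate the RTT as the time to complete a TCP handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

while True:
    for target in TARGETS:
        try:
            sample = rtt_ms(*target)
        except OSError:
            sample = float("inf")  # treat failures as very high latency
        samples = history[target]
        samples.append(sample)
        del samples[:-WINDOW]
        if len(samples) == WINDOW and statistics.median(samples) > THRESHOLD_MS:
            print(f"ALERT: median RTT to {target[0]} has been "
                  f"{statistics.median(samples):.0f} ms over the last window")
    time.sleep(10)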
--dave
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dave.collier-brown@indexexchange.com | -- Mark Twain
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] It’s the Latency, FCC
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
@ 2024-04-30 0:39 ` David Lang
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
1 sibling, 1 reply; 27+ messages in thread
From: David Lang @ 2024-04-30 0:39 UTC (permalink / raw)
To: Colin Higbie; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 3755 bytes --]
hmm, before my DSL got disconnected (the carrier decided they didn't want to support it any more), I could stream 4k at 8Mbps down if there wasn't too much other activity on the network (doing so at 2x speed was a problem)
David Lang
On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
> Date: Fri, 15 Mar 2024 18:32:36 +0000
> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
> Reply-To: Colin Higbie <colin.higbie@scribl.com>
> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to second, use,
>> once you crack 10mbit, now, for over 14 years. Am I succeeding? I lost the 25/10
>> battle, and keep pointing at really terrible latency under load and wifi weirdnesses
>> for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some intensive gaming
> (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-04-30 0:39 ` [Starlink] It’s " David Lang
@ 2024-04-30 1:30 ` Colin_Higbie
2024-04-30 2:16 ` David Lang
0 siblings, 1 reply; 27+ messages in thread
From: Colin_Higbie @ 2024-04-30 1:30 UTC (permalink / raw)
To: David Lang; +Cc: starlink
Was that 4K HDR (not SDR) using the standard protocols that streaming services use (Netflix, Amazon Prime, Disney+, etc.), or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then that means it's SDR. Note that YouTube, if left to the default of Auto for streaming resolution, will also automatically drop the quality to something that fits within the bandwidth, and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually capture distinct per-pixel image detail, meaning the video compresses down to a smaller image with no real additional loss in picture quality, but only because it isn't really a 4K UHD stream to begin with.
Note that 4K video compression codecs are lossy, so the lower the quality of the initial image, the lower the bandwidth needed to convey the stream without additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on New Year's Eve or at the Super Bowl, makes for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving, fast-panning, fully dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
I'm dubious that 8Mbps can handle that except for some of the simplest video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
It's obviously in Netflix and the other streaming services' interest to be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
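As a rough sanity check on those numbers (my own back-of-the-envelope arithmetic, not anything published by the streaming services), consider how few bits per pixel the encoder gets to work with at those rates:

# Bits per pixel available to a 4K encoder at various stream bitrates.
width, height, fps = 3840, 2160, 24        # 4K UHD at a typical film frame rate
pixels_per_second = width * height * fps   # ~199 million pixels per second

for mbps in (8, 15, 25):
    bits_per_pixel = (mbps * 1_000_000) / pixels_per_second
    print(f"{mbps:>2} Mbps -> {bits_per_pixel:.3f} bits per pixel")
# 8 Mbps -> ~0.040 bpp, 15 Mbps -> ~0.075 bpp, 25 Mbps -> ~0.126 bpp

At roughly 0.04 bits per pixel, the codec can only keep up when most of the frame barely changes from one moment to the next, which is why simple or static scenes survive at 8Mbps while something like falling confetti does not.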
Cheers,
Colin
-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Monday, April 29, 2024 8:40 PM
To: Colin Higbie <colin.higbie@scribl.com>
Cc: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] Itʼs the Latency, FCC
hmm, before my DSL got disconnected (the carrier decided they didn't want to support it any more), I could stream 4k at 8Mb down if there wasn't too much other activity on the network (doing so at 2x speed was a problem)
David Lang
On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
> Date: Fri, 15 Mar 2024 18:32:36 +0000
> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
> Reply-To: Colin Higbie <colin.higbie@scribl.com>
> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It’s the Latency, FCC
>
>> I have now been trying to break the common conflation that download "speed"
>> means anything at all for day to day, minute to minute, second to
>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>> succeeding? I lost the 25/10 battle, and keep pointing at really
>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>
> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>
> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>
> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>
> Latency: below 50ms under load always feels good except for some
> intensive gaming (I don't see any benefit to getting loaded latency
> further below ~20ms for typical applications, with an exception for
> cloud-based gaming that benefits with lower latency all the way down
> to about 5ms for young, really fast players, the rest of us won't be
> able to tell the difference)
>
> Download Bandwidth: 10Mbps good enough if not doing UHD video
> streaming
>
> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
> depending on # of streams or if wanting to be ready for 8k
>
> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
> higher only needed for multiple concurrent outbound streams
>
> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>
> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>
> Cheers,
> Colin
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
@ 2024-04-30 2:16 ` David Lang
0 siblings, 0 replies; 27+ messages in thread
From: David Lang @ 2024-04-30 2:16 UTC (permalink / raw)
To: Colin_Higbie; +Cc: David Lang, starlink
[-- Attachment #1: Type: text/plain, Size: 6673 bytes --]
Amazon, youtube set explicitly to 4k (I didn't say HDR)
David Lang
On Tue, 30 Apr 2024, Colin_Higbie wrote:
> Date: Tue, 30 Apr 2024 01:30:21 +0000
> From: Colin_Higbie <CHigbie1@Higbie.name>
> To: David Lang <david@lang.hm>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: RE: [Starlink] Itʼs the Latency, FCC
>
> Was that 4K HDR (not SDR) using the standard protocols that streaming services use (Netflix, Amazon Prime, Disney+, etc.) or was it just some YouTube 4K SDR videos? YouTube will show "HDR" on the gear icon for content that's 4K HDR. If it only shows "4K" instead of "HDR," then means it's SDR. Note that if YouTube, if left to the default of Auto for streaming resolution it will also automatically drop the quality to something that fits within the bandwidth and most of the "4K" content on YouTube is low-quality and not true UHD content (even beyond missing HDR). For example, many smartphones will record 4K video, but their optics are not sufficient to actually have distinct per-pixel image detail, meaning it compresses down to a smaller image with no real additional loss in picture quality, but only because it's really a 4K UHD stream to begin with.
>
> Note that 4K video compression codecs are lossy, so the lower quality the initial image, the lower the bandwidth needed to convey the stream w/o additional quality loss. The needed bandwidth also changes with scene complexity. Falling confetti, like on Newy Year's Eve or at the Super Bowl make for one of the most demanding scenes. Lots of detailed fire and explosions with fast-moving fast panning full dynamic backgrounds are also tough for a compressed signal to preserve (but not as hard as a screen full of falling confetti).
>
> I'm dubious that 8Mbps can handle that except for some of the simplest video, like cartoons or fairly static scenes like the news. Those scenes don't require much data, but that's not the case for all 4K HDR scenes by any means.
>
> It's obviously in Netflix and the other streaming services' interest to be able to sell their more expensive 4K HDR service to as many people as possible. There's a reason they won't offer it to anyone with less than 25Mbps – they don't want the complaints and service calls. Now, to be fair, 4K HDR definitely doesn’t typically require 25Mbps, but it's to their credit that they do include a small bandwidth buffer. In my experience monitoring bandwidth usage for 4K HDR streaming, 15Mbps is the minimum if doing nothing else and that will frequently fall short, depending on the 4K HDR content.
>
> Cheers,
> Colin
>
>
>
> -----Original Message-----
> From: David Lang <david@lang.hm>
> Sent: Monday, April 29, 2024 8:40 PM
> To: Colin Higbie <colin.higbie@scribl.com>
> Cc: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
> hmm, before my DSL got disconnected (the carrier decided they didn't want to support it any more), I could stream 4k at 8Mb down if there wasn't too much other activity on the network (doing so at 2x speed was a problem)
>
> David Lang
>
>
> On Fri, 15 Mar 2024, Colin Higbie via Starlink wrote:
>
>> Date: Fri, 15 Mar 2024 18:32:36 +0000
>> From: Colin Higbie via Starlink <starlink@lists.bufferbloat.net>
>> Reply-To: Colin Higbie <colin.higbie@scribl.com>
>> To: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It’s the Latency, FCC
>>
>>> I have now been trying to break the common conflation that download "speed"
>>> means anything at all for day to day, minute to minute, second to
>>> second, use, once you crack 10mbit, now, for over 14 years. Am I
>>> succeeding? I lost the 25/10 battle, and keep pointing at really
>>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today.
>>
>> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content.
>>
>> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams.
>>
>> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about:
>>
>> Latency: below 50ms under load always feels good except for some
>> intensive gaming (I don't see any benefit to getting loaded latency
>> further below ~20ms for typical applications, with an exception for
>> cloud-based gaming that benefits with lower latency all the way down
>> to about 5ms for young, really fast players, the rest of us won't be
>> able to tell the difference)
>>
>> Download Bandwidth: 10Mbps good enough if not doing UHD video
>> streaming
>>
>> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming,
>> depending on # of streams or if wanting to be ready for 8k
>>
>> Upload Bandwidth: 10Mbps good enough for quality video conferencing,
>> higher only needed for multiple concurrent outbound streams
>>
>> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.
>>
>> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023).
>>
>> Cheers,
>> Colin
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-18 20:00 ` David Lang
@ 2024-03-19 16:06 ` David Lang
0 siblings, 0 replies; 27+ messages in thread
From: David Lang @ 2024-03-19 16:06 UTC (permalink / raw)
To: David Lang; +Cc: Sebastian Moeller, Colin_Higbie, starlink
On Mon, 18 Mar 2024, David Lang wrote:
> On Mon, 18 Mar 2024, Sebastian Moeller wrote:
>
>>> I'll point out that professional still cameras (DSLRs and the new
>>> mirrorless ones) also seem to have stalled with the top-of-the-line Canon
>>> and Nikon topping out at around 20-24 mp (after selling some models that
>>> went to 30p or so), Sony has some models at 45 mp.
correction to my earlier post: 8k video is ~30 megapixels, 4k video is about 8 megapixels. So cameras and lenses can easily handle 8k video (in terms of quality); beyond that, it seems that even professional photographers whose work is going to be blown up into big posters seldom bother going to higher resolutions.
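quick arithmetic behind those figures, using the standard UHD pixel counts:

# Pixel counts for the standard video resolutions (straightforward arithmetic).
resolutions = {"1080p": (1920, 1080), "4K UHD": (3840, 2160), "8K UHD": (7680, 4320)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} megapixels")
# 1080p: 2.1 MP, 4K UHD: 8.3 MP, 8K UHD: 33.2 MP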
David Lang
>> One of the issues is cost, Zour sensor pixels need to be large enough
>> to capture a sufficient amount of photons in a short enough amount of time
>> to be useful, and that puts a (soft) lower limit on how small you can make
>> your pixels... Once your divided up your sensor are into the smalles
>> reasonable pixel size all you can do iso is increase sensor size and hence
>> cost... especially if I am correct in assuming that at one point you also
>> need to increase the diameter of your optics to "feed" the sensor properly.
>> At which point it is not only cost but also size...
>
> I'm talking about full frame high-end professional cameras (the ones where
> the body with no lens costs $8k or so). This has been consistant for over a
> decade. So I don't think it's a cost/manufacturing limit in place here.
>
> There are a lot of cameras made with smaller sensors in similar resolution,
> but very little at much higher resolutions.
>
> at the low end, you will see some small, higher resolution sensors, but those
> are for fixed lens cameras (like phones) where you use digital zoom
>
> David Lang
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-18 19:52 ` Sebastian Moeller
@ 2024-03-18 20:00 ` David Lang
2024-03-19 16:06 ` David Lang
0 siblings, 1 reply; 27+ messages in thread
From: David Lang @ 2024-03-18 20:00 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: David Lang, Colin_Higbie, starlink
On Mon, 18 Mar 2024, Sebastian Moeller wrote:
>> I'll point out that professional still cameras (DSLRs and the new mirrorless
>> ones) also seem to have stalled with the top-of-the-line Canon and Nikon
>> topping out at around 20-24 mp (after selling some models that went to 30p or
>> so), Sony has some models at 45 mp.
>
> One of the issues is cost, Zour sensor pixels need to be large enough to
> capture a sufficient amount of photons in a short enough amount of time to be
> useful, and that puts a (soft) lower limit on how small you can make your
> pixels... Once your divided up your sensor are into the smalles reasonable
> pixel size all you can do iso is increase sensor size and hence cost...
> especially if I am correct in assuming that at one point you also need to
> increase the diameter of your optics to "feed" the sensor properly. At which
> point it is not only cost but also size...
I'm talking about full-frame high-end professional cameras (the ones where the body with no lens costs $8k or so). This has been consistent for over a decade. So I don't think it's a cost/manufacturing limit in place here.
There are a lot of cameras made with smaller sensors in similar resolution, but
very little at much higher resolutions.
at the low end, you will see some small, higher resolution sensors, but those
are for fixed lens cameras (like phones) where you use digital zoom
David Lang
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-18 19:32 ` David Lang
@ 2024-03-18 19:52 ` Sebastian Moeller
2024-03-18 20:00 ` David Lang
0 siblings, 1 reply; 27+ messages in thread
From: Sebastian Moeller @ 2024-03-18 19:52 UTC (permalink / raw)
To: David Lang; +Cc: Colin_Higbie, starlink
Hi David,
> On 18. Mar 2024, at 20:32, David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> On Mon, 18 Mar 2024, Colin_Higbie via Starlink wrote:
>
>> Will that 25Mbps requirement change in the future? Probably. It will probably go up even though 4K HDR streaming will probably be achievable with less bandwidth in the future due to further improvements in compression algorithms. This is because, yeah, eventually maybe 8K or higher resolutions will be a standard, or maybe there will be a higher bit depth HDR (that seems slightly more likely to me). It's not at all clear though that's the case. At some point, you reach a state where there is no benefit to higher resolutions. Phones hit that point a few years ago and have stopped moving to higher resolution displays. There is currently 0% of content from any major provider that's in 8K (just some experimental YouTube videos), and a person viewing 8K would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recording scenes on any standard size viewing experience).
>
> I'll point out that professional still cameras (DSLRs and the new mirrorless ones) also seem to have stalled with the top-of-the-line Canon and Nikon topping out at around 20-24 mp (after selling some models that went to 30p or so), Sony has some models at 45 mp.
One of the issues is cost: your sensor pixels need to be large enough to capture a sufficient number of photons in a short enough amount of time to be useful, and that puts a (soft) lower limit on how small you can make your pixels... Once you have divided your sensor area up into the smallest reasonable pixel size, all you can do is increase sensor size and hence cost... especially if I am correct in assuming that at some point you also need to increase the diameter of your optics to "feed" the sensor properly. At which point it is not only cost but also size...
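To put rough numbers on that (my own arithmetic, assuming a standard 36 mm x 24 mm full-frame sensor and square pixels):

import math

# Approximate pixel pitch on a full-frame sensor at different resolutions.
sensor_w_mm, sensor_h_mm = 36.0, 24.0
for megapixels in (24, 33.2, 45):  # 33.2 MP is roughly the pixel count of 8K video
    pixels = megapixels * 1e6
    pitch_um = math.sqrt(sensor_w_mm * sensor_h_mm / pixels) * 1000  # mm -> µm
    print(f"{megapixels:>5} MP -> ~{pitch_um:.1f} µm pixel pitch")
# 24 MP -> ~6.0 µm, 33.2 MP -> ~5.1 µm, 45 MP -> ~4.4 µm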
Regards
Sebastian
>
> 8k video is in the ballpark of 30mp
>
> David Lang
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie
2024-03-18 16:49 ` Dave Taht
@ 2024-03-18 19:32 ` David Lang
2024-03-18 19:52 ` Sebastian Moeller
1 sibling, 1 reply; 27+ messages in thread
From: David Lang @ 2024-03-18 19:32 UTC (permalink / raw)
To: Colin_Higbie; +Cc: starlink
On Mon, 18 Mar 2024, Colin_Higbie via Starlink wrote:
> Will that 25Mbps requirement change in the future? Probably. It will probably
> go up even though 4K HDR streaming will probably be achievable with less
> bandwidth in the future due to further improvements in compression algorithms.
> This is because, yeah, eventually maybe 8K or higher resolutions will be a
> standard, or maybe there will be a higher bit depth HDR (that seems slightly
> more likely to me). It's not at all clear though that's the case. At some
> point, you reach a state where there is no benefit to higher resolutions.
> Phones hit that point a few years ago and have stopped moving to higher
> resolution displays. There is currently 0% of content from any major provider
> that's in 8K (just some experimental YouTube videos), and a person viewing 8K
> would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD
> -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recording scenes on
> any standard size viewing experience).
I'll point out that professional still cameras (DSLRs and the new mirrorless ones) also seem to have stalled, with the top-of-the-line Canon and Nikon models topping out at around 20-24 MP (after selling some models that went to 30 MP or so); Sony has some models at 45 MP.
8k video is in the ballpark of 30mp
David Lang
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie
@ 2024-03-18 16:49 ` Dave Taht
2024-03-18 19:32 ` David Lang
1 sibling, 0 replies; 27+ messages in thread
From: Dave Taht @ 2024-03-18 16:49 UTC (permalink / raw)
To: Colin_Higbie; +Cc: starlink
I am curious what the real-world bandwidth requirements are for live sports streaming. I imagine during episodes of high motion, encoders struggle.
On Mon, Mar 18, 2024 at 12:42 PM Colin_Higbie via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> To the comments and question from Dave Collier-Brown in response to my saying that we test latency for UX and Alex on 8K screens, both of these seem to take more academic view than I can address on what I view as commercial subjects. By that, I mean that they seem to assume budget and market preferences are secondary considerations rather than the primary driving forces they are to me.
>
> From my perspective, end user/customer experience is ultimately the only important metric, where all others are just tools to help convert UX into something more measurable and quantifiable. To be clear, I fully respect the importance of being able to quantify these things, so those metrics have value, but they should always serve as ways to proxy the UX, not a target unto themselves. If you're designing a system that needs minimal lag for testing your new quantum computer or to use in place of synchronized clocks for those amazing x-ray photos of black holes, then your needs may be different, but if you're talking about how Internet providers measure their latency and bandwidth for sales to millions or billions of homes and businesses, then UX based on mainstream applications is what matters.
>
> To the specifics:
>
> No, we (our company) don't have a detailed latency testing method. We test purely for UX. If users or our QA team report a lag, that's bad and we work to fix it. If QA and users are happy with that and negative feedback is in other areas unrelated to lag (typically the case), then we deem our handling of latency as "good enough" and focus our engineering efforts on the problem areas or on adding new features. Now, I should acknowledge, this is largely because our application is not particularly latency-sensitive. If it were, we probably would have a lag check as part of our standard automated test bed. For us, as long as our application starts to provide our users with streaming access to our data within a second or so, that's good enough.
>
> I realize good-enough is not a hard metric by itself, but it's ultimately the only factor that matters to most users. The exception would be some very specific use cases where 1ms of latency delta makes a difference, like for some stock market transactions and competitive e-sports.
>
> To convert the nebulous term "good enough" into actual metrics that ISP's and other providers can use to quantify their service, I stand by my prior point that the industry could establish needed metrics per application. VoIP has stricter latency needs than web browsing. Cloud-based gaming has still stricter latency requirements. There would be some disagreement on what exactly is "good enough" for each of those, but I'm confident we could reach numbers for them, whether by survey and selecting the median, by reported complaints based on service to establish a minimum acceptable level, or by some other method. I doubt there's significant variance on what qualifies as good-enough for each application.
>
> 4K vs Higher Resolution as Standard
> And regarding 4K TV as a standard, I'm surprised this is controversial. 4K is THE high-end standard that defines bandwidth needs today. It is NOT 8K or anything higher (similarly, in spite of those other capabilities you mentioned, CD's are also still 44.1kHz (48kHz is for DVD), with musical fidelity at a commercial level having DECREASED, not increased, where most sales and streaming occur using lower-quality MP3 files). That's not a subjective statement; that is a fact. By "fact" I don't mean that no one thinks 8K is nice or that higher isn't better, but that there is an established industry standard that has already settled this. Netflix defines it as 25Mbps. The other big streamers, Disney+, Max, and Paramount+, all agree. 25Mbps is higher than is usually needed for 4K HDR content (10-15Mbps can generally hit it, depending on the nature of the scenes: slow scenes with a lot of solid background color, like cartoons, compress into less data than fast-moving, visually complex scenes), but it's a good figure to use because it includes a safety margin and, more importantly, it's what the industry has already defined as the requirement. To me, this one is very black and white and clear cut, even more so than latency. IF you're an Internet provider and want to claim that your Internet supports modern viewing standards for streaming, you must provide 25Mbps. I'm generally happy to debate anything and acknowledge other points of view are just as valid as my own, but I don't see this particular point as debatable, because it's a defined fact by the industry. It's effectively too late to challenge this. At best, you'd be fighting customers and content providers alike and to what purpose?
>
> Will that 25Mbps requirement change in the future? Probably. It will probably go up even though 4K HDR streaming will probably be achievable with less bandwidth in the future due to further improvements in compression algorithms. This is because, yeah, eventually maybe 8K or higher resolutions will be a standard, or maybe there will be a higher bit depth HDR (that seems slightly more likely to me). It's not at all clear though that's the case. At some point, you reach a state where there is no benefit to higher resolutions. Phones hit that point a few years ago and have stopped moving to higher resolution displays. There is currently 0% of content from any major provider that's in 8K (just some experimental YouTube videos), and a person viewing 8K would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recording scenes on any standard size viewing experience).
>
> Where 8K+ could make a difference would primarily be in rendered content (and the handful of 8K sets sold today play to this market). Standard camera lenses just don't capture a sharp enough picture to benefit from the extra pixels (they can in some cases, but depth of field and human error render these successes isolated to specific kinds of largely static landscape scenes). If the innate fuzziness or blurriness in the image exceeds the size of a pixel, then more pixels don't add any value. However, in a rendered image, like in a video game, those are pixel perfect, so at least there it's possible to benefit from a higher resolution display. But for that, even the top of the line graphics today (Nvidia RTX 4090, now over a year old) can barely generate 4K HDR content with path tracing active at reasonable framerates (60 frames per second), and because of their high cost, those make up only 0.23% of the market as of the most recent data I've seen (this will obviously increase over time).
>
> I could also imagine AI may be able to reduce blurriness in captured video in the future and sharpen it before sending it out to viewers, but we're not there yet. For all these reasons, 8K will remain niche for the time being. There's just no good reason for it. When the Super Bowl (one of the first to offer 4K viewing) advertises that it can be viewed in 8K, that's when you know it's approaching a mainstream option.
>
> On OLED screens and upcoming microLED displays that can achieve higher contrast ratios than LCD, HDR is far more impactful to the UX and viewing experience than further pixel density increases. Current iterations of LCD can't handle this, even though they claim to support HDR, which has given many consumers the wrong impression that HDR is not a big deal. It is not a big deal on LCD's because they cannot achieve the contrast ratios needed for impactful HDR. At least not with today's technology, and probably never, just because the advantages of microLED outweigh the benefits I would expect you could get by improving LCD.
>
> So maybe we go from the current 10-bit/color HDR to something like 12 or 16 bit HDR. That could also increase bandwidth needs at the same 4K display size. Or, maybe the next generation displays won't be screens but will be entire walls built of microLED fabric that justify going to 16K displays at hundreds of inches. At this point, you'd be close to displays that duplicate a window to the outside world (but still far from the brightness of the sun shining through). But there is nothing at that size that will be at consumer scale in the next 10 years. It's at least that far out (12+-bit HDR might land before that on 80-110" screens), and I suspect quite a bit further. It's one thing to move to a larger TV, because there's already infrastructure for that. On the other hand, to go to entire walls made of a display material would need an entirely different supply chain, different manufacturers, installers, cultural change in how we watch and use it, etc. Those kinds of changes take decades.
>
> Cheers,
> Colin
>
>
> Date: Sun, 17 Mar 2024 12:17:11 -0400
> From: Dave Collier-Brown <dave.collier-Brown@indexexchange.com>
> To: starlink@lists.bufferbloat.net
> Subject: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
> Message-ID: <e0f9affe-f205-4f01-9ff5-3dc93abc31ca@indexexchange.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:
>
> > Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
>
> Does that suggest that you should have an easy way to see if you're unexpectedly delivering a slow service? A tool that reports your RTT to customers and an alert on it being high for a significant period might be something all ISPs want, even ones like mine, who just want it to be able to tell a customer "you don't have a network problem" (;-))
>
> And the FCC might find the data illuminating
>
> --dave
>
> --
> David Collier-Brown, | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> dave.collier-brown@indexexchange.com | -- Mark Twain
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sun, 17 Mar 2024 18:00:42 +0100
> From: Alexandre Petrescu <alexandre.petrescu@gmail.com>
> To: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] It’s the Latency, FCC
> Message-ID: <b0b5db3c-baf4-425a-a2c6-38ebc4296e56@gmail.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
>
> Le 16/03/2024 à 20:10, Colin_Higbie via Starlink a écrit :
> > Just to be clear: 4K is absolutely a standard in streaming, with that being the most popular TV being sold today. 8K is not and likely won't be until 80+" TVs become the norm.
>
> I can agree that screen size is one aspect pushing the higher resolutions toward acceptance, but there are more signs indicating that 8K is just around the corner, and 16K right after it.
>
> Consumer recording devices (cameras) have already been doing 8K recording cheaply for a couple of years.
>
> New acronyms beyond simple resolutions are always ready to come up. HDR (high dynamic range) was such an acronym accompanying 4K, so for 8K there might be another, bringing more than just resolution: maybe even more dynamic range, blacker blacks, a wider gamut, support for goggles, etc., for the same screen size.
>
> 8K and 16K playback devices might not yet have a surface on which to exhibit their full power, but when such surfaces become available, these 8K and 16K devices will be ready for them, whereas 4K devices will not be.
>
> A similar evolution can be seen in sound and in crypto: 44.1 kHz CD was enough for everyone, until 88 kHz SACD came about, then DSD64, DSD128 and today DSD1024, which suggests DSD2048 tomorrow. And there are Dolby Atmos and
> 11.1-channel outputs. These too don't yet have the speakers, nor the ears, to take full advantage of them, but in the future they might. In crypto, the 'post-quantum' algorithms are designed to resist brute force by computers that don't yet exist publicly (machines in the few-hundred-qubit range exist, but a computer in the 20,000-qubit range would be needed), but when they do, these crypto algorithms will be ready.
>
> Given that, one could imagine the bandwidth and latency required by a 3D 16K
> DSD1024 quantum-resistant-encrypted multi-party videoconference with gloves, goggles and other interacting devices, running at low latency over Starlink.
>
> The growth trends (4K...) can be identified and the needed latency numbers can be projected.
>
> Alex
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
https://www.youtube.com/watch?v=N0Tmvv5jJKs Epik Mellon Podcast
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 27+ messages in thread
* [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
[not found] <mailman.2503.1710703654.1074.starlink@lists.bufferbloat.net>
@ 2024-03-18 16:41 ` Colin_Higbie
2024-03-18 16:49 ` Dave Taht
2024-03-18 19:32 ` David Lang
0 siblings, 2 replies; 27+ messages in thread
From: Colin_Higbie @ 2024-03-18 16:41 UTC (permalink / raw)
To: starlink
To the comments and question from Dave Collier-Brown in response to my saying that we test latency for UX, and to Alex on 8K screens: both of these seem to take a more academic view than I can address on what I view as commercial subjects. By that, I mean that they seem to assume budget and market preferences are secondary considerations rather than the primary driving forces they are to me.
From my perspective, end user/customer experience is ultimately the only important metric, where all others are just tools to help convert UX into something more measurable and quantifiable. To be clear, I fully respect the importance of being able to quantify these things, so those metrics have value, but they should always serve as ways to proxy the UX, not a target unto themselves. If you're designing a system that needs minimal lag for testing your new quantum computer or to use in place of synchronized clocks for those amazing x-ray photos of black holes, then your needs may be different, but if you're talking about how Internet providers measure their latency and bandwidth for sales to millions or billions of homes and businesses, then UX based on mainstream applications is what matters.
To the specifics:
No, we (our company) don't have a detailed latency testing method. We test purely for UX. If users or our QA team report a lag, that's bad and we work to fix it. If QA and users are happy with that and negative feedback is in other areas unrelated to lag (typically the case), then we deem our handling of latency as "good enough" and focus our engineering efforts on the problem areas or on adding new features. Now, I should acknowledge, this is largely because our application is not particularly latency-sensitive. If it were, we probably would have a lag check as part of our standard automated test bed. For us, as long as our application starts to provide our users with streaming access to our data within a second or so, that's good enough.
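If we ever did need one, a minimal sketch of such a lag check could look like the following (Python, illustrative only; the endpoint URL and the one-second budget are placeholders, not our real service or threshold):

    # Minimal time-to-first-byte check for an automated test bed (sketch).
    import time
    import urllib.request

    BUDGET_SECONDS = 1.0                      # hypothetical "good enough" budget
    URL = "https://example.com/stream-start"  # placeholder endpoint

    def time_to_first_byte(url: str) -> float:
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read(1)                      # wait only for the first byte
        return time.monotonic() - start

    def test_streaming_starts_quickly():
        ttfb = time_to_first_byte(URL)
        assert ttfb <= BUDGET_SECONDS, f"lag regression: first byte took {ttfb:.2f}s"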
I realize good-enough is not a hard metric by itself, but it's ultimately the only factor that matters to most users. The exception would be some very specific use cases where 1ms of latency delta makes a difference, like for some stock market transactions and competitive e-sports.
To convert the nebulous term "good enough" into actual metrics that ISP's and other providers can use to quantify their service, I stand by my prior point that the industry could establish needed metrics per application. VoIP has stricter latency needs than web browsing. Cloud-based gaming has still stricter latency requirements. There would be some disagreement on what exactly is "good enough" for each of those, but I'm confident we could reach numbers for them, whether by survey and selecting the median, by reported complaints based on service to establish a minimum acceptable level, or by some other method. I doubt there's significant variance on what qualifies as good-enough for each application.
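To make that concrete, the published result could be as simple as a per-application lookup table. The numbers below are purely illustrative placeholders for discussion, not proposed standards:

    # Hypothetical per-application "good enough" latency targets (ms, under load).
    # Placeholder figures for illustration only, not an industry standard.
    GOOD_ENOUGH_LATENCY_MS = {
        "web browsing":       150,
        "video streaming":    200,   # mostly buffered, so fairly tolerant
        "voip":               100,
        "video conferencing": 100,
        "cloud gaming":        30,
    }

    def meets_target(application: str, measured_ms: float) -> bool:
        return measured_ms <= GOOD_ENOUGH_LATENCY_MS[application]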
4K vs Higher Resolution as Standard
And regarding 4K TV as a standard, I'm surprised this is controversial. 4K is THE high-end standard that defines bandwidth needs today. It is NOT 8K or anything higher (similarly, in spite of those other capabilities you mentioned, CD's are also still 44.1kHz (48kHz is for DVD), with musical fidelity at a commercial level having DECREASED, not increased, where most sales and streaming occur using lower-quality MP3 files). That's not a subjective statement; that is a fact. By "fact" I don't mean that no one thinks 8K is nice or that higher isn't better, but that there is an established industry standard that has already settled this. Netflix defines it as 25Mbps. The other big streamers, Disney+, Max, and Paramount+, all agree. 25Mbps is higher than is usually needed for 4K HDR content (10-15Mbps can generally hit it, depending on the nature of the scenes: slow scenes with a lot of solid background color, like cartoons, compress into less data than fast-moving, visually complex scenes), but it's a good figure to use because it includes a safety margin and, more importantly, it's what the industry has already defined as the requirement. To me, this one is very black and white and clear cut, even more so than latency. IF you're an Internet provider and want to claim that your Internet supports modern viewing standards for streaming, you must provide 25Mbps. I'm generally happy to debate anything and acknowledge other points of view are just as valid as my own, but I don't see this particular point as debatable, because it's a defined fact by the industry. It's effectively too late to challenge this. At best, you'd be fighting customers and content providers alike and to what purpose?
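As a back-of-the-envelope check on why 10-15Mbps usually suffices and why 25Mbps leaves headroom (the bits-per-pixel figures below are rough assumptions for a modern codec, not measured values):

    # Rough streaming bitrate estimate: width * height * fps * bits-per-pixel.
    def estimate_mbps(width: int, height: int, fps: float, bits_per_pixel: float) -> float:
        return width * height * fps * bits_per_pixel / 1e6

    # Assumed compressed bits-per-pixel for an HEVC/AV1-class codec (rough).
    EASY_SCENE_BPP = 0.05   # slow scenes, flat backgrounds (e.g. animation)
    HARD_SCENE_BPP = 0.12   # fast motion, high detail

    low  = estimate_mbps(3840, 2160, 24, EASY_SCENE_BPP)   # ~10 Mbps
    high = estimate_mbps(3840, 2160, 24, HARD_SCENE_BPP)   # ~24 Mbps
    print(f"4K at 24fps estimate: {low:.0f}-{high:.0f} Mbps")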
Will that 25Mbps requirement change in the future? Probably. It will probably go up even though 4K HDR streaming will probably be achievable with less bandwidth in the future due to further improvements in compression algorithms. This is because, yeah, eventually maybe 8K or higher resolutions will be a standard, or maybe there will be a higher bit depth HDR (that seems slightly more likely to me). It's not at all clear though that's the case. At some point, you reach a state where there is no benefit to higher resolutions. Phones hit that point a few years ago and have stopped moving to higher resolution displays. There is currently 0% of content from any major provider that's in 8K (just some experimental YouTube videos), and a person viewing 8K would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recording scenes on any standard size viewing experience).
Where 8K+ could make a difference would primarily be in rendered content (and the handful of 8K sets sold today play to this market). Standard camera lenses just don't capture a sharp enough picture to benefit from the extra pixels (they can in some cases, but depth of field and human error render these successes isolated to specific kinds of largely static landscape scenes). If the innate fuzziness or blurriness in the image exceeds the size of a pixel, then more pixels don't add any value. However, in a rendered image, like in a video game, those are pixel perfect, so at least there it's possible to benefit from a higher resolution display. But for that, even the top of the line graphics today (Nvidia RTX 4090, now over a year old) can barely generate 4K HDR content with path tracing active at reasonable framerates (60 frames per second), and because of their high cost, those make up only 0.23% of the market as of the most recent data I've seen (this will obviously increase over time).
I could also imagine AI may be able to reduce blurriness in captured video in the future and sharpen it before sending it out to viewers, but we're not there yet. For all these reasons, 8K will remain niche for the time being. There's just no good reason for it. When the Super Bowl (one of the first to offer 4K viewing) advertises that it can be viewed in 8K, that's when you know it's approaching a mainstream option.
On OLED screens and upcoming microLED displays that can achieve higher contrast ratios than LCD, HDR is far more impactful to the UX and viewing experience than further pixel density increases. Current iterations of LCD can't handle this, even though they claim to support HDR, which has given many consumers the wrong impression that HDR is not a big deal. It is not a big deal on LCD's because they cannot achieve the contrast ratios needed for impactful HDR. At least not with today's technology, and probably never, just because the advantages of microLED outweigh the benefits I would expect you could get by improving LCD.
So maybe we go from the current 10-bit/color HDR to something like 12 or 16 bit HDR. That could also increase bandwidth needs at the same 4K display size. Or, maybe the next generation displays won't be screens but will be entire walls built of microLED fabric that justify going to 16K displays at hundreds of inches. At this point, you'd be close to displays that duplicate a window to the outside world (but still far from the brightness of the sun shining through). But there is nothing at that size that will be at consumer scale in the next 10 years. It's at least that far out (12+-bit HDR might land before that on 80-110" screens), and I suspect quite a bit further. It's one thing to move to a larger TV, because there's already infrastructure for that. On the other hand, to go to entire walls made of a display material would need an entirely different supply chain, different manufacturers, installers, cultural change in how we watch and use it, etc. Those kinds of changes take decades.
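To put rough numbers on how those two variables would scale the payload, here is a simple sketch that scales the 25Mbps 4K 10-bit figure linearly with pixel count and bit depth, ignoring future codec gains:

    # How bandwidth roughly scales with resolution and HDR bit depth,
    # relative to today's 25 Mbps figure for 4K at 10 bits per color.
    BASELINE_MBPS = 25.0
    BASE_PIXELS = 3840 * 2160
    BASE_DEPTH = 10

    def scaled_mbps(width: int, height: int, bit_depth: int) -> float:
        return BASELINE_MBPS * (width * height / BASE_PIXELS) * (bit_depth / BASE_DEPTH)

    print(f"4K, 12-bit:  {scaled_mbps(3840, 2160, 12):.0f} Mbps")   # ~30
    print(f"8K, 10-bit:  {scaled_mbps(7680, 4320, 10):.0f} Mbps")   # ~100
    print(f"16K, 12-bit: {scaled_mbps(15360, 8640, 12):.0f} Mbps")  # ~480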
Cheers,
Colin
Date: Sun, 17 Mar 2024 12:17:11 -0400
From: Dave Collier-Brown <dave.collier-Brown@indexexchange.com>
To: starlink@lists.bufferbloat.net
Subject: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
Message-ID: <e0f9affe-f205-4f01-9ff5-3dc93abc31ca@indexexchange.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:
> Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.
Does that suggest that you should have an easy way to see if you're unexpectedly delivering a slow service? A tool that reports your RTT to customers and an alert on it being high for a significant period might be something all ISPs want, even ones like mine, who just want it to be able to tell a customer "you don't have a network problem" (;-))
And the FCC might find the data illuminating
--dave
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dave.collier-brown@indexexchange.com | -- Mark Twain
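A minimal sketch of the kind of RTT monitor described above (Python; the probe host, threshold and sampling window are placeholder assumptions, and it assumes a Unix-style ping command):

    # Sketch: alert when RTT to a reference host stays high for a sustained period.
    import subprocess
    import time
    from collections import deque

    PROBE_HOST = "8.8.8.8"        # placeholder reference host
    RTT_LIMIT_MS = 100.0          # placeholder "too high" threshold
    WINDOW = 30                   # number of recent samples to consider

    def probe_rtt_ms(host: str) -> float:
        """One ICMP ping; returns RTT in ms (inf on loss)."""
        out = subprocess.run(["ping", "-c", "1", host],
                             capture_output=True, text=True)
        for part in out.stdout.split():
            if part.startswith("time="):
                return float(part[len("time="):])
        return float("inf")

    samples = deque(maxlen=WINDOW)
    while True:
        samples.append(probe_rtt_ms(PROBE_HOST))
        if len(samples) == WINDOW and min(samples) > RTT_LIMIT_MS:
            print("ALERT: RTT has stayed above "
                  f"{RTT_LIMIT_MS:.0f} ms for the last {WINDOW} samples")
        time.sleep(10)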
------------------------------
Message: 2
Date: Sun, 17 Mar 2024 18:00:42 +0100
From: Alexandre Petrescu <alexandre.petrescu@gmail.com>
To: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] It’s the Latency, FCC
Message-ID: <b0b5db3c-baf4-425a-a2c6-38ebc4296e56@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Le 16/03/2024 à 20:10, Colin_Higbie via Starlink a écrit :
> Just to be clear: 4K is absolutely a standard in streaming, with that being the most popular TV being sold today. 8K is not and likely won't be until 80+" TVs become the norm.
I can agree that screen size is one aspect pushing the higher resolutions toward acceptance, but there are more signs indicating that 8K is just around the corner, and 16K right after it.
Consumer recording devices (cameras) have already been doing 8K recording cheaply for a couple of years.
New acronyms beyond simple resolutions are always ready to come up. HDR (high dynamic range) was such an acronym accompanying 4K, so for 8K there might be another, bringing more than just resolution: maybe even more dynamic range, blacker blacks, a wider gamut, support for goggles, etc., for the same screen size.
8K and 16K playback devices might not yet have a surface on which to exhibit their full power, but when such surfaces become available, these 8K and 16K devices will be ready for them, whereas 4K devices will not be.
A similar evolution can be seen in sound and in crypto: 44.1 kHz CD was enough for everyone, until 88 kHz SACD came about, then DSD64, DSD128 and today DSD1024, which suggests DSD2048 tomorrow. And there are Dolby Atmos and
11.1-channel outputs. These too don't yet have the speakers, nor the ears, to take full advantage of them, but in the future they might. In crypto, the 'post-quantum' algorithms are designed to resist brute force by computers that don't yet exist publicly (machines in the few-hundred-qubit range exist, but a computer in the 20,000-qubit range would be needed), but when they do, these crypto algorithms will be ready.
Given that, one could imagine the bandwidth and latency required by a 3D 16K
DSD1024 quantum-resistant-encrypted multi-party videoconference with gloves, goggles and other interacting devices, running at low latency over Starlink.
The growth trends (4K...) can be identified and the needed latency numbers can be projected.
Alex
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2024-04-30 2:16 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
2024-03-15 18:41 ` Colin_Higbie
2024-03-15 19:53 ` Spencer Sevilla
2024-03-15 20:31 ` Colin_Higbie
2024-03-16 17:18 ` Alexandre Petrescu
2024-03-16 17:21 ` Alexandre Petrescu
2024-03-16 17:36 ` Sebastian Moeller
2024-03-16 22:51 ` David Lang
2024-03-15 23:07 ` David Lang
2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 19:09 ` Sebastian Moeller
2024-03-16 19:26 ` Colin_Higbie
2024-03-16 19:45 ` Sebastian Moeller
2024-03-16 23:05 ` David Lang
2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
2024-03-17 16:17 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown
2024-03-16 18:51 ` [Starlink] It?s the Latency, FCC Gert Doering
2024-03-16 23:08 ` David Lang
2024-04-30 0:39 ` [Starlink] It’s " David Lang
2024-04-30 1:30 ` [Starlink] Itʼs " Colin_Higbie
2024-04-30 2:16 ` David Lang
[not found] <mailman.2503.1710703654.1074.starlink@lists.bufferbloat.net>
2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie
2024-03-18 16:49 ` Dave Taht
2024-03-18 19:32 ` David Lang
2024-03-18 19:52 ` Sebastian Moeller
2024-03-18 20:00 ` David Lang
2024-03-19 16:06 ` David Lang
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox