* [Starlink] Sidebar to It’s the Latency, FCC: Measure it? [not found] <mailman.2503.1710703654.1074.starlink@lists.bufferbloat.net> @ 2024-03-18 16:41 ` Colin_Higbie 2024-03-18 16:49 ` Dave Taht 2024-03-18 19:32 ` David Lang 0 siblings, 2 replies; 7+ messages in thread From: Colin_Higbie @ 2024-03-18 16:41 UTC (permalink / raw) To: starlink

To the comments and question from Dave Collier-Brown in response to my saying that we test latency for UX, and from Alex on 8K screens: both of these seem to take a more academic view than I can address on what I view as commercial subjects. By that, I mean that they seem to assume budget and market preferences are secondary considerations rather than the primary driving forces they are to me.

From my perspective, end user/customer experience is ultimately the only important metric, where all others are just tools to help convert UX into something more measurable and quantifiable. To be clear, I fully respect the importance of being able to quantify these things, so those metrics have value, but they should always serve as ways to proxy the UX, not a target unto themselves. If you're designing a system that needs minimal lag for testing your new quantum computer, or to use in place of synchronized clocks for those amazing x-ray photos of black holes, then your needs may be different; but if you're talking about how Internet providers measure their latency and bandwidth for sales to millions or billions of homes and businesses, then UX based on mainstream applications is what matters.

To the specifics:

No, we (our company) don't have a detailed latency testing method. We test purely for UX. If users or our QA team report a lag, that's bad and we work to fix it. If QA and users are happy with that and negative feedback is in other areas unrelated to lag (typically the case), then we deem our handling of latency as "good enough" and focus our engineering efforts on the problem areas or on adding new features.
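A minimal sketch of the kind of UX lag check described above, assuming a hypothetical streaming endpoint and a ~1 second "good enough" budget (both the URL and the budget are illustrative, not the company's actual test bed):

```python
import time
import urllib.request

BUDGET_SECONDS = 1.0  # "good enough": streaming starts within about a second


def time_to_first_byte(url: str) -> float:
    """Measure seconds until the first response byte arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)  # block until the first byte is received
    return time.monotonic() - start


def within_budget(measured_s: float, budget_s: float = BUDGET_SECONDS) -> bool:
    """The pass/fail decision an automated test bed could assert on."""
    return measured_s <= budget_s
```

A CI job would call `time_to_first_byte` against a staging endpoint and assert `within_budget` on the result, so a regression in perceived lag fails the build instead of waiting for user complaints.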
Now, I should acknowledge, this is largely because our application is not particularly latency-sensitive. If it were, we probably would have a lag check as part of our standard automated test bed. For us, as long as our application starts to provide our users with streaming access to our data within a second or so, that's good enough. I realize good-enough is not a hard metric by itself, but it's ultimately the only factor that matters to most users. The exception would be some very specific use cases where 1ms of latency delta makes a difference, like some stock market transactions and competitive e-sports.

To convert the nebulous term "good enough" into actual metrics that ISPs and other providers can use to quantify their service, I stand by my prior point that the industry could establish the needed metrics per application. VoIP has stricter latency needs than web browsing. Cloud-based gaming has still stricter latency requirements. There would be some disagreement on what exactly is "good enough" for each of those, but I'm confident we could reach numbers for them, whether by survey and selecting the median, by reported complaints to establish a minimum acceptable level, or by some other method. I doubt there's significant variance in what qualifies as good-enough for each application.

4K vs Higher Resolution as Standard

And regarding 4K TV as a standard, I'm surprised this is controversial. 4K is THE high-end standard that defines bandwidth needs today. It is NOT 8K or anything higher (similarly, in spite of those other capabilities you mentioned, CDs are also still 44.1kHz (48kHz is for DVD), with musical fidelity at a commercial level having DECREASED, not increased, as most sales and streaming now use lower-quality MP3 files). That's not a subjective statement; that is a fact.
By "fact" I don't mean that no one thinks 8K is nice or that higher isn't better, but that there is an established industry standard that has already settled this. Netflix defines it as 25Mbps. The other big streamers, Disney+, Max, and Paramount+, all agree. 25Mbps is higher than is usually needed for 4K HDR content (10-15Mbps can generally hit it, depending on the nature of the scenes: slow scenes with a lot of solid background color, like cartoons, compress into less data than fast-moving, visually complex scenes), but it's a good figure to use because it includes a safety margin and, more importantly, it's what the industry has already defined as the requirement. To me, this one is very black and white and clear cut, even more so than latency. If you're an Internet provider and want to claim that your Internet supports modern viewing standards for streaming, you must provide 25Mbps. I'm generally happy to debate anything and acknowledge other points of view are just as valid as my own, but I don't see this particular point as debatable, because it's a defined fact by the industry. It's effectively too late to challenge this. At best, you'd be fighting customers and content providers alike, and to what purpose?

Will that 25Mbps requirement change in the future? Probably. It will probably go up, even though 4K HDR streaming will probably be achievable with less bandwidth in the future due to further improvements in compression algorithms. This is because, yeah, eventually maybe 8K or higher resolutions will be a standard, or maybe there will be a higher bit-depth HDR (that seems slightly more likely to me). It's not at all clear though that's the case. At some point, you reach a state where there is no benefit to higher resolutions. Phones hit that point a few years ago and have stopped moving to higher resolution displays.
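The resolution-plateau claim can be checked with back-of-envelope arithmetic. Normal 20/20 vision resolves roughly 1 arcminute, i.e. about 60 pixels per degree of visual angle; the 65-inch screen and 2.5 m couch distance below are illustrative assumptions, not figures from this thread:

```python
import math

PIXELS_PER_DEGREE = 60  # ~1 arcminute acuity, typical for 20/20 vision


def resolvable_pixels(screen_width_m: float, distance_m: float) -> int:
    """Horizontal pixels the eye can actually separate at this geometry."""
    angle_deg = math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))
    return round(angle_deg * PIXELS_PER_DEGREE)


# A 65" 16:9 TV is about 1.44 m wide; at a 2.5 m viewing distance
# this comes out near 1,900 resolvable columns -- already fewer than
# 4K's 3840, which is why 4K -> 8K is imperceptible in that setting.
couch = resolvable_pixels(1.44, 2.5)
```

Under these assumptions, 8K only starts to pay off when the screen fills a much larger visual angle, which is the wall-sized-display scenario discussed later in the message.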
There is currently 0% of content from any major provider that's in 8K (just some experimental YouTube videos), and a person viewing 8K would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recorded scenes on any standard-size viewing experience).

Where 8K+ could make a difference would primarily be in rendered content (and the handful of 8K sets sold today play to this market). Standard camera lenses just don't capture a sharp enough picture to benefit from the extra pixels (they can in some cases, but depth of field and human error render these successes isolated to specific kinds of largely static landscape scenes). If the innate fuzziness or blurriness in the image exceeds the size of a pixel, then more pixels don't add any value. However, a rendered image, like in a video game, is pixel-perfect, so at least there it's possible to benefit from a higher resolution display. But for that, even the top-of-the-line graphics cards today (Nvidia RTX 4090, now over a year old) can barely generate 4K HDR content with path tracing active at reasonable framerates (60 frames per second), and because of their high cost, those make up only 0.23% of the market as of the most recent data I've seen (this will obviously increase over time).

I could also imagine AI may be able to reduce blurriness in captured video in the future and sharpen it before sending it out to viewers, but we're not there yet. For all these reasons, 8K will remain niche for the time being. There's just no good reason for it. When the Super Bowl (one of the first to offer 4K viewing) advertises that it can be viewed in 8K, that's when you know it's approaching a mainstream option.

On OLED screens and upcoming microLED displays that can achieve higher contrast ratios than LCD, HDR is far more impactful to the UX and viewing experience than further pixel density increases.
Current iterations of LCD can't handle this, even though they claim to support HDR, which has given many consumers the wrong impression that HDR is not a big deal. It is not a big deal on LCDs because they cannot achieve the contrast ratios needed for impactful HDR. At least not with today's technology, and probably never, just because the advantages of microLED outweigh the benefits I would expect you could get by improving LCD.

So maybe we go from the current 10-bit/color HDR to something like 12- or 16-bit HDR. That could also increase bandwidth needs at the same 4K display size. Or maybe the next-generation displays won't be screens but will be entire walls built of microLED fabric that justify going to 16K displays at hundreds of inches. At this point, you'd be close to displays that duplicate a window to the outside world (but still far from the brightness of the sun shining through). But there is nothing at that size that will be at consumer scale in the next 10 years. It's at least that far out (12+-bit HDR might land before that on 80-110" screens), and I suspect quite a bit further. It's one thing to move to a larger TV, because there's already infrastructure for that. On the other hand, to go to entire walls made of a display material would need an entirely different supply chain, different manufacturers, installers, and a cultural change in how we watch and use it. Those kinds of changes take decades.

Cheers,
Colin

Date: Sun, 17 Mar 2024 12:17:11 -0400 From: Dave Collier-Brown <dave.collier-Brown@indexexchange.com> To: starlink@lists.bufferbloat.net Subject: [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
Message-ID: <e0f9affe-f205-4f01-9ff5-3dc93abc31ca@indexexchange.com> Content-Type: text/plain; charset=UTF-8; format=flowed

On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:

> Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, 1s minimum latency could yield a several second lag, which would be poor UX for almost any application. Since we're no longer testing for that on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.

Does that suggest that you should have an easy way to see if you're unexpectedly delivering a slow service? A tool that reports your RTT to customers, and an alert on it being high for a significant period, might be something all ISPs want, even ones like mine, which just want to be able to tell a customer "you don't have a network problem" (;-))

And the FCC might find the data illuminating

--dave

--
David Collier-Brown,                 | Always do right. This will gratify
System Programmer and Author         | some people and astonish the rest
dave.collier-brown@indexexchange.com |                      -- Mark Twain

CONFIDENTIALITY NOTICE AND DISCLAIMER : This telecommunication, including any and all attachments, contains confidential information intended only for the person(s) to whom it is addressed. Any dissemination, distribution, copying or disclosure is strictly prohibited and is not a waiver of confidentiality. If you have received this telecommunication in error, please notify the sender immediately by return electronic mail and delete the message from your inbox and deleted items folders. This telecommunication does not constitute an express or implied agreement to conduct transactions by electronic means, nor does it constitute a contract offer, a contract amendment or an acceptance of a contract offer.
Contract terms contained in this telecommunication are subject to legal review and the completion of formal documentation and are not binding until same is confirmed in writing and has been signed by an authorized signatory.

------------------------------

Message: 2 Date: Sun, 17 Mar 2024 18:00:42 +0100 From: Alexandre Petrescu <alexandre.petrescu@gmail.com> To: starlink@lists.bufferbloat.net Subject: Re: [Starlink] It’s the Latency, FCC Message-ID: <b0b5db3c-baf4-425a-a2c6-38ebc4296e56@gmail.com> Content-Type: text/plain; charset=UTF-8; format=flowed

On 16/03/2024 at 20:10, Colin_Higbie via Starlink wrote:

> Just to be clear: 4K is absolutely a standard in streaming, with that being the most popular TV being sold today. 8K is not and likely won't be until 80+" TVs become the norm.

I can agree screen size is one aspect pushing the higher resolutions to acceptance, but there are some more signs indicating that 8K is just around the corner, and 16K right after it.

The recording consumer devices (cameras) have already done 8K recording cheaply for a couple of years now.

New acronyms beyond simple resolutions are always ready to come up. HDR (high dynamic range) was such an acronym accompanying 4K, so for 8K there might be another, bringing more than just resolution: maybe even more dynamic range, blacker blacks, wider gamut, goggles, etc., for the same screen size.

8K and 16K playing devices might not have a surface to exhibit their entire power, but when such surfaces become available, these 8K and 16K playing devices will be ready for them, whereas 4K will not be.

A similar evolution is witnessed in sound and in crypto: the 44kHz CD was enough for all, until SACD at 88kHz came about, then DSD64, DSD128, and today DSD1024, which means DSD2048 tomorrow. And the Dolby Atmos and 11.1 outputs. These too don't yet have the speakers, nor the ears, to take advantage of them, but in the future they might.
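The DSD progression above is easy to put in numbers: DSDn samples at n times the CD rate of 44.1 kHz, with 1-bit samples, so the raw stereo bit rate scales linearly with n (CD for comparison: 44.1 kHz x 16 bits x 2 channels):

```python
CD_RATE_HZ = 44_100  # CD sample rate; DSDn runs at n x this rate


def dsd_stereo_mbps(multiple: int) -> float:
    """Raw stereo bit rate of DSD<multiple> in Mbit/s (1 bit/sample, 2 ch)."""
    return multiple * CD_RATE_HZ * 1 * 2 / 1e6


cd_mbps = CD_RATE_HZ * 16 * 2 / 1e6  # CD raw rate: 1.4112 Mbit/s
# dsd_stereo_mbps(64)   -> 5.6448 Mbit/s  (SACD-era DSD64)
# dsd_stereo_mbps(1024) -> 90.3168 Mbit/s, already past a 25Mbps 4K stream
```

Whether anyone can hear the difference is exactly the dispute in this thread, but the raw transport cost of the trend is not speculative.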
In crypto, the 'post-quantum' algorithms are designed to resist brute force by computers that don't yet exist publicly (machines in the few-hundred-qubit range exist, but a computer in the 20,000-qubit range would be needed), but when they do, these crypto algorithms will be ready.

Given that, one could imagine the bandwidth and latency needed by a 3D 16K DSD1024 quantum-resistant-ciphered multi-party visio-conference with gloves, goggles and other interacting devices, with low latency over Starlink. The growth trends (4K...) can be identified and the needed latency numbers can be projected.

Alex ^ permalink raw reply [flat|nested] 7+ messages in thread
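The RTT-reporting-and-alerting tool Dave Collier-Brown suggests above could be sketched as a sliding window with a sustained-threshold alarm. The 100 ms threshold and 60-sample window are placeholder values, not recommendations:

```python
import statistics
from collections import deque

RTT_ALERT_MS = 100.0  # placeholder alert threshold
WINDOW = 60           # placeholder number of samples = "significant period"


class RttMonitor:
    """Track per-customer RTT samples; alarm on a sustained high median."""

    def __init__(self, threshold_ms: float = RTT_ALERT_MS, window: int = WINDOW):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # oldest samples fall off

    def record(self, rtt_ms: float) -> None:
        self.samples.append(rtt_ms)

    def alert(self) -> bool:
        # "High for a significant period": a full window whose median
        # exceeds the threshold, so a single spike never fires the alarm.
        return (len(self.samples) == self.samples.maxlen
                and statistics.median(self.samples) > self.threshold_ms)
```

An ISP could feed this from routine ping probes per customer and surface the alarm in support tooling, giving exactly the "you don't have a network problem" answer the message describes.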
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it? 2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie @ 2024-03-18 16:49 ` Dave Taht 2024-03-18 19:32 ` David Lang 1 sibling, 0 replies; 7+ messages in thread From: Dave Taht @ 2024-03-18 16:49 UTC (permalink / raw) To: Colin_Higbie; +Cc: starlink

I am curious what the real-world bandwidth requirements are for live sports streaming. I imagine that during episodes of high motion, encoders struggle.

On Mon, Mar 18, 2024 at 12:42 PM Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote:
> [...]

-- https://www.youtube.com/watch?v=N0Tmvv5jJKs Epik Mellon Podcast Dave Täht CSO, LibreQos ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it? 2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie 2024-03-18 16:49 ` Dave Taht @ 2024-03-18 19:32 ` David Lang 2024-03-18 19:52 ` Sebastian Moeller 1 sibling, 1 reply; 7+ messages in thread From: David Lang @ 2024-03-18 19:32 UTC (permalink / raw) To: Colin_Higbie; +Cc: starlink

On Mon, 18 Mar 2024, Colin_Higbie via Starlink wrote:

> Will that 25Mbps requirement change in the future? Probably. It will probably go up even though 4K HDR streaming will probably be achievable with less bandwidth in the future due to further improvements in compression algorithms. This is because, yeah, eventually maybe 8K or higher resolutions will be a standard, or maybe there will be a higher bit depth HDR (that seems slightly more likely to me). It's not at all clear though that's the case. At some point, you reach a state where there is no benefit to higher resolutions. Phones hit that point a few years ago and have stopped moving to higher resolution displays. There is currently 0% of content from any major provider that's in 8K (just some experimental YouTube videos), and a person viewing 8K would be unlikely to report any visual advantage over 4K (SD -> HD is huge, HD -> 4K is noticeable, 4K -> 8K is imperceptible for camera-recorded scenes on any standard-size viewing experience).

I'll point out that professional still cameras (DSLRs and the new mirrorless ones) also seem to have stalled, with the top-of-the-line Canon and Nikon models topping out at around 20-24 mp (after selling some models that went to 30mp or so); Sony has some models at 45 mp.

8k video is in the ballpark of 30mp

David Lang ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it? 2024-03-18 19:32 ` David Lang @ 2024-03-18 19:52 ` Sebastian Moeller 2024-03-18 20:00 ` David Lang 0 siblings, 1 reply; 7+ messages in thread From: Sebastian Moeller @ 2024-03-18 19:52 UTC (permalink / raw) To: David Lang; +Cc: Colin_Higbie, starlink

Hi David,

> On 18. Mar 2024, at 20:32, David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> On Mon, 18 Mar 2024, Colin_Higbie via Starlink wrote:
>> [...]
>
> I'll point out that professional still cameras (DSLRs and the new mirrorless ones) also seem to have stalled with the top-of-the-line Canon and Nikon topping out at around 20-24 mp (after selling some models that went to 30mp or so), Sony has some models at 45 mp.

One of the issues is cost: your sensor pixels need to be large enough to capture a sufficient number of photons in a short enough amount of time to be useful, and that puts a (soft) lower limit on how small you can make your pixels...
Once you've divided your sensor area up into the smallest reasonable pixel size, all you can do is increase sensor size, and hence cost... especially if I am correct in assuming that at some point you also need to increase the diameter of your optics to "feed" the sensor properly. At which point it is not only cost but also size...

Regards
Sebastian

> 8k video is in the ballpark of 30mp
>
> David Lang
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply [flat|nested] 7+ messages in thread
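The pixel-size tradeoff above can be put in rough numbers, assuming a 36 x 24 mm "full frame" sensor; this is a simple area estimate that ignores microlenses, wiring and gaps between wells:

```python
import math

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0  # full-frame sensor dimensions


def pixel_pitch_um(megapixels: float) -> float:
    """Approximate pixel pitch in micrometers at a given resolution."""
    area_um2 = SENSOR_W_MM * SENSOR_H_MM * 1e6  # mm^2 -> um^2
    return math.sqrt(area_um2 / (megapixels * 1e6))


# pixel_pitch_um(24) -> 6.0 um
# pixel_pitch_um(45) -> ~4.4 um: smaller wells, so fewer photons per
# pixel per exposure, which is the soft lower limit described above.
```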
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it? 2024-03-18 19:52 ` Sebastian Moeller @ 2024-03-18 20:00 ` David Lang 2024-03-19 16:06 ` David Lang 0 siblings, 1 reply; 7+ messages in thread From: David Lang @ 2024-03-18 20:00 UTC (permalink / raw) To: Sebastian Moeller; +Cc: David Lang, Colin_Higbie, starlink On Mon, 18 Mar 2024, Sebastian Moeller wrote: >> I'll point out that professional still cameras (DSLRs and the new mirrorless >> ones) also seem to have stalled, with the top-of-the-line Canon and Nikon >> topping out at around 20-24 mp (after selling some models that went to 30 mp or >> so); Sony has some models at 45 mp. > > One of the issues is cost: your sensor pixels need to be large enough to > capture a sufficient amount of photons in a short enough amount of time to be > useful, and that puts a (soft) lower limit on how small you can make your > pixels... Once you've divided up your sensor area into the smallest reasonable > pixel size, all you can do is increase sensor size and hence cost... > especially if I am correct in assuming that at some point you also need to > increase the diameter of your optics to "feed" the sensor properly. At which > point it is not only cost but also size... I'm talking about full-frame high-end professional cameras (the ones where the body with no lens costs $8k or so). This has been consistent for over a decade, so I don't think it's a cost/manufacturing limit in place here. There are a lot of cameras made with smaller sensors at similar resolution, but very few at much higher resolutions. At the low end, you will see some small, higher-resolution sensors, but those are for fixed-lens cameras (like phones) where you use digital zoom David Lang ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] Sidebar to It’s the Latency, FCC: Measure it? 2024-03-18 20:00 ` David Lang @ 2024-03-19 16:06 ` David Lang 0 siblings, 0 replies; 7+ messages in thread From: David Lang @ 2024-03-19 16:06 UTC (permalink / raw) To: David Lang; +Cc: Sebastian Moeller, Colin_Higbie, starlink On Mon, 18 Mar 2024, David Lang wrote: > On Mon, 18 Mar 2024, Sebastian Moeller wrote: > >>> I'll point out that professional still cameras (DSLRs and the new >>> mirrorless ones) also seem to have stalled, with the top-of-the-line Canon >>> and Nikon topping out at around 20-24 mp (after selling some models that >>> went to 30 mp or so); Sony has some models at 45 mp. Correction to my earlier post: 8k video is ~30 megapixels, 4k video is about 8 megapixels. So cameras and lenses can easily handle 8k video (in terms of quality); beyond that, it seems that even professional photographers whose work is going to be blown up into big posters seldom bother going to higher resolutions. David Lang >> One of the issues is cost: your sensor pixels need to be large enough >> to capture a sufficient amount of photons in a short enough amount of time >> to be useful, and that puts a (soft) lower limit on how small you can make >> your pixels... Once you've divided up your sensor area into the smallest >> reasonable pixel size, all you can do is increase sensor size and hence >> cost... especially if I am correct in assuming that at some point you also >> need to increase the diameter of your optics to "feed" the sensor properly. >> At which point it is not only cost but also size... > > I'm talking about full-frame high-end professional cameras (the ones where > the body with no lens costs $8k or so). This has been consistent for over a > decade, so I don't think it's a cost/manufacturing limit in place here. > > There are a lot of cameras made with smaller sensors at similar resolution, > but very few at much higher resolutions. 
> > at the low end, you will see some small, higher resolution sensors, but those > are for fixed lens cameras (like phones) where you use digital zoom > > David Lang > ^ permalink raw reply [flat|nested] 7+ messages in thread
[parent not found: <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>]
* Re: [Starlink] It’s the Latency, FCC [not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net> @ 2024-03-15 18:32 ` Colin Higbie 2024-03-15 18:41 ` Colin_Higbie 0 siblings, 1 reply; 7+ messages in thread From: Colin Higbie @ 2024-03-15 18:32 UTC (permalink / raw) To: starlink > I have now been trying to break the common conflation that download "speed" > means anything at all for day to day, minute to minute, second to second, use, > once you crack 10mbit, now, for over 14 years. Am I succeeding? I lost the 25/10 > battle, and keep pointing at really terrible latency under load and wifi weirdnesses > for many existing 100/20 services today. While I completely agree that latency has a bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TVs are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content. So, I agree that 25/10 is sufficient for up to 4K HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or one or two 8K streams. 
For me, not claiming any special expertise on market needs, just my own personal assessment of what typical families will need and care about:

Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming, which benefits from lower latency all the way down to about 5ms for young, really fast players; the rest of us won't be able to tell the difference)

Download Bandwidth: 10Mbps good enough if not doing UHD video streaming

Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8K

Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams

So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem of insufficient bandwidth to watch 4K HDR content. But I'd also rather have latency of 20ms with 100Mbps DL than latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.

Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops out at under 3Mbps for me, causing quality degradation for outbound video calls (or used to; it seems to have gotten better in recent months – no problems since sometime in 2023). Cheers, Colin ^ permalink raw reply [flat|nested] 7+ messages in thread
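[Editorial aside: the per-application thresholds in this message can be folded into a simple checker. A minimal sketch in Python, using the rough numbers above; the application names and exact figures are illustrative estimates from this thread, not a standard.]

```python
# Rough "good enough" thresholds per application, taken from the
# estimates in this thread (loaded latency in ms, download in Mbps).
# The exact numbers and application names are illustrative.
THRESHOLDS = {
    "web":           {"latency_ms": 1000, "down_mbps": 10},
    "voip":          {"latency_ms": 100,  "down_mbps": 1},
    "videoconf":     {"latency_ms": 80,   "down_mbps": 7},
    "uhd_streaming": {"latency_ms": 50,   "down_mbps": 25},
    "cloud_gaming":  {"latency_ms": 5,    "down_mbps": 25},
}

def good_enough(app: str, loaded_latency_ms: float, down_mbps: float) -> bool:
    """True if a connection clears the rough bar for the given application."""
    t = THRESHOLDS[app]
    return loaded_latency_ms <= t["latency_ms"] and down_mbps >= t["down_mbps"]

# The trade-off from the message: 50 ms / 25 Mbps supports 4K HDR
# streaming, while 1 ms / 10 Mbps does not.
print(good_enough("uhd_streaming", 50, 25))  # True
print(good_enough("uhd_streaming", 1, 10))   # False
```

This makes the "reach good enough on both" argument mechanical: a connection passes only if it clears both the latency and the bandwidth bar for the application in question.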
* Re: [Starlink] It’s the Latency, FCC 2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie @ 2024-03-15 18:41 ` Colin_Higbie 2024-03-15 19:53 ` Spencer Sevilla 0 siblings, 1 reply; 7+ messages in thread From: Colin_Higbie @ 2024-03-15 18:41 UTC (permalink / raw) To: starlink > I have now been trying to break the common conflation that download "speed" > means anything at all for day to day, minute to minute, second to > second, use, once you crack 10mbit, now, for over 14 years. Am I > succeeding? I lost the 25/10 battle, and keep pointing at really > terrible latency under load and wifi weirdnesses for many existing 100/20 services today. While I completely agree that latency has a bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TVs are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content. So, I agree that 25/10 is sufficient for up to 4K HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or one or two 8K streams. 
For me, not claiming any special expertise on market needs, just my own personal assessment of what typical families will need and care about:

Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming, which benefits from lower latency all the way down to about 5ms for young, really fast players; the rest of us won't be able to tell the difference)

Download Bandwidth: 10Mbps good enough if not doing UHD video streaming

Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8K

Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams

So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem of insufficient bandwidth to watch 4K HDR content. But I'd also rather have latency of 20ms with 100Mbps DL than latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other.

Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops out at under 3Mbps for me, causing quality degradation for outbound video calls (or used to; it seems to have gotten better in recent months – no problems since sometime in 2023). Cheers, Colin ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] It’s the Latency, FCC 2024-03-15 18:41 ` Colin_Higbie @ 2024-03-15 19:53 ` Spencer Sevilla 2024-03-15 23:07 ` David Lang 0 siblings, 1 reply; 7+ messages in thread From: Spencer Sevilla @ 2024-03-15 19:53 UTC (permalink / raw) To: Colin_Higbie; +Cc: Dave Taht via Starlink Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of Malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all. I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower format, like regular HDR or 1080p. In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes “4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms-raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s. See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought: I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. 
I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant. > On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote: > >> I have now been trying to break the common conflation that download "speed" >> means anything at all for day to day, minute to minute, second to >> second, use, once you crack 10mbit, now, for over 14 years. Am I >> succeeding? I lost the 25/10 battle, and keep pointing at really >> terrible latency under load and wifi weirdnesses for many existing 100/20 services today. > > While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content. > > So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams. 
> > For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about: > > Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference) > > Download Bandwidth: 10Mbps good enough if not doing UHD video streaming > > Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k > > Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams > > So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other. > > Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023). > > Cheers, > Colin > > _______________________________________________ > Starlink mailing list > Starlink@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/starlink ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] It’s the Latency, FCC 2024-03-15 19:53 ` Spencer Sevilla @ 2024-03-15 23:07 ` David Lang 2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie 0 siblings, 1 reply; 7+ messages in thread From: David Lang @ 2024-03-15 23:07 UTC (permalink / raw) To: Spencer Sevilla; +Cc: Colin_Higbie, Dave Taht via Starlink [-- Attachment #1: Type: TEXT/PLAIN, Size: 6025 bytes --] one person's 'wasteful resolution' is another person's 'large enhancement' going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way. saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be. I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time. What right do you have to say that the programmer's time is less important than the ram/bandwidth used? I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs. David Lang On Fri, 15 Mar 2024, Spencer Sevilla via Starlink wrote: > Your comment about 4k HDR TVs got me thinking about the bandwidth “arms race” between infrastructure and its clients. It’s a particular pet peeve of mine that as any resource (bandwidth in this case, but the same can be said for memory) becomes more plentiful, software engineers respond by wasting it until it’s scarce enough to require optimization again. Feels like an awkward kind of malthusian inflation that ends up forcing us to buy newer/faster/better devices to perform the same basic functions, which haven’t changed almost at all. > > I completely agree that no one “needs” 4K UHDR, but when we say this I think we generally mean as opposed to a slightly lower codec, like regular HDR or 1080p. 
In practice, I’d be willing to bet that there’s at least one poorly programmed TV out there that doesn’t downgrade well or at all, so the tradeoff becomes "4K UHDR or endless stuttering/buffering.” Under this (totally unnecessarily forced upon us!) paradigm, 4K UHDR feels a lot more necessary, or we’ve otherwise arms raced ourselves into a TV that can’t really stream anything. A technical downgrade from literally the 1960s. > > See also: The endless march of “smart appliances” and TVs/gaming systems that require endless humongous software updates. My stove requires natural gas and 120VAC, and I like it that way. Other stoves require… how many Mbps to function regularly? Other food for thought, I wonder how increasing minimum broadband speed requirements across the country will encourage or discourage this behavior among network engineers. I sincerely don’t look forward to a future in which we all require 10Gbps to the house but can’t do much with it cause it’s all taken up by lightbulb software updates every evening /rant. > >> On Mar 15, 2024, at 11:41, Colin_Higbie via Starlink <starlink@lists.bufferbloat.net> wrote: >> >>> I have now been trying to break the common conflation that download "speed" >>> means anything at all for day to day, minute to minute, second to >>> second, use, once you crack 10mbit, now, for over 14 years. Am I >>> succeeding? I lost the 25/10 battle, and keep pointing at really >>> terrible latency under load and wifi weirdnesses for many existing 100/20 services today. >> >> While I completely agree that latency has bigger impact on how responsive the Internet feels to use, I do think that 10Mbit is too low for some standard applications regardless of latency: with the more recent availability of 4K and higher streaming, that does require a higher minimum bandwidth to work at all. 
One could argue that no one NEEDS 4K streaming, but many families would view this as an important part of what they do with their Internet (Starlink makes this reliably possible at our farmhouse). 4K HDR-supporting TV's are among the most popular TVs being purchased in the U.S. today. Netflix, Amazon, Max, Disney and other streaming services provide a substantial portion of 4K HDR content. >> >> So, I agree that 25/10 is sufficient, for up to 4k HDR streaming. 100/20 would provide plenty of bandwidth for multiple concurrent 4K users or a 1-2 8K streams. >> >> For me, not claiming any special expertise on market needs, just my own personal assessment on what typical families will need and care about: >> >> Latency: below 50ms under load always feels good except for some intensive gaming (I don't see any benefit to getting loaded latency further below ~20ms for typical applications, with an exception for cloud-based gaming that benefits with lower latency all the way down to about 5ms for young, really fast players, the rest of us won't be able to tell the difference) >> >> Download Bandwidth: 10Mbps good enough if not doing UHD video streaming >> >> Download Bandwidth: 25 - 100Mbps if doing UHD video streaming, depending on # of streams or if wanting to be ready for 8k >> >> Upload Bandwidth: 10Mbps good enough for quality video conferencing, higher only needed for multiple concurrent outbound streams >> >> So, for example (and ignoring upload for this), I would rather have latency at 50ms (under load) and DL bandwidth of 25Mbps than latency of 1ms with a max bandwidth of 10Mbps, because the super-low latency doesn't solve the problem with insufficient bandwidth to watch 4K HDR content. But, I'd also rather have latency of 20ms with 100Mbps DL, then latency that exceeds 100ms under load with 1Gbps DL bandwidth. I think the important thing is to reach "good enough" on both, not just excel at one while falling short of "good enough" on the other. 
>> >> Note that Starlink handles all of this well, including kids watching YouTube while my wife and I watch 4K UHD Netflix, except the upload speed occasionally tops at under 3Mbps for me, causing quality degradation for outbound video calls (or used to, it seems to have gotten better in recent months – no problems since sometime in 2023). >> >> Cheers, >> Colin >> >> _______________________________________________ >> Starlink mailing list >> Starlink@lists.bufferbloat.net >> https://lists.bufferbloat.net/listinfo/starlink > > _______________________________________________ > Starlink mailing list > Starlink@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/starlink ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Starlink] Itʼs the Latency, FCC 2024-03-15 23:07 ` David Lang @ 2024-03-16 18:45 ` Colin_Higbie 2024-03-16 23:05 ` David Lang 0 siblings, 1 reply; 7+ messages in thread From: Colin_Higbie @ 2024-03-16 18:45 UTC (permalink / raw) To: David Lang, Dave Taht via Starlink Beautifully said, David Lang. I completely agree. At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy. But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms. I remember programming computer games in the 1980s, when the typical amount of RAM users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out and use 48kB and stop wasting time shoehorning my code or loading in segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as the latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms. In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use cases. For example, when web browsing, even 1,000ms of latency is barely noticeable. So any web browser application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. 
For video conferencing, maybe it's 80ms (the ability to see the person's facial expression likely increases the expectation of reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms. That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't better than 1,000 for web browsing, just that the value for each further incremental reduction in latency drops significantly once you get to that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before maybe it was only "good enough" for web browsing (think geosynchronous satellites). In other words, more important than just chasing ever lower latency, it's important to provide SUFFICIENTLY LOW latency for users to perform their intended applications. Getting even lower is still great for opening things up to new applications we never considered before, just like faster CPU's, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications. Bandwidth scales a little differently. There's still a "good enough" level driven by time for a web page to load of about 5s (as web pages become ever more complex and dynamic, this means that bandwidth needs increase), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's also a linear scaling to the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know how latency is affected by users, but I know if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. 
With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams. So, all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. They can be easily calculated per user based on expected use cases. Cheers, Colin -----Original Message----- From: David Lang <david@lang.hm> Sent: Friday, March 15, 2024 7:08 PM To: Spencer Sevilla <spencer.builds.networks@gmail.com> Cc: Colin_Higbie <CHigbie1@Higbie.name>; Dave Taht via Starlink <starlink@lists.bufferbloat.net> Subject: Re: [Starlink] Itʼs the Latency, FCC one person's 'wasteful resolution' is another person's 'large enhancement' going from 1080p to 4k video is not being wasteful, it's opting to use the bandwidth in a different way. saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be. I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time. What right do you have to say that the programmer's time is less important than the ram/bandwidth used? I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs. David Lang ^ permalink raw reply [flat|nested] 7+ messages in thread
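[Editorial aside: the linear bandwidth scaling described in this message is easy to make concrete. A hypothetical budget calculator in Python: the 15 Mbps-per-4K-stream figure comes from the message; the HD, video-conferencing, and per-web-user figures are my own rough assumptions, not from the thread.]

```python
def household_download_mbps(streams_4k: int = 0, streams_hd: int = 0,
                            videoconf: int = 0, web_users: int = 0) -> int:
    """Linear bandwidth budget for a household.

    15 Mbps per 4K stream is the figure from the message above; the
    HD (5 Mbps), video-conferencing (7 Mbps), and per-web-user
    (2 Mbps) figures are illustrative assumptions. Bandwidth needs
    sum across concurrent users, unlike latency, which does not
    scale linearly with user count.
    """
    return streams_4k * 15 + streams_hd * 5 + videoconf * 7 + web_users * 2

# Three concurrent 4K streams at 7pm, as in the message:
print(household_download_mbps(streams_4k=3))  # 45
```

The point of the sketch is the contrast: the bandwidth requirement is a straightforward sum over concurrent uses, whereas no such per-user multiplication applies to latency.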
* Re: [Starlink] Itʼs the Latency, FCC 2024-03-16 18:45 ` [Starlink] Itʼs " Colin_Higbie @ 2024-03-16 23:05 ` David Lang 2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie 0 siblings, 1 reply; 7+ messages in thread From: David Lang @ 2024-03-16 23:05 UTC (permalink / raw) To: Colin_Higbie; +Cc: Dave Taht via Starlink On Sat, 16 Mar 2024, Colin_Higbie wrote: > At the same time, I do think if you give people tools where latency is rarely > an issue (say a 10x improvement, so perception of 1/10 the latency), > developers will be less efficient UNTIL that inefficiency begins to yield poor > UX. For example, if I know I can rely on latency being 10ms and users don't > care until total lag exceeds 500ms, I might design something that uses a lot > of back-and-forth between client and server. As long as there are fewer than > 50 iterations (500 / 10), users will be happy. But if I need to do 100 > iterations to get the result, then I'll do some bundling of the operations to > keep the total observable lag at or below that 500ms. I don't think developers think about latency at all (as a general rule); they develop and test over their local LAN and assume it will 'just work' over the Internet. > In terms of user experience (UX), I think of there as being "good enough" > plateaus based on different use-cases. For example, when web browsing, even > 1,000ms of latency is barely noticeable. So any web browser application that > comes in under 1,000ms will be "good enough." For VoIP, the "good enough" > figure is probably more like 100ms. For video conferencing, maybe it's 80ms > (the ability to see the person's facial expression likely increases the > expectation of reactions and reduces the tolerance for lag). For some forms of > cloud gaming, the "good enough" figure may be as low as 5ms. One second for the page to load is acceptable (not nice), but a one-second delay in reacting to a click is unacceptable. 
As I understand it, below 100ms is considered 'instantaneous response' for most people. > That's not to say that 20ms isn't better for VoIP than 100 or 500ms isn't > better than 1,000 for web browsing, just that the value for each further > incremental reduction in latency drops significantly once you get to that > good-enough point. However, those further improvements may open entirely new > applications, such as enabling VoIP where before maybe it was only "good > enough" for web browsing (think geosynchronous satellites). The problem is that latency stacks: you click on the web page, you do a DNS lookup for the page, then an HTTP request for the page contents, which triggers an HTTP request for a CSS page, and possibly multiple DNS/HTTP requests for libraries, so a 100ms latency on the network can result in multiple-second page load times for the user (even if all of the content ends up being cached already) <snip a bunch of good discussion> > So all taken together, there can be fairly straightforward descriptions of > latency and bandwidth based on expected usage. These are not mysterious > attributes. They can be easily calculated per user based on expected use cases. However, the lag between new uses showing up and changes to the network driven by those new uses is multiple years long, so the network operators and engineers need to be proactive, not reactive. Don't wait until the users are complaining before upgrading bandwidth/latency David Lang ^ permalink raw reply [flat|nested] 7+ messages in thread
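[Editorial aside: the stacking David describes can be modeled naively. Each dependent fetch costs at least one round trip that cannot begin until the previous one completes; the chain length below is an illustrative guess, not a measured value.]

```python
def page_load_ms(rtt_ms: float, sequential_round_trips: int,
                 server_time_ms: float = 0) -> float:
    """Naive model of latency stacking on a page load.

    Each dependent fetch (DNS lookup, TCP/TLS setup, the HTML, then
    the CSS/JS it references) costs at least one round trip that
    cannot start until the previous one finishes, so the RTTs sum
    across the whole chain.
    """
    return rtt_ms * sequential_round_trips + server_time_ms

# With 100 ms RTT and an (illustrative) chain of 10 dependent round
# trips, the user waits a full second even with zero server-side work:
print(page_load_ms(100, 10))  # 1000
```

This is why a network with modest RTT can still feel slow: the per-hop latency multiplies by the depth of the dependency chain, which the page author, not the network operator, controls.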
* Re: [Starlink] It’s the Latency, FCC 2024-03-16 23:05 ` David Lang @ 2024-03-17 15:47 ` Colin_Higbie 2024-03-17 16:17 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown 0 siblings, 1 reply; 7+ messages in thread From: Colin_Higbie @ 2024-03-17 15:47 UTC (permalink / raw) To: David Lang, Dave Taht via Starlink David, Just on that one point that you "don't think developers think about latency at all," what developers (en masse, and as managed by their employers) care about is the user experience. If they don't think latency is an important part of the UX, then indeed they won't think about it. However, if latency is vital to the UX, such as in gaming or voice and video calling, it will be a focus. Standard QA will include use cases that they believe reflect the majority of their users. We have done testing with artificially high latencies to simulate geosynchronous satellite users, back when they represented a notable portion of our userbase. They no longer do (thanks to services like Starlink and recent proliferation of FTTH and even continued spreading of slower cable and DSL availability into more rural areas), so we no longer include those high latencies in our testing. This does indeed mean that our services will probably become less tolerant of higher latencies (and if we still have any geosynchronous satellite customers, they may resent this possible degradation in service). Some could call this lazy on our part, but it's just doing what's cost effective for most of our users. I'm estimating, but I think probably about 3 sigma of our users have typical latency (unloaded) of under 120ms. You or others on this list probably know better than I what fraction of our users will suffer severe enough bufferbloat to push a perceptible % of their transactions beyond 200ms. 
Fortunately, in our case, even high latency shouldn't be too terrible, but as you rightly point out, if there are many iterations, a 1s minimum latency could yield a several-second lag, which would be poor UX for almost any application. Since we're no longer testing for that, on the premise that 1s minimum latency is no longer a common real-world scenario, it's possible those painful lags could creep into our system without our knowledge.

This is rational, and it is what we should expect and want application and solution developers to do. We would not want developers to spend time, and thereby increase costs, focusing on areas that are not particularly important to their users and customers.

Cheers,
Colin

-----Original Message-----
From: David Lang <david@lang.hm>
Sent: Saturday, March 16, 2024 7:06 PM
To: Colin_Higbie <CHigbie1@Higbie.name>
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: RE: [Starlink] It’s the Latency, FCC

On Sat, 16 Mar 2024, Colin_Higbie wrote:

> At the same time, I do think if you give people tools where latency is
> rarely an issue (say a 10x improvement, so perception of 1/10 the
> latency), developers will be less efficient UNTIL that inefficiency
> begins to yield poor UX. For example, if I know I can rely on latency
> being 10ms and users don't care until total lag exceeds 500ms, I might
> design something that uses a lot of back-and-forth between client and
> server. As long as there are fewer than 50 iterations (500 / 10),
> users will be happy. But if I need to do 100 iterations to get the
> result, then I'll do some bundling of the operations to keep the total
> observable lag at or below that 500ms.

I don't think developers think about latency at all (as a general rule)

^ permalink raw reply [flat|nested] 7+ messages in thread
* [Starlink] Sidebar to It’s the Latency, FCC: Measure it?
  2024-03-17 15:47 ` [Starlink] It’s " Colin_Higbie
@ 2024-03-17 16:17 ` Dave Collier-Brown
  0 siblings, 0 replies; 7+ messages in thread
From: Dave Collier-Brown @ 2024-03-17 16:17 UTC (permalink / raw)
To: starlink

On 2024-03-17 11:47, Colin_Higbie via Starlink wrote:

> Fortunately, in our case, even high latency shouldn't be too terrible, but as
> you rightly point out, if there are many iterations, 1s minimum latency could
> yield a several second lag, which would be poor UX for almost any application.
> Since we're no longer testing for that on the premise that 1s minimum latency
> is no longer a common real-world scenario, it's possible those painful lags
> could creep into our system without our knowledge.

Does that suggest that you should have an easy way to see if you're unexpectedly delivering a slow service?

A tool that reports your RTT to customers, and an alert on it being high for a significant period, might be something all ISPs want, even ones like mine, who just want to be able to tell a customer "you don't have a network problem" (;-))

And the FCC might find the data illuminating.

--dave

--
David Collier-Brown,                 | Always do right. This will gratify
System Programmer and Author         | some people and astonish the rest
dave.collier-brown@indexexchange.com | -- Mark Twain

^ permalink raw reply [flat|nested] 7+ messages in thread
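The "alert on RTT being high for a significant period" idea above reduces to a small piece of monitoring logic. A minimal sketch, with the threshold, window size, and function names chosen arbitrarily for illustration (real monitoring would also need the RTT probes themselves and some spike tolerance):

```python
from collections import deque

def make_rtt_alerter(threshold_ms=200.0, window=10):
    """Return a function that is fed one RTT sample (in ms) at a time and
    returns True once the last `window` consecutive samples all exceed
    threshold_ms, i.e. latency has been high for a sustained period rather
    than a single spike."""
    samples = deque(maxlen=window)  # keeps only the most recent `window` samples
    def observe(rtt_ms):
        samples.append(rtt_ms)
        return len(samples) == window and all(s > threshold_ms for s in samples)
    return observe

# Hypothetical usage: a 3-sample window so the example stays short.
alert = make_rtt_alerter(threshold_ms=200.0, window=3)
readings = [250, 300, 150, 400, 500, 600]
results = [alert(r) for r in readings]
print(results)  # → [False, False, False, False, False, True]
```

The single 150ms sample keeps the streak broken until three high readings in a row arrive, which is exactly the "high for a significant period, not just a blip" behavior an ISP-facing alert would want.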
end of thread, other threads:[~2024-03-19 16:06 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <mailman.2503.1710703654.1074.starlink@lists.bufferbloat.net>
2024-03-18 16:41 ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Colin_Higbie
2024-03-18 16:49   ` Dave Taht
2024-03-18 19:32   ` David Lang
2024-03-18 19:52     ` Sebastian Moeller
2024-03-18 20:00       ` David Lang
2024-03-19 16:06         ` David Lang
[not found] <mailman.11.1710518402.17089.starlink@lists.bufferbloat.net>
2024-03-15 18:32 ` [Starlink] It’s the Latency, FCC Colin Higbie
2024-03-15 18:41   ` Colin_Higbie
2024-03-15 19:53     ` Spencer Sevilla
2024-03-15 23:07       ` David Lang
2024-03-16 18:45         ` [Starlink] Itʼs " Colin_Higbie
2024-03-16 23:05           ` David Lang
2024-03-17 15:47             ` [Starlink] It’s " Colin_Higbie
2024-03-17 16:17               ` [Starlink] Sidebar to It’s the Latency, FCC: Measure it? Dave Collier-Brown
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox