From mboxrd@z Thu Jan 1 00:00:00 1970
Received: by lists.bufferbloat.net (Postfix) with ESMTPS; Sat, 16 Mar 2024 15:10:08 -0400 (EDT)
Content-Type: text/plain; charset=utf-8
From: Sebastian Moeller
Date: Sat, 16 Mar 2024 20:09:45 +0100
Cc: David Lang, Dave Taht via Starlink
To: Colin_Higbie
Subject: Re: [Starlink] Itʼs the Latency, FCC
List-Id: "Starlink has bufferbloat. Bad."
X-List-Received-Date: Sat, 16 Mar 2024 19:10:08 -0000

> On 16. Mar 2024, at 19:45, Colin_Higbie via Starlink wrote:
>
> Beautifully said, David Lang. I completely agree.
>
> At the same time, I do think if you give people tools where latency is rarely an issue (say a 10x improvement, so perception of 1/10 the latency), developers will be less efficient UNTIL that inefficiency begins to yield poor UX. For example, if I know I can rely on latency being 10ms and users don't care until total lag exceeds 500ms, I might design something that uses a lot of back-and-forth between client and server. As long as there are fewer than 50 iterations (500 / 10), users will be happy.
> But if I need to do 100 iterations to get the result, then I'll do some bundling of the operations to keep the total observable lag at or below that 500ms.
>
> I remember programming computer games in the 1980s, when the typical amount of RAM users had increased. Before that, I had to contort my code to get it to run in 32kB. After the increase, I could stretch out and use 48kB and stop wasting time shoehorning my code or loading in segments from floppy disk into the limited RAM. To your point: yes, this made things faster for me as a developer, just as the latency improvements ease the burden on the client-server application developer who needs to ensure a maximum lag below 500ms.
>
> In terms of user experience (UX), I think of there as being "good enough" plateaus based on different use cases. For example, when web browsing, even 1,000ms of latency is barely noticeable, so any web-browsing application that comes in under 1,000ms will be "good enough." For VoIP, the "good enough" figure is probably more like 100ms. For video conferencing, maybe it's 80ms (the ability to see the other person's facial expression likely raises the expectation of quick reactions and reduces the tolerance for lag). For some forms of cloud gaming, the "good enough" figure may be as low as 5ms.
>
> That's not to say that 20ms isn't better for VoIP than 100ms, or that 500ms isn't better than 1,000ms for web browsing, just that the value of each further incremental reduction in latency drops significantly once you reach that good-enough point. However, those further improvements may open entirely new applications, such as enabling VoIP where before the link was only "good enough" for web browsing (think geosynchronous satellites).
>
> In other words, more important than just chasing ever lower latency, it's important to provide SUFFICIENTLY LOW latency for users to perform their intended applications.
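[Ed.: the round-trip arithmetic in the quoted paragraphs can be sketched in a few lines. The 10 ms round trip, 500 ms tolerance, and 100-iteration figures are the example numbers from the post; the function name is made up for this sketch.]

```python
def max_round_trips(per_rtt_ms: float, tolerance_ms: float) -> int:
    """How many client-server round trips fit inside the user's lag tolerance."""
    return int(tolerance_ms // per_rtt_ms)

# A dependable 10 ms round trip and a 500 ms tolerance allow up to
# 50 back-and-forth iterations before users notice the lag.
assert max_round_trips(10, 500) == 50

# If the task needs 100 iterations, bundle: ceil(100 / 50) = 2 operations
# per round trip keeps observable lag at or below the 500 ms tolerance.
ops_per_round_trip = -(-100 // max_round_trips(10, 500))  # ceiling division
assert ops_per_round_trip == 2
```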
> Getting even lower is still great for opening things up to new applications we never considered before, just as faster CPUs, more RAM, better graphics, etc. have always done since the first computer. But if we're talking about measuring what people need today, this can be done fairly easily based on intended applications.
>
> Bandwidth scales a little differently. There's still a "good enough" level, driven by a web-page load time of about 5s (as web pages become ever more complex and dynamic, this means that bandwidth needs increase), 1Mbps for VoIP, 7Mbps UL/DL for video conferencing, 20Mbps DL for 4K streaming, etc. In addition, there's also a linear scaling with the number of concurrent users. If 1 user needs 15Mbps to stream 4K, 3 users in the household will need about 45Mbps to all stream 4K at the same time, a very real-world scenario at 7pm in a home. This differs from the latency hit of multiple users. I don't know exactly how latency is affected by user count, but I know that if it's 20ms with 1 user, it's NOT 40ms with 2 users, 60ms with 3, etc. With the bufferbloat improvements created and put forward by members of this group, I think latency doesn't increase by much with multiple concurrent streams.
>
> So, all taken together, there can be fairly straightforward descriptions of latency and bandwidth based on expected usage. These are not mysterious attributes. It can be easily calculated per user based on expected use cases.

Well, for most applications there is an absolute lower capacity limit below which they do not work, and for most there is also an upper limit beyond which any additional capacity will not result in noticeable improvements. Latency tends to work differently: instead of a hard cliff, there tends to be a slowly increasing degradation...
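[Ed.: the scaling contrast described above — bandwidth needs add up per concurrent stream, while latency on a well-managed link should stay roughly flat — can be sketched as follows. The 15 Mbps per 4K stream figure is the post's example; the function name is illustrative.]

```python
def household_mbps(per_stream_mbps: float, concurrent_streams: int) -> float:
    """Bandwidth need scales roughly linearly with concurrent streams."""
    return per_stream_mbps * concurrent_streams

# Colin's example: three simultaneous 4K streams at 15 Mbps each,
# the 7pm-in-a-home scenario.
assert household_mbps(15, 3) == 45

# Latency, by contrast, does not multiply with user count on a debloated
# link: 20 ms with 1 user is NOT 60 ms with 3. There is no analogous
# latency_with_users(20, 3) == 60 relationship.
```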
And latency over the internet is never guaranteed, just as network paths outside a single AS rarely come with guarantees...

Now, for different applications there are different amounts of delay that users find acceptable: for reaction-time-gated games this will be lower, for correspondence chess with one move per day it will be higher. Conceptually this can be thought of as a latency budget that one can spend on different components (access latency, transport latency, latency-variation buffers...), and latency in the immediate access network eats into this budget irrevocably... which, e.g., restricts the "cone" of the world that can be reached and communicated with within the latency budget. But due to the lack of a hard cliff, it is always easy to argue that any latency number is good enough, and hard to claim that any random latency number is too large.

Regards
	Sebastian

>
> Cheers,
> Colin
>
> -----Original Message-----
> From: David Lang
> Sent: Friday, March 15, 2024 7:08 PM
> To: Spencer Sevilla
> Cc: Colin_Higbie; Dave Taht via Starlink
> Subject: Re: [Starlink] Itʼs the Latency, FCC
>
> one person's 'wasteful resolution' is another person's 'large enhancement'
>
> going from 1080p to 4K video is not being wasteful, it's opting to use the bandwidth in a different way.
>
> saying that it's wasteful for someone to choose to do something is saying that you know better what their priorities should be.
>
> I agree that increasing resources allow programmers to be lazier and write apps that are bigger, but they are also writing them in less time.
>
> What right do you have to say that the programmer's time is less important than the RAM/bandwidth used?
>
> I agree that it would be nice to have more people write better code, but everything, including this, has trade-offs.
>
> David Lang
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
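[Ed.: Sebastian's latency-budget framing can be sketched as a back-of-the-envelope calculation. The VoIP tolerance and component figures below are illustrative assumptions, not from the thread, and the ~200 km per millisecond figure for light in fiber is a common rule of thumb.]

```python
FIBER_KM_PER_MS_ONE_WAY = 200  # ~2/3 of c in glass; rough rule of thumb

def remaining_budget_ms(tolerance_ms: float, access_ms: float,
                        jitter_buffer_ms: float) -> float:
    """What is left of the application's latency budget after fixed costs
    in the access network and de-jitter buffering."""
    return tolerance_ms - access_ms - jitter_buffer_ms

def reach_km(rtt_budget_ms: float) -> float:
    """Rough radius of the 'cone' of the world reachable within an RTT
    budget, ignoring routing detours and queuing delay."""
    return (rtt_budget_ms / 2) * FIBER_KM_PER_MS_ONE_WAY

# Hypothetical VoIP example: a 100 ms tolerance, with 20 ms access
# latency and a 20 ms de-jitter buffer, leaves 60 ms RTT for transport.
budget = remaining_budget_ms(100, 20, 20)
assert budget == 60

# That 60 ms RTT budget bounds the reachable radius at roughly 6000 km,
# which is why fixed access latency shrinks the usable "cone".
assert reach_km(budget) == 6000
```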