From: David Fernández
Date: Fri, 3 May 2024 11:09:25 +0200
To: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] It's the Latency, FCC

Recent video codecs are very efficient denoisers, removing film grain from old movies, so the decoded videos don't look like the originals (disappointing some people).

There are ideas for solving this, such as characterizing the film grain with a parametric model. The model and its parameters, rather than the actual noisy pixels, are transmitted, and the grain is regenerated at the decoder with processing that makes the result resemble the originals (without being exactly the originals).

That is, in a way, a kind of semantic communication: instead of the actual pixels at bit resolution, you transmit the meaningful information. I see it a bit like using vector graphics instead of pixels for images (SVG vs. PNG), so that images of arbitrary resolution can be generated from the meaningful info. This is a way of breaking Shannon capacity limits.
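This parameters-not-pixels idea is roughly what modern film grain synthesis tools do (AV1 specifies one, with an autoregressive grain model). A minimal sketch of the encode/decode split, not any real codec's algorithm: it uses a single made-up parameter (the residual noise strength) and a 1-D toy "frame":

```python
import random

def fit_grain_strength(noisy, denoised):
    """Encoder side: estimate one grain parameter, the standard
    deviation of the residual (noisy minus denoised). A real codec
    fits a richer model, e.g. an autoregressive filter plus
    per-intensity scaling."""
    residual = [n - d for n, d in zip(noisy, denoised)]
    mean = sum(residual) / len(residual)
    var = sum((r - mean) ** 2 for r in residual) / len(residual)
    return var ** 0.5

def synthesize_grain(denoised, strength, seed=0):
    """Decoder side: add freshly generated noise with the transmitted
    strength. The result resembles the original statistically without
    reproducing its exact pixels."""
    rng = random.Random(seed)
    return [d + rng.gauss(0.0, strength) for d in denoised]

# Toy 1-D "frame": a smooth ramp plus grain with std-dev 3.0.
rng = random.Random(42)
denoised = [100.0 + i * 0.1 for i in range(1000)]
noisy = [d + rng.gauss(0.0, 3.0) for d in denoised]

strength = fit_grain_strength(noisy, denoised)        # the only thing transmitted
reconstructed = synthesize_grain(denoised, strength)  # regenerated at the decoder
print(round(strength, 1))  # should land close to the true std-dev of 3.0
```

Only `strength` crosses the channel; the decoder never sees the original grain, which is exactly why the output is statistically similar rather than bit-identical.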
----------------------------------------------------------------------
Date: Fri, 3 May 2024 13:48:37 +1200
From: Ulrich Speidel <u.speidel@auckland.ac.nz>
To: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] It's the Latency, FCC

There's also the not-so-minor issue of video compression, which generally has the effect of removing largely imperceptible detail from your video frames so that your high-res video fits through the pipeline you've got to squeeze it through.

But this is a bit of a snag in its own right, as I found out about two decades ago, when I was still amazed that you could use the parsing algorithms underpinning universal data compression to estimate how much information a digital object (say, a CCD image frame) contains. So I set about, with two Japanese colleagues, looking at the reference image sequences that pretty much everyone used to benchmark their video compressors against. One of the surprising finds was that the odd-numbered frames in the sequences carried a distinctly different amount of information than the even-numbered ones, yet you couldn't tell by looking at the frames.

We more or less came to the conclusion that the camera used to record the world's most commonly used reference video sequences had added a small amount of random noise to every second image: the effect (and the estimated information content) dropped noticeably when we progressively dropped the least significant bits of the pixels. We published this:

KAWAHARADA, K., OHZEKI, K., SPEIDEL, U., 'Information and Entropy Measurements on Video Sequences', 5th International Conference on Information, Communications and Signal Processing (ICICS 2005), Bangkok, 6-9 December 2005, pp. 1150-1154, DOI 10.1109/ICICS.2005.1689234

Did the world take notice? Of course not.
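The experiment described above can be imitated with any off-the-shelf universal compressor (zlib here, standing in as a crude substitute for the parsing-based estimators the paper used; the toy "frames" and the noise level are made up): a frame with a little injected noise yields a noticeably larger compressed size, and the difference vanishes once the least significant bits are dropped.

```python
import random
import zlib

def est_info(pixels):
    """Rough information estimate: the compressed size in bytes.
    Universal compressors approach the entropy rate of the source,
    so compressed length serves as a (crude) information measure."""
    return len(zlib.compress(bytes(pixels), 9))

def drop_lsbs(pixels, k):
    """Zero the k least significant bits of each 8-bit pixel."""
    mask = 0xFF ^ ((1 << k) - 1)
    return [p & mask for p in pixels]

rng = random.Random(1)
# A highly regular "frame" (the even frame) and the same frame with
# 1-bit-scale random noise added (the odd frame).
even = [(i % 16) * 16 for i in range(4096)]
odd = [min(255, p + rng.randint(0, 1)) for p in even]

# The noisy frame looks the same to the eye but compresses far worse.
print(est_info(even), est_info(odd))

# Dropping LSBs removes the injected noise and the estimates converge.
print(est_info(drop_lsbs(even, 2)), est_info(drop_lsbs(odd, 2)))
```

The same mechanism explains the finding: camera noise in every second frame is invisible but inflates the measured information content, and stripping low-order bits makes the effect disappear.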
But it still amuses me no end that some people spent entire careers trying to optimise the compression of these image sequences, all because of an obscure hardware flaw that the cameras their algorithms eventually ran on may never even have suffered from.

Which brings me back to the question of how important bandwidth is. The answer: probably more important in the future. We currently rely mostly on CDNs for video delivery, but I can't help but notice the progress being made by AI-based video generation. Four or five years ago, Gen-AI could barely compose a credible image. A couple of years ago, it could do video sequences of a few seconds. Now we're up to videos minutes long.

If that development is sustained, you'll be able to tell your personal electronic assistant / spy to dream up a personalised movie, say an operatic sci-fi Western with car chases on the Titanic floating in space, and it'll have it generated in no time, starring the actors you like. ETA: around 2030, maybe?

But these things will be (a) data-heavy and (b) not well suited to CDN delivery, because you may be the only one ever to see a particular movie. So you'll either need to move movie generation to the edge, or build bigger pipes across the world. I'm not sure how feasible either option is.