[Starlink] It's the Latency, FCC
David Fernández
davidfdzp at gmail.com
Fri May 3 05:09:25 EDT 2024
Recent video codecs are very efficient denoisers: they remove film grain from
old movies, so the decoded result no longer looks like the original (which
disappoints some people).
There are ideas for solving this, such as characterizing the film grain with
a model with some parameters. The model parameters, rather than the actual
noisy pixels, are then transmitted, and the grain is regenerated at the
decoder, with processing that makes the result resemble the original
(without being exactly the original).
That is, in a way, a kind of semantic communication: instead of the actual
pixels at bit resolution, you transmit only the meaningful information. I see
it a bit like using vector graphics instead of pixels for images (SVG vs.
PNG), so that images of arbitrary resolution can be generated from the
meaningful info. This is a way of sidestepping Shannon capacity limits: the
source you have to code becomes the compact description, not the raw pixels.
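To make the SVG-vs-PNG analogy concrete, a toy comparison (the byte counts are illustrative, not benchmarks): four numbers and a colour describe the circle at every resolution, while the raster grows with resolution:

    # The "meaningful information": centre, radius, colour.
    cx, cy, r, colour = 50, 50, 40, "#3366cc"
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
           f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="{colour}"/></svg>')
    print(len(svg.encode()), "bytes of SVG, sharp at any resolution")

    # The same circle as pixels: storage scales with resolution, meaning doesn't.
    for n in (100, 1000):
        s = n / 100
        inside = sum((x - cx * s) ** 2 + (y - cy * s) ** 2 <= (r * s) ** 2
                     for y in range(n) for x in range(n))
        print(f"{n}x{n} raster: {n * n // 8} bytes at 1 bit/pixel"
              f" ({inside} pixels set)")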
----------------------------------------------------------------------
Date: Fri, 3 May 2024 13:48:37 +1200
From: Ulrich Speidel <u.speidel at auckland.ac.nz>
To: starlink at lists.bufferbloat.net
Subject: Re: [Starlink] It’s the Latency, FCC
Message-ID: <77d8f31e-860b-478e-8f93-30cb6e0730ac at auckland.ac.nz>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
There's also the not-so-minor issue of video compression, which
generally has the effect of removing largely imperceptible detail from
your video frames so your high-res video will fit through the pipeline
you've got to squeeze it through.
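In sketch form, "removing imperceptible detail" is quantization in a transform domain. A hedged JPEG/MPEG-style toy (the uniform quantizer step is arbitrary; real codecs use tuned quantization matrices and rate control):

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.default_rng(1).normal(128, 20, (8, 8))  # one 8x8 block

    coeffs = dctn(block, norm="ortho")       # pixels -> frequency coefficients
    q = 16.0                                 # illustrative quantizer step
    quant = np.round(coeffs / q)             # small high-frequency detail -> 0
    recon = idctn(quant * q, norm="ortho")   # back to pixels

    print(int(np.count_nonzero(quant)), "of 64 coefficients survive;",
          f"max pixel error {np.abs(block - recon).max():.1f}")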
But this is a bit of a snag in its own right, as I found out about two
decades ago when I was still amazed at the fact that you could use the
parsing algorithms underpinning universal data compression to get an
estimate of how much information a digital object (say, a CCD image
frame) contained. So I set about with two Japanese colleagues to look at
the reference image sequences that pretty much everyone used to
benchmark their video compressors against. One of the surprising finds
was that the odd-numbered frames in the sequences had a distinctly
different amount of information in them than the even-numbered ones, yet
you couldn't tell from looking at the frames.
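A toy version of that kind of measurement, with zlib's LZ77 parser standing in for the estimators we actually used, and with made-up frame contents and noise chosen to reproduce the odd/even effect:

    import zlib
    import numpy as np

    def info_estimate(frame):
        # Compressed size in bytes as a crude information-content estimate.
        return len(zlib.compress(frame.tobytes(), 9))

    rng = np.random.default_rng(0)
    x = np.arange(64, dtype=np.uint8)
    base = np.add.outer(x, x) // 2          # smooth, highly compressible frame

    for k in range(6):
        f = base.copy()
        if k % 2 == 1:                      # flip random LSBs in every 2nd frame
            f = f ^ rng.integers(0, 2, base.shape, dtype=np.uint8)
        print("frame", k, "estimate:", info_estimate(f), "bytes")
    # Odd frames measure distinctly "richer", though they look identical.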
We more or less came to the conclusion that the camera that had been
used to record the world's most commonly used reference video sequences
had added a small amount of random noise to every second image: the effect
(and the estimated information content) dropped noticeably when we
progressively dropped the least significant bits of the pixels. We
published this:
KAWAHARADA, K., OHZEKI, K., SPEIDEL, U., 'Information and Entropy
Measurements on Video Sequences', 5th International Conference on
Information, Communications and Signal Processing (ICICS2005), Bangkok,
6-9 December 2005, pp. 1150-1154, DOI 10.1109/ICICS.2005.1689234.
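And a toy reconstruction of that LSB check, using the same made-up frames as the sketch above: mask the low bits and watch the odd/even gap collapse.

    import zlib
    import numpy as np

    def info_estimate(frame):
        return len(zlib.compress(frame.tobytes(), 9))

    rng = np.random.default_rng(0)
    x = np.arange(64, dtype=np.uint8)
    clean = np.add.outer(x, x) // 2                    # an "even" frame
    noisy = clean ^ rng.integers(0, 2, clean.shape,
                                 dtype=np.uint8)       # an "odd" frame

    for dropped in range(4):
        mask = np.uint8((0xFF << dropped) & 0xFF)
        gap = info_estimate(noisy & mask) - info_estimate(clean & mask)
        print(f"dropping {dropped} LSBs: odd/even gap {gap:+d} bytes")
    # The gap vanishes once the bit carrying the camera noise is masked out.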
Did the world take notice? Of course not. But it still amuses me no end
that some people spent entire careers trying to optimise the compression
of these image sequences - and all because of an obscure hardware flaw
that the cameras their algorithms eventually ran on may never have
suffered from.
Which brings me back to the question of how important bandwidth is. The
answer is: probably more important in the future. We're currently
relying mostly on CDNs for video delivery, but I can't help but notice
the progress that's being made by AI-based video generation. Four or
five years ago, Gen-AI could barely compose a credible image. A couple
of years ago, it could do video sequences of a few seconds. Now we're up
to videos several minutes long.
If that development is sustained, you'll be able to tell your personal
electronic assistant / spy to dream up a personalised movie, say an
operatic sci-fi Western with car chases on the Titanic floating in
space, and it'll have it generated in no time starring the actors you
like. ETA: Around 2030 maybe?
But these things will be (a) data-heavy and (b) not well suited for CDN
delivery, because you may be the only one to ever see a particular movie.
So you'll either need to move the movie generation to the edge, or you
need to build bigger pipes across the world. I'm not sure how feasible
either option is.
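Some back-of-the-envelope arithmetic for that trade-off (every number here is an assumption, not a measurement):

    # What unique, uncacheable AI-generated movies would cost to deliver.
    BITRATE_MBPS = 25            # assumed 4K streaming bitrate
    MOVIE_HOURS = 2
    VIEWERS = 1_000_000          # concurrent viewers, each on a unique movie

    movie_gb = BITRATE_MBPS * MOVIE_HOURS * 3600 / 8 / 1000
    aggregate_tbps = BITRATE_MBPS * VIEWERS / 1_000_000

    print(f"one movie: ~{movie_gb:.1f} GB")
    print(f"{VIEWERS:,} concurrent unique streams: ~{aggregate_tbps:.0f} Tbps")
    # Caches hit 0% on one-viewer content, so that capacity must exist
    # end-to-end: hence bigger pipes, or generation at the edge.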