<div dir="ltr"><div>Recent video codecs are very efficient denoisers, removing film grain from old movies, so the decoded output no longer looks like the original (to the disappointment of some viewers).<br></div><div><br></div><div>There are ideas for addressing this, such as characterizing the film grain with a parametric model. The model and its parameters, rather than the noisy pixels themselves, are then transmitted, and the decoder regenerates grain that resembles the original (without being exactly the original).<br></div><div><br></div><div>That is, in a way, a kind of semantic communication: instead of the actual pixels at bit resolution, you transmit the meaningful information. I see it a bit like using vector graphics instead of raster images
(SVG vs. PNG), so that images of arbitrary resolution can be generated from the meaningful information. This is one way of working around the Shannon capacity limit.</div><div></div><div><br></div>
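A minimal sketch of that idea in Python, assuming a made-up two-parameter grain model (just a PRNG seed and a strength; real schemes such as AV1's film grain synthesis use a richer autoregressive model). The point is only that the decoder can deterministically regenerate plausible grain from a handful of transmitted parameters:

```python
import math
import random

def synthesize_grain(decoded, seed=1234, strength=6.0):
    # Regenerate grain at the decoder from a few transmitted parameters
    # (here: a seed and a strength). The luma-dependent scaling below is
    # a hypothetical choice, mimicking film stock where grain is
    # strongest in the mid-tones.
    rng = random.Random(seed)  # deterministic: same parameters -> same grain
    grainy = []
    for row in decoded:
        new_row = []
        for p in row:
            scale = math.sin(math.pi * p / 255.0)
            g = p + strength * scale * rng.gauss(0.0, 1.0)
            new_row.append(max(0, min(255, round(g))))
        grainy.append(new_row)
    return grainy

decoded = [[128] * 8 for _ in range(8)]   # flat mid-grey "decoded" frame
regrained = synthesize_grain(decoded)
```

The output is not bit-exact to any original, but it is reproducible: the same seed and strength always yield the same grain field.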
----------------------------------------------------------------------<br>
Date: Fri, 3 May 2024 13:48:37 +1200<br>
From: Ulrich Speidel <<a href="mailto:u.speidel@auckland.ac.nz" target="_blank">u.speidel@auckland.ac.nz</a>><br>
To: <a href="mailto:starlink@lists.bufferbloat.net" target="_blank">starlink@lists.bufferbloat.net</a><br>
Subject: Re: [Starlink] It’s the Latency, FCC<br>
Message-ID: <<a href="mailto:77d8f31e-860b-478e-8f93-30cb6e0730ac@auckland.ac.nz" target="_blank">77d8f31e-860b-478e-8f93-30cb6e0730ac@auckland.ac.nz</a>><br>
Content-Type: text/plain; charset="utf-8"; Format="flowed"<br>
<br>
There's also the not-so-minor issue of video compression, which <br>
generally has the effect of removing largely imperceptible detail from <br>
your video frames so your high-res video will fit through the pipeline <br>
you've got to squeeze it through.<br>
<br>
But this is a bit of a snag in its own right, as I found out about two <br>
decades ago when I was still amazed at the fact that you could use the <br>
parsing algorithms underpinning universal data compression to get an <br>
estimate of how much information a digital object (say, a CCD image <br>
frame) contained. So I set about with two Japanese colleagues to look at <br>
the reference image sequences that pretty much everyone used to <br>
benchmark their video compressors against. One of the surprising finds <br>
was that the odd-numbered frames in the sequences had a distinctly <br>
different amount of information in them than the even-numbered ones, yet <br>
you couldn't tell from looking at the frames.<br>
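That kind of odd/even asymmetry is easy to demonstrate with any off-the-shelf compressor standing in for the parsing-based estimators. The sketch below uses Python's zlib and entirely synthetic data (the frame content and the noise model are made up for illustration):

```python
import random
import zlib

def info_estimate(pixels):
    # Crude information estimate: size of the deflate-compressed frame.
    # A stand-in for the parsing-based estimators; the absolute numbers
    # are meaningless, only the comparison matters.
    return len(zlib.compress(bytes(pixels), 9))

rng = random.Random(42)
base = [(x * 7) % 256 for x in range(64 * 64)]   # smooth, highly compressible "frame"

frames = []
for i in range(10):
    if i % 2 == 1:
        # every second frame gets a little extra noise, mimicking the camera flaw
        frames.append([(p + rng.randint(0, 3)) % 256 for p in base])
    else:
        frames.append(list(base))

even_avg = sum(info_estimate(f) for f in frames[0::2]) / 5
odd_avg = sum(info_estimate(f) for f in frames[1::2]) / 5
```

The noisy odd-numbered frames compress noticeably worse, so their information estimate is higher, even though the noise is far too small to see.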
<br>
We more or less came to the conclusion that the camera that had been <br>
used to record the world's most commonly used reference video sequences <br>
had added a small amount of random noise to every second image: the <br>
effect (and the estimated information content) dropped noticeably when we <br>
progressively dropped the least significant bits of the pixels. We <br>
published this:<br>
<br>
KAWAHARADA, K., OHZEKI, K., SPEIDEL, U. 'Information and Entropy <br>
Measurements on Video Sequences', 5th International Conference on <br>
Information, Communications and Signal Processing (ICICS2005), Bangkok, <br>
6-9 December 2005, p.1150-1154, DOI 10.1109/ICICS.2005.1689234<br>
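The least-significant-bit experiment can be sketched along the same lines, again with deflate-compressed size as a crude stand-in estimator and synthetic noisy data (signal and noise model are invented for illustration):

```python
import random
import zlib

def info_estimate(pixels):
    # deflate-compressed size as a crude information estimate
    return len(zlib.compress(bytes(pixels), 9))

def drop_lsbs(pixels, n):
    # zero out the n least significant bits of every pixel
    mask = 0xFF & ~((1 << n) - 1)
    return [p & mask for p in pixels]

rng = random.Random(7)
# structured signal plus ~2 bits of additive noise in the low-order bits
noisy = [min(255, (x * 7) % 250 + rng.randint(0, 3)) for x in range(64 * 64)]

sizes = [info_estimate(drop_lsbs(noisy, n)) for n in range(5)]
```

As the low-order bits carrying the noise are masked away, the estimated information content falls, which is consistent with the noise living almost entirely in the least significant bits.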
<br>
Did the world take notice? Of course not. But it still amuses me no end <br>
that some people spent entire careers trying to optimise the compression <br>
of these image sequences - and all that because of an obscure hardware <br>
flaw that the cameras their algorithms eventually ran on may not even <br>
have suffered from.<br>
<br>
Which brings me back to the question of how important bandwidth is. The <br>
answer is: probably more important in the future. We're currently <br>
relying mostly on CDNs for video delivery, but I can't help but notice <br>
the progress that's being made by AI-based video generation. Four or <br>
five years ago, Gen-AI could barely compose a credible image. A couple <br>
of years ago, it could do video sequences of a few seconds. Now we're up <br>
to videos in the minutes.<br>
<br>
If that development is sustained, you'll be able to tell your personal <br>
electronic assistant / spy to dream up a personalised movie, say an <br>
operatic sci-fi Western with car chases on the Titanic floating in <br>
space, and it'll have it generated in no time starring the actors you <br>
like. ETA: Around 2030 maybe?<br>
<br>
But these things will be (a) data-heavy and (b) ill-suited to <br>
CDN delivery, because you may be the only one to ever see a particular <br>
movie. So you'll either need to move the movie generation to the edge, <br>
or you need to build bigger pipes across the world. I'm not sure how <br>
feasible either option is.<br>
</div>