[NNagain] separable processes for live in-person and live zoom-like faces
Sebastian Moeller
moeller0 at gmx.de
Fri Nov 17 13:57:04 EST 2023
> On Nov 17, 2023, at 18:27, rjmcmahon via Nnagain <nnagain at lists.bufferbloat.net> wrote:
>
> The human brain is way too complicated for a simplified analysis like "this is the latency required".
[SM] On the sensory side this is not all that hard: we can (and routinely do) measure how long it takes after stimulus onset until neurons significantly change their firing rate, a value often described as "neuronal latency" or "response latency". While single-unit electrophysiological recordings in the human brain are rare, they are not unheard of; most neuronal data, however, come from other species. Crucially, though, this depends on the definition of "latency" one uses, and I am not sure we are talking about the same latency here?
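[SM] (To illustrate one common operational definition, here is a toy sketch of my own, not any particular lab's pipeline: take a trial-averaged firing-rate trace and report the first post-stimulus bin where the rate exceeds the pre-stimulus baseline by some criterion.)

    import numpy as np

    def response_latency_ms(rate_hz, bin_ms, stim_bin, n_sd=3.0):
        """First bin at/after stimulus onset where the firing rate
        exceeds the pre-stimulus mean by n_sd standard deviations;
        returns None if the criterion is never reached."""
        baseline = np.asarray(rate_hz[:stim_bin], dtype=float)
        threshold = baseline.mean() + n_sd * baseline.std()
        for i, rate in enumerate(rate_hz[stim_bin:]):
            if rate > threshold:
                return i * bin_ms
        return None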
> It's a vast prediction machine and much, much more.
[SM] Indeed ;) and all of this in a tightly interwoven network, where reductionism only carries one so far. Still, we have come a long way and gained educated glimpses into some of the functionality.
>
> I found at least three ways to understand the brain:
[SM] You are ahead of me then, I still struggle to understand the brain ;) (fine by me, some questions are big enough that one should expect them to stubbornly withstand attempts at elegant and helpful answers/theories; for me, "how does the brain work?" is one of those)
>
> 1) Read A Thousand Brains: A New Theory of Intelligence
> 2) Make friends with highly skilled psychologists; people who assist world-class athletes can be quite good
> 3) Have a daughter study neuroscience so she can answer my basic question from an expert position
[SM] All seem fine, even though 3) is a bit tricky to replicate.
Regards
Sebastian
> Bob
>> Sending again, as my server acted up on this URL, I think. Sorry for the dup...
>> ---------- Forwarded message ---------
>> From: Sebastian Moeller <moeller0 at gmx.de>
>> Date: Fri, Nov 17, 2023 at 3:45 AM
>> Subject: Re: [NNagain] separable processes for live in-person and live
>> zoom-like faces
>> To: Network Neutrality is back! Let's make the technical aspects heard
>> this time! <nnagain at lists.bufferbloat.net>
>> Cc: <joy.hirsch at yale.edu>, Dave Täht <dave.taht at gmail.com>
>> Hi Dave, dear list
>> here is the link to the paper's web page:
>> h++ps://direct.mit.edu/imag/article/doi/10.1162/imag_a_00027/117875/Separable-processes-for-live-in-person-and-live
>> from which it can be downloaded.
>> This fits right in my wheelhouse# ;) However, I am concerned that the
>> pupil diameter differs so much between the tested conditions, which
>> implies significant differences in the actual physical stimuli, making
>> the whole conclusion a bit shaky*)... Also, placing the true face at
>> twice the distance of the "zoom" screens, while understandable from an
>> experimentalist's perspective, was a sub-optimal decision**.
>> Not a bad study (rather the opposite), but, as so often, it poses even
>> more detailed questions than it answers. Regarding your point about
>> latency, this seems not well controlled at all, as all digital systems
>> will have some latency, and they do not report anything substantial:
>> "In the Virtual Face condition, each dyad watched their partner’s
>> faces projected in real time on separate 24-inch 16 × 9 computer
>> monitors placed in front of the glass"
>> I note that technically "real-time" only means that the inherent
>> delay is smaller than whatever delay the relevant control loop can
>> tolerate; so, depending on the problem at hand, "once per day" can be
>> fully real-time, while for other problems "once per 1 µs" might be
>> too slow... But to give a lower-bound delay number: they likely used
>> a webcam (the paper, I am afraid, does not say specifically), at best
>> running at 60 Hz (or even 30 Hz) with a rolling shutter, so we have
>> a) potential image distortion from the rolling shutter (probably
>> small, as the faces were close to at rest) and b) a "lens to RAM"
>> delay of 1000/60 = 16.67 milliseconds. Even assuming the frame is
>> pushed to the screen ASAP, we will on average incur at least 0.5
>> refresh times on top, for a total delay of >= 25 ms. With modern
>> "digital" screens that may do fancy image processing (if only to
>> calculate "over-drive" voltages to allow for faster gray-to-gray
>> changes), the camera-to-eye delay might be considerably larger
>> (adding a few frame times). This is a field where older analog
>> systems could operate with much lower delay...
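>> A quick back-of-the-envelope sketch of that lower bound (my own
>> illustration; the 60 Hz camera and display rates are assumptions, as
>> the paper does not report the hardware):
>>
>>     # lower-bound camera-to-eye delay: assumed 60 Hz webcam feeding
>>     # an assumed 60 Hz monitor, ignoring any extra image processing
>>     camera_hz = 60.0
>>     display_hz = 60.0
>>     capture_ms = 1000.0 / camera_hz           # "lens to RAM": one frame time
>>     scanout_ms = 0.5 * (1000.0 / display_hz)  # average wait for next refresh
>>     extra_frames = 0                          # scaler/over-drive passes, if any
>>     total_ms = capture_ms + scanout_ms + extra_frames * (1000.0 / display_hz)
>>     print(f"camera-to-eye >= {total_ms:.1f} ms")  # 25.0 ms best case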
>> I would assume that, compared to the neuronal latencies of actually
>> extracting information from the faces (it takes ~74-100 ms to drive
>> neurons in the more anterior face patches in macaques, and human
>> brains are noticeably larger), this delay will be smallish, but it
>> will certainly be encountered only for the virtual ("zoom-like")
>> faces and not for the in-person faces.
>> Regards
>> Sebastian
>> P.S.: In spite of my arguments I like the study; it is much easier to
>> pose challenges to a study than to find robust and reliable solutions
>> to the same challenges ;)
>> #) Or it did, as I am not directly working on the face-processing
>> system any more
>> *) Pupil diameter is controlled by multiple factors, ranging from its
>> "boring" physiologic function as the adaptive aperture the visual
>> system uses to limit the amount of light hitting the retina, to some
>> effects of cognitive processes or states of the sympathetic nervous
>> system, see e.g. h++ps://www.ncbi.nlm.nih.gov/pmc/articles/PMC6634360/
>> The paper, IMHO, overplays the pupil-diameter responses by not
>> acknowledging that these might result from something as boring as the
>> true faces and zoom faces not being sufficiently luminosity-matched.
>> **) Correcting the size of the projected image to match in degrees of
>> visual angle only gets you so far, as we do have some sense of
>> distance: the same visual angle at 70 cm corresponds to a smaller
>> head/face than at 140 cm... this is nit-picky, but also important. I
>> also note that 70 cm is at the edge of a typical reach distance,
>> while 1.4 m is clearly outside it, yet we treat peri-personal space
>> within effector reach differently from space beyond that.
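>> To make the size/distance geometry concrete, a minimal sketch (my own
>> illustration; the visual angle value is an assumption, not from the
>> paper): the physical size subtending a given visual angle grows
>> linearly with distance, so matching visual angle at double the
>> distance implies a face of twice the physical size.
>>
>>     import math
>>
>>     def size_for_angle(angle_deg, distance_cm):
>>         # physical extent that subtends angle_deg at distance_cm
>>         return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))
>>
>>     angle = 10.0  # assumed face height in degrees of visual angle
>>     print(size_for_angle(angle, 70))   # ~12.2 cm at 70 cm
>>     print(size_for_angle(angle, 140))  # ~24.5 cm at 140 cm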
>>> On Nov 16, 2023, at 22:57, Dave Taht via Nnagain <nnagain at lists.bufferbloat.net> wrote:
>>> Dear Joy:
>>> good paper that extends the idea of zoom fatigue into something closer
>>> to zoom-induced somnolence. Thanks for doing this kind of detailed
>>> measurement!
>>> I would be very interested in a study of brain activity while varying
>>> latency alone as the variable for videoconferencing. One comparison
>>> being, say, a live video feed between the participants (0 latency) vs
>>> zoom (at 500 ms), or one with jitter, or one at, say, 60 ms vs 500 ms.
>>> I tend to be much happier after a day using "galene.org", which tries
>>> for minimum latency, than after one on zoom, and I still find my
>>> ability to interact quickly across a dinner table hard to get back
>>> into after too many hours on it. Are y'all pursuing further studies?
>>> The link to the paper is mildly puzzling in that the token is
>>> required, and I am assuming that perhaps it is generating a
>>> watermarked version differently on every download?
>>>
>>> [...]
>>> :( My old R&D campus is up for sale: h++ps://tinyurl.com/yurtlab
>>> Dave Täht CSO, LibreQos