[NNagain] Fwd: separable processes for live in-person and live zoom-like faces
rjmcmahon
rjmcmahon at rjmcmahon.com
Fri Nov 17 12:27:56 EST 2023
The human brain is way too complicated for simplified analyses like
"this is the latency required." It's a vast prediction machine, and
much, much more.
I found at least three good ways to understand the brain:
1) Read A Thousand Brains: A New Theory of Intelligence
2) Make friends with highly skilled psychologists; the people who
assist world-class athletes can be quite good
3) Have a daughter study neuroscience so she can answer my basic
questions from an expert position
Bob
> sending again as my server acted up on this url, I think. sorry for the
> dup...
>
> ---------- Forwarded message ---------
> From: Sebastian Moeller <moeller0 at gmx.de>
> Date: Fri, Nov 17, 2023 at 3:45 AM
> Subject: Re: [NNagain] separable processes for live in-person and live
> zoom-like faces
> To: Network Neutrality is back! Let's make the technical aspects heard
> this time! <nnagain at lists.bufferbloat.net>
> Cc: <joy.hirsch at yale.edu>, Dave Täht <dave.taht at gmail.com>
>
>
> Hi Dave, dear list
>
> here is the link to the paper's web page:
> https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00027/117875/Separable-processes-for-live-in-person-and-live
> from which it can be downloaded.
>
> This fits right in my wheelhouse# ;) However, I am concerned that the
> pupil diameter differs so much between the tested conditions, which
> implies significant differences in the actual physical stimuli, making
> the whole conclusion a bit shaky*)... Also, placing the true face at
> twice the distance of the "zoom" screens, while understandable from an
> experimentalist's perspective, was a sub-optimal decision**.
>
> Not a bad study (rather the opposite), but as so often, it poses even
> more detailed questions than it answers. Regarding your point about
> latency, this seems not well controlled at all, as all digital systems
> will have some latency, and they do not report anything substantial:
> "In the Virtual Face condition, each dyad watched their partner’s
> faces projected in real time on separate 24-inch 16 × 9 computer
> monitors placed in front of the glass"
>
> I note that, technically, "real-time" only means that the inherent
> delay is smaller than whatever delay the relevant control loop can
> tolerate, so depending on the problem at hand "once per day" can be
> fully real-time, while for other problems "once per 1 µsec" might be
> too slow... But to give a lower-bound delay number: they likely used a
> webcam (the paper, I am afraid, does not say specifically), at best
> running at 60Hz (or even 30Hz) with a rolling shutter, so we have a) a
> potential image distortion from the rolling shutter (probably small,
> as the faces were close to at rest) and b) a "lens to RAM" delay of
> 1000/60 = 16.67 milliseconds. Then, even assuming we can push the
> frame to the screen ASAP, we will incur at the very least 0.5 refresh
> times on average, for a total delay of >= 25 ms. With modern "digital"
> screens that might be doing fancy image processing (if only to
> calculate "over-drive" voltages to allow for faster gray-to-gray
> changes), the camera-to-eye delay might be considerably larger (adding
> a few frame times). This is a field where older analog systems could
> operate with much lower delay...
>
> I would assume that, compared to the neuronal latencies of actually
> extracting information from the faces (it takes ~74-100ms to drive
> neurons in the more anterior face patches in macaques, and human
> brains are noticeably larger), this delay will be smallish, but it
> will only be encountered for the virtual ("zoom") faces and not for
> the in-person faces.
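>
> To put these numbers together, here is a minimal back-of-the-envelope
> sketch (Python). The 60 Hz camera and display rates are the ones
> assumed above; the single extra display-processing frame is purely my
> illustrative assumption, not something the paper reports:
>
> CAMERA_HZ = 60.0
> DISPLAY_HZ = 60.0
> PROCESSING_FRAMES = 1  # assumed: scaler/over-drive pipeline in the monitor
>
> frame_ms = 1000.0 / CAMERA_HZ                # ~16.67 ms "lens to RAM" readout
> vsync_wait_ms = 0.5 * (1000.0 / DISPLAY_HZ)  # ~8.33 ms average wait for refresh
> display_proc_ms = PROCESSING_FRAMES * (1000.0 / DISPLAY_HZ)
>
> best_case_ms = frame_ms + vsync_wait_ms
> print(f"optimistic lower bound:  {best_case_ms:.1f} ms")                    # 25.0 ms
> print(f"with display processing: {best_case_ms + display_proc_ms:.1f} ms")  # 41.7 ms
>
> Either way, the display path stays well below the ~74-100 ms needed to
> drive the anterior face patches, but only the virtual-face condition
> pays this extra cost.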
>
>
> Regards
> Sebastian
>
> P.S.: In spite of my arguments I like the study; it is much easier to
> pose challenges to a study than to find robust and reliable solutions
> to the same challenges ;)
>
>
> #) Or it did, as I am not directly working on the face processing
> system any more
> *) Pupil diameter is controlled by multiple factors, ranging from its
> "boring" physiologic function as the adaptive aperture the visual
> system uses to limit the amount of light hitting the retina, to
> effects of cognitive processes or states of the sympathetic nervous
> system; see e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6634360/
> The paper, IMHO, overplays the pupil diameter responses by not
> acknowledging that these might result from something as boring as the
> true faces and zoom faces not being sufficiently luminosity-matched.
> **) Correcting the size of the projected image to match in degrees of
> visual angle only gets you so far, as we do have some sense of
> distance, so the same visual angle at 70cm corresponds to a smaller
> head/face than the same visual angle at 140cm... This is nit-picky,
> but also important. I also note that 70cm is at the edge of a typical
> reach distance, while 1.4m is clearly outside it, yet we treat
> peri-personal space within effector reach differently from space
> beyond that.
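>
> To make the geometry concrete, a minimal sketch (Python) of the
> physical size a face must have to subtend the same visual angle at the
> two viewing distances; the 10-degree angle is purely an illustrative
> assumption:
>
> import math
>
> THETA_DEG = 10.0  # assumed visual angle of the face, for illustration only
>
> for d_cm in (70.0, 140.0):
>     # physical extent that subtends THETA_DEG at distance d_cm
>     size_cm = 2.0 * d_cm * math.tan(math.radians(THETA_DEG / 2.0))
>     print(f"{THETA_DEG:.0f} deg at {d_cm:.0f} cm -> {size_cm:.1f} cm")
>
> This prints ~12.2 cm at 70cm and ~24.5 cm at 140cm: the same retinal
> angle implies a face of half the physical size at the nearer screen,
> which a distance-aware visual system can in principle pick up on.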
>
>
>> On Nov 16, 2023, at 22:57, Dave Taht via Nnagain
>> <nnagain at lists.bufferbloat.net> wrote:
>>
>> Dear Joy:
>>
>> good paper that extends the idea of zoom fatigue into something closer
>> to zoom-induced somnolence. Thanks for doing this kind of detailed
>> measurement!
>>
>> I would be very interested in a study of brain activity that varies
>> latency alone as the videoconferencing variable: say, a live video
>> feed between the participants (0 latency) vs zoom (at 500ms), or one
>> with jitter, or one at, say, 60ms vs 500ms. I tend to be much happier
>> after a day using "galene.org", which aims for minimum latency, than
>> after a day on zoom, and I still find my ability to interact quickly
>> across a dinner table hard to recover after too many hours on it. Are
>> y'all pursuing further studies?
>>
>> The link to the paper is mildly puzzling in that the token is
>> required, and I am assuming that perhaps it is generating a
>> watermarked version differently on every download?
>>
>>
>> https://watermark.silverchair.com/imag_a_00027.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAzQwggMwBgkqhkiG9w0BBwagggMhMIIDHQIBADCCAxYGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMBvAQsisJ_ABhWglzAgEQgIIC5womb1HIyE-rX0v_xte9EwVGyabIjMO6g80txKAFqHmQPVEv7FostAfIK-4yXUZnSyLivNxp6pummVVwxB_kCJEyG2DtAH2R8ywkWTTjGw22vpotfz5injZM6fMRQNyTq8dcjtMTTFpEjbsrbupEoMFo7Z0wxqV8bmbPD8xO6hu1T8I8gg5579PZNHt7-PMNgqJVlEaxPY3nMvc1XkKYdh1RIhFetQkAdhhro2eWfu_njMvzdRWVeN2ohY6OnSJSDljWiWxyUOqnKX6tps2XFtVBWUh2sE3HK-EsI-w0EmpBlAC7huyQsXkXW7tmPOwA7yiGQm4uSfcOn_EKGhvzhjHsdP8Mm1QJat6_rWSPZZGwhFzPB2Wl92DDfiSOesKKQBv_OvmGc3FUmFAhqIeAlzlyNkdBydk2hQqvS46OTGfdBEvwpIH_AZclDiLeuyJPP5v2YaoByFQ7w4uXHMyNhEo5mR2_pQ3WM7CpzknZixRvA5TQySW830iH0k00QZwt6a3nphgV6R4int5Pl-QdmCKzFoJ2EuPIBKvG9H5yBq18E6r1jyk1mdFKpo0-OEpLNIBpGm-1SomHw2qo5uCRWoAW6MO7K-sKZokirXGgJ7rIdRznq3BXvYxFKVn7tqJlIAAX6qDrC0bkefj8PEweuk2zIraj1Ri3otbdX3h0zBsKgmdY6qiOn8LtyxIy3vvXLnbiaIColztgAt1cHuI6b0w3rLg7BGSE2cetBDTyGS9OS0NKq91xqljwDAZBFkuKwtfYLzxIeeBy4KrG-PBqGtUEholGjHHyKCwxytw12qvgrTjdX7cXhYJSrs-HBJtRgiP5Yb6DJAQrlqEKeGnyTlPv2o3jNVvT0CZ9zWX8Qm0O6wiGo1PqpxCM3VLw0VXVsWcHJ39eLYN30GuHznYCaH5INdtgZoJdQbmZO3o_tF7itz1uYHItxNK_ZQ3oFKoUQd0e7sx51xaFj6VnNNo39Ms3mdyEQOEp
>>
>>
>> --
>> :( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain