* [NNagain] separable processes for live in-person and live zoom-like faces
@ 2023-11-16 21:57 Dave Taht
  2023-11-17 14:16 ` Hirsch, Joy
       [not found] ` <AD77204F-4839-4292-976D-E7BE11A12C9B@gmx.de>
  0 siblings, 2 replies; 6+ messages in thread

From: Dave Taht @ 2023-11-16 21:57 UTC (permalink / raw)
  To: joy.hirsch, bloat, Network Neutrality is back! Let´s make the technical aspects heard this time!

Dear Joy:

good paper that extends the idea of zoom fatigue into something closer
to zoom-induced somnolence. Thanks for doing this kind of detailed
measurement!

I would be very interested in a study of brain activity while varying
latency alone as the variable for videoconferencing: say, a live video
feed between the participants (~0 latency) vs zoom (at ~500ms), or one
with jitter, or one at 60ms vs 500ms. I tend to be much happier after a
day using "galene.org", which aims for minimum latency, than after a day
on zoom, and I still find my ability to interact quickly across a dinner
table hard to get back into after too many hours on it. Are y'all
pursuing further studies?

The link to the paper is mildly puzzling in that the token is required;
I am assuming that perhaps it is generating a differently watermarked
version on every download?

https://watermark.silverchair.com/imag_a_00027.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAzQwggMwBgkqhkiG9w0BBwagggMhMIIDHQIBADCCAxYGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMBvAQsisJ_ABhWglzAgEQgIIC5womb1HIyE-rX0v_xte9EwVGyabIjMO6g80txKAFqHmQPVEv7FostAfIK-4yXUZnSyLivNxp6pummVVwxB_kCJEyG2DtAH2R8ywkWTTjGw22vpotfz5injZM6fMRQNyTq8dcjtMTTFpEjbsrbupEoMFo7Z0wxqV8bmbPD8xO6hu1T8I8gg5579PZNHt7-PMNgqJVlEaxPY3nMvc1XkKYdh1RIhFetQkAdhhro2eWfu_njMvzdRWVeN2ohY6OnSJSDljWiWxyUOqnKX6tps2XFtVBWUh2sE3HK-EsI-w0EmpBlAC7huyQsXkXW7tmPOwA7yiGQm4uSfcOn_EKGhvzhjHsdP8Mm1QJat6_rWSPZZGwhFzPB2Wl92DDfiSOesKKQBv_OvmGc3FUmFAhqIeAlzlyNkdBydk2hQqvS46OTGfdBEvwpIH_AZclDiLeuyJPP5v2YaoByFQ7w4uXHMyNhEo5mR2_pQ3WM7CpzknZixRvA5TQySW830iH0k00QZwt6a3nphgV6R4int5Pl-QdmCKzFoJ2EuPIBKvG9H5yBq18E6r1jyk1mdFKpo0-OEpLNIBpGm-1SomHw2qo5uCRWoAW6MO7K-sKZokirXGgJ7rIdRznq3BXvYxFKVn7tqJlIAAX6qDrC0bkefj8PEweuk2zIraj1Ri3otbdX3h0zBsKgmdY6qiOn8LtyxIy3vvXLnbiaIColztgAt1cHuI6b0w3rLg7BGSE2cetBDTyGS9OS0NKq91xqljwDAZBFkuKwtfYLzxIeeBy4KrG-PBqGtUEholGjHHyKCwxytw12qvgrTjdX7cXhYJSrs-HBJtRgiP5Yb6DJAQrlqEKeGnyTlPv2o3jNVvT0CZ9zWX8Qm0O6wiGo1PqpxCM3VLw0VXVsWcHJ39eLYN30GuHznYCaH5INdtgZoJdQbmZO3o_tF7itz1uYHItxNK_ZQ3oFKoUQd0e7sx51xaFj6VnNNo39Ms3mdyEQOEp

--
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos

^ permalink raw reply [flat|nested] 6+ messages in thread
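An illustrative aside on the proposed manipulation: the delayed-feed condition
Dave describes can be mocked up on a single machine by buffering webcam frames
and replaying them a fixed interval late. The sketch below is only that, a
sketch: it assumes Python with the opencv-python package and a local camera at
index 0, ignores audio entirely, and is not how Zoom or Galene actually move
video.

# Hypothetical single-machine mock-up of a delayed "live" video feed.
# Assumes: Python 3, opencv-python installed, a webcam at index 0.
# Shows the local camera image roughly DELAY_MS late, approximating the
# "zoom at 500 ms" condition versus the ~0 ms in-person condition.
import collections
import cv2

DELAY_MS = 500          # added one-way video delay to emulate
FPS_GUESS = 30          # assumed camera frame rate if the driver reports none

def main():
    cap = cv2.VideoCapture(0)
    fps = cap.get(cv2.CAP_PROP_FPS) or FPS_GUESS
    delay_frames = max(1, int(round(DELAY_MS / 1000.0 * fps)))
    buf = collections.deque(maxlen=delay_frames)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buf.append(frame)
        if len(buf) == delay_frames:
            # buf[0] is the oldest buffered frame, i.e. ~DELAY_MS in the past
            cv2.imshow("delayed feed (~%d ms)" % DELAY_MS, buf[0])
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()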
* Re: [NNagain] separable processes for live in-person and live zoom-like faces
  2023-11-16 21:57 [NNagain] separable processes for live in-person and live zoom-like faces Dave Taht
@ 2023-11-17 14:16 ` Hirsch, Joy
       [not found] ` <AD77204F-4839-4292-976D-E7BE11A12C9B@gmx.de>
  1 sibling, 0 replies; 6+ messages in thread

From: Hirsch, Joy @ 2023-11-17 14:16 UTC (permalink / raw)
  To: Dave Taht, bloat, Network Neutrality is back! Let´s make the technical aspects heard this time!
  Cc: Hirsch, Joy

[-- Attachment #1.1: Type: text/plain, Size: 5434 bytes --]

Dave,

Thanks for your comments. I agree that there is much ahead to study about
the effects of on-line communication and to improve them. I have attached
a copy of the paper, as I think you indicated it was difficult to find.

All the best,
JH

Joy Hirsch, Ph.D.
Elizabeth Mears and House Jameson Professor of Psychiatry, Comparative
Medicine, and Neuroscience
Director, Brain Function Laboratory
Interdepartmental Neuroscience Program
Yale School of Medicine
Wu Tsai Institute, Yale University
300 George St, Suite 902
New Haven, CT 06511

Professor of Neuroscience (ex officio)
Department of Medical Physics and Biomedical Engineering
Faculty of Engineering Sciences
University College London, London WC1E 6BT, UK

Phone mobile: 917 494-7768
e-mail: joy.hirsch@yale.edu
e-mail: joyhirsch@yahoo.com
website: www.fmri.org
www.medicine.yale.edu/lab/hirsch

________________________________
From: Dave Taht <dave.taht@gmail.com>
Sent: Thursday, November 16, 2023 4:57 PM
To: Hirsch, Joy <joy.hirsch@yale.edu>; bloat <bloat@lists.bufferbloat.net>; Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>
Subject: separable processes for live in-person and live zoom-like faces

Dear Joy:

good paper that extends the idea of zoom fatigue into something closer
to zoom-induced somnolence. Thanks for doing this kind of detailed
measurements!

I would be very interested in a study of brain activity while varying
latency alone as the variable for videoconferencing. One being say, a
live video feed between the participants (0 latency) vs zoom (at
500ms), or with one jittering around, or one at, say 60ms vs 500ms. I
tend to be much happier after a day using "galene.org" which tries for
minimum latency than zoom, and still find my ability to interact
quickly across a dinner table hard to get into after too many hours on
it. Are y'all pursuing further studies?

The link to the paper is mildly puzzling in that the token is
required, and I am assuming that perhaps it is generating a
watermarked version differently on every download?

[...]

--
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos

[-- Attachment #1.2: Type: text/html, Size: 10674 bytes --]

[-- Attachment #2: Zoom.pdf --]
[-- Type: application/pdf, Size: 3867791 bytes --]

^ permalink raw reply [flat|nested] 6+ messages in thread
[parent not found: <AD77204F-4839-4292-976D-E7BE11A12C9B@gmx.de>]
* [NNagain] Fwd: separable processes for live in-person and live zoom-like faces
       [not found] ` <AD77204F-4839-4292-976D-E7BE11A12C9B@gmx.de>
@ 2023-11-17 14:18 ` Dave Taht
  2023-11-17 17:27   ` rjmcmahon
  0 siblings, 1 reply; 6+ messages in thread

From: Dave Taht @ 2023-11-17 14:18 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical aspects heard this time!

sending again as my server acted up on this url, I think. sorry for the dup...

---------- Forwarded message ---------
From: Sebastian Moeller <moeller0@gmx.de>
Date: Fri, Nov 17, 2023 at 3:45 AM
Subject: Re: [NNagain] separable processes for live in-person and live zoom-like faces
To: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>
Cc: <joy.hirsch@yale.edu>, Dave Täht <dave.taht@gmail.com>


Hi Dave, dear list,

here is the link to the paper's web page, from which it can be downloaded:
https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00027/117875/Separable-processes-for-live-in-person-and-live

This fits right in my wheel house# ;) However, I am concerned that the
pupil diameter differs so much between the tested conditions, which
implies significant differences in the actual physical stimuli, making
the whole conclusion a bit shaky*)... Also, placing the true face at
twice the distance of the "zoom" screens, while understandable from an
experimentalist perspective, was a sub-optimal decision**.

Not a bad study (rather the opposite), but as so often it poses even
more detailed questions than it answers. Regarding your point about
latency, this seems not well controlled at all, as all digital systems
will have some latency, and they do not report anything substantial:
"In the Virtual Face condition, each dyad watched their partner’s
faces projected in real time on separate 24-inch 16 × 9 computer
monitors placed in front of the glass"

I note that technically "real-time" only means that the inherent delay
is smaller than whatever delay the relevant control loop can tolerate,
so depending on the problem at hand "once-per-day" can be fully
real-time, while for other problems "once-per-1µsec" might be too
slow... But to give a lower-bound delay number: they likely used a web
cam (the paper I am afraid does not say specifically), so at best
running at 60Hz (or even 30Hz) rolling shutter. So we have a) a
potential image distortion from the rolling shutter (probably small due
to the faces being close to at rest) and b) a "lens to RAM" delay of
1000/60 = 16.67 milliseconds. Then let's assume we can get this pushed
to the screen ASAP; we will likely incur at the very least 0.5 refresh
times on average, for a total delay of >= 25ms. With modern "digital"
screens that might be doing any fancy image processing (if only to
calculate "over-drive" voltages to allow for faster gray-to-gray
changes), the camera-to-eye delay might be considerably larger (adding
a few frame times). This is a field where older analog systems could
operate with much lower delay...

I would assume that compared to the neuronal latencies of actually
extracting information from the faces (it takes ~74-100ms to drive
neurons in the more anterior face patches in macaques, and human brains
are noticeably larger) this delay will be smallish, but it will
certainly be encountered only for the "live" and not for the in-person
faces.


Regards
        Sebastian

P.S.: In spite of my arguments I like the study; it is much easier to
pose challenges to a study than to find robust and reliable solutions
to the same challenges ;)


#) Or it did, as I am not directly working on the face processing
system any more.
*) Pupil diameter is controlled by multiple factors, ranging from its
"boring" physiologic function as the adaptive aperture the visual
system uses to limit the amount of light hitting the retina, to some
effect of cognitive processes or states of the sympathetic nervous
system; see e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6634360/.
The paper, IMHO, does overplay the pupil diameter responses by not
acknowledging that these might result from things as boring as not
having the true faces and zoom faces sufficiently luminance matched.
**) Correcting the size of the projected image to match in degrees of
visual angle only gets you so far, as we do have some sense of
distance, so the same visual angle at 70cm corresponds to a smaller
head/face than the same visual angle at 140cm... this is nit-picky,
but also important. I also note that 70cm is at the edge of a typical
reach distance, while 1.4 m is clearly outside, yet we do treat
peri-personal space within effector reach differently from space
beyond that.


> On Nov 16, 2023, at 22:57, Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> Dear Joy:
>
> good paper that extends the idea of zoom fatigue into something closer
> to zoom-induced somnolence. Thanks for doing this kind of detailed
> measurements!
>
> I would be very interested in a study of brain activity while varying
> latency alone as the variable for videoconferencing. One being say, a
> live video feed between the participants (0 latency) vs zoom (at
> 500ms), or with one jittering around, or one at, say 60ms vs 500ms. I
> tend to be much happier after a day using "galene.org" which tries for
> minimum latency than zoom, and still find my ability to interact
> quickly across a dinner table hard to get into after too many hours on
> it. Are y'all pursuing further studies?
>
> The link to the paper is mildly puzzling in that the token is
> required, and I am assuming that perhaps it is generating a
> watermarked version differently on every download?
>
> [...]
>
> --
> :( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain


--
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos

^ permalink raw reply [flat|nested] 6+ messages in thread
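To make Sebastian's lower-bound arithmetic above explicit, here is a small
sketch that restates it. The frame rates and extra processing frames are
assumptions to play with, not measured values from the study's capture chain
(which the paper does not report).

# Back-of-the-envelope camera-to-eye delay, restating the estimate above.
# All inputs are assumptions; the paper does not describe its capture chain.

def camera_to_eye_delay_ms(camera_hz=60.0, display_hz=60.0,
                           processing_frames=0.0):
    frame_capture = 1000.0 / camera_hz          # "lens to RAM": one frame time
    scanout_wait = 0.5 * (1000.0 / display_hz)  # average wait for next refresh
    processing = processing_frames * (1000.0 / display_hz)  # scaler/overdrive etc.
    return frame_capture + scanout_wait + processing

# ~25 ms lower bound with a 60 Hz camera and a "dumb" 60 Hz display:
print(camera_to_eye_delay_ms())                                        # 25.0
# A 30 Hz webcam plus two frames of display processing pushes this to ~75 ms:
print(camera_to_eye_delay_ms(camera_hz=30.0, processing_frames=2.0))   # ~75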
* Re: [NNagain] Fwd: separable processes for live in-person and live zoom-like faces
  2023-11-17 14:18 ` [NNagain] Fwd: " Dave Taht
@ 2023-11-17 17:27   ` rjmcmahon
  2023-11-17 18:57     ` [NNagain] " Sebastian Moeller
  2023-11-17 19:14     ` [NNagain] Fwd: " Hal Murray
  0 siblings, 2 replies; 6+ messages in thread

From: rjmcmahon @ 2023-11-17 17:27 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical aspects heard this time!

The human brain is way too complicated for a simplified analysis like
"this is the latency required." It's a vast prediction machine and
much, much more.

I found at least three ways to understand the brain:

1) Read A Thousand Brains: A New Theory of Intelligence
2) Make friends with highly skilled psychologists; people that assist
   world athletes can be quite good
3) Have a daughter study neuroscience so she can answer my basic
   question from an expert position

Bob

> sending again as my server acted up on this url, I think. sorry for the dup...
>
> [...]

^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [NNagain] separable processes for live in-person and live zoom-like faces
  2023-11-17 17:27 ` rjmcmahon
@ 2023-11-17 18:57   ` Sebastian Moeller
  2023-11-17 19:14   ` [NNagain] Fwd: " Hal Murray
  1 sibling, 0 replies; 6+ messages in thread

From: Sebastian Moeller @ 2023-11-17 18:57 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical aspects heard this time!

> On Nov 17, 2023, at 18:27, rjmcmahon via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>
> The human brain is way too complicated to make simplified analysis like this is the latency required.

[SM] On the sensory side this is not all that hard, e.g. we can (and
routinely do) measure how long it takes after stimulus onset until
neurons start to significantly change their firing rate, a value that
is often described as "neuronal latency" or "response latency". While
single-unit electrophysiologic recordings in the human brain are rare,
they are not unheard of; most neuronal data, however, comes from
different species. However, it crucially depends on the definition of
"latency" one uses, and I am not sure we are talking about the same
latency here?

> It's a vast prediction machine and much, much more.

[SM] Indeed ;) and all of this in a tightly interwoven network where
reductionism only carries so far. Still, we have come a long way and
gained educated glimpses into some of the functionality.

> I found at least three ways to understand the brain;

[SM] You are ahead of me then, I still struggle to understand the
brain ;) (fine by me, there are questions big enough that one needs to
expect that they will stubbornly withstand attempts at getting elegant
and helpful answers/theories; for me "how does the brain work" is one
of those)

> 1) Read A Thousand Brains: A New Theory of Intelligence
> 2) Make friends with high skilled psychologists, people that assist world athletes can be quite good
> 3) Have a daughter study neuroscience so she can answer my basic question from an expert position

[SM] All seem fine, even though 3) is a bit tricky to replicate.

Regards
        Sebastian

> Bob
>> sending again as my server acted up on this url, I think. sorry for the dup...
>>
>> [...]
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

^ permalink raw reply [flat|nested] 6+ messages in thread
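As a toy illustration of the "response latency" measure Sebastian describes
(the time from stimulus onset until the firing rate rises clearly above
baseline), the sketch below bins a fabricated spike train and reports the
first significant post-stimulus bin. The bin size, threshold, and spike times
are made up for illustration; real spike-train analyses are considerably more
careful.

# Toy version of the "response latency" measure described above: the first
# post-stimulus time bin whose firing rate clearly exceeds the pre-stimulus
# baseline. Bin size, threshold, and the fabricated spike train are
# illustrative only.
import numpy as np

def response_latency_ms(spike_times_ms, bin_ms=10.0, pre_ms=200.0,
                        post_ms=300.0, n_sd=3.0):
    edges = np.arange(-pre_ms, post_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    starts = edges[:-1]
    baseline = counts[starts < 0]                 # bins before stimulus onset
    thresh = baseline.mean() + n_sd * baseline.std()
    sig = np.where((starts >= 0) & (counts > thresh))[0]
    return None if sig.size == 0 else float(starts[sig[0]])

# Fabricated spike train: sparse background spiking every 40 ms, plus a dense
# burst starting 80 ms after stimulus onset (roughly the anterior face-patch
# latency range quoted above).
background = np.arange(-200.0, 300.0, 40.0)
burst = np.arange(80.0, 130.0, 2.0)
print(response_latency_ms(np.concatenate([background, burst])))  # 80.0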
* Re: [NNagain] Fwd: separable processes for live in-person and live zoom-like faces
  2023-11-17 17:27 ` rjmcmahon
  2023-11-17 18:57   ` [NNagain] " Sebastian Moeller
@ 2023-11-17 19:14   ` Hal Murray
  1 sibling, 0 replies; 6+ messages in thread

From: Hal Murray @ 2023-11-17 19:14 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical aspects heard this time!

rjmcmahon said:
> The human brain is way too complicated to make simplified analysis like this
> is the latency required. It's a vast prediction machine and much, much more.

I agree that the brain is very complex, but it isn't a total mystery. We
can measure some things and work out some timing requirements.

Examples:

Movies/TV have a minimum frame rate to avoid flicker.

Phone systems have a maximum round-trip latency. (I think back in the
days of satellites, they decided that one sat link was OK but two was
too long.)

You can measure the time to push a button after a light goes on. That's
tangled up with hand/eye coordination for catching a ball or using a
mouse.

I get (slightly) annoyed by the delay when news shows switch to a (very)
remote reporter.

I see no reason why a latency requirement couldn't be worked out for
something like a Zoom meeting.

--
These are my opinions.  I hate spam.

^ permalink raw reply [flat|nested] 6+ messages in thread
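Hal's button-press example is easy to try for oneself. The rough sketch below
times how long it takes to hit Enter after a prompt appears at an
unpredictable moment; keyboard and terminal add tens of milliseconds of their
own, so it is an illustration rather than a calibrated psychophysics tool.

# Rough terminal reaction-time test, in the spirit of "push a button after a
# light goes on". Keyboard/terminal latency adds tens of ms of its own, so
# treat the numbers as illustrative rather than calibrated.
import random
import time

def one_trial():
    time.sleep(random.uniform(1.0, 3.0))   # unpredictable wait, like the light
    t0 = time.monotonic()
    input("GO! press Enter: ")
    return (time.monotonic() - t0) * 1000.0

if __name__ == "__main__":
    trials = [one_trial() for _ in range(5)]
    print("reaction times (ms):", [round(t) for t in trials])
    print("median (ms):", round(sorted(trials)[len(trials) // 2]))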
end of thread, other threads:[~2023-11-17 19:14 UTC | newest]

Thread overview: 6+ messages
2023-11-16 21:57 [NNagain] separable processes for live in-person and live zoom-like faces Dave Taht
2023-11-17 14:16 ` Hirsch, Joy
     [not found] ` <AD77204F-4839-4292-976D-E7BE11A12C9B@gmx.de>
2023-11-17 14:18   ` [NNagain] Fwd: " Dave Taht
2023-11-17 17:27     ` rjmcmahon
2023-11-17 18:57       ` [NNagain] " Sebastian Moeller
2023-11-17 19:14       ` [NNagain] Fwd: " Hal Murray