From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sebastian Moeller <moeller0@gmx.de>
Date: Fri, 17 Nov 2023 19:57:04 +0100
Message-Id: <50F42E9C-EF14-43DD-A42D-AEFF47E76B81@gmx.de>
In-Reply-To: <965f84c27d3d7c00ebcbaae5382234cd@rjmcmahon.com>
References: <965f84c27d3d7c00ebcbaae5382234cd@rjmcmahon.com>
To: Network Neutrality is back! Let´s make the technical aspects heard this time!
Subject: Re: [NNagain] separable processes for live in-person and live zoom-like faces
List-Id: Network Neutrality is back! Let´s make the technical aspects heard this time!
X-List-Received-Date: Fri, 17 Nov 2023 18:57:11 -0000

> On Nov 17, 2023, at 18:27, rjmcmahon via Nnagain wrote:
> 
> The human brain is way too complicated to make simplified analyses like "this is the latency required".

	[SM] On the sensory side this is not all that hard, e.g. we can (and routinely do) measure how long it takes after stimulus onset until neurons start to significantly change their firing rate, a value that is often described as "neuronal latency" or "response latency".
While single-unit electrophysiological recordings in the human brain are rare, they are not unheard of; most neuronal data, however, comes from other species. Crucially, it all depends on the definition of "latency" one uses, and I am not sure we are talking about the same latency here?

> It's a vast prediction machine and much, much more.

	[SM] Indeed ;) and all of this in a tightly interwoven network where reductionism only carries so far. Still, we have come a long way and gained educated glimpses into some of the functionality.

> 
> I found at least three ways to understand the brain;

	[SM] You are ahead of me then, I still struggle to understand the brain ;) (fine by me, there are questions big enough that one needs to expect that they will stubbornly withstand attempts at getting elegant and helpful answers/theories; for me "how does the brain work" is one of those)

> 
> 1) Read A Thousand Brains: A New Theory of Intelligence
> 2) Make friends with highly skilled psychologists; people that assist world athletes can be quite good
> 3) Have a daughter study neuroscience so she can answer my basic questions from an expert position

	[SM] All seem fine, even though 3) is a bit tricky to replicate.

Regards
	Sebastian

> Bob
>> sending again as my server acted up on this url, I think. sorry for the dup...
>> ---------- Forwarded message ---------
>> From: Sebastian Moeller
>> Date: Fri, Nov 17, 2023 at 3:45 AM
>> Subject: Re: [NNagain] separable processes for live in-person and live
>> zoom-like faces
>> To: Network Neutrality is back! Let´s make the technical aspects heard
>> this time!
>> Cc: Dave Täht
>> Hi Dave, dear list,
>> here is the link to the paper's web page:
>> h++ps://direct.mit.edu/imag/article/doi/10.1162/imag_a_00027/117875/Separable-processes-for-live-in-person-and-live
>> from which it can be downloaded.
>> This fits right into my wheelhouse# ;) However, I am concerned that the
>> pupil diameter differs so much between the tested conditions, which
>> implies significant differences in actual physical stimuli, making the
>> whole conclusion a bit shaky*)... Also, placing the true face at twice
>> the distance of the "zoom" screens, while understandable from an
>> experimentalist perspective, was a sub-optimal decision**.
>> Not a bad study (rather the opposite), but as so often it poses even more
>> detail questions than it answers. Regarding your point about latency,
>> this seems not well controlled at all, as all digital systems will
>> have some latency and they do not report anything substantial:
>> "In the Virtual Face condition, each dyad watched their partner's
>> faces projected in real time on separate 24-inch 16 × 9 computer
>> monitors placed in front of the glass"
>> I note that technically "real-time" only means that the inherent delay
>> is smaller than whatever delay the relevant control loop can
>> tolerate, so depending on the problem at hand "once-per-day" can be
>> fully real-time, while for other problems "once-per-1µsec" might be
>> too slow... But to give a lower-bound delay number: they likely used a
>> web cam (the paper, I am afraid, does not say specifically), so at best
>> running at 60Hz (or even 30Hz) rolling shutter. We thus have a) a
>> potential image distortion from the rolling shutter (probably small,
>> due to the faces being close to at rest) and a "lens to RAM" delay of
>> 1000/60 = 16.67 milliseconds. Then, even assuming we can get this pushed
>> to the screen ASAP, we will still incur at the very least 0.5 refresh
>> times on average, for a total delay of >= 25ms.
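That back-of-the-envelope delay budget can be sketched in a few lines of code. This is a minimal illustration only; the 60 Hz camera and screen rates are the assumed best-case numbers from the text above, not measured values.

```python
def frame_time_ms(rate_hz: float) -> float:
    """Duration of one frame or refresh interval, in milliseconds."""
    return 1000.0 / rate_hz

camera_hz = 60.0   # assumed webcam capture rate (best case from the text)
screen_hz = 60.0   # assumed monitor refresh rate

# "lens to RAM": one full camera frame time for the rolling-shutter readout
capture_delay = frame_time_ms(camera_hz)

# average wait until the next screen refresh: half a refresh interval
scanout_delay = 0.5 * frame_time_ms(screen_hz)

total = capture_delay + scanout_delay
print(f"capture ~{capture_delay:.2f} ms + scan-out ~{scanout_delay:.2f} ms "
      f"= lower bound {total:.2f} ms")
```

With a 30 Hz camera the capture term alone doubles to 33.33 ms, which is why the 25 ms figure is explicitly a lower bound.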
>> With modern "digital"
>> screens that might be doing any amount of fancy image processing (if only to
>> calculate "over-drive" voltages to allow for faster gray-to-gray
>> changes), the camera-to-eye delay might be considerably larger (adding
>> a few frame times). This is a field where older analog systems could
>> operate with much lower delay...
>> I would assume that compared to the neuronal latencies of actually
>> extracting information from the faces (it takes ~74-100ms to drive
>> neurons in the more anterior face patches in macaques, and human
>> brains are noticeably larger) this delay will be smallish, but it will
>> certainly be encountered only for the "live" and not for the in-person
>> faces.
>> Regards
>> Sebastian
>> P.S.: In spite of my arguments I like the study; it is much easier to
>> pose challenges to a study than to find robust and reliable solutions
>> to the same challenges ;)
>> #) Or it did, as I am not directly working on the face processing
>> system any more
>> *) Pupil diameter is controlled by multiple factors, ranging from its
>> "boring" physiologic function as the adaptive aperture the visual system
>> uses to limit the amount of light hitting the retina, to some effect
>> of cognitive processes or states of the sympathetic nervous system, see
>> e.g. h++ps://www.ncbi.nlm.nih.gov/pmc/articles/PMC6634360/ The paper,
>> IMHO, overplays the pupil diameter responses by not acknowledging
>> that these might result from something as boring as not having the true
>> faces and zoom faces sufficiently luminosity-matched.
>> **) Correcting the size of the projected image to match in degrees of
>> visual angle only gets you so far, as we do have some sense of
>> distance, so the same visual angle at 70cm corresponds to a smaller
>> head/face than the same visual angle at 140cm... this is both
>> nit-picky, but also important.
>> I also note that 70cm is at the edge of
>> a typical reach distance, while 1.4m is clearly outside it, yet we do
>> treat peri-personal space within effector reach differently from
>> space beyond that.
>>> On Nov 16, 2023, at 22:57, Dave Taht via Nnagain wrote:
>>> Dear Joy:
>>> good paper that extends the idea of zoom fatigue into something closer
>>> to zoom-induced somnolence. Thanks for doing this kind of detailed
>>> measurement!
>>> I would be very interested in a study of brain activity while varying
>>> latency alone as the variable for videoconferencing. One arm being, say, a
>>> live video feed between the participants (0 latency) vs zoom (at
>>> 500ms), or with one jittering around, or one at, say, 60ms vs 500ms. I
>>> tend to be much happier after a day using "galene.org", which tries for
>>> minimum latency, than after zoom, and still find my ability to interact
>>> quickly across a dinner table hard to get into after too many hours on
>>> it. Are y'all pursuing further studies?
>>> The link to the paper is mildly puzzling in that the token is
>>> required, and I am assuming that perhaps it is generating a
>>> watermarked version differently on every download?
>>> 
>>> [...]
>>> :( My old R&D campus is up for sale: h++ps://tinyurl.com/yurtlab
>>> Dave Täht CSO, LibreQos
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain