Date: Tue, 14 Nov 2023 10:30:17 -0800
From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: Network Neutrality is back! Let´s make the technical aspects heard this time! <nnagain@lists.bufferbloat.net>
Subject: Re: [NNagain] FCC NOI due dec 1 on broadband speed standards

Also, don't forget to measure with working loads too.

--working-load[=up|down|bidir][,n]
    Request a concurrent working load, currently TCP stream(s); defaults to
    full duplex (or bidir) unless the up or down option is provided. The
    number of TCP streams defaults to 1 and can be changed via the n value,
    e.g. --working-load=down,4 will use four TCP streams from the server to
    the client as the working load. The IP ToS will be BE (0x0) for working
    load traffic.
--working-load-cca
    Set the congestion control algorithm to be used for TCP working loads.
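A minimal sketch of how these fit together (192.0.2.10 is a placeholder
server address, and this assumes an iperf 2 build recent enough to include
the --working-load flag above and the --bounceback flag described in the
quoted message below):

    # server side
    iperf -s

    # client side: one-second bounceback latency probes plus a four-stream
    # downstream TCP working load, with per-second interval reports
    iperf -c 192.0.2.10 --bounceback --working-load=down,4 -i 1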
Maybe keep a few Raspberry Pis in one's backpack with iperf 2? That way
you're always prepared for latency emergencies. The Rpi5 has a TCXO for its
clock.

Bob

> It's frustrating to me that even experts here don't measure latency as
> a first priority. The tooling has been available for years to do this.
> And it's only getting better and more feature rich, e.g. bounce-back.
>
> --bounceback[=n]
>     Run a TCP bounceback or rps test with an optional number of writes in
>     a burst per the value of n. The default is ten writes every period and
>     the default period is one second (Note: set the size with
>     --bounceback-request). See NOTES on clock unsynchronized detections.
> --bounceback-hold n
>     Request the server to insert a delay of n milliseconds between its
>     read and write (default is no delay).
> --bounceback-no-quickack
>     Request the server not set the TCP_QUICKACK socket option (disabling
>     TCP ACK delays) during a bounceback test (see NOTES).
> --bounceback-period[=n]
>     Request the client schedule its send(s) every n seconds (default is
>     one second; use a zero value for immediate or continuous back-to-back).
> --bounceback-request n
>     Set the bounceback request size in bytes. The default value is 100
>     bytes.
> --bounceback-reply n
>     Set the bounceback reply size in bytes. This supports asymmetric
>     message sizes between the request and the reply. The default value is
>     zero, which uses the value of --bounceback-request.
> --bounceback-txdelay n
>     Request the client to delay n seconds between the start of the working
>     load and the bounceback traffic (default is no delay).
>
> https://iperf2.sourceforge.io/iperf-manpage.html
>
> Bob
>
>> If video conferencing worked well enough, they would not have to all
>> get together in one place and would instead hold IETF meetings online
>> ...?
>>
>> Did anyone measure latency? Does anyone measure the throughput of
>> "useful" traffic, e.g. excluding video/audio data that didn't arrive
>> in time to actually be used on the screen or speaker?
>>
>> Jack Haverty
>>
>> On 11/14/23 09:25, Vint Cerf via Nnagain wrote:
>>
>>> if they had not been all together they would have been consuming
>>> tons of video capacity doing video conference calls....
>>>
>>> :-))
>>> v
>>>
>>> On Tue, Nov 14, 2023 at 10:46 AM Livingood, Jason via Nnagain wrote:
>>>
>>>> On the subject of how much bandwidth one household needs, here's a
>>>> fun stat for you.
>>>>
>>>> At the IETF’s 118th meeting [1] last week (Nov 4 – 10, 2023),
>>>> there were over 1,000 engineers in attendance. At peak there were
>>>> 870 devices connected to the WiFi network. Peak bandwidth usage:
>>>>
>>>> * Downstream peak ~750 Mbps
>>>> * Upstream ~250 Mbps
>>>>
>>>> From my pre-meeting Twitter poll
>>>> (https://twitter.com/jlivingood/status/1720060429311901873):
>>>
>>> --
>>>
>>> Please send any postal/overnight deliveries to:
>>>
>>> Vint Cerf
>>> Google, LLC
>>> 1900 Reston Metro Plaza, 16th Floor
>>> Reston, VA 20190
>>> +1 (571) 213 1346
>>>
>>> until further notice
>>
>> Links:
>> ------
>> [1] https://www.ietf.org/how/meetings/118/
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain