Date: Sat, 03 Jun 2023 12:15:31 -0700
From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: Aaron Wood
Cc: Dave Taht, Rpm, bloat
Subject: Re: [Bloat] [Rpm] receive window bug fix
Message-ID: <6968461fa2076430aa8d379709488c5a@rjmcmahon.com>

I think better tooling can help, and I'm always interested in suggestions on what to add to iperf 2 for better coverage. I've thought it good for iperf 2 to support some sort of graph that drives socket reads/writes/delays, versus a simplistic AFAP (as-fast-as-possible) pattern. It for sure stresses things differently, even in drivers.
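As a rough illustration of what a profile-driven (non-AFAP) sender might look like, here's a small sketch in Python. This is hypothetical demo code, not anything from iperf 2; the `paced_writes` helper and its schedule are invented for illustration:

```python
# Hypothetical sketch (not iperf 2 code): drive socket writes from a
# simple delay "profile" instead of writing as-fast-as-possible (AFAP),
# and time each write() syscall so write-side stalls become visible.
import socket
import time

def paced_writes(sock, payload, schedule):
    """Write `payload` once per entry in `schedule`, where each entry
    is the delay (in seconds) to wait before that write. Returns the
    wall time each send took."""
    durations = []
    for delay in schedule:
        time.sleep(delay)
        t0 = time.perf_counter()
        sock.sendall(payload)
        durations.append(time.perf_counter() - t0)
    return durations

# Usage: a UDP socket pair on loopback, three writes 10 ms apart.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.connect(rx.getsockname())
times = paced_writes(tx, b"x" * 1000, [0.01, 0.01, 0.01])
tx.close()
rx.close()
```

A real traffic profile would of course be richer (bursts, ramps, read-side delays), but even a schedule this simple exercises a driver differently than a tight AFAP loop.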
I've seen huge delays in some 10G drivers where some UDP packets seem to get stuck in queues, and where the e2e latency is driven by the socket write rates rather than the network delays. This is most obvious using burst patterns, where the last packet of one latency burst is coupled to the first packet of the subsequent burst. The coupling between the syscalls and network performance is nonobvious and sometimes hard to believe.

We've been adding more "traffic profile" knobs for socket testing and have incorporated much of the latency metrics. Most users don't use these; they seem to be hard to generalize. Cloudflare seems to have crafted specific tests after obtaining knowledge of causality.

Bob

PS. As a side note, I'm now being asked how to generate "AI loads" into switch fabrics, though there it probably won't be based on socket syscalls but maybe on io_uring - not sure.

> This is good work! I love reading their posts on scale like this.
>
> It’s wild to me that the Linux kernel has (apparently) never
> implemented shrinking the receive window, or handling the case of
> userspace starting a large transfer and then just not ever reading
> it… the latter is less surprising, I guess, because that’s an
> application bug that you probably would catch separately, and would be
> focused on fixing in the application layer…
>
> -Aaron
>
> On Sat, Jun 3, 2023 at 1:04 AM Dave Taht via Rpm wrote:
>
>> these folk do good work, and I loved the graphs
>>
>> https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/
>>
>> --
>> Podcast: https://www.linkedin.com/feed/update/urn:li:activity:7058793910227111937/
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>
> --
> - Sent from my iPhone.
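For anyone curious, the stalled-reader case Aaron mentions is easy to reproduce on loopback. A minimal sketch (hypothetical demo code, not the kernel fix discussed in the Cloudflare post): the receiving socket is accepted but never read, so a non-blocking sender can only queue as much as the two socket buffers hold before send() fails with EWOULDBLOCK.

```python
# Hypothetical sketch: reproduce the "application never reads" case
# on a loopback TCP pair. The amount the sender can queue is roughly
# bounded by the send + receive socket buffers, not by how much the
# sender wants to transmit.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

tx = socket.create_connection(listener.getsockname())
rx, _ = listener.accept()   # accepted, but we never call recv()

tx.setblocking(False)
buffered = 0
chunk = b"x" * 4096
while True:
    try:
        buffered += tx.send(chunk)  # send() returns bytes queued
    except BlockingIOError:
        break                       # buffers full: nobody is draining rx

tx.close()
rx.close()
listener.close()
```

With kernel receive-buffer autotuning the exact byte count varies, but the point stands: a reader that never drains its socket pins a bounded but nontrivial amount of kernel memory per connection, which is what made the unbounded-window behavior in the Cloudflare post so costly at scale.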