From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Rich Brown
Cc: brouer@redhat.com, bloat, "Ethy H. Brito"
Date: Mon, 7 Jun 2021 22:07:35 +0200
Message-ID: <20210607220735.7d4e0b4c@carbon>
References: <20210607133853.045a96d5@babalu>
Subject: Re: [Bloat] Fwd: Traffic shaping at 10~300mbps at a 10Gbps link

Hi Rich,

Quote:
> > I use one root HTB qdisc and one root (1:) HTB class.

This sounds like a classic case of lock contention on the TC root-qdisc
lock. I have a solution here[1] that uses XDP combined with TC. Google
has also hit this problem; they solved it differently, in a way specific
to their use case.

[1] https://github.com/xdp-project/xdp-cpumap-tc

On Mon, 7 Jun 2021 13:28:10 -0400 Rich Brown wrote:

> Saw this on the lartc mailing list... For my own information, does
> anyone have thoughts, esp. for this quote:
>
> "... when the speed comes to about 4.5Gbps download (upload is about
> 500mbps), chaos kicks in. CPU load goes sky high (all 24x2.4GHz
> physical cores above 90% - 48x2.4GHz if you count that virtualization
> is on)..."
>
> Thanks.
>
> Rich
>
> > Begin forwarded message:
> >
> > From: "Ethy H. Brito"
> > Subject: Traffic shaping at 10~300mbps at a 10Gbps link
> > Date: June 7, 2021 at 12:38:53 PM EDT
> > To: lartc
> >
> > Hi
> >
> > For a few days now I have been having a hard time trying to shape
> > 3000 users at ceil speeds from 10 to 300mbps on a 7/7Gbps link,
> > using HTB+SFQ+TC (filter by IP hashkey mask), tweaking HTB and SFQ
> > parameters with no luck so far.
> > Everything seems right: up to 4Gbps overall download speed with
> > shaping on, no significant packet delay, no dropped packets, and no
> > high CPU average load (not more than 20% - htop info).
> >
> > But when the speed reaches about 4.5Gbps download (upload is about
> > 500mbps), chaos kicks in. CPU load goes sky high (all 24x2.4GHz
> > physical cores above 90% - 48x2.4GHz if you count that
> > virtualization is on), and as a consequence packets are dropped (as
> > reported by "tc -s class show ..."), RTT goes above 200ms, and there
> > are a lot of angry users. This happens from about 7PM to 11PM every
> > day.
> >
> > If I turn shaping off, everything returns to normal immediately:
> > peaks of no more than 5Gbps (1-second average) are observed, with a
> > CPU load of about 5%. So I infer the uplink is not saturated.
> >
> > I use one root HTB qdisc and one root (1:) HTB class.
> >
> > Under that are about 20~30 same-level (1:xx) inner classes to (sort
> > of) separate the users by region, and under those inner classes go
> > the almost 3000 leaves (1:xxxx).
> >
> > One inner class has about 900 users, and the count decreases across
> > the other inner classes, some of them having just one user.
> >
> > Is the way I'm using HTB+SFQ+TC suitable for this job?
> >
> > Since the script that creates the shaping environment is too long,
> > I am not posting it here.
> >
> > What can I send you guys to help solve this? Fragments of code,
> > stats, some measurements? What?
> >
> > Thanks.
> >
> > Regards
> >
> > Ethy

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
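P.S. For readers following along: the single-root layout Ethy describes
is roughly the classic lartc-style HTB + SFQ + u32 hashing-filter
shaper. A minimal sketch of that shape is below - the interface name,
rates, class IDs, and the 192.0.2.0/24 subnet are illustrative
assumptions, not taken from his actual script. The key point is that
every packet leaving $DEV, for all 3000 users, must take the single
root-qdisc lock at "1:", which is exactly where the contention shows up
at high packet rates.

```shell
# Hypothetical sketch of a single-root HTB shaper (NOT Ethy's real script).
# All egress traffic on $DEV serializes on the one root qdisc lock.
DEV=eth0

tc qdisc add dev $DEV root handle 1: htb default 1
tc class add dev $DEV parent 1: classid 1:1 htb rate 7gbit ceil 7gbit

# One of the ~20-30 per-region inner classes (1:xx)
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 1gbit ceil 7gbit

# One of the ~3000 per-user leaf classes (1:xxxx), here a 10/100mbit user,
# with SFQ as the leaf qdisc
tc class add dev $DEV parent 1:10 classid 1:1001 htb rate 10mbit ceil 100mbit
tc qdisc add dev $DEV parent 1:1001 sfq perturb 10

# u32 hash table keyed on the last octet of the destination IP
# ("filter by IP hashkey mask"): one lookup instead of a linear filter scan
tc filter add dev $DEV parent 1: prio 1 protocol ip handle 2: u32 divisor 256
tc filter add dev $DEV parent 1: prio 1 protocol ip u32 \
    match ip dst 192.0.2.0/24 \
    hashkey mask 0x000000ff at 16 link 2:

# Per-user entry in its hash bucket, steering to the user's leaf class
tc filter add dev $DEV parent 1: prio 1 protocol ip u32 ht 2:1: \
    match ip dst 192.0.2.1 flowid 1:1001
```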
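P.P.S. The mitigation in [1] splits the shaper across CPUs instead of
serializing on one lock: an MQ root qdisc with an independent HTB tree
per hardware TX queue, while the XDP cpumap program pins each
customer's traffic to a fixed CPU (and thus a fixed TX queue), so each
user is always shaped by the same HTB instance. The tc side of that
idea might be sketched roughly as follows - the queue count and rates
are assumptions, and the real recommended setup (including the XDP
part) is in the xdp-cpumap-tc README:

```shell
# Rough sketch (assuming 4 TX queues): one HTB tree per hardware queue,
# so each CPU takes its own per-queue qdisc lock instead of all CPUs
# contending on a single root lock.
DEV=eth0

tc qdisc replace dev $DEV root handle 7FFF: mq

for i in 1 2 3 4; do
    tc qdisc add dev $DEV parent 7FFF:$i handle ${i}: htb default 2
    tc class add dev $DEV parent ${i}: classid ${i}:1 htb rate 7gbit ceil 7gbit
    # ... the leaf classes for the users pinned to CPU/queue $i go here ...
done
```

The XDP program is what makes this correct: without the per-user
CPU pinning, a user's packets could land on different queues and
escape their HTB rate limit.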