From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1485458323.5145.151.camel@edumazet-glaptop3.roam.corp.google.com>
From: Eric Dumazet
To: Hans-Kristian Bakke
Cc: bloat
Date: Thu, 26 Jan 2017 11:18:43 -0800
Subject: Re: [Bloat] Excessive throttling with fq
List-Id: General list for discussing Bufferbloat

Nothing jumps out at me. We use FQ on links ranging from 1 Gbit to 100 Gbit,
and we have no such issues.

On the server, you could check the various TCP statistics reported by the
ss command:

ss -temoi dst <client address>

The pacing rate is shown there. You might have some issue, but it is hard
to say from this alone.

On Thu, 2017-01-26 at 19:55 +0100, Hans-Kristian Bakke wrote:
> After some more testing I see that if I disable fq pacing, the
> performance is restored to the expected levels:
>
> # for i in eth0 eth1; do tc qdisc replace dev $i root fq nopacing; done
>
> Is this expected behaviour? There is some background traffic, but only
> in the sub-100 Mbit/s range on the switches and gateway between the
> server and the client.
>
> The chain:
> Windows 10 client -> 1000 Mbit/s -> switch -> 2 x gigabit LACP -> switch
> -> 4 x gigabit LACP -> gw (fq_codel on all NICs) -> 4 x gigabit LACP
> (the same as in) -> switch -> 2 x LACP -> server (with misbehaving fq
> pacing)
>
> On 26 January 2017 at 19:38, Hans-Kristian Bakke wrote:
> I can add that this is without BBR, just plain old kernel 4.8 cubic.
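[Editor's note: as a concrete illustration of reading the pacing rate out of
`ss -temoi` as Eric suggests, here is a minimal sketch. The destination
address and the sample output line are made up for illustration; they are
not captured from this thread.]

```shell
# Dump extended TCP info for flows to the client and pull out the fq
# pacing rate (192.0.2.10 is a placeholder address; run on the server):
#   ss -temoi dst 192.0.2.10 | grep -oE 'pacing_rate [0-9.]+[A-Za-z]*bps'

# Parsing demo against an illustrative ss output line (not real data):
sample='cubic wscale:7,7 rto:204 rtt:0.4/0.2 cwnd:10 pacing_rate 1892Mbps delivery_rate 940Mbps'
printf '%s\n' "$sample" | grep -oE 'pacing_rate [0-9.]+[A-Za-z]*bps'
```

A pacing rate far below the link rate on the bulk flow would point at the
pacing/throttling interaction discussed below.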
> On 26 January 2017 at 19:36, Hans-Kristian Bakke wrote:
> Another day, another fq issue (or user error).
>
> I am trying to do the seemingly simple task of downloading a single
> large file over the local gigabit LAN from a physical server running
> kernel 4.8 and sch_fq on Intel server NICs.
>
> For some reason it wouldn't go past around 25 MB/s. After replacing
> SSL with no SSL, replacing apache with nginx, and verifying that there
> is plenty of bandwidth available between my client and the server, I
> tried to change the qdisc from fq to pfifo_fast. Throughput instantly
> shot up to around the expected 85-90 MB/s. The same happened with
> fq_codel in place of fq.
>
> I then checked the statistics for fq, and the throttled counter is
> increasing massively every second (eth0 and eth1 are bonded with LACP
> using Linux bonding, so both are seen here):
>
> qdisc fq 8007: root refcnt 2 limit 10000p flow_limit 100p buckets 1024
>  orphan_mask 1023 quantum 3028 initial_quantum 15140 refill_delay 40.0ms
>  Sent 787131797 bytes 520082 pkt (dropped 15, overlimits 0 requeues 0)
>  backlog 98410b 65p requeues 0
>  15 flows (14 inactive, 1 throttled)
>  0 gc, 2 highprio, 259920 throttled, 15 flows_plimit
> qdisc fq 8008: root refcnt 2 limit 10000p flow_limit 100p buckets 1024
>  orphan_mask 1023 quantum 3028 initial_quantum 15140 refill_delay 40.0ms
>  Sent 2533167 bytes 6731 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  24 flows (24 inactive, 0 throttled)
>  0 gc, 2 highprio, 397 throttled
>
> Do you have any suggestions?
>
> Regards,
> Hans-Kristian
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
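[Editor's note: for anyone reproducing the workaround quoted in this thread,
here is a sketch of the per-interface pacing toggle. The interface names
come from the thread; the DRY_RUN guard is an addition so the commands can
be previewed without root.]

```shell
#!/bin/sh
# Disable fq pacing on each bonded slave, as in the thread's workaround.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# and run as root to apply them. Re-enable pacing later by replacing the
# qdisc with plain 'fq' instead of 'fq nopacing'.
DRY_RUN=${DRY_RUN:-1}
for dev in eth0 eth1; do
    cmd="tc qdisc replace dev $dev root fq nopacing"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        $cmd || exit 1
    fi
done
```

After applying, `tc -s qdisc show dev eth0` should show the throttled
counter no longer climbing.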