From: Hans-Kristian Bakke
Date: Thu, 26 Jan 2017 21:46:01 +0100
To: Eric Dumazet
Cc: David Lang, bloat
Subject: Re: [Bloat] Excessive throttling with fq
List-Id: General list for discussing Bufferbloat

# ethtool -i eth0
driver: e1000e
version: 3.2.6-k
firmware-version: 1.9-0
expansion-rom-version:
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

# ethtool -k eth0
Features for eth0:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: on
        tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-sctp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
hw-tc-offload: off [fixed]

# grep HZ /boot/config-4.8.0-2-amd64
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
# CONFIG_NO_HZ is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_MACHZ_WDT=m

On 26 January 2017 at 21:41, Eric Dumazet wrote:
>
> Can you post:
>
> ethtool -i eth0
> ethtool -k eth0
>
> grep HZ /boot/config.... (what is the HZ value of your kernel)
>
> I suspect a possible problem with TSO autodefer when/if HZ < 1000
>
> Thanks.
>
> On Thu, 2017-01-26 at 21:19 +0100, Hans-Kristian Bakke wrote:
> > There are two packet captures from fq with and without pacing here:
> >
> > https://owncloud.proikt.com/index.php/s/KuXIl8h8bSFH1fM
> >
> > The server (with fq pacing/nopacing) is 10.0.5.10 and is running an
> > Apache2 webserver on tcp port 443. The tcp client is an nginx
> > reverse proxy at 10.0.5.13 on the same subnet, which in turn proxies
> > the connection from the Windows 10 client.
> > - I did try to connect directly to the server with the client (via a
> > linux gateway router), avoiding the nginx proxy and just using plain
> > no-ssl http. That did not change anything.
> > - I also tried stopping the eth0 interface to force the traffic to the
> > eth1 interface in the LACP, which changed nothing.
> > - I also pulled each of the cables on the switch to force the traffic
> > to switch between interfaces in the LACP link between the client
> > switch and the server switch.
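[Editorial aside: Eric's suspicion about TSO autodefer when HZ < 1000 can be sized up with quick shell arithmetic. This is only a back-of-envelope sketch: CONFIG_HZ=250 comes from the config dump above, the 1 Gbit/s figure is the link speed discussed in the thread, and the assumption that deferral decisions get quantized to jiffies is exactly the hypothesis being tested, not an established fact.]

```shell
# At HZ=250 a jiffy is 4 ms. If TSO autodefer or pacing decisions were
# quantized to jiffies, one tick is a long time at gigabit speed: compute
# the tick period and how many bytes a 1 Gbit/s link can move per tick.
HZ=250                 # CONFIG_HZ=250 from /boot/config-4.8.0-2-amd64
RATE_BPS=125000000     # 1 Gbit/s expressed in bytes per second
TICK_MS=$(awk -v hz="$HZ" 'BEGIN { printf "%g", 1000 / hz }')
BYTES_PER_TICK=$(awk -v r="$RATE_BPS" -v hz="$HZ" 'BEGIN { printf "%d", r / hz }')
echo "tick=${TICK_MS}ms, bytes/tick=${BYTES_PER_TICK}"
```

This prints tick=4ms, bytes/tick=500000: half a megabyte of wire time per timer tick, so any transmit logic that waits for the next tick could easily dent throughput at these rates.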
> >
> > The CPU is a 5-6 year old Intel Xeon X3430 CPU @ 4x2.40GHz on a
> > SuperMicro platform. It is not very loaded, and the results are always
> > in the same ballpark with fq pacing on.
> >
> > top - 21:12:38 up 12 days, 11:08,  4 users,  load average: 0.56, 0.68, 0.77
> > Tasks: 1344 total,   1 running, 1343 sleeping,   0 stopped,   0 zombie
> > %Cpu0  :  0.0 us,  1.0 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> > %Cpu1  :  0.0 us,  0.3 sy,  0.0 ni, 97.4 id,  2.0 wa,  0.0 hi,  0.3 si,  0.0 st
> > %Cpu2  :  0.0 us,  2.0 sy,  0.0 ni, 96.4 id,  1.3 wa,  0.0 hi,  0.3 si,  0.0 st
> > %Cpu3  :  0.7 us,  2.3 sy,  0.0 ni, 94.1 id,  3.0 wa,  0.0 hi,  0.0 si,  0.0 st
> > KiB Mem : 16427572 total,   173712 free,  9739976 used,  6513884 buff/cache
> > KiB Swap:  6369276 total,  6126736 free,   242540 used.  6224836 avail Mem
> >
> > This seems OK to me. It does have 24 drives in 3 ZFS pools at 144TB
> > raw storage in total, with several SAS HBAs that are pretty much always
> > poking the system in some way or another.
> >
> > There are around 32K interrupts when running @23 MB/s (as seen in
> > chrome downloads) with pacing on, and about 25K interrupts when running
> > @105 MB/s with fq nopacing. Is that normal?
> >
> > Hans-Kristian
> >
> > On 26 January 2017 at 20:58, David Lang wrote:
> >     Is there any CPU bottleneck?
> >
> >     pacing causing this sort of problem makes me think that the
> >     CPU either can't keep up or that something (Hz-setting type of
> >     thing) is delaying when the CPU can get used.
> >
> >     It's not clear from the posts if the problem is with sending
> >     data or receiving data.
> >
> >     David Lang
> >
> >     On Thu, 26 Jan 2017, Eric Dumazet wrote:
> >
> >         Nothing jumps out at me.
> >
> >         We use FQ on links varying from 1Gbit to 100Gbit, and
> >         we have no such issues.
> >
> >         You could probably check on the server the various TCP
> >         infos given by the ss command:
> >
> >         ss -temoi dst <remoteip>
> >
> >         pacing rate is shown.
> >         You might have some issues, but it is hard to say.
> >
> >         On Thu, 2017-01-26 at 19:55 +0100, Hans-Kristian Bakke wrote:
> >             After some more testing I see that if I disable fq pacing,
> >             performance is restored to the expected levels:
> >             # for i in eth0 eth1; do tc qdisc replace dev $i root fq nopacing; done
> >
> >             Is this expected behaviour? There is some background traffic, but only
> >             in the sub-100 mbit/s range on the switches and gateway between the
> >             server and client.
> >
> >             The chain:
> >             Windows 10 client -> 1000 mbit/s -> switch -> 2 x gigabit LACP -> switch
> >             -> 4 x gigabit LACP -> gw (fq_codel on all nics) -> 4 x gigabit LACP
> >             (the same as in) -> switch -> 2 x lacp -> server (with misbehaving fq pacing)
> >
> >             On 26 January 2017 at 19:38, Hans-Kristian Bakke wrote:
> >                 I can add that this is without BBR, just plain old kernel 4.8 cubic.
> >
> >                 On 26 January 2017 at 19:36, Hans-Kristian Bakke wrote:
> >                     Another day, another fq issue (or user error).
> >
> >                     I try to do the seemingly simple task of downloading a
> >                     single large file over the local gigabit LAN from a
> >                     physical server running kernel 4.8 and sch_fq on intel
> >                     server NICs.
> >
> >                     For some reason it wouldn't go past around 25 MB/s.
> >                     After having replaced SSL with no SSL, replaced apache
> >                     with nginx, and verified that there is plenty of
> >                     bandwidth available between my client and the server, I
> >                     tried to change the qdisc from fq to pfifo_fast. It
> >                     instantly shot up to around the expected 85-90 MB/s.
> >                     The same happened with fq_codel in place of fq.
> >
> >                     I then checked the statistics for fq, and the throttled
> >                     counter is increasing massively every second (eth0 and
> >                     eth1 are LACPed using Linux bonding, so both are seen
> >                     here):
> >
> >                     qdisc fq 8007: root refcnt 2 limit 10000p flow_limit 100p
> >                       buckets 1024 orphan_mask 1023 quantum 3028
> >                       initial_quantum 15140 refill_delay 40.0ms
> >                      Sent 787131797 bytes 520082 pkt (dropped 15, overlimits 0 requeues 0)
> >                      backlog 98410b 65p requeues 0
> >                       15 flows (14 inactive, 1 throttled)
> >                       0 gc, 2 highprio, 259920 throttled, 15 flows_plimit
> >                     qdisc fq 8008: root refcnt 2 limit 10000p flow_limit 100p
> >                       buckets 1024 orphan_mask 1023 quantum 3028
> >                       initial_quantum 15140 refill_delay 40.0ms
> >                      Sent 2533167 bytes 6731 pkt (dropped 0, overlimits 0 requeues 0)
> >                      backlog 0b 0p requeues 0
> >                       24 flows (24 inactive, 0 throttled)
> >                       0 gc, 2 highprio, 397 throttled
> >
> >                     Do you have any suggestions?
> >
> >                     Regards,
> >                     Hans-Kristian
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
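[Editorial aside: the scale of the throttling in the first qdisc dump can be checked with a quick ratio of the counters quoted above. This is a rough sanity check, not an exact per-packet accounting, since the throttled and pkt counters are maintained by different code paths.]

```shell
# eth0 fq counters from the dump above: 259920 throttle events against
# 520082 packets sent. The ratio comes out near one throttle event for
# every two packets, i.e. throttling dominates this qdisc's transmit path.
THROTTLED=259920
PKT_SENT=520082
RATIO=$(awk -v t="$THROTTLED" -v p="$PKT_SENT" 'BEGIN { printf "%.2f", t / p }')
echo "throttle events per packet: ${RATIO}"
```

For comparison, the second qdisc (8008) shows only 397 throttle events for 6731 packets, which is why the problem tracks whichever bond slave carries the bulk flow.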