From: Benjamin Cronce
Date: Tue, 24 Jul 2018 19:11:29 -0500
To: Dave Taht
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] No backpressure "shaper"+AQM

On Tue, Jul 24, 2018 at 4:44 PM Jonathan Morton wrote:

> > On 25 Jul, 2018, at 12:39 am, Benjamin Cronce wrote:
> >
> > Just looking visually at the DSLReports graphs, I more normally see maybe a few 40ms-150ms ping spikes, while my own attempts to shape can get me several 300ms spikes. I would really need a lot more samples and actually run the numbers on them, but just casually looking at them, I get the sense that mine is worse.
>
> That could just be an artefact of your browser's scheduling latency. Try running an independent ping test alongside for verification.
>
> Currently one of my machines has Chrome exhibiting frequent and very noticeable "hitching", while Firefox on the same machine is much smoother. Similar behaviour would easily be enough to cause such data anomalies.
>
> - Jonathan Morton

Challenge accepted. 10 pings per second at my ISP's speedtest server. My wife was watching Sing for the millionth time on Netflix during these tests.
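As a sanity check on the numbers below: the min/avg/max/dev summary lines the ping tool prints can be reproduced from the raw per-ping RTT samples. A minimal sketch, assuming "dev" is the population standard deviation (the actual tool may compute it differently), with made-up sample values:

```python
import statistics

def rtt_summary(samples_ms):
    """Summarize per-ping RTT samples (in ms) as min/avg/max/dev."""
    return {
        "min": min(samples_ms),
        "avg": statistics.fmean(samples_ms),
        "max": max(samples_ms),
        "dev": statistics.pstdev(samples_ms),  # population std deviation (assumption)
    }

# Hypothetical samples; the real runs below used ~10 pings/sec for up to 30 s.
samples = [1.6, 2.1, 2.2, 1.9, 3.4, 2.0]
s = rtt_summary(samples)
print(f"RTTs in ms: min/avg/max/dev: "
      f"{s['min']:.3f} / {s['avg']:.3f} / {s['max']:.3f} / {s['dev']:.3f}")
```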
Idle
Packets: sent=300, rcvd=300, error=0, lost=0 (0.0% loss) in 29.903240 sec
RTTs in ms: min/avg/max/dev: 1.554 / 2.160 / 3.368 / 0.179
Bandwidth in kbytes/sec: sent=0.601, rcvd=0.601

Shaping
------------------------
During download
Packets: sent=123, rcvd=122, error=0, lost=1 (0.8% loss) in 12.203803 sec
RTTs in ms: min/avg/max/dev: 1.459 / 2.831 / 8.281 / 0.955
Bandwidth in kbytes/sec: sent=0.604, rcvd=0.599

During upload
Packets: sent=196, rcvd=195, error=0, lost=1 (0.5% loss) in 19.503948 sec
RTTs in ms: min/avg/max/dev: 1.608 / 3.247 / 5.471 / 0.853
Bandwidth in kbytes/sec: sent=0.602, rcvd=0.599

No shaping
-----------------------------
During download
Packets: sent=147, rcvd=147, error=0, lost=0 (0.0% loss) in 14.604027 sec
RTTs in ms: min/avg/max/dev: 1.161 / 2.110 / 13.525 / 1.069
Bandwidth in kbytes/sec: sent=0.603, rcvd=0.603

During upload
Packets: sent=199, rcvd=199, error=0, lost=0 (0.0% loss) in 19.802377 sec
RTTs in ms: min/avg/max/dev: 1.238 / 2.071 / 4.715 / 0.373
Bandwidth in kbytes/sec: sent=0.602, rcvd=0.602

Now I really feel like disabling shaping on my end. The TCP streams have increased loss without shaping, but my ICMP looks better. Better flow isolation? I need some fq_codel or Cake. I'm going to set fq_codel to something like a 3ms target and a 45ms RTT interval. Due to CDNs and regional gaming servers, something like 95% of everything is less than 30ms away, and something like 80% is less than 15ms away.

Akamai 1-2ms
Netflix 2-3ms
Hulu 2-3ms
Cloudflare 9ms
Discord 9ms
World of Warcraft/Battle.Net 9ms
YouTube 12ms

These tests are too short, but interesting.

On Tue, Jul 24, 2018 at 4:58 PM Dave Taht wrote:

> On Tue, Jul 24, 2018 at 2:39 PM Benjamin Cronce wrote:
> > Just looking visually at the DSLReports graphs, I more normally see maybe a few 40ms-150ms ping spikes, while my own attempts to shape can get me several 300ms spikes.
> > I would really need a lot more samples and actually run the numbers on them, but just casually looking at them, I get the sense that mine is worse.
>
> too gentle we are perhaps. out of cpu you may be.

It's possible FairQ uses more CPU than expected, but I have load-tested my firewall using HFSC with ~8 queues shaped to 1Gb/s and Codel on the queues. Using iperf, I was able to send ~1.4 million pps, about 1Gb/s of 64-byte UDP packets. pfSense was claiming about 1.4Mpps ingress on the LAN and 1.4Mpps egress on the WAN. CPU was hovering around 17% on my quad core, with the load roughly equal across all 4 cores. A Core i5 with low-latency DDR3 and an Intel i350 NIC is nice.

With MTU-sized packets, iperf using bidirectional TCP results in about 1.85Gb/s, which is in line with the ~940Mb/s per direction on Ethernet, and something ridiculous like 4% CPU and 150 interrupts per second. This NIC is magical. I'm assuming soft interrupts.