From: Benjamin Cronce <bcronce@gmail.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] No backpressure "shaper"+AQM
Date: Tue, 24 Jul 2018 19:11:29 -0500
Message-ID: <CAJ_ENFE-AeyrQqPe=-Lq3oLhys6QN9GQTjztCLpUWi-A=k8KxA@mail.gmail.com>
In-Reply-To: <CAA93jw6Vrjw+3=t8RkBwFNmGyJZqKurNL8PZUcdgSA+8-teZEg@mail.gmail.com>
On Tue, Jul 24, 2018 at 4:44 PM Jonathan Morton <chromatix99@gmail.com>
wrote:
> > On 25 Jul, 2018, at 12:39 am, Benjamin Cronce <bcronce@gmail.com> wrote:
> >
> > Just looking visually at the DSLReports graphs, I more normally see
> > maybe a few 40ms-150ms ping spikes, while my own attempts to shape can
> > give me several 300ms spikes. I would really need a lot more samples
> > and to actually run the numbers on them, but just casually looking, I
> > get the sense that mine is worse.
>
> That could just be an artefact of your browser's scheduling latency. Try
> running an independent ping test alongside for verification.
>
> Currently one of my machines has Chrome exhibiting frequent and very
> noticeable "hitching", while Firefox on the same machine is much smoother.
> Similar behaviour would easily be enough to cause such data anomalies.
>
> - Jonathan Morton
Challenge accepted: 10 pings per second against my ISP's speedtest server. My
wife was watching Sing for the millionth time on Netflix during these tests.
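(For anyone reproducing a 10 pps, ~30 s run like the ones below, plain Linux
ping can generate the same load; the target host here is a placeholder, and
sub-second intervals may require root on some systems:)

    # 10 pings/sec, 300 packets (~30 s); substitute your ISP's speedtest host
    ping -i 0.1 -c 300 speedtest.example.net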
Idle
------------------------
Packets: sent=300, rcvd=300, error=0, lost=0 (0.0% loss) in 29.903240 sec
RTTs in ms: min/avg/max/dev: 1.554 / 2.160 / 3.368 / 0.179
Bandwidth in kbytes/sec: sent=0.601, rcvd=0.601
Shaping
------------------------
During download
Packets: sent=123, rcvd=122, error=0, lost=1 (0.8% loss) in 12.203803 sec
RTTs in ms: min/avg/max/dev: 1.459 / 2.831 / 8.281 / 0.955
Bandwidth in kbytes/sec: sent=0.604, rcvd=0.599
During upload
Packets: sent=196, rcvd=195, error=0, lost=1 (0.5% loss) in 19.503948 sec
RTTs in ms: min/avg/max/dev: 1.608 / 3.247 / 5.471 / 0.853
Bandwidth in kbytes/sec: sent=0.602, rcvd=0.599
No shaping
-----------------------------
During download
Packets: sent=147, rcvd=147, error=0, lost=0 (0.0% loss) in 14.604027 sec
RTTs in ms: min/avg/max/dev: 1.161 / 2.110 / 13.525 / 1.069
Bandwidth in kbytes/sec: sent=0.603, rcvd=0.603
During upload
Packets: sent=199, rcvd=199, error=0, lost=0 (0.0% loss) in 19.802377 sec
RTTs in ms: min/avg/max/dev: 1.238 / 2.071 / 4.715 / 0.373
Bandwidth in kbytes/sec: sent=0.602, rcvd=0.602
Now I really feel like disabling shaping on my end. The TCP streams see more
loss without shaping, but my ICMP latency looks better. Better flow
isolation? I need some fq_codel or Cake. I'm going to set fq_codel to
something like a 3ms target and a 45ms interval (sketch after the list
below). Due to CDNs and regional gaming servers, something like 95% of
everything is less than 30ms away, and something like 80% is less than 15ms
away:
Akamai 1-2ms
Netflix 2-3ms
Hulu 2-3ms
Cloudflare 9ms
Discord 9ms
World of Warcraft/Battle.Net 9ms
Youtube 12ms
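Something like this on a Linux box (a sketch only; the interface name is a
placeholder):

    # fq_codel tuned for a short-RTT path: 3ms target, 45ms interval
    tc qdisc replace dev eth0 root fq_codel target 3ms interval 45ms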
These tests are too short, but interesting.
On Tue, Jul 24, 2018 at 4:58 PM Dave Taht <dave.taht@gmail.com> wrote:
> On Tue, Jul 24, 2018 at 2:39 PM Benjamin Cronce <bcronce@gmail.com> wrote:
> > Just looking visually at the DSLReports graphs, I more normally see
> > maybe a few 40ms-150ms ping spikes, while my own attempts to shape can
> > give me several 300ms spikes. I would really need a lot more samples
> > and to actually run the numbers on them, but just casually looking, I
> > get the sense that mine is worse.
>
> too gentle we are perhaps. out of cpu you may be.
>
Possibly FairQ uses more CPU than expected, but I have load tested my
firewall using HFSC with ~8 queues shaped to 1Gb/s and Codel on the queues.
Using iperf, I was able to send ~1.4M pps, about 1Gb/s of 64-byte UDP
packets. pfSense was claiming about 1.4Mpps ingress on the LAN and 1.4Mpps
egress on the WAN. CPU hovered around 17% on my quad core, with the load
roughly equal across all 4 cores. A Core i5 with low-latency DDR3 and an
Intel i350 NIC is nice.
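For the curious, a rough Linux tc analogue of that setup (a sketch, not the
pfSense/ALTQ config itself; device name, handles, and the per-queue rate are
placeholders):

    # HFSC root shaped to 1Gb/s; one of the ~8 leaf classes shown, Codel attached
    tc qdisc add dev eth0 root handle 1: hfsc default 10
    tc class add dev eth0 parent 1: classid 1:1 hfsc sc rate 1gbit ul rate 1gbit
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc sc rate 125mbit ul rate 1gbit
    tc qdisc add dev eth0 parent 1:10 codel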
With MTU-sized packets, iperf running bidirectional TCP gets about 1.85Gb/s,
which is in line with the ~940Mb/s per direction achievable on gigabit
Ethernet, at something ridiculous like 4% CPU and 150 interrupts per second.
This NIC is magical. I'm assuming soft interrupts.
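The load tests were along these lines with classic iperf2 (addresses are
placeholders; the minimum UDP payload size varies by iperf version):

    # 18-byte UDP payload ~= 64-byte frames (18 + 8 UDP + 20 IP + 14 Eth + 4 FCS)
    iperf -c 192.0.2.1 -u -b 1000M -l 18 -t 30

    # bidirectional MTU-sized TCP (-d runs both directions at once)
    iperf -c 192.0.2.1 -d -t 30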