[Bloat] Updated Bufferbloat Test
Sina Khanifar
sina at waveform.com
Fri Feb 26 03:20:47 EST 2021
Hi Daniel, and thanks for chiming in!
> I have used the waveform.com test myself, and found that it didn't do a good job of measuring latency in terms of having rather high outlying tails, which were not realistic for the actual traffic of interest.
I think the reason this is happening may be CPU throttling, a
limitation of the fact that ours is a browser-based test, rather than
anything else.
I am on a cable modem with about 900 Mbps down and 35 Mbps up. If I
use Chrome's developer tools to limit my connection to 700 Mbps down
and 25 Mbps up, I end up with no bufferbloat, as the tool can no longer
saturate my connection, and I don't see those unusual latency spikes.
This CPU-throttling issue is one of the biggest problems with
browser-based tests. We spent a lot of time trying to minimize CPU
usage, but this was the best we could manage (and it's quite a bit
better than earlier versions of the test).
I think that given this limitation of browser-based tests, maybe the
users you have in mind are not our target audience. Our goal is to
make an easy-to-use bufferbloat test "for the rest of us." For
technically-minded gamers, running flent will likely give much better
results. I'd love for it to be otherwise, so we may try to tinker with
things to see if we can get CPU usage down and simultaneously deal
with the UDP question below.
> I think the ideal test would use a WebRTC connection over UDP.
I definitely agree that UDP traffic is the most latency-sensitive. But
bufferbloat also affects TCP traffic.
Our thinking went something like this: QoS features on some routers
prioritize UDP traffic, but bufferbloat affects all traffic. If we
only tested UDP, QoS prioritization of UDP would mean that some
routers would show no bufferbloat in our test, yet still exhibit
increased latency on TCP traffic. By testing TCP, we could check that
neither TCP nor UDP traffic was affected by bufferbloat.
That being said: the ideal case would be to test both TCP and UDP
traffic, and then combine the results to rate bufferbloat and grade
the "real-world impact" section. I believe Cloudflare is adding
support for WebSockets and UDP traffic to Workers, which should mean
that we can add a UDP test when that happens.
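To make the idea concrete, here's a very rough sketch of the kind of
reflector-based UDP probe Daniel describes below: fixed-size packets
sent at a game-like tick rate, echoed straight back, with the RTT taken
from send vs. receive times. It's plain Python against a local loopback
echo, purely for illustration; the real thing would be a WebRTC data
channel from the browser to a remote reflector, and the packet size,
tick rate, and reflector here are just placeholders:

# Rough sketch only: a UDP "reflector" probe along the lines Daniel
# describes below. Fixed-size packets go out at a game-like tick rate,
# an echo server sends them straight back, and the RTT is the gap
# between send and receive times. A real test would use a WebRTC data
# channel from the browser to a remote reflector, not loopback sockets,
# and would tag packets with sequence numbers.
import socket
import threading
import time

TICK_HZ = 64          # packets per second, like a 64 Hz game tick
PACKET_BYTES = 300    # roughly the packet size Daniel mentions
DURATION_S = 2        # keep the example short

def run_reflector(sock):
    """Echo every datagram straight back to its sender."""
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return            # socket closed, stop the thread
        sock.sendto(data, addr)

def measure_rtts(reflector_addr):
    """Send fixed-size datagrams at TICK_HZ and record each RTT in ms."""
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(1.0)
    payload = b"\x00" * PACKET_BYTES
    rtts_ms = []
    for _ in range(TICK_HZ * DURATION_S):
        start = time.perf_counter()
        client.sendto(payload, reflector_addr)
        try:
            client.recvfrom(2048)
            rtts_ms.append((time.perf_counter() - start) * 1000)
        except socket.timeout:
            rtts_ms.append(float("inf"))   # treat a lost packet as very late
        # sleep out whatever is left of this 1/64 s tick
        time.sleep(max(0.0, 1 / TICK_HZ - (time.perf_counter() - start)))
    client.close()
    return rtts_ms

if __name__ == "__main__":
    reflector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reflector.bind(("127.0.0.1", 0))   # loopback stand-in for a real reflector
    threading.Thread(target=run_reflector, args=(reflector,), daemon=True).start()
    samples = measure_rtts(reflector.getsockname())
    reflector.close()
    print(f"{len(samples)} RTT samples, max {max(samples):.1f} ms")

The hard part, of course, isn't the probe loop itself but running it
from the browser without the CPU-throttling problem described above.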
We'll revisit this in the future, but given the CPU throttling issue
discussed above, I'm not sure it's going to get our test to exactly
where you'd like it to be.
> Supposing your test runs for, say, 10 seconds, you'll have 640 RTT samples. Resample 64 of them randomly, say, 100 times and calculate how many of these resamples have less than 1/64 s (~15.6 ms) of increased latency. Then express something like a 97% chance of being lag-free each second, or a 45% chance of being lag-free each second, or whatever.
I'm not quite sure I understand this part of your suggested protocol.
What do you mean by resampling? Why is the resampling necessary? I'd
like to grok this in case we're able to deal with the other issues in
the future and implement an updated set of criteria for tech-savvy
gamers.
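My best guess is that you mean something like the bootstrap sketched
below: treat each resample of 64 RTTs as one second of 64 Hz game
traffic, call that second "lag free" if every sample is within
~15.6 ms of the unloaded baseline, and report the fraction of lag-free
resamples. The with-replacement draw and the every-sample-under-threshold
rule are my own assumptions, so please correct me if I've misread you:

# Sketch of my reading of the resampling idea; the assumptions are
# flagged in the paragraph above and in the comments here.
import random

TICK_HZ = 64
THRESHOLD_MS = 1000 / TICK_HZ   # ~15.6 ms, one between-packet interval

def lag_free_probability(loaded_rtts_ms, baseline_rtt_ms, n_resamples=100):
    """Estimate the chance that a random one-second window is lag-free."""
    lag_free = 0
    for _ in range(n_resamples):
        # One simulated second of game traffic: TICK_HZ RTTs drawn with
        # replacement from the samples taken while the link was loaded.
        window = random.choices(loaded_rtts_ms, k=TICK_HZ)
        if all(rtt - baseline_rtt_ms < THRESHOLD_MS for rtt in window):
            lag_free += 1
    return lag_free / n_resamples

# e.g. with the 640 samples from a 10-second test:
# print(f"{lag_free_probability(samples, baseline_rtt):.0%} chance of a lag-free second")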
Best,
Sina.
On Thu, Feb 25, 2021 at 5:06 PM Daniel Lakeland
<contact at lakelandappliedsciences.com> wrote:
>
> On 2/25/21 12:14 PM, Sebastian Moeller wrote:
> > Hi Sina,
> >
> > let me try to invite Daniel Lakeland (cc'd) into this discussion. He is doing tremendous work in the OpenWrt forum to single-handedly help gamers get the most out of their connections. I think he might have some opinions and data on latency requirements for modern gaming.
> > @Daniel, this discussion is about a new and really nice speedtest (which I have already been plugging in the OpenWrt forum, as you probably recall) that has a strong focus on latency increases under load. We are currently discussing what latency-increase limits to use to rate a connection for on-line gaming.
> >
> > Now, as a non-gamer, I would assume that gaming has latency requirements at least as strict as VoIP, as in most games even a slight delay at the wrong time can directly translate into a "game over". But as I said, I stopped reflex gaming pretty much when I realized how badly I was doing in doom/quake.
> >
> > Best Regards
> > Sebastian
> >
> Thanks Sebastian,
>
> I have used the waveform.com test myself, and found that it didn't do a
> good job of measuring latency in terms of having rather high outlying
> tails, which were not realistic for the actual traffic of interest.
>
> Here's a test I ran while doing a ping flood (ping output attached as a
> txt file):
>
> https://www.waveform.com/tools/bufferbloat?test-id=e2fb822d-458b-43c6-984a-92694333ae92
>
>
> Now this is with QFQ on my desktop, and a custom HFSC shaper on my WAN
> router. This is somewhat more relaxed than I used to run things (I used
> to HFSC-shape my desktop too, but now I just reschedule with QFQ). Pings
> get highest priority along with interactive traffic in the QFQ system,
> and they get low-latency but not realtime treatment at the WAN boundary.
> Basically you can see the ping time never went above 44 ms, whereas the
> waveform test had outliers up to 228/231 ms.
>
> Almost all latency-sensitive traffic will be UDP. Specifically, the voice
> stream in VoIP and the control packets in games are all UDP. However, it
> seems like the waveform test measures HTTP connection open/close, and
> I'm thinking something about that is causing the extreme outliers. From
> the test description:
>
> "We are using HTTP requests instead of WebSockets for our latency and
> speed test. This has both advantages and disadvantages in terms of
> measuring bufferbloat. We hope to improve this test in the future to
> measure latency both via WebSockets, HTTP requests, and WebRTC Data
> Channels."
>
> I think the ideal test would use a WebRTC connection over UDP.
>
> A lot of the games I've seen packet captures from have a fixed clock
> tick in the vicinity of 60-65 Hz, with a UDP packet sent every tick. Even
> 1 packet lost per second would generally not feel that good to a player,
> and it doesn't even need to be lost, just delayed by a large fraction of
> the 1/64 s (~15.6 ms) between packets... High-performance games will use
> closer to 120 Hz.
>
> So to guarantee good network performance for a game, you really want to
> ensure less than, say, 10 ms of increased latency at, say, the 99.5th
> percentile (that's 1 packet delayed by 67% or more of the between-packet
> time every 200 packets or so, or every ~3 seconds at 64 Hz).
>
> To measure this would really require WebRTC sending ~300-byte packets
> at, say, 64 Hz to a reflector that would send them back, and then
> comparing the send time to the receive time.
>
> Rather than picking a strict percentile, I'd recommend trying to give a
> "probability of no noticeable lag in a 1 second interval" or something
> like that.
>
> Supposing your test runs for, say, 10 seconds, you'll have 640 RTT
> samples. Resample 64 of them randomly, say, 100 times and calculate how
> many of these resamples have less than 1/64 s (~15.6 ms) of increased
> latency. Then express something like a 97% chance of being lag-free each
> second, or a 45% chance of being lag-free each second, or whatever.
>
> This is a quite stringent requirement. But you know what? The message
> boards are FLOODED with people unhappy with their gaming performance, so
> I think it's realistic. Honestly, if you told a technically minded gamer
> "Hey, 5% of your packets will be delayed more than 400 ms," they'd say
> THAT'S HORRIBLE, not "oh good".
>
> If a gamer can keep their round-trip latency increase below 10 ms at the
> 99.5th percentile level, they'll probably be feeling good. If they get
> above 20 ms of increase for more than 1% of the time, they'll be really
> irritated (this is more or less equivalent to 1% packet drop).
>
>