On 2/25/21 12:14 PM, Sebastian Moeller wrote:
> Hi Sina,
>
> let me try to invite Daniel Lakeland (cc'd) into this discussion. He is doing tremendous work in the OpenWrt forum to single-handedly help gamers get the most out of their connections. I think he might have some opinion and data on latency requirements for modern gaming.
> @Daniel, this discussion is about a new and really nice speedtest (that I am already plugging in the OpenWrt forum, as you probably recall) that has a strong focus on latency-under-load increases. We are currently discussing what latency increase limits to use to rate a connection for on-line gaming.
>
> Now, as a non-gamer, I would assume that gaming has at least similarly strict latency requirements as VoIP, as in most games even a slight delay at the wrong time can directly translate into a "game-over". But as I said, I stopped reflex gaming pretty much when I realized how badly I was doing in doom/quake.
>
> Best Regards
> Sebastian
>

Thanks Sebastian,

I have used the waveform.com test myself, and found that it didn't do a good job of measuring latency, in the sense that it reported rather high outlying tails which were not realistic for the actual traffic of interest.

Here's a test I ran while doing a ping flood (ping output in the attached txt file):

https://www.waveform.com/tools/bufferbloat?test-id=e2fb822d-458b-43c6-984a-92694333ae92

Now this is with QFQ on my desktop, and a custom HFSC shaper on my WAN router. This is somewhat more relaxed than I used to run things (I used to HFSC-shape my desktop too, but now I just reschedule with QFQ). Pings get highest priority along with interactive traffic in the QFQ system, and they get low-latency but not realtime treatment at the WAN boundary.

Basically you can see the ping time never went above 44ms, whereas the waveform test had outliers up to 228/231 ms.

Almost all latency-sensitive traffic will be UDP. Specifically, the voice in VoIP and the control packets in games are all UDP. However, it seems like the waveform test measures HTTP connection open/close, and I'm thinking something about that is causing extreme outliers. From the test description:

"We are using HTTP requests instead of WebSockets for our latency and speed test. This has both advantages and disadvantages in terms of measuring bufferbloat. We hope to improve this test in the future to measure latency both via WebSockets, HTTP requests, and WebRTC Data Channels."

I think the ideal test would use a WebRTC connection over UDP.

A lot of the games I've seen packet captures from have a fixed clock tick in the vicinity of 60-65Hz, with a UDP packet sent every tick. Even 1 packet lost per second would generally feel not that good to a player, and it doesn't even need to be lost, just delayed by a large fraction of the tick interval of 1/64 s = 15.6ms... High-performance games will use closer to 120Hz.

So to guarantee good network performance for a game, you really want to ensure less than say 10ms of increased latency at the, say, 99.5th percentile (that's 1 packet delayed by 67% or more of the between-packet time every 200 packets or so, or every ~3 seconds at 64Hz).

To measure this would really require WebRTC sending ~300 byte packets at say 64Hz to a reflector that would send them back, and then comparing the send time to the receive time.

Rather than picking a strict percentile, I'd recommend trying to give a "probability of no noticeable lag in a 1 second interval" or something like that. Supposing your test runs for say 10 seconds, you'll have 640 RTT samples.
Resample 64 of them randomly, say 100 times, and calculate how many of those resamples have less than 1/64 s ≈ 15ms of increased latency. Then express something like a 97% chance of being lag-free each second, or a 45% chance of being lag-free each second, or whatever.

This is a quite stringent requirement. But you know what? The message boards are FLOODED with people unhappy with their gaming performance, so I think it's realistic. Honestly, tell a technically minded gamer "Hey, 5% of your packets will be delayed more than 400ms" and they'd say THAT'S HORRIBLE, not "oh good".

If a gamer can keep their round-trip time below 10ms of latency increase at the 99.5th percentile level, they'll probably be feeling good. If they get above 20ms of increase for more than 1% of the time, they'll be really irritated (this is more or less equivalent to 1% packet drop).
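
To make the resampling idea concrete, here's a rough Python sketch of the calculation I have in mind. The function name, the defaults, and the reading of "lag free" as "no sample in the resampled second exceeds roughly one tick interval" are just my placeholder choices, not anything the waveform test actually implements:

    import random

    def prob_lag_free_second(rtt_increase_ms, tick_hz=64, n_resamples=100,
                             threshold_ms=1000.0 / 64):
        # rtt_increase_ms: per-packet latency increases over baseline, in ms.
        # Each resample of tick_hz values stands in for one second of game
        # traffic; call that second "lag free" if no value exceeds the
        # threshold (~one 64Hz tick interval, 15.6ms).
        lag_free = 0
        for _ in range(n_resamples):
            second = random.choices(rtt_increase_ms, k=tick_hz)
            if max(second) < threshold_ms:
                lag_free += 1
        return lag_free / n_resamples

So with the 640 samples from a 10-second test, a return value of 0.97 would be reported as "97% chance of a lag-free second".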