Having more detail available but not shown by default on the main page might keep the geeks happy and make diagnosis easier.

Simon

On February 25, 2021 12:11:02 PM Sina Khanifar wrote:

> Thanks for the kind words, Simon!
>
>> Since you are measuring buffer bloat - how much latency *can* be caused by the excessive buffering - expressing the jitter number in terms of 95%ile would be appropriate, as that’s closely related to how large the excessive buffer is. The average jitter is more related to how the competing TCP streams have some gaps due to congestion control, and these gaps can temporarily lower the buffer occupancy and result in a lower average jitter number.
>
> I'm thinking that we might even remove jitter altogether from the UI, and instead just show 95%ile latency. 95%ile latency and 95%ile jitter should be equivalent, but 95%ile latency is really the more meaningful measure for real-time communications, it feels like?
>
> On Thu, Feb 25, 2021 at 11:57 AM Simon Barber wrote:
>>
>> Hi Sina,
>>
>> That sounds great, and I understand the desire to separate the fixed component of latency from the buffer bloat / variable part. Messaging that in a way that accurately conveys the end-user impact and the impact due to unmitigated buffers, while being easy to understand, is tricky.
>>
>> Since you are measuring buffer bloat - how much latency *can* be caused by the excessive buffering - expressing the jitter number in terms of 95%ile would be appropriate, as that’s closely related to how large the excessive buffer is. The average jitter is more related to how the competing TCP streams have some gaps due to congestion control, and these gaps can temporarily lower the buffer occupancy and result in a lower average jitter number.
>>
>> Really appreciate this work, and the interface and ‘latency first’ nature of this test. It’s a great contribution, and will hopefully help drive ISPs to reduce their bloat, helping everyone.
>>
>> Simon
>>
>> > On Feb 25, 2021, at 11:47 AM, Sina Khanifar wrote:
>> >
>> >> So perhaps this can feed into the rating system: total latency < 50mS is an A, < 150mS is a B, 600mS is a C, or something like that.
>> >
>> > The "grade" we give is purely a measure of bufferbloat. If you start with a latency of 500 ms on your connection, it wouldn't be fair for us to give you an F grade even if there is no increase in latency due to bufferbloat.
>> >
>> > This is why we added the "Real-World Impact" table below the grade - in many cases people may start with a connection that is already problematic for video conferencing, VoIP, and gaming.
>> >
>> > I think we're going to change the conditions on that table to have high 95%ile latency trigger the degraded-performance shield warnings. In the future it might be neat for us to move to grades on the table as well.
>> >
>> > On Thu, Feb 25, 2021 at 5:53 AM Simon Barber wrote:
>> >>
>> >> So perhaps this can feed into the rating system: total latency < 50mS is an A, < 150mS is a B, 600mS is a C, or something like that.
>> >>
>> >> Simon
>> >>
>> >> On February 25, 2021 5:49:26 AM Mikael Abrahamsson wrote:
>> >>
>> >>> On Thu, 25 Feb 2021, Simon Barber wrote:
>> >>>
>> >>>> The ITU say voice should be <150mS, however in the real world people are a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few people complain about that. That said, the quality of the conversation is affected, and staying under 150mS is better for a fast, free-flowing conversation. Most people won't have a problem at 600mS and will have a problem at 1000mS. That is for a 2-party voice call. A large group presentation over video can tolerate more, but may have issues with talking over when switching from presenter to questioner, for example.
>> >>>
>> >>> I worked at a phone company 10+ years ago. We had some equipment that internally was ATM based and each "hop" added 7ms. This, in combination with IP-based telephony at the end points that added 40ms one-way per end-point (PDV buffer), caused people to complain when RTT started creeping up to 300-400ms. This was for PSTN calls.
>> >>>
>> >>> Yes, people might have more tolerance with mobile phone calls because they have lower expectations when out and about, but my experience is that people will definitely notice 300-400ms RTT, though they might not get upset enough to open a support ticket until 600ms or more.
>> >>>
>> >>> --
>> >>> Mikael Abrahamsson    email: swmike@swm.pp.se
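As a concrete illustration of the approach discussed in the thread - reporting 95th-percentile latency rather than jitter, and grading only the latency increase under load rather than the absolute baseline - here is a minimal sketch in Python. The function names, sample data, and grade thresholds are illustrative assumptions on my part, not the actual test's implementation.

# A minimal sketch of the idea discussed above: report 95th-percentile (p95)
# latency instead of jitter, and grade only the latency *increase* under load,
# not the absolute baseline. Thresholds and names are illustrative assumptions.

import statistics


def p95(rtt_samples_ms):
    """Return the 95th-percentile RTT of a list of samples, in milliseconds."""
    # statistics.quantiles with n=20 returns 19 cut points; the last one
    # is the 95th-percentile estimate.
    return statistics.quantiles(rtt_samples_ms, n=20)[-1]


def bufferbloat_grade(idle_rtts_ms, loaded_rtts_ms):
    """Grade the p95 latency increase under load (hypothetical thresholds)."""
    idle = p95(idle_rtts_ms)
    loaded = p95(loaded_rtts_ms)
    increase_ms = max(0.0, loaded - idle)

    if increase_ms < 5:
        grade = "A+"
    elif increase_ms < 30:
        grade = "A"
    elif increase_ms < 60:
        grade = "B"
    elif increase_ms < 200:
        grade = "C"
    else:
        grade = "F"
    return {"idle_p95_ms": idle, "loaded_p95_ms": loaded,
            "increase_ms": increase_ms, "grade": grade}


# Example: a link that idles around 500 ms but adds almost nothing under load
# still gets a good bufferbloat grade, even though a "Real-World Impact" table
# would still flag VoIP and gaming because of the high baseline latency.
idle = [498, 502, 505, 499, 501, 503, 500, 504, 502, 499,
        501, 500, 503, 502, 498, 500, 501, 504, 499, 502]
loaded = [r + 4 for r in idle]
print(bufferbloat_grade(idle, loaded))

This mirrors the point made in the thread: a connection that starts at 500 ms should not get an F for bufferbloat if load adds nothing, while a separate per-application table (or a per-application grade) can reflect the absolute 95%ile latency.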