On Wed, 6 May 2015, Jonathan Morton wrote:
> So, as a proposed methodology, how does this sound:
>
> Determine a reasonable ballpark figure for typical codec and jitter-buffer
> delay (one way). Fix this as a constant value for the benchmark.
Commercial-grade VoIP systems running in a controlled environment typically (in my experience) come with a 40ms PDV buffer (Packet Delay Variation; let's not call it jitter, the timing people get upset if you call it jitter). These systems typically do not work well over the Internet, as we here all know: 40ms is quite a low PDV allowance for a FIFO-based Internet access. Applications actually designed to work on the Internet have PDV buffers that adapt to the PDV they observe, so they can both grow and shrink over the course of a call.
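To illustrate, here is a minimal Python sketch (hypothetical, not lifted from any real VoIP stack) of how such an adaptive PDV buffer might pick its depth from observed samples; the window size, clamps and headroom factor are all assumptions:

from collections import deque

class AdaptivePdvBuffer:
    def __init__(self, window=500, min_ms=10.0, max_ms=200.0, headroom=1.2):
        self.samples = deque(maxlen=window)   # recent one-way PDV samples (ms)
        self.min_ms = min_ms                  # floor set by codec/serialization
        self.max_ms = max_ms                  # ceiling: beyond this, drop instead
        self.headroom = headroom              # safety margin over observed PDV

    def observe(self, pdv_ms):
        # record the PDV measured for one arriving packet
        self.samples.append(pdv_ms)

    def target_depth_ms(self):
        # size the playout buffer to absorb ~99% of recently seen PDV,
        # clamped to [min_ms, max_ms]; re-evaluated periodically during a call
        if not self.samples:
            return self.min_ms
        ordered = sorted(self.samples)
        p99 = ordered[int(0.99 * (len(ordered) - 1))]
        return max(self.min_ms, min(self.max_ms, p99 * self.headroom))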
I'd say a reasonable ballpark figure for VoIP and video conferencing PDV is in the 50-100ms range or so, where lower is of course better. It's basically impossible to have really low PDV on a 1 megabit/s link, because a full-size 1500-byte packet takes 12ms to transmit at that speed, but it's perfectly feasible to keep it under 10-20ms as the link speed increases. If we say that 1 megabit/s (a typical ADSL up speed) is the lower bound of speed where one can expect VoIP to work together with other Internet traffic, then 50-100ms should be technically attainable if the vendor/operator actually tries to reduce bufferbloat/PDV.
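As a quick sanity check on those numbers (illustrative Python; the link speeds are just examples), serialization delay is simply packet size over link rate:

def serialization_delay_ms(packet_bytes, link_bps):
    # time to clock one packet onto the wire
    return packet_bytes * 8 / link_bps * 1000

for mbps in (1, 10, 100):
    d = serialization_delay_ms(1500, mbps * 1_000_000)
    print(f"{mbps:>4} Mbit/s: {d:.2f} ms per 1500-byte packet")
# prints 12.00 ms, 1.20 ms and 0.12 ms respectively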
> Measure the maximum induced delays in each direction.
Depending on the length of the test, it might make sense to aim for the 95th or 99th percentile, i.e. throw away the one or few worst values, as these might be outliers. But generally I agree with your proposed terminology.
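For example, a simple nearest-rank percentile over the measured delay samples does that kind of trimming; this is a hypothetical Python sketch with made-up sample values, not part of any benchmark tool:

def percentile(samples, pct):
    # nearest-rank percentile: the value below which pct% of samples fall
    ordered = sorted(samples)
    rank = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[rank]

delays_ms = [23, 25, 24, 31, 180, 26, 27, 25, 24, 410]   # made-up samples (ms)
print(percentile(delays_ms, 95))   # -> 180: the single worst value is dropped
print(max(delays_ms))              # -> 410: raw maximum, set by one outlier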
--
Mikael Abrahamsson    email: swmike@swm.pp.se