[Rpm] very good article on webrtc bandwidth estimation
Sebastian Moeller
moeller0 at gmx.de
Thu Jan 11 14:26:17 EST 2024
Hi Dave,
> On Jan 11, 2024, at 14:53, Dave Taht via Rpm <rpm at lists.bufferbloat.net> wrote:
>
> This was quite excellent, and did go into a delay based controller a bit.
>
> https://www.meetecho.com/blog/bwe-janus/
Yes, an excellent article. I did note, though, that the GCC draft contains this:
The loss-based controller SHOULD run every time feedback from the
receiver is received.
o If 2-10% of the packets have been lost since the previous report
from the receiver, the sender available bandwidth estimate
As_hat(i) will be kept unchanged.
o If more than 10% of the packets have been lost a new estimate is
calculated as As_hat(i) = As_hat(i-1)(1-0.5p), where p is the loss
ratio.
o As long as less than 2% of the packets have been lost As_hat(i)
will be increased as As_hat(i) = 1.05(As_hat(i-1))
The loss-based estimate As_hat is compared with the delay-based
estimate A_hat. The actual sending rate is set as the minimum
between As_hat and A_hat.
We motivate the packet loss thresholds by noting that if the
transmission channel has a small amount of packet loss due to over-
use, that amount will soon increase if the sender does not adjust his
bitrate. Therefore we will soon enough reach above the 10% threshold
and adjust As_hat(i). However, if the packet loss ratio does not
increase, the losses are probably not related to self-inflicted
congestion and therefore we should not react on them.
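Just to make the quoted rules concrete, they boil down to roughly the following (a minimal Python sketch; the variable names are mine, and the delay-based estimate A_hat is assumed to come from the delay controller discussed in the article):

    def loss_based_update(as_hat_prev, loss_ratio):
        # Loss-based update of As_hat, per the GCC draft text quoted above.
        if loss_ratio > 0.10:
            # More than 10% loss: back off in proportion to the loss ratio p.
            return as_hat_prev * (1.0 - 0.5 * loss_ratio)
        elif loss_ratio < 0.02:
            # Less than 2% loss: increase the estimate by 5%.
            return 1.05 * as_hat_prev
        else:
            # 2-10% loss: keep the estimate unchanged.
            return as_hat_prev

    # The actual sending rate is then the minimum of the two estimates:
    # rate = min(loss_based_update(as_hat_prev, p), a_hat)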
The draft does not offer even a hint of a source for where in the world 10% random (non-congestive) packet loss is "normal". (The justification given for the 10% threshold would work just as well, or as badly, for any other number < 100%...)
Just having had ~0.8% random loss on a link, I am unhappy to report that single-flow TCP throughput was visibly affected (as expected from the Mathis equation).
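For scale, the Mathis et al. bound is throughput <= (MSS/RTT) * (C/sqrt(p)) with C ~= 1.22. Plugging in that ~0.8% loss with an assumed MSS of 1460 bytes and an assumed 20 ms RTT (illustrative numbers, not measurements from that link):

    from math import sqrt

    mss_bits = 1460 * 8   # assumed MSS of 1460 bytes
    rtt_s    = 0.020      # assumed RTT of 20 ms
    p        = 0.008      # ~0.8% random loss
    # Mathis bound on a single Reno-like TCP flow, in Mbit/s:
    print(mss_bits / rtt_s * 1.22 / sqrt(p) / 1e6)  # ~8.0 Mbit/s

So even sub-1% random loss already caps a single Reno-like flow at a few Mbit/s on a typical RTT, let alone loss in the 2-10% range.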
Taking 2% packet loss as an indicator that one should increase the sending rate should, IMHO, be backed by data, and preferably tons of it... as should the recommendation to simply soldier on with loss in the 2-10% range... I am not saying a rational argument cannot be made for those values, only that the draft certainly does not make it...
Side question: when I run MTR against DNS servers I typically see 0% random packet loss; is there any study of what level of non-congestive packet loss is typical for different parts of the internet?
>
> --
> 40 years of net history, a couple songs:
> https://www.youtube.com/watch?v=D9RGX6QFm5E
> Dave Täht CSO, LibreQos
> _______________________________________________
> Rpm mailing list
> Rpm at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm