[Bloat] Steam's distribution service - exceeding inbound policers and bufferbloat

Benjamin Cronce bcronce at gmail.com
Sun May 21 10:04:08 EDT 2017


(2 segments * 1500 bytes * 8 bits)/5 ms = 4.8 Mb/s minimum per connection. 20
connections is 96 Mb/s.
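That arithmetic can be sketched in a few lines of Python (a minimal check; the 2-segment congestion-window floor, 1500-byte MSS, 5 ms RTT, and 20 connections are the assumptions stated above, not measured values):

```python
# Back-of-the-envelope TCP rate floor: a sender cannot shrink its
# congestion window below the minimum, so each flow pushes at least
# cwnd_floor / RTT regardless of congestion signals.

def tcp_min_rate_bps(segments=2, mss_bytes=1500, rtt_s=0.005):
    """Lowest rate one TCP flow will attempt: window floor over RTT."""
    return segments * mss_bytes * 8 / rtt_s

per_flow = tcp_min_rate_bps()   # 4,800,000 b/s = 4.8 Mb/s at 5 ms RTT
aggregate = 20 * per_flow       # 96 Mb/s across 20 connections

print(per_flow / 1e6, aggregate / 1e6)  # -> 4.8 96.0
```

With 50 Mbit of inbound bandwidth, the 96 Mb/s aggregate floor exceeds the link, which is why the senders cannot back off enough at that RTT.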

On May 21, 2017 8:08 AM, "cloneman" <cloneman at gmail.com> wrote:

> Thanks for the info. This is a possibility, as I have 5 ms latency to their
> servers with 50 Mbit of bandwidth.
>
> On May 21, 2017 8:49 AM, "Benjamin Cronce" <bcronce at gmail.com> wrote:
>
>> All current TCP implementations have a minimum congestion window of two
>> segments. If you have 20 open connections, then the minimum bandwidth TCP
>> will attempt to consume is (2 segments * 20 connections)/latency. If your
>> latency is very low relative to your bandwidth, that floor can exceed the
>> link rate, and the senders cannot respond to congestion by slowing down
>> any further.
>>
>> On Thu, Apr 6, 2017 at 9:47 PM, cloneman <cloneman at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Apologies in advance if this is the wrong place to ask; I haven't been
>>> able to locate an official discussion board.
>>>
>>> I'm looking for any comments on Steam's game distribution download
>>> system - specifically how it defeats any bufferbloat mitigation system I've
>>> used.
>>>
>>> It seems to push past inbound policers, exceeding them by about 40%.
>>> That is to say, if you police Steam traffic to half of your line rate,
>>> enough capacity remains to avoid packet loss, latency, and jitter.
>>> Obviously this is too much bandwidth to reserve.
>>>
>>> Without any inbound control, you can expect very heavy packet loss and
>>> jitter. With fq_codel or SFQ, and taking the usual recommended 15% off the
>>> table, you get improved but still unacceptable performance for your small
>>> flows, ping, etc.
>>>
>>> The behavior can be observed by downloading any free game on their
>>> platform. I'm trying to figure out how they've accomplished this and how
>>> to mitigate it. It operates with 20 simultaneous HTTP connections,
>>> which is normally not an issue (multiple web downloads perform well
>>> under fq_codel).
>>>
>>> Note: in my testing, cable and VDSL links below 100 Mbit were vulnerable
>>> to this behavior, while fiber was immune.
>>>
>>> Basically, there are edge cases on the internet that like to push too
>>> many bytes down a line that is dropping or delaying packets. I would like
>>> to see more discussion of this issue.
>>>
>>> I haven't tried tweaking any of the parameters / latency targets in
>>> fq_codel.
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>

