From: Benjamin Cronce <bcronce@gmail.com>
To: cloneman <cloneman@gmail.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Steam's distribution service - exceeding inbound policiers and bufferbloat
Date: Sun, 21 May 2017 07:49:39 -0500
Message-ID: <CAJ_ENFF-E-r6U22LCkU-ix7wecoV=afiePOspo-XcUMKhV8-Rw@mail.gmail.com>
In-Reply-To: <CABQZMoKDe0C_LXZ05t+3qVOt9SLMHcBgOv+y1jR5ayVCSWuv7w@mail.gmail.com>
All current TCP implementations have a minimum congestion window of two
segments. If you have 20 open connections, then the minimum bandwidth TCP
will attempt to consume is (2 segments * 20 connections) / RTT. If your
round-trip latency is very low relative to your bandwidth, that floor can
exceed the policed rate, and the senders effectively stop responding to
congestion.
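
As a rough worked example (the segment size and RTT here are only
illustrative, assuming a typical ~1460-byte MSS and a 20 ms RTT):

  2 segments * 1460 bytes * 8 bits * 20 connections / 0.020 s ≈ 23 Mbit/s

So on a link policed to much less than that, the flows are already sitting
at their window floor, and dropping packets no longer slows them down.
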
On Thu, Apr 6, 2017 at 9:47 PM, cloneman <cloneman@gmail.com> wrote:
> Hi,
>
> Apologies in advance if this is the wrong place to ask; I haven't been
> able to locate an official discussion board.
>
> I'm looking for any comments on Steam's game distribution download system
> - specifically how it defeats any bufferbloat mitigation system I've used.
>
> It seems to push past inbound policers, exceeding them by about 40%. That
> is to say, if you police Steam traffic to half of your line rate, enough
> capacity remains to avoid packet loss, added latency, and jitter. Obviously
> that is too much bandwidth to reserve.
>
> Without any inbound control, you can expect very heavy packet loss and
> jitter. With fq_codel or SFQ, and with the usual recommended 15% of line
> rate taken off the table, performance improves but is still unacceptable
> for small flows, pings, and the like.
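>
> (For reference, the kind of inbound setup I mean is roughly the usual
> IFB-redirect recipe sketched below; the interface names and the 85mbit
> figure are placeholders for a ~100 Mbit/s line, not exactly what I run.)
>
>   # create an IFB device and redirect WAN ingress (eth0 here) onto it
>   ip link add ifb0 type ifb
>   ip link set dev ifb0 up
>   tc qdisc add dev eth0 handle ffff: ingress
>   tc filter add dev eth0 parent ffff: protocol all prio 1 u32 match u32 0 0 action mirred egress redirect dev ifb0
>
>   # shape to ~85% of line rate, with fq_codel doing the queue management
>   tc qdisc add dev ifb0 root handle 1: htb default 10
>   tc class add dev ifb0 parent 1: classid 1:10 htb rate 85mbit
>   tc qdisc add dev ifb0 parent 1:10 fq_codel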
>
> The behavior can be observed by downloading any free game on their
> platform. I'm trying to figure out how they've accomplished this and how
> to mitigate it. The download runs 20 HTTP connections simultaneously,
> which is normally not an issue (ordinary parallel web downloads perform
> well under fq_codel).
>
> Note: in my testing, cable and VDSL links below 100 Mbit/s were vulnerable
> to this behavior, while fiber was immune.
>
> Basically, there are edge cases on the Internet where a sender keeps
> pushing too many bytes down a line that is already dropping or delaying
> packets. I would like to see more discussion of this issue.
>
> I haven't tried tweaking any of the parameters / latency targets in
> fq_codel.
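>
> (fq_codel's knobs are exposed through tc, e.g. something like "tc qdisc
> replace dev ifb0 parent 1:10 fq_codel target 10ms interval 200ms" against
> the setup sketched above, with the values only as placeholders; I don't
> know yet what would make sense on a policed link.)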
>
>