From: Erik Auerswald <auerswal@unix-ag.uni-kl.de>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: Sebastian Moeller <moeller0@gmx.de>,
ECN-Sane <ecn-sane@lists.bufferbloat.net>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Ecn-sane] [Bloat] quick question
Date: Sat, 26 Aug 2023 12:42:38 -0000
Message-ID: <20230826124233.GA30081@unix-ag.uni-kl.de>
In-Reply-To: <66279D1A-5B4B-482E-91B2-86AFCFB6C20E@gmail.com>
Hi,
On Sat, Aug 26, 2023 at 03:06:09PM +0300, Jonathan Morton via Bloat wrote:
> > On 26 Aug, 2023, at 2:48 pm, Sebastian Moeller via Ecn-sane <ecn-sane@lists.bufferbloat.net> wrote:
> >
> > percentage of packets marked: 100 * (2346329 / 3259777) = 72%
> >
> > This seems like too high a marking rate to me. I would naively expect
> > that a flow, on getting a mark, scales back its cwnd by 20-50% and
> > then slowly increases it again, so I expect the actual marking rate
> > to be considerably below 50% per flow...
>
> > My gut feeling is that these Steam flows do not obey RFC 3168 ECN
> > (or something wipes the CE marks my router sends upstream along the
> > path)... but without a good model of what marking rate I should
> > expect, this is very hand-wavy, so if anybody could help me out with
> > an easy derivation of the expected average marking rate I would be
> > grateful.
>
> Yeah, that's definitely too much marking. We've actually seen this
> behaviour from Steam servers before, but they had fixed it at some
> point. Perhaps they've unfixed it again.
>
> My best guess is that they're running an old version of BBR with ECN
> negotiation left on. BBRv1, at least, completely ignores ECE responses.
> Fortunately BBR itself does a good job of congestion control in the
> FQ environment which Cake provides, as you can tell by the fact that
> the queues never get full enough to trigger heavy dropping.
>
> The CUBIC RFC offers an answer to your question:
> [small screenshot attached to email]
I find the attached screenshot quite unreadable. It seems to have been
taken starting from the paragraph just above Section 5.2 of RFC 9438
<https://www.rfc-editor.org/rfc/rfc9438#section-5.2>. In UTF-8 text it
looks as follows:
------------------------------------------------------------------------
_C_ determines the aggressiveness of CUBIC in competing with other
congestion control algorithms for bandwidth. CUBIC is more friendly
to Reno TCP if the value of _C_ is lower. However, it is NOT
RECOMMENDED to set _C_ to a very low value like 0.04, since CUBIC
with a low _C_ cannot efficiently use the bandwidth in fast and long-
distance networks. Based on these observations and extensive
deployment experience, _C_=0.4 seems to provide a good balance
between Reno-friendliness and aggressiveness of window increase.
Therefore, _C_ SHOULD be set to 0.4. With _C_ set to 0.4, Figure 7
is reduced to
                        4 ┌────┐
                         ╲ │   3
                          ╲│RTT
  AVG_W      = 1.054 * ─────────
       cubic            4 ┌──┐
                         ╲ │ 3
                          ╲│p

                Figure 8
Figure 8 is then used in the next subsection to show the scalability
of CUBIC.
5.2. Using Spare Capacity
CUBIC uses a more aggressive window increase function than Reno for
fast and long-distance networks.
Table 3 shows that to achieve the 10 Gbps rate, Reno TCP requires a
packet loss rate of 2.0e-10, while CUBIC TCP requires a packet loss
rate of 2.9e-8.
+===================+===========+=========+=========+=========+
| Throughput (Mbps) | Average W | Reno P | HSTCP P | CUBIC P |
+===================+===========+=========+=========+=========+
| 1 | 8.3 | 2.0e-2 | 2.0e-2 | 2.0e-2 |
+-------------------+-----------+---------+---------+---------+
| 10 | 83.3 | 2.0e-4 | 3.9e-4 | 2.9e-4 |
+-------------------+-----------+---------+---------+---------+
| 100 | 833.3 | 2.0e-6 | 2.5e-5 | 1.4e-5 |
+-------------------+-----------+---------+---------+---------+
| 1000 | 8333.3 | 2.0e-8 | 1.5e-6 | 6.3e-7 |
+-------------------+-----------+---------+---------+---------+
| 10000 | 83333.3 | 2.0e-10 | 1.0e-7 | 2.9e-8 |
+-------------------+-----------+---------+---------+---------+
Table 3: Required Packet Loss Rate for Reno TCP, HSTCP, and
CUBIC to Achieve a Certain Throughput
Table 3 describes the required packet loss rate for Reno TCP, HSTCP,
and CUBIC to achieve a certain throughput, with 1500-byte packets and
an _RTT_ of 0.1 seconds.
------------------------------------------------------------------------
(extracted using:
wget -q -O- 'https://www.rfc-editor.org/rfc/rfc9438.txt' \
| sed -En '/^ +_C_ determines the aggressiveness of CUBIC/,/of 0\.1 seconds\.$/p'
)
> Reading the table, for an RTT of 100 ms and a throughput of 100 Mbps
> in a single flow, a "loss rate" (equivalent to a marking rate) of
> about 1 per 70,000 packets is required. The formula can be rearranged
> to find a more general answer.
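
Indeed. Rearranging Figure 8 for p gives

  p = RTT * (1.054 / AVG_W)^(4/3)

with AVG_W the average window in packets (throughput * RTT / packet
size). The small Python sketch below (my own, not from the RFC or from
this thread; required_p is just a made-up name, and the 1500-byte
packets and 0.1 s RTT are the illustrative assumptions of Table 3)
reproduces the last three CUBIC entries of Table 3:

------------------------------------------------------------------------
#!/usr/bin/env python3
# Sanity check of Figure 8 / Table 3 of RFC 9438 (a sketch, not tooling
# from this thread).  Figure 8 describes CUBIC's own growth region, so
# this reproduces only the rows of Table 3 where CUBIC is more
# aggressive than the Reno-friendly mode (>= 100 Mbps at RTT = 0.1 s).

def required_p(throughput_bps, rtt_s, packet_bytes=1500):
    """Figure 8 solved for p: p = RTT * (1.054 / AVG_W)^(4/3)."""
    avg_w = throughput_bps * rtt_s / (packet_bytes * 8)  # window in packets
    return rtt_s * (1.054 / avg_w) ** (4.0 / 3.0)

for mbps in (100, 1000, 10000):
    p = required_p(mbps * 1e6, rtt_s=0.1)
    print(f"{mbps:>5} Mbps: p = {p:.1e}, about 1 mark per {1/p:,.0f} packets")

# Sebastian reported ~72% of packets marked; against the 100 Mbps /
# 100 ms example that is roughly 0.72 / 1.4e-5, i.e. tens of thousands
# of times more marks than such a flow should need.
------------------------------------------------------------------------
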
HTH,
Erik