From: Dave Taht <dave.taht@gmail.com>
To: netdev@vger.kernel.org, cake@lists.bufferbloat.net
Subject: [Cake] [PATCH net-next] sch_cake: Make gso-splitting configurable from userspace
Date: Thu, 26 Jul 2018 19:45:10 -0700
Message-ID: <1532659510-17385-2-git-send-email-dave.taht@gmail.com>
In-Reply-To: <1532659510-17385-1-git-send-email-dave.taht@gmail.com>
This patch restores cake's previously deployed behavior of always
splitting GSO super-packets, even at line rate, and makes GSO splitting
configurable from userspace.
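With an iproute2 that understands the new attribute, the behavior can be
toggled per qdisc. Illustrative invocations (eth0 is a placeholder):

  tc qdisc replace dev eth0 root cake bandwidth 100Mbit split-gso
  tc qdisc replace dev eth0 root cake unlimited no-split-gso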
Running cake unlimited (unshaped) on a 1GigE link with local traffic:
no-split-gso BQL limit: 131966 bytes
split-gso BQL limit:    ~42392-45420 bytes
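(The BQL limit can be sampled from sysfs during the test, e.g.

  cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit

with eth0/tx-0 standing in for the actual interface and queue.)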
On this 4-stream test, splitting GSO super-packets halves the observed
inter-packet latency with no loss in throughput.
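Back-of-the-envelope: at 1Gbit/s the BQL backlog alone accounts for
131966 B * 8 / 10^9 ~= 1.06 ms of drain time unsplit, versus
~44 kB * 8 / 10^9 ~= 0.35 ms split, roughly consistent with the ICMP
medians below.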
Summary of tcp_nup test run 'gso-split' (at 2018-07-26 16:03:51.824728):

                          avg       median          # data pts
 Ping (ms) ICMP   :      0.83         0.81 ms              341
 TCP upload avg   :    235.43       235.39 Mbits/s         301
 TCP upload sum   :    941.71       941.56 Mbits/s         301
 TCP upload::1    :    235.45       235.43 Mbits/s         271
 TCP upload::2    :    235.45       235.41 Mbits/s         289
 TCP upload::3    :    235.40       235.40 Mbits/s         288
 TCP upload::4    :    235.41       235.40 Mbits/s         291
versus
Summary of tcp_nup test run 'no-split-gso' (at 2018-07-26 16:37:23.563960):

                          avg       median          # data pts
 Ping (ms) ICMP   :      1.67         1.73 ms              348
 TCP upload avg   :    234.56       235.37 Mbits/s         301
 TCP upload sum   :    938.24       941.49 Mbits/s         301
 TCP upload::1    :    234.55       235.38 Mbits/s         285
 TCP upload::2    :    234.57       235.37 Mbits/s         286
 TCP upload::3    :    234.58       235.37 Mbits/s         274
 TCP upload::4    :    234.54       235.42 Mbits/s         288
Signed-off-by: Dave Taht <dave.taht@gmail.com>
---
net/sched/sch_cake.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 539c949..35fc725 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -80,7 +80,6 @@
 #define CAKE_QUEUES (1024)
 #define CAKE_FLOW_MASK 63
 #define CAKE_FLOW_NAT_FLAG 64
-#define CAKE_SPLIT_GSO_THRESHOLD (125000000) /* 1Gbps */
 
 /* struct cobalt_params - contains codel and blue parameters
  * @interval:	codel initial drop rate
@@ -2569,10 +2568,12 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 	if (tb[TCA_CAKE_MEMORY])
 		q->buffer_config_limit = nla_get_u32(tb[TCA_CAKE_MEMORY]);
 
-	if (q->rate_bps && q->rate_bps <= CAKE_SPLIT_GSO_THRESHOLD)
-		q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
-	else
-		q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
+	if (tb[TCA_CAKE_SPLIT_GSO]) {
+		if (!!nla_get_u32(tb[TCA_CAKE_SPLIT_GSO]))
+			q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
+		else
+			q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
+	}
 
 	if (q->tins) {
 		sch_tree_lock(sch);
@@ -2608,7 +2609,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
 	q->target = 5000; /* 5ms: codel RFC argues
 			   * for 5 to 10% of interval
 			   */
-
+	q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
 	q->cur_tin = 0;
 	q->cur_flow = 0;
--
2.7.4
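Purely as illustration of the uapi (this is not part of the patch, and
iproute2 support is a separate change): the knob is a u32
TCA_CAKE_SPLIT_GSO attribute nested inside TCA_OPTIONS of an
RTM_NEWQDISC message, nonzero meaning "split". A minimal raw-rtnetlink
sketch, assuming a <linux/pkt_sched.h> that defines TCA_CAKE_SPLIT_GSO
and an existing root cake qdisc on the device:

/* toggle_split_gso.c -- illustrative sketch only; mirrors
 * `tc qdisc change dev $DEV root cake [no-]split-gso`.
 * Build: gcc -o toggle_split_gso toggle_split_gso.c
 * Usage: ./toggle_split_gso eth0 1   (1 = split-gso, 0 = no-split-gso)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/pkt_sched.h>

struct req {
	struct nlmsghdr n;
	struct tcmsg t;
	char buf[256];
};

/* append one attribute at the current message tail (iproute2-style) */
static struct rtattr *addattr(struct nlmsghdr *n, int type,
			      const void *data, int len)
{
	struct rtattr *rta = (void *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(len);
	if (len)
		memcpy(RTA_DATA(rta), data, len);
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(rta->rta_len);
	return rta;
}

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "eth0";
	__u32 split = argc > 2 ? (__u32)atoi(argv[2]) : 1;
	struct req req = { 0 };
	struct rtattr *opts;
	int fd;

	req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct tcmsg));
	req.n.nlmsg_type = RTM_NEWQDISC;	/* no NLM_F_CREATE: change */
	req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.t.tcm_family = AF_UNSPEC;
	req.t.tcm_ifindex = if_nametoindex(dev);
	req.t.tcm_parent = TC_H_ROOT;

	addattr(&req.n, TCA_KIND, "cake", sizeof("cake"));

	/* nest TCA_CAKE_SPLIT_GSO (u32, nonzero = split) in TCA_OPTIONS */
	opts = addattr(&req.n, TCA_OPTIONS, NULL, 0);
	addattr(&req.n, TCA_CAKE_SPLIT_GSO, &split, sizeof(split));
	opts->rta_len = (char *)&req.n + NLMSG_ALIGN(req.n.nlmsg_len)
			- (char *)opts;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0 || send(fd, &req, req.n.nlmsg_len, 0) < 0) {
		perror("rtnetlink");
		return 1;
	}
	close(fd);	/* the kernel's ack is ignored in this sketch */
	return 0;
}

Whether the flag took effect can be checked with `tc qdisc show dev eth0`,
which prints split-gso or no-split-gso in the cake options.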