From: Toke Høiland-Jørgensen <toke@redhat.com>
To: Toke Høiland-Jørgensen, Jamal Hadi Salim, Cong Wang, Jiri Pirko, Paolo Abeni
Cc: Toke Høiland-Jørgensen, syzbot+f63600d288bfb7057424@syzkaller.appspotmail.com, Dave Taht, "David S. Miller", Eric Dumazet, Jakub Kicinski, Simon Horman, cake@lists.bufferbloat.net, netdev@vger.kernel.org
Date: Thu, 9 Jan 2025 17:08:59 +0100
Message-ID: <20250109160900.192138-1-toke@redhat.com>
X-Mailer: git-send-email 2.47.1
Subject: [Cake] [PATCH net v3] sched: sch_cake: add bounds checks to host bulk flow fairness counts

Even though we fixed a logic error in the commit cited below, syzbot
still managed to trigger an underflow of the per-host bulk flow
counters, leading to an out-of-bounds memory access.

To avoid any such logic errors causing out-of-bounds memory accesses,
this commit factors out all accesses to the per-host bulk flow counters
into a series of helpers that perform bounds checking before any
increments and decrements. This also has the benefit of improving
readability by moving the conditional checks for the flow mode into
these helpers, instead of having them spread throughout the code (which
was the cause of the original logic error).

As part of this change, the flow quantum calculation is consolidated
into a helper function, which means that the dithering applied to the
host load scaling is now applied both in the DRR rotation and when a
sparse flow's deficit is first initialised. The only user-visible
effect of this is that the maximum packet size that can be sent while a
flow stays sparse will now vary by +/- one byte in some cases. This
should not make a noticeable difference in practice, and thus it's not
worth complicating the code to preserve the old behaviour.
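For reviewers who want to sanity-check the fixed-point arithmetic, here
is a minimal userspace sketch of the consolidated quantum calculation.
It is not part of the patch: quantum_div[] is reconstructed under the
assumption that it matches its initialisation in sch_cake.c
(quantum_div[i] = 65535 / i), and rand() merely stands in for
get_random_u16().

/* Userspace sketch of cake_get_flow_quantum()'s arithmetic; not part
 * of the patch. quantum_div[h] is a 16-bit fixed-point reciprocal
 * (65535 / h), so the result approximates flow_quantum / host_load;
 * adding a random 16-bit value before the shift dithers the rounding
 * so that roundoff errors do not accumulate.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CAKE_QUEUES 1024

static uint16_t quantum_div[CAKE_QUEUES + 1];

static void init_quantum_div(void)
{
	quantum_div[0] = ~0; /* host_load is clamped to >= 1, index 0 unused */
	for (int i = 1; i <= CAKE_QUEUES; i++)
		quantum_div[i] = 65535 / i;
}

static uint16_t flow_quantum_dithered(uint16_t flow_quantum, uint16_t host_load)
{
	uint16_t dither = (uint16_t)rand(); /* stands in for get_random_u16() */

	return ((uint32_t)flow_quantum * quantum_div[host_load] + dither) >> 16;
}

int main(void)
{
	init_quantum_div();
	/* A 1514-byte quantum shared among 3 bulk flows of one host
	 * yields 504 or 505 depending on the dither.
	 */
	printf("%u\n", flow_quantum_dithered(1514, 3));
	return 0;
}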
Fixes: 546ea84d07e3 ("sched: sch_cake: fix bulk flow accounting logic for host fairness")
Reported-by: syzbot+f63600d288bfb7057424@syzkaller.appspotmail.com
Acked-by: Dave Taht
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
v3:
  - Add a note about the dithering change to the commit message
  - Move the changelog out of the commit message
  - Collect Dave's ACK

v2:
  - Remove the now-unused srchost and dsthost local variables in
    cake_dequeue()

 net/sched/sch_cake.c | 140 +++++++++++++++++++++++--------------------
 1 file changed, 75 insertions(+), 65 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 8d8b2db4653c..2c2e2a67f3b2 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -627,6 +627,63 @@ static bool cake_ddst(int flow_mode)
 	return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
 }
 
+static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_dsrc(flow_mode) &&
+		   q->hosts[flow->srchost].srchost_bulk_flow_count))
+		q->hosts[flow->srchost].srchost_bulk_flow_count--;
+}
+
+static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_dsrc(flow_mode) &&
+		   q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES))
+		q->hosts[flow->srchost].srchost_bulk_flow_count++;
+}
+
+static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_ddst(flow_mode) &&
+		   q->hosts[flow->dsthost].dsthost_bulk_flow_count))
+		q->hosts[flow->dsthost].dsthost_bulk_flow_count--;
+}
+
+static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_ddst(flow_mode) &&
+		   q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES))
+		q->hosts[flow->dsthost].dsthost_bulk_flow_count++;
+}
+
+static u16 cake_get_flow_quantum(struct cake_tin_data *q,
+				 struct cake_flow *flow,
+				 int flow_mode)
+{
+	u16 host_load = 1;
+
+	if (cake_dsrc(flow_mode))
+		host_load = max(host_load,
+				q->hosts[flow->srchost].srchost_bulk_flow_count);
+
+	if (cake_ddst(flow_mode))
+		host_load = max(host_load,
+				q->hosts[flow->dsthost].dsthost_bulk_flow_count);
+
+	/* The get_random_u16() is a way to apply dithering to avoid
+	 * accumulating roundoff errors
+	 */
+	return (q->flow_quantum * quantum_div[host_load] +
+		get_random_u16()) >> 16;
+}
+
 static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 		     int flow_mode, u16 flow_override, u16 host_override)
 {
@@ -773,10 +830,8 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 		allocate_dst = cake_ddst(flow_mode);
 
 		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
-			if (allocate_src)
-				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
-			if (allocate_dst)
-				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+			cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
+			cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
 		}
 found:
 		/* reserve queue for future packets in same flow */
@@ -801,9 +856,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			q->hosts[outer_hash + k].srchost_tag = srchost_hash;
 found_src:
 			srchost_idx = outer_hash + k;
-			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
-				q->hosts[srchost_idx].srchost_bulk_flow_count++;
 			q->flows[reduced_hash].srchost = srchost_idx;
+
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
 		}
 
 		if (allocate_dst) {
@@ -824,9 +880,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
 found_dst:
 			dsthost_idx = outer_hash + k;
-			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
-				q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
 			q->flows[reduced_hash].dsthost = dsthost_idx;
+
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
 		}
 	}
 
@@ -1839,10 +1896,6 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 
 	/* flowchain */
 	if (!flow->set || flow->set == CAKE_SET_DECAYING) {
-		struct cake_host *srchost = &b->hosts[flow->srchost];
-		struct cake_host *dsthost = &b->hosts[flow->dsthost];
-		u16 host_load = 1;
-
 		if (!flow->set) {
 			list_add_tail(&flow->flowchain, &b->new_flows);
 		} else {
@@ -1852,18 +1905,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		flow->set = CAKE_SET_SPARSE;
 		b->sparse_flow_count++;
 
-		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_bulk_flow_count);
-
-		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
-
-		flow->deficit = (b->flow_quantum *
-				 quantum_div[host_load]) >> 16;
+		flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode);
 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
-		struct cake_host *srchost = &b->hosts[flow->srchost];
-		struct cake_host *dsthost = &b->hosts[flow->dsthost];
-
 		/* this flow was empty, accounted as a sparse flow, but actually
 		 * in the bulk rotation.
 		 */
@@ -1871,12 +1914,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
 
-		if (cake_dsrc(q->flow_mode))
-			srchost->srchost_bulk_flow_count++;
-
-		if (cake_ddst(q->flow_mode))
-			dsthost->dsthost_bulk_flow_count++;
-
+		cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
+		cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -1933,13 +1972,11 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 {
 	struct cake_sched_data *q = qdisc_priv(sch);
 	struct cake_tin_data *b = &q->tins[q->cur_tin];
-	struct cake_host *srchost, *dsthost;
 	ktime_t now = ktime_get();
 	struct cake_flow *flow;
 	struct list_head *head;
 	bool first_flow = true;
 	struct sk_buff *skb;
-	u16 host_load;
 	u64 delay;
 	u32 len;
 
@@ -2039,11 +2076,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	q->cur_flow = flow - b->flows;
 	first_flow = false;
 
-	/* triple isolation (modified DRR++) */
-	srchost = &b->hosts[flow->srchost];
-	dsthost = &b->hosts[flow->dsthost];
-	host_load = 1;
-
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
 		/* Keep all flows with deficits out of the sparse and decaying
@@ -2055,11 +2087,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
 
-				if (cake_dsrc(q->flow_mode))
-					srchost->srchost_bulk_flow_count++;
-
-				if (cake_ddst(q->flow_mode))
-					dsthost->dsthost_bulk_flow_count++;
+				cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
+				cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 
 				flow->set = CAKE_SET_BULK;
 			} else {
@@ -2071,19 +2100,7 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 			}
 		}
 
-		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_bulk_flow_count);
-
-		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
-
-		WARN_ON(host_load > CAKE_QUEUES);
-
-		/* The get_random_u16() is a way to apply dithering to avoid
-		 * accumulating roundoff errors
-		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  get_random_u16()) >> 16;
+		flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
 		list_move_tail(&flow->flowchain, &b->old_flows);
 
 		goto retry;
@@ -2107,11 +2124,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 			if (flow->set == CAKE_SET_BULK) {
 				b->bulk_flow_count--;
 
-				if (cake_dsrc(q->flow_mode))
-					srchost->srchost_bulk_flow_count--;
-
-				if (cake_ddst(q->flow_mode))
-					dsthost->dsthost_bulk_flow_count--;
+				cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
+				cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 
 				b->decaying_flow_count++;
 			} else if (flow->set == CAKE_SET_SPARSE ||
@@ -2129,12 +2143,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 			else if (flow->set == CAKE_SET_BULK) {
 				b->bulk_flow_count--;
 
-				if (cake_dsrc(q->flow_mode))
-					srchost->srchost_bulk_flow_count--;
-
-				if (cake_ddst(q->flow_mode))
-					dsthost->dsthost_bulk_flow_count--;
-
+				cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
+				cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 			} else
 				b->decaying_flow_count--;
 
-- 
2.47.1
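For context beyond the diff: the decrement helpers check for zero
because the counters are u16, so a single unbalanced decrement wraps
around to 65535; used as host_load, that indexes past the end of
quantum_div[CAKE_QUEUES + 1], which is the kind of out-of-bounds access
syzbot triggered. A standalone sketch of that failure mode (illustrative
names, not from the patch):

/* Illustration (not from the patch) of the underflow the bounds checks
 * close off: decrementing a u16 counter that is already zero wraps to
 * 65535, while the clamped variant saturates at zero the way
 * cake_dec_*host_bulk_flow_count() does.
 */
#include <stdint.h>
#include <stdio.h>

static void buggy_dec(uint16_t *count)
{
	(*count)--; /* 0 wraps to 65535 */
}

static void clamped_dec(uint16_t *count)
{
	if (*count) /* mirrors the helpers in the patch */
		(*count)--;
}

int main(void)
{
	uint16_t a = 0, b = 0;

	buggy_dec(&a);
	clamped_dec(&b);
	printf("buggy: %u, clamped: %u\n", a, b);
	return 0;
}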