From: Dave Taht
Date: Wed, 8 Jan 2025 08:10:36 -0800
To: Toke Høiland-Jørgensen
Cc: Toke Høiland-Jørgensen, Jamal Hadi Salim, Cong Wang, Jiri Pirko, Paolo Abeni, netdev@vger.kernel.org, cake@lists.bufferbloat.net, Eric Dumazet, Simon Horman, Jakub Kicinski, syzbot+f63600d288bfb7057424@syzkaller.appspotmail.com, "David S. Miller"
References: <20250107120105.70685-1-toke@redhat.com>
In-Reply-To: <20250107120105.70685-1-toke@redhat.com>
Subject: Re: [Cake] [PATCH net v2] sched: sch_cake: add bounds checks to host bulk flow fairness counts
List-Id: Cake - FQ_codel the next generation

On Tue, Jan 7, 2025 at 4:01 AM Toke Høiland-Jørgensen via Cake wrote:
>
> Even though we fixed a logic error in the commit cited below, syzbot
> still managed to trigger an underflow of the per-host bulk flow
> counters, leading to an out of bounds memory access.
>
> To avoid any such logic errors causing out of bounds memory accesses,
> this commit factors out all accesses to the per-host bulk flow counters
> to a series of helpers that perform bounds-checking before any
> increments and decrements. This also has the benefit of improving
> readability by moving the conditional checks for the flow mode into
> these helpers, instead of having them spread out throughout the
> code (which was the cause of the original logic error).
>
> v2:
> - Remove now-unused srchost and dsthost local variables in cake_dequeue()
>
> Fixes: 546ea84d07e3 ("sched: sch_cake: fix bulk flow accounting logic for host fairness")
> Reported-by: syzbot+f63600d288bfb7057424@syzkaller.appspotmail.com
> Signed-off-by: Toke Høiland-Jørgensen
> ---
>  net/sched/sch_cake.c | 140 +++++++++++++++++++++++--------------------
>  1 file changed, 75 insertions(+), 65 deletions(-)
>
> diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
> index 8d8b2db4653c..2c2e2a67f3b2 100644
> --- a/net/sched/sch_cake.c
> +++ b/net/sched/sch_cake.c
> @@ -627,6 +627,63 @@ static bool cake_ddst(int flow_mode)
>  	return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
>  }
>
> +static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q,
> +					     struct cake_flow *flow,
> +					     int flow_mode)
> +{
> +	if (likely(cake_dsrc(flow_mode) &&
> +		   q->hosts[flow->srchost].srchost_bulk_flow_count))
> +		q->hosts[flow->srchost].srchost_bulk_flow_count--;
> +}
> +
> +static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q,
> +					     struct cake_flow *flow,
> +					     int flow_mode)
> +{
> +	if (likely(cake_dsrc(flow_mode) &&
> +		   q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES))
> +		q->hosts[flow->srchost].srchost_bulk_flow_count++;
> +}
> +
> +static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q,
> +					     struct cake_flow *flow,
> +					     int flow_mode)
> +{
> +	if (likely(cake_ddst(flow_mode) &&
> +		   q->hosts[flow->dsthost].dsthost_bulk_flow_count))
> +		q->hosts[flow->dsthost].dsthost_bulk_flow_count--;
> +}
> +
> +static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q,
> +					     struct cake_flow *flow,
> +					     int flow_mode)
> +{
> +	if (likely(cake_ddst(flow_mode) &&
> +		   q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES))
> +		q->hosts[flow->dsthost].dsthost_bulk_flow_count++;
> +}
> +
> +static u16 cake_get_flow_quantum(struct cake_tin_data *q,
> +				 struct cake_flow *flow,
> +				 int flow_mode)
> +{
> +	u16 host_load = 1;
> +
> +	if (cake_dsrc(flow_mode))
> +		host_load = max(host_load,
> +				q->hosts[flow->srchost].srchost_bulk_flow_count);
> +
> +	if (cake_ddst(flow_mode))
> +		host_load = max(host_load,
> +				q->hosts[flow->dsthost].dsthost_bulk_flow_count);
> +
> +	/* The get_random_u16() is a way to apply dithering to avoid
> +	 * accumulating roundoff errors
> +	 */
> +	return (q->flow_quantum * quantum_div[host_load] +
> +		get_random_u16()) >> 16;
> +}
> +
>  static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
>  		     int flow_mode, u16 flow_override, u16 host_override)
>  {
> @@ -773,10 +830,8 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
>  		allocate_dst = cake_ddst(flow_mode);
>
>  		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
> -			if (allocate_src)
> -				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
> -			if (allocate_dst)
> -				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
> +			cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
> +			cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
>  		}
>  found:
>  		/* reserve queue for future packets in same flow */
> @@ -801,9 +856,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
>  				q->hosts[outer_hash + k].srchost_tag = srchost_hash;
>  found_src:
>  			srchost_idx = outer_hash + k;
> -			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
> -				q->hosts[srchost_idx].srchost_bulk_flow_count++;
>  			q->flows[reduced_hash].srchost = srchost_idx;
> +
> +			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
> +				cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
>  		}
>
>  		if (allocate_dst) {
> @@ -824,9 +880,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
>  				q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
> found_dst:
>  			dsthost_idx = outer_hash + k;
> -			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
> -				q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
>  			q->flows[reduced_hash].dsthost = dsthost_idx;
> +
> +			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
> +				cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
>  		}
>  	}
>
> @@ -1839,10 +1896,6 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>
>  	/* flowchain */
>  	if (!flow->set || flow->set == CAKE_SET_DECAYING) {
> -		struct cake_host *srchost = &b->hosts[flow->srchost];
> -		struct cake_host *dsthost = &b->hosts[flow->dsthost];
> -		u16 host_load = 1;
> -
>  		if (!flow->set) {
>  			list_add_tail(&flow->flowchain, &b->new_flows);
>  		} else {
> @@ -1852,18 +1905,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>  		flow->set = CAKE_SET_SPARSE;
>  		b->sparse_flow_count++;
>
> -		if (cake_dsrc(q->flow_mode))
> -			host_load = max(host_load, srchost->srchost_bulk_flow_count);
> -
> -		if (cake_ddst(q->flow_mode))
> -			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
> -
> -		flow->deficit = (b->flow_quantum *
> -				 quantum_div[host_load]) >> 16;
> +		flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode);
>  	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
> -		struct cake_host *srchost = &b->hosts[flow->srchost];
> -		struct cake_host *dsthost = &b->hosts[flow->dsthost];
> -
>  		/* this flow was empty, accounted as a sparse flow, but actually
>  		 * in the bulk rotation.
>  		 */
> @@ -1871,12 +1914,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
>  		b->sparse_flow_count--;
>  		b->bulk_flow_count++;
>
> -		if (cake_dsrc(q->flow_mode))
> -			srchost->srchost_bulk_flow_count++;
> -
> -		if (cake_ddst(q->flow_mode))
> -			dsthost->dsthost_bulk_flow_count++;
> -
> +		cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
> +		cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
>  	}
>
>  	if (q->buffer_used > q->buffer_max_used)
> @@ -1933,13 +1972,11 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  {
>  	struct cake_sched_data *q = qdisc_priv(sch);
>  	struct cake_tin_data *b = &q->tins[q->cur_tin];
> -	struct cake_host *srchost, *dsthost;
>  	ktime_t now = ktime_get();
>  	struct cake_flow *flow;
>  	struct list_head *head;
>  	bool first_flow = true;
>  	struct sk_buff *skb;
> -	u16 host_load;
>  	u64 delay;
>  	u32 len;
>
> @@ -2039,11 +2076,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  	q->cur_flow = flow - b->flows;
>  	first_flow = false;
>
> -	/* triple isolation (modified DRR++) */
> -	srchost = &b->hosts[flow->srchost];
> -	dsthost = &b->hosts[flow->dsthost];
> -	host_load = 1;
> -
>  	/* flow isolation (DRR++) */
>  	if (flow->deficit <= 0) {
>  		/* Keep all flows with deficits out of the sparse and decaying
> @@ -2055,11 +2087,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  				b->sparse_flow_count--;
>  				b->bulk_flow_count++;
>
> -				if (cake_dsrc(q->flow_mode))
> -					srchost->srchost_bulk_flow_count++;
> -
> -				if (cake_ddst(q->flow_mode))
> -					dsthost->dsthost_bulk_flow_count++;
> +				cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
> +				cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
>
>  				flow->set = CAKE_SET_BULK;
>  			} else {
> @@ -2071,19 +2100,7 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  			}
>  		}
>
> -		if (cake_dsrc(q->flow_mode))
> -			host_load = max(host_load, srchost->srchost_bulk_flow_count);
> -
> -		if (cake_ddst(q->flow_mode))
> -			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
> -
> -		WARN_ON(host_load > CAKE_QUEUES);
> -
> -		/* The get_random_u16() is a way to apply dithering to avoid
> -		 * accumulating roundoff errors
> -		 */
> -		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
> -				  get_random_u16()) >> 16;
> +		flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
>  		list_move_tail(&flow->flowchain, &b->old_flows);
>
>  		goto retry;
> @@ -2107,11 +2124,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  				if (flow->set == CAKE_SET_BULK) {
>  					b->bulk_flow_count--;
>
> -					if (cake_dsrc(q->flow_mode))
> -						srchost->srchost_bulk_flow_count--;
> -
> -					if (cake_ddst(q->flow_mode))
> -						dsthost->dsthost_bulk_flow_count--;
> +					cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
> +					cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
>
>  					b->decaying_flow_count++;
>  				} else if (flow->set == CAKE_SET_SPARSE ||
> @@ -2129,12 +2143,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
>  				else if (flow->set == CAKE_SET_BULK) {
>  					b->bulk_flow_count--;
>
> -					if (cake_dsrc(q->flow_mode))
> -						srchost->srchost_bulk_flow_count--;
> -
> -					if (cake_ddst(q->flow_mode))
> -						dsthost->dsthost_bulk_flow_count--;
> -
> +					cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
> +					cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
>  				} else
>  					b->decaying_flow_count--;
>
> --
> 2.47.1
>

Acked-By: Dave Taht

-- 
Dave Täht
CSO, LibreQos