From: Dave Taht
Date: Tue, 29 Nov 2022 21:51:09 -0800
To: libreqos
Subject: [LibreQoS] Fwd: [PATCH bpf-next v3 10/11] mlx5: Support RX XDP metadata

---------- Forwarded message ---------
From: Stanislav Fomichev
Date: Tue, Nov 29, 2022 at 11:48 AM
Subject: [PATCH bpf-next v3 10/11] mlx5: Support RX XDP metadata
Cc: Toke Høiland-Jørgensen, Saeed Mahameed, David Ahern, Jakub Kicinski,
    Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
    Alexander Lobakin, Magnus Karlsson, Maryam Tahhan

From: Toke Høiland-Jørgensen

Support RX hash and timestamp metadata kfuncs. We need to pass in the cqe
pointer to the mlx5e_skb_from* functions so it can be retrieved from the
XDP ctx to do this.
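To make the new metadata concrete, here is a rough sketch of the BPF-side
consumer: an XDP program querying the RX hash and timestamp through the
metadata kfuncs this series exposes. The kfunc names and signatures below
are assumptions that mirror the ndo_xdp_rx_* ops added in this patch (the
real declarations live in the earlier patches of the series), so treat it
as illustrative rather than verbatim.

/* Illustrative only: kfunc names below are assumed to mirror the
 * ndo_xdp_rx_* driver ops added in this patch; check the earlier
 * patches in this series for the real declarations.
 */
#include <stdbool.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

extern bool bpf_xdp_metadata_rx_hash_supported(const struct xdp_md *ctx) __ksym;
extern __u32 bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx) __ksym;
extern bool bpf_xdp_metadata_rx_timestamp_supported(const struct xdp_md *ctx) __ksym;
extern __u64 bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx) __ksym;

SEC("xdp")
int read_rx_metadata(struct xdp_md *ctx)
{
	__u32 hash = 0;
	__u64 ts = 0;

	/* Each accessor is gated by a *_supported() check so the program
	 * degrades gracefully when the NIC/config cannot provide the hint
	 * (e.g. rx_filter != HWTSTAMP_FILTER_ALL, or no NETIF_F_RXHASH). */
	if (bpf_xdp_metadata_rx_hash_supported(ctx))
		hash = bpf_xdp_metadata_rx_hash(ctx);
	if (bpf_xdp_metadata_rx_timestamp_supported(ctx))
		ts = bpf_xdp_metadata_rx_timestamp(ctx);

	bpf_printk("rx hash 0x%x ts %llu ns", hash, ts);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Loaded with libbpf and attached in native driver mode on an mlx5 interface,
something like the above would let a shaper pull the hardware RSS hash and
NIC timestamp per packet without touching the skb path.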
Cc: Saeed Mahameed
Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  | 10 ++++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 29 +++++++++++++
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  7 ++++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 10 +++++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |  2 +
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  4 ++
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 42 ++++++++++---------
 7 files changed, 82 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index cdbaac5f6d25..8337ff0cacd5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -627,10 +627,11 @@ struct mlx5e_rq;
 typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-			       u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+			       struct mlx5_cqe64 *cqe, u16 cqe_bcnt,
+			       u32 head_offset, u32 page_idx);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			 u32 cqe_bcnt);
+			 struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
 typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
 typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
 typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
@@ -1036,6 +1037,11 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 			   u16 vid);
 void mlx5e_timestamp_init(struct mlx5e_priv *priv);
 
+static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
+{
+	return config->rx_filter == HWTSTAMP_FILTER_ALL;
+}
+
 struct mlx5e_xsk_param;
 
 struct mlx5e_rq_param;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index db49b813bcb5..2a4700b3695a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -156,6 +156,35 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	return true;
 }
 
+bool mlx5e_xdp_rx_timestamp_supported(const struct xdp_md *ctx)
+{
+	const struct mlx5_xdp_buff *_ctx = (void *)ctx;
+
+	return mlx5e_rx_hw_stamp(_ctx->rq->tstamp);
+}
+
+u64 mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx)
+{
+	const struct mlx5_xdp_buff *_ctx = (void *)ctx;
+
+	return mlx5e_cqe_ts_to_ns(_ctx->rq->ptp_cyc2time,
+				  _ctx->rq->clock, get_cqe_ts(_ctx->cqe));
+}
+
+bool mlx5e_xdp_rx_hash_supported(const struct xdp_md *ctx)
+{
+	const struct mlx5_xdp_buff *_ctx = (void *)ctx;
+
+	return _ctx->xdp.rxq->dev->features & NETIF_F_RXHASH;
+}
+
+u32 mlx5e_xdp_rx_hash(const struct xdp_md *ctx)
+{
+	const struct mlx5_xdp_buff *_ctx = (void *)ctx;
+
+	return be32_to_cpu(_ctx->cqe->rss_hash_result);
+}
+
 /* returns true if packet was consumed by xdp */
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 		      struct bpf_prog *prog, struct mlx5_xdp_buff *mxbuf)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index a33b448d542d..a5fc30b07617 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -46,6 +46,8 @@
 
 struct mlx5_xdp_buff {
 	struct xdp_buff xdp;
+	struct mlx5_cqe64 *cqe;
+	struct mlx5e_rq *rq;
 };
 
 struct mlx5e_xsk_param;
@@ -60,6 +62,11 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
 int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		   u32 flags);
 
+bool mlx5e_xdp_rx_hash_supported(const struct xdp_md *ctx);
+u32 mlx5e_xdp_rx_hash(const struct xdp_md *ctx);
+bool mlx5e_xdp_rx_timestamp_supported(const struct xdp_md *ctx);
+u64 mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx);
+
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
 							   struct mlx5e_xmit_data *xdptxd,
 							   struct skb_shared_info *sinfo,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 5e88dc61824e..05cf7987585a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -49,6 +49,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 			umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 				.ptag = cpu_to_be64(addr | MLX5_EN_WR),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
 		for (i = 0; i < batch; i++) {
@@ -58,6 +59,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.key = rq->mkey_be,
 				.va = cpu_to_be64(addr),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_TRIPLE)) {
 		u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
@@ -81,6 +83,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.key = rq->mkey_be,
 				.va = cpu_to_be64(rq->wqe_overflow.addr),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else {
 		__be32 pad_size = cpu_to_be32((1 << rq->mpwqe.page_shift) -
@@ -100,6 +103,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.va = cpu_to_be64(rq->wqe_overflow.addr),
 				.bcount = pad_size,
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	}
 
@@ -230,6 +234,7 @@ static struct sk_buff *mlx5e_xsk_construct_skb(struct mlx5e_rq *rq, struct xdp_b
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    struct mlx5e_mpw_info *wi,
+						    struct mlx5_cqe64 *cqe,
 						    u16 cqe_bcnt,
 						    u32 head_offset,
 						    u32 page_idx)
@@ -250,6 +255,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(head_offset);
 
+	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
 	net_prefetch(mxbuf->xdp.data);
@@ -284,6 +291,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
+					      struct mlx5_cqe64 *cqe,
 					      u32 cqe_bcnt)
 {
 	struct mlx5_xdp_buff *mxbuf = wi->au->mxbuf;
@@ -296,6 +304,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(wi->offset);
 
+	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
 	net_prefetch(mxbuf->xdp.data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index 087c943bd8e9..cefc0ef6105d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -13,11 +13,13 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
 int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    struct mlx5e_mpw_info *wi,
+						    struct mlx5_cqe64 *cqe,
 						    u16 cqe_bcnt,
 						    u32 head_offset,
 						    u32 page_idx);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
+					      struct mlx5_cqe64 *cqe,
 					      u32 cqe_bcnt);
 
 #endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 14bd86e368d5..015bfe891458 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4890,6 +4890,10 @@ const struct net_device_ops mlx5e_netdev_ops = {
 	.ndo_tx_timeout          = mlx5e_tx_timeout,
 	.ndo_bpf                 = mlx5e_xdp,
 	.ndo_xdp_xmit            = mlx5e_xdp_xmit,
+	.ndo_xdp_rx_timestamp_supported = mlx5e_xdp_rx_timestamp_supported,
+	.ndo_xdp_rx_timestamp    = mlx5e_xdp_rx_timestamp,
+	.ndo_xdp_rx_hash_supported = mlx5e_xdp_rx_hash_supported,
+	.ndo_xdp_rx_hash         = mlx5e_xdp_rx_hash,
 	.ndo_xsk_wakeup          = mlx5e_xsk_wakeup,
 #ifdef CONFIG_MLX5_EN_ARFS
 	.ndo_rx_flow_steer       = mlx5e_rx_flow_steer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 434025703e50..a85f82efbc4f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -62,10 +62,12 @@
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				u32 page_idx);
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				   u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				   u32 page_idx);
 static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
@@ -76,11 +78,6 @@ const struct mlx5e_rx_handlers mlx5e_rx_handlers_nic = {
 	.handle_rx_cqe_mpwqe_shampo = mlx5e_handle_rx_cqe_mpwrq_shampo,
 };
 
-static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
-{
-	return config->rx_filter == HWTSTAMP_FILTER_ALL;
-}
-
 static inline void mlx5e_read_cqe_slot(struct mlx5_cqwq *wq, u32 cqcc,
 				       void *data)
 {
@@ -1564,16 +1561,18 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
 	return skb;
 }
 
-static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
+static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, void *va, u16 headroom,
 				u32 len, struct mlx5_xdp_buff *mxbuf)
 {
 	xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
 	xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
+	mxbuf->cqe = cqe;
+	mxbuf->rq = rq;
 }
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			  u32 cqe_bcnt)
+			  struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
@@ -1598,7 +1597,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		struct mlx5_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
 		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf))
 			return NULL; /* page/packet was consumed by XDP */
 
@@ -1619,7 +1618,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			     u32 cqe_bcnt)
+			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
 	struct mlx5e_wqe_frag_info *head_wi = wi;
@@ -1643,7 +1642,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 
 	net_prefetchw(va); /* xdp_frame data area */
 	net_prefetch(va + rx_headroom);
-	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &mxbuf);
+	mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, frag_consumed_bytes, &mxbuf);
 	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
 	truesize = 0;
 
@@ -1766,7 +1765,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 			      mlx5e_skb_from_cqe_linear,
 			      mlx5e_skb_from_cqe_nonlinear,
 			      mlx5e_xsk_skb_from_cqe_linear,
-			      rq, wi, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1929,7 +1928,8 @@ mlx5e_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				   u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				   u32 page_idx)
 {
 	union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
 	u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
@@ -1968,7 +1968,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				u32 page_idx)
 {
 	union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
 	u16 rx_headroom = rq->buff.headroom;
@@ -1999,7 +2000,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		struct mlx5_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
 		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
@@ -2163,8 +2164,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 	if (likely(head_size))
 		*skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
 	else
-		*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset,
-							  page_idx);
+		*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt,
+							  data_offset, page_idx);
 
 	if (unlikely(!*skb))
 		goto free_hd_entry;
@@ -2238,7 +2239,8 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
 			      mlx5e_skb_from_cqe_mpwrq_linear,
 			      mlx5e_skb_from_cqe_mpwrq_nonlinear,
 			      mlx5e_xsk_skb_from_cqe_mpwrq_linear,
-			      rq, wi, cqe_bcnt, head_offset, page_idx);
+			      rq, wi, cqe, cqe_bcnt, head_offset,
+			      page_idx);
 	if (!skb)
 		goto mpwrq_cqe_out;
 
@@ -2575,7 +2577,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 		goto free_wqe;
 	}
 
-	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe_bcnt);
+	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
 	if (!skb)
 		goto free_wqe;
 
--
2.38.1.584.g0f3c55d4c2-goog


--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz

Dave Täht
CEO, TekLibre, LLC