Message-ID: <20250731123734.GA8494@horms.kernel.org>
Date: Thu, 31 Jul 2025 13:37:34 +0100
From: Simon Horman <horms@...nel.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org,
Michal Kubiak <michal.kubiak@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
nxne.cnse.osdt.itp.upstreaming@...el.com, bpf@...r.kernel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Kees Cook <kees@...nel.org>, linux-hardening@...r.kernel.org
Subject: Re: [PATCH iwl-next v3 16/18] idpf: add support for XDP on Rx
+ Kees, linux-hardening
On Wed, Jul 30, 2025 at 06:07:15PM +0200, Alexander Lobakin wrote:
> Use the libeth XDP infra to support running an XDP program during Rx
> polling. This includes all of the possible verdicts/actions.
> XDP Tx queues are cleaned only in "lazy" mode, when fewer than 1/4 of
> the descriptors are left free on the ring. The libeth helper macros for
> defining driver-specific XDP functions make sure the compiler can
> uninline them when needed.
> Use __LIBETH_WORD_ACCESS to parse descriptors more efficiently when
> applicable. It gives a noticeable performance boost and a code size
> reduction on x86_64.
>
> Co-developed-by: Michal Kubiak <michal.kubiak@...el.com>
> Signed-off-by: Michal Kubiak <michal.kubiak@...el.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
Hi Alexander, all,
Sorry for providing a review of __LIBETH_WORD_ACCESS [1] after the fact;
I had missed it earlier.
While I appreciate the desire for improved performance and nicer code
generation, I think the idea of writing 64 bits of data to the address
of a 32-bit member of a structure goes against the direction of the
hardening work by Kees and others.
Indeed, it seems to me this is the kind of thing that struct_group()
aims to avoid.
In this case struct_group() doesn't seem like the best option, because
all it would provide is a 64-bit region that we could memcpy() into.
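To illustrate what I mean, a rough, hypothetical sketch (the struct and
helper names below are made up, not a proposal): the group would give
memcpy() a destination of the right declared size, but the write would
still be a memcpy() of a u64 rather than a plain store.

#include <linux/stddef.h>	/* struct_group() */
#include <linux/string.h>	/* memcpy() */

/* Hypothetical stand-in for the frame_sz/flags pair in struct xdp_buff. */
struct xdp_buff_grouped {
	struct_group(frame_sz_flags,
		u32 frame_sz;
		u32 flags;
	);
};

static inline void xdp_set_frame_sz_flags(struct xdp_buff_grouped *xdp, u64 val)
{
	/* val packs frame_sz in the low 32 bits and flags in the high
	 * 32 bits, which only lines up on little endian.
	 */
	memcpy(&xdp->frame_sz_flags, &val, sizeof(val));
}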
But it seems altogether better to simply assign a u64 value to a u64 member.
So I'm wondering if an approach along the following lines would be
appropriate (very lightly compile-tested only!).
And yes, there is room to improve the wording of the comment I have
included below.
diff --git a/include/net/libeth/xdp.h b/include/net/libeth/xdp.h
index f4880b50e804..a7d3d8e44aa6 100644
--- a/include/net/libeth/xdp.h
+++ b/include/net/libeth/xdp.h
@@ -1283,11 +1283,7 @@ static inline void libeth_xdp_prepare_buff(struct libeth_xdp_buff *xdp,
const struct page *page = __netmem_to_page(fqe->netmem);
#ifdef __LIBETH_WORD_ACCESS
- static_assert(offsetofend(typeof(xdp->base), flags) -
- offsetof(typeof(xdp->base), frame_sz) ==
- sizeof(u64));
-
- *(u64 *)&xdp->base.frame_sz = fqe->truesize;
+ xdp->base.frame_sz_le_qword = fqe->truesize;
#else
xdp_init_buff(&xdp->base, fqe->truesize, xdp->base.rxq);
#endif
diff --git a/include/net/xdp.h b/include/net/xdp.h
index b40f1f96cb11..b5eedeb82c9b 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -85,8 +85,19 @@ struct xdp_buff {
void *data_hard_start;
struct xdp_rxq_info *rxq;
struct xdp_txq_info *txq;
- u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
- u32 flags; /* supported values defined in xdp_buff_flags */
+ union {
+ /* Allow setting frame_sz and flags as a single u64 on
+ * little endian systems. This may give optimal
+ * performance. */
+ u64 frame_sz_le_qword;
+ struct {
+ /* Frame size to deduce data_hard_end/reserved
+ * tailroom. */
+ u32 frame_sz;
+ /* Supported values defined in xdp_buff_flags. */
+ u32 flags;
+ };
+ };
};
static __always_inline bool xdp_buff_has_frags(const struct xdp_buff *xdp)
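If it seems worthwhile, the libeth side could also keep a compile-time
check that the qword overlay really covers both fields; a rough sketch
(untested, and the member name just follows the diff above):

#ifdef __LIBETH_WORD_ACCESS
	/* The qword must start at frame_sz and end exactly where flags ends. */
	static_assert(offsetofend(struct xdp_buff, flags) -
		      offsetof(struct xdp_buff, frame_sz_le_qword) ==
		      sizeof_field(struct xdp_buff, frame_sz_le_qword));

	/* fqe->truesize fits in 32 bits, so this store also clears flags,
	 * assuming __LIBETH_WORD_ACCESS is only defined on little endian.
	 */
	xdp->base.frame_sz_le_qword = fqe->truesize;
#endif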
[1] https://git.kernel.org/torvalds/c/80bae9df2108
...