Message-ID: <20251003161026.5190fcd2@kernel.org>
Date: Fri, 3 Oct 2025 16:10:26 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, ilias.apalodimas@...aro.org, toke@...hat.com,
lorenzo@...nel.org, netdev@...r.kernel.org, magnus.karlsson@...el.com,
andrii@...nel.org, stfomichev@...il.com, aleksander.lobakin@...el.com
Subject: Re: [PATCH bpf 2/2] veth: update mem type in xdp_buff
On Fri, 3 Oct 2025 16:02:43 +0200 Maciej Fijalkowski wrote:
> + xdp_update_mem_type(xdp);
> +
> act = bpf_prog_run_xdp(xdp_prog, xdp);
The new helper doesn't really express what's going on. Developers
won't know what we're updating mem_type to, or why. Right?
My thinking was that we should try to bake the rxq into the "conversion"
APIs. Draft diff below; it's very much unfinished and I'm probably
missing some cases, but hopefully it gets the point across:
diff --git a/include/net/xdp.h b/include/net/xdp.h
index aa742f413c35..e7f75d551d8f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -384,9 +384,21 @@ struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
struct net_device *dev);
struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf);
+/* Initialize rxq struct on the stack for processing @frame.
+ * Not necessary when processing in context of a driver which has a real rxq,
+ * and passes it to xdp_convert_frame_to_buff().
+ */
+static inline
+void xdp_rxq_prep_on_stack(const struct xdp_frame *frame,
+ struct xdp_rxq_info *rxq)
+{
+ rxq->dev = frame->dev_rx;
+ /* TODO: report queue_index to xdp_rxq_info */
+}
+
static inline
void xdp_convert_frame_to_buff(const struct xdp_frame *frame,
- struct xdp_buff *xdp)
+ struct xdp_buff *xdp, struct xdp_rxq_info *rxq)
{
xdp->data_hard_start = frame->data - frame->headroom - sizeof(*frame);
xdp->data = frame->data;
@@ -394,6 +406,22 @@ void xdp_convert_frame_to_buff(const struct xdp_frame *frame,
xdp->data_meta = frame->data - frame->metasize;
xdp->frame_sz = frame->frame_sz;
xdp->flags = frame->flags;
+
+ rxq->mem.type = frame->mem_type;
+}
+
+/* Initialize an xdp_buff from an skb.
+ *
+ * Note: if the skb has frags, skb_cow_data_for_xdp() must be called first,
+ * or the caller must otherwise guarantee that the frags come from a page pool.
+ */
+static inline
+void xdp_convert_skb_to_buff(const struct sk_buff *skb,
+ struct xdp_buff *xdp, struct xdp_rxq_info *rxq)
+{
+ /* copy the init_buff / prep_buff here */
+
+ rxq->mem.type = MEM_TYPE_PAGE_POOL; /* see note above the function */
}
static inline
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 703e5df1f4ef..60ba15bbec59 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -193,11 +193,8 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
u32 act;
int err;
- rxq.dev = xdpf->dev_rx;
- rxq.mem.type = xdpf->mem_type;
- /* TODO: report queue_index to xdp_rxq_info */
-
- xdp_convert_frame_to_buff(xdpf, &xdp);
+ xdp_rxq_prep_on_stack(xdpf, &rxq);
+ xdp_convert_frame_to_buff(xdpf, &xdp, &rxq);
act = bpf_prog_run_xdp(rcpu->prog, &xdp);
switch (act) {