Message-ID: <ZIzr5ffeHsUqppaS@google.com>
Date: Fri, 16 Jun 2023 16:10:29 -0700
From: Stanislav Fomichev <sdf@...gle.com>
To: Magnus Karlsson <magnus.karlsson@...il.com>
Cc: "Toke Høiland-Jørgensen" <toke@...nel.org>, bpf@...r.kernel.org, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...ux.dev,
song@...nel.org, yhs@...com, john.fastabend@...il.com, kpsingh@...nel.org,
haoluo@...gle.com, jolsa@...nel.org, willemb@...gle.com, dsahern@...nel.org,
magnus.karlsson@...el.com, bjorn@...nel.org, maciej.fijalkowski@...el.com,
netdev@...r.kernel.org
Subject: Re: [RFC bpf-next 0/7] bpf: netdev TX metadata
On 06/16, Stanislav Fomichev wrote:
> On Fri, Jun 16, 2023 at 1:13 AM Magnus Karlsson
> <magnus.karlsson@...il.com> wrote:
> >
> > On Fri, 16 Jun 2023 at 02:09, Stanislav Fomichev <sdf@...gle.com> wrote:
> > >
> > > On Mon, Jun 12, 2023 at 2:01 PM Toke Høiland-Jørgensen <toke@...nel.org> wrote:
> > > >
> > > > Some immediate thoughts after glancing through this:
> > > >
> > > > > --- Use cases ---
> > > > >
> > > > > The goal of this series is to add two new standard-ish places
> > > > > in the transmit path:
> > > > >
> > > > > 1. Right before the packet is transmitted (with access to TX
> > > > > descriptors)
> > > > > 2. Right after the packet is actually transmitted and we've received the
> > > > > completion (again, with access to TX completion descriptors)
> > > > >
> > > > > Accessing TX descriptors unlocks the following use-cases:
> > > > >
> > > > > - Setting device hints at TX: XDP/AF_XDP might use these new hooks to
> > > > > request device offloads. The existing use case implements TX timestamps.
> > > > > - Observability: global per-netdev hooks can be used for tracing
> > > > > the packets and exploring completion descriptors for all sorts of
> > > > > device errors.
> > > > >
> > > > > Accessing TX descriptors also means that the hooks have to be called
> > > > > from the drivers.
> > > > >
> > > > > The hooks are a light-weight alternative to XDP at egress and currently
> > > > > don't provide any packet modification abilities. However, eventually, we
> > > > > can expose new kfuncs to operate on the packet (or, rather, on the actual
> > > > > descriptors, for performance's sake).
> > > >
> > > > dynptr?
> > > >
> > > > > --- UAPI ---
> > > > >
> > > > > The hooks are implemented in a HID-BPF style. Meaning they don't
> > > > > expose any UAPI and are implemented as tracing programs that call
> > > > > a bunch of kfuncs. The attach/detach operations happen via BPF syscall
> > > > > programs. The series expands device-bound infrastructure to tracing
> > > > > programs.
> > > >
> > > > Not a fan of the "attach from BPF syscall program" thing. These are part
> > > > of the XDP data path API, and I think we should expose them as proper
> > > > bpf_link attachments from userspace with introspection etc. But I guess
> > > > the bpf_mprog thing will give us that?
> > > >
> > > > > --- skb vs xdp ---
> > > > >
> > > > > The hooks operate on a new light-weight devtx_frame which contains:
> > > > > - data
> > > > > - len
> > > > > - sinfo
> > > > >
> > > > > This should allow us to have a unified (from BPF PoV) place at TX
> > > > > and not be super-taxing (we need to copy 2 pointers + len to the stack
> > > > > for each invocation).
> > > >
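For reference, devtx_frame from the series roughly looks like the
following (exact field types aside):

struct devtx_frame {
	void *data;
	u16 len;
	struct skb_shared_info *sinfo; /* for frags */
};
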
> > > > Not sure what I think about this one. At the very least I think we
> > > > should expose xdp->data_meta as well. I'm not sure what the use case for
> > > > accessing skbs is? If that *is* indeed useful, probably there will also
> > > > end up being a use case for accessing the full skb?
> > >
> > > I spent some time looking at the data_meta story on AF_XDP TX and it
> > > doesn't look like it's supported (at least in a general way).
> > > You obviously get some data_meta when you do XDP_TX, but if you want
> > > to pass something to the bpf prog when doing TX via the AF_XDP ring,
> > > it gets complicated.
> >
> > When we designed this some 5-6 years ago, we thought that there
> > would be an XDP for egress action in the "nearish" future that could
> > be used to interpret the metadata field in front of the packet.
> > Basically, the user would load an XDP egress program that would define
> > the metadata layout by the operations it would perform on the metadata
> > area. But since XDP on egress has not happened, you are right, there
> > is definitely something missing to be able to use metadata on Tx. Or
> > could your proposed hook points be used for something like this?
>
> Thanks for the context!
> Yes, the proposal is to use these new tx hooks to read out the af_xdp
> metadata and apply it to the packet via a bunch of tbd kfuncs.
> The AF_XDP application and the BPF program would have to agree on a
> contract about the metadata layout (same as we have on rx).
>
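To make the contract part concrete, the BPF side could look roughly
like the sketch below. All the names here are made-up placeholders
(the actual hook points and kfuncs are tbd); with the series applied,
devtx_frame would come from vmlinux.h:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Hypothetical metadata layout shared with the AF_XDP application. */
struct xsk_tx_meta {
	__u64 request_tx_timestamp;
};

extern void request_tx_timestamp(struct devtx_frame *frame) __ksym; /* tbd kfunc */

SEC("fentry/veth_devtx_submit") /* placeholder hook name */
int BPF_PROG(tx_submit, struct devtx_frame *frame)
{
	struct xsk_tx_meta meta = {};

	/* The metadata would sit right before the packet data. */
	bpf_probe_read_kernel(&meta, sizeof(meta),
			      (char *)frame->data - sizeof(meta));

	if (meta.request_tx_timestamp)
		request_tx_timestamp(frame);

	return 0;
}

char _license[] SEC("license") = "GPL";
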
> > > In zerocopy mode, we can probably use XDP_UMEM_UNALIGNED_CHUNK_FLAG
> > > and pass something in the headroom.
> >
> > This feature is mainly used to allow for multiple packets in the same
> > chunk (to save space) and also to be able to have packets spanning two
> > chunks. Even in aligned mode, you can start a packet at an arbitrary
> > address in the chunk as long as the whole packet fits into the chunk.
> > So no problem having headroom in any of the modes.
>
> But if I put it into the headroom it will only be passed down to the
> driver in zero-copy mode, right?
> If I do tx_desc->addr = packet_start, no metadata (that sits prior to
> packet_start) gets copied into the skb in copy mode (it seems).
> Or do you suggest that the interface should be tx_desc->addr =
> metadata_start and the bpf program should call the equivalent of
> bpf_xdp_adjust_head to consume this metadata?
For copy mode, here is what I've prototyped; that seems to work.
For zero-copy, I don't think we need anything extra (besides exposing
xsk->tx_metadata_len at the hook point, tbd). Does the patch below make
sense?
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index e96a1151ec75..30018b3b862d 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -51,6 +51,7 @@ struct xdp_sock {
 	struct list_head flush_node;
 	struct xsk_buff_pool *pool;
 	u16 queue_id;
+	u8 tx_metadata_len;
 	bool zc;
 	enum {
 		XSK_READY = 0,
diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
index a78a8096f4ce..2374eafff7db 100644
--- a/include/uapi/linux/if_xdp.h
+++ b/include/uapi/linux/if_xdp.h
@@ -63,6 +63,7 @@ struct xdp_mmap_offsets {
 #define XDP_UMEM_COMPLETION_RING	6
 #define XDP_STATISTICS			7
 #define XDP_OPTIONS			8
+#define XDP_TX_METADATA_LEN		9
 
 struct xdp_umem_reg {
 	__u64 addr; /* Start of packet data area */
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index cc1e7f15fa73..a95872712547 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -493,14 +493,21 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 			return ERR_PTR(err);
 
 		skb_reserve(skb, hr);
-		skb_put(skb, len);
+		skb_put(skb, len + xs->tx_metadata_len);
 
 		buffer = xsk_buff_raw_get_data(xs->pool, desc->addr);
-		err = skb_store_bits(skb, 0, buffer, len);
+		buffer -= xs->tx_metadata_len;
+
+		err = skb_store_bits(skb, 0, buffer, len + xs->tx_metadata_len);
 		if (unlikely(err)) {
 			kfree_skb(skb);
 			return ERR_PTR(err);
 		}
+
+		if (xs->tx_metadata_len) {
+			skb_metadata_set(skb, xs->tx_metadata_len);
+			__skb_pull(skb, xs->tx_metadata_len);
+		}
 	}
 
 	skb->dev = dev;
@@ -1137,6 +1144,27 @@ static int xsk_setsockopt(struct socket *sock, int level, int optname,
 		mutex_unlock(&xs->mutex);
 		return err;
 	}
+	case XDP_TX_METADATA_LEN:
+	{
+		int val;
+
+		if (optlen < sizeof(val))
+			return -EINVAL;
+		if (copy_from_sockptr(&val, optval, sizeof(val)))
+			return -EFAULT;
+
+		if (val < 0 || val >= 256)
+			return -EINVAL;
+
+		mutex_lock(&xs->mutex);
+		if (xs->state != XSK_READY) {
+			mutex_unlock(&xs->mutex);
+			return -EBUSY;
+		}
+		xs->tx_metadata_len = val;
+		mutex_unlock(&xs->mutex);
+		return 0;
+	}
 	default:
 		break;
 	}
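
For completeness, the userspace side would then be something along
these lines (untested; struct xsk_tx_meta is a made-up layout that the
application and the BPF program would have to agree on):

#include <sys/socket.h>
#include <linux/if_xdp.h>
#include <linux/types.h>

/* Hypothetical app <-> BPF program contract. */
struct xsk_tx_meta {
	__u64 request_tx_timestamp;
};

/* Has to be called before bind(), while the socket is XSK_READY. */
static int xsk_enable_tx_metadata(int xsk_fd)
{
	int val = sizeof(struct xsk_tx_meta);

	return setsockopt(xsk_fd, SOL_XDP, XDP_TX_METADATA_LEN,
			  &val, sizeof(val));
}

The producer then writes struct xsk_tx_meta into the val bytes right
before tx_desc->addr; tx_desc->addr/len keep describing the packet
itself.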