Message-Id: <D8CPETTWTCDX.23AMDJAQOJX8I@arthurfabre.com>
Date: Mon, 10 Mar 2025 16:50:26 +0100
From: "Arthur Fabre" <arthur@...hurfabre.com>
To: "Lorenzo Bianconi" <lorenzo.bianconi@...hat.com>
Cc: <netdev@...r.kernel.org>, <bpf@...r.kernel.org>, <jakub@...udflare.com>,
<hawk@...nel.org>, <yan@...udflare.com>, <jbrandeburg@...udflare.com>,
<thoiland@...hat.com>, <lbiancon@...hat.com>, "Arthur Fabre"
<afabre@...udflare.com>
Subject: Re: [PATCH RFC bpf-next 02/20] trait: XDP support
On Fri Mar 7, 2025 at 8:14 PM CET, Lorenzo Bianconi wrote:
> On Mar 05, arthur@...hurfabre.com wrote:
> > From: Arthur Fabre <afabre@...udflare.com>
> >
>
> [...]
>
> > +static __always_inline void *xdp_buff_traits(const struct xdp_buff *xdp)
> > +{
> > + return xdp->data_hard_start + _XDP_FRAME_SIZE;
> > +}
> > +
> > static __always_inline void
> > xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > {
> > @@ -133,6 +139,13 @@ xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
> > xdp->data = data;
> > xdp->data_end = data + data_len;
> > xdp->data_meta = meta_valid ? data : data + 1;
> > +
> > + if (meta_valid) {
>
> can we relax this constraint and use xdp->data as end boundary here?
The problem isn't having a boundary; it's patching all the drivers to
propagate that traits are present to the skb layer. See patch 8 "trait:
Propagate presence of traits to sk_buff", and patches 9-15 for the
driver changes.
There's some discussion around updating all the remaining drivers to
support XDP metadata. If, instead of making them call skb_metadata_set(),
we use a more "generic" hook like xdp_buff_update_skb() from this series,
we can reuse it for traits later.
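
Roughly, I'd imagine something like this (just a sketch of the shape,
the actual helper in that series may well look different):

static inline void xdp_buff_update_skb(struct xdp_buff *xdp,
                                       struct sk_buff *skb)
{
        /* data_meta > data means metadata is unsupported / not set */
        if (xdp->data_meta < xdp->data)
                skb_metadata_set(skb, xdp->data - xdp->data_meta);

        /* Later: also mark the skb when traits are present in the
         * headroom, like patch 8 does.
         */
}

Drivers would call this in one place instead of open-coding
skb_metadata_set(), so we could extend it without touching every driver
again.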
>
> > + /* We assume drivers reserve enough headroom to store xdp_frame
> > + * and the traits header.
> > + */
> > + traits_init(xdp_buff_traits(xdp), xdp->data_meta);
> > + }
> > }
> >
> > /* Reserve memory area at end-of data area.
> > @@ -267,6 +280,8 @@ struct xdp_frame {
> > u32 flags; /* supported values defined in xdp_buff_flags */
> > };
> >
> > +static_assert(sizeof(struct xdp_frame) == _XDP_FRAME_SIZE);
> > +
> > static __always_inline bool xdp_frame_has_frags(const struct xdp_frame *frame)
> > {
> > return !!(frame->flags & XDP_FLAGS_HAS_FRAGS);
> > @@ -517,6 +532,11 @@ static inline bool xdp_metalen_invalid(unsigned long metalen)
> > return !IS_ALIGNED(metalen, sizeof(u32)) || metalen > meta_max;
> > }
> >
> > +static __always_inline void *xdp_meta_hard_start(const struct xdp_buff *xdp)
> > +{
> > + return xdp_buff_traits(xdp) + traits_size(xdp_buff_traits(xdp));
>
> here we are always consuming sizeof(struct __trait_hdr), right? Can we do
> something smarter and check if traits are really used (e.g. adding a flag
> in xdp_buff)?
Yes, we always take space from the headroom for struct __trait_hdr.
I think it's impossible to tell early enough whether traits will be used:
a user could set a trait for the first time in iptables or TC, but we
don't know that yet at XDP time.
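
For reference, this is the headroom layout we end up with (my rough
sketch based on the helpers above, not to scale):

/*
 * data_hard_start
 * +----------------------------+
 * | struct xdp_frame           |  _XDP_FRAME_SIZE
 * +----------------------------+  <- xdp_buff_traits(xdp)
 * | struct __trait_hdr (+ kvs) |  traits_size(...)
 * +----------------------------+  <- xdp_meta_hard_start(xdp)
 * | remaining headroom         |
 * +----------------------------+  <- data_meta
 * | metadata                   |
 * +----------------------------+  <- data
 * | packet data                |
 * +----------------------------+  <- data_end
 */

So even with no key/values stored, the empty header is always carved out
of the headroom.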
>
> > +}
> > +
> > struct xdp_attachment_info {
> > struct bpf_prog *prog;
> > u32 flags;
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index dcc53ac5c5458f67a422453134665d43d466a02e..79b78e7cd57fd78c6cc8443da54ae96408c496b0 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -85,6 +85,7 @@
> > #include <linux/un.h>
> > #include <net/xdp_sock_drv.h>
> > #include <net/inet_dscp.h>
> > +#include <net/trait.h>
> >
> > #include "dev.h"
> >
> > @@ -3935,9 +3936,8 @@ static unsigned long xdp_get_metalen(const struct xdp_buff *xdp)
> >
> > BPF_CALL_2(bpf_xdp_adjust_head, struct xdp_buff *, xdp, int, offset)
> > {
> > - void *xdp_frame_end = xdp->data_hard_start + sizeof(struct xdp_frame);
> > unsigned long metalen = xdp_get_metalen(xdp);
> > - void *data_start = xdp_frame_end + metalen;
> > + void *data_start = xdp_meta_hard_start(xdp) + metalen;
>
> We could waste 16 bytes here, right?
If traits aren't being used?
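
To spell out the quoted hunk: the lower bound for bpf_xdp_adjust_head()
moves from

        data_start = data_hard_start + sizeof(struct xdp_frame) + metalen;

to

        data_start = xdp_meta_hard_start(xdp) + metalen;
                   /* = data_hard_start + _XDP_FRAME_SIZE
                    *   + traits_size(...) + metalen */

so traits_size() (the 16 bytes you mention, for an empty header) of
headroom is no longer available to adjust_head, even if no trait is ever
set.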
[...]