Message-ID: <CAKhg4tJPjcShkw4-FHFkKOcgzHK27A5pMu9FP7OWj4qJUX1ApA@mail.gmail.com>
Date: Fri, 9 Feb 2024 18:39:33 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, mst@...hat.com, jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com, hengqi@...ux.alibaba.com, davem@...emloft.net,
edumazet@...gle.com, kuba@...nel.org, netdev@...r.kernel.org,
virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, john.fastabend@...il.com, daniel@...earbox.net,
ast@...nel.org
Subject: Re: [PATCH net-next v5] virtio_net: Support RX hash XDP hint
On Wed, Feb 7, 2024 at 10:27 PM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On Wed, 2024-02-07 at 10:54 +0800, Liang Chen wrote:
> > On Tue, Feb 6, 2024 at 6:44 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > >
> > > On Sat, 2024-02-03 at 10:56 +0800, Liang Chen wrote:
> > > > On Sat, Feb 3, 2024 at 12:20 AM Jesper Dangaard Brouer <hawk@...nel.org> wrote:
> > > > > On 02/02/2024 13.11, Liang Chen wrote:
> > > [...]
> > > > > > @@ -1033,6 +1039,16 @@ static void put_xdp_frags(struct xdp_buff *xdp)
> > > > > > }
> > > > > > }
> > > > > >
> > > > > > +static void virtnet_xdp_save_rx_hash(struct virtnet_xdp_buff *virtnet_xdp,
> > > > > > + struct net_device *dev,
> > > > > > + struct virtio_net_hdr_v1_hash *hdr_hash)
> > > > > > +{
> > > > > > + if (dev->features & NETIF_F_RXHASH) {
> > > > > > + virtnet_xdp->hash_value = hdr_hash->hash_value;
> > > > > > + virtnet_xdp->hash_report = hdr_hash->hash_report;
> > > > > > + }
> > > > > > +}
> > > > > > +
> > > > >
> > > > > Would it be possible to store a pointer to hdr_hash in virtnet_xdp_buff,
> > > > > with the purpose of delaying extracting this, until and only if XDP
> > > > > bpf_prog calls the kfunc?
> > > > >
> > > >
> > > > That seems to be the way v1 works,
> > > > https://lore.kernel.org/all/20240122102256.261374-1-liangchen.linux@gmail.com/
> > > > . But it was pointed out that the inline header may be overwritten by
> > > > the xdp prog, so the hash is copied out to maintain its integrity.
> > >
> > > Why? Isn't XDP supposed to get write access only to the pkt
> > > contents/buffer?
> > >
> >
> > Normally, an XDP program accesses only the packet data. However,
> > there's also an XDP RX Metadata area, referenced by the data_meta
> > pointer. This pointer can be adjusted with bpf_xdp_adjust_meta to
> > point somewhere ahead of the data buffer, thereby granting the XDP
> > program access to the virtio header located immediately before the packet data.
>
> AFAICS bpf_xdp_adjust_meta() does not allow moving the meta_data before
> xdp->data_hard_start:
>
> https://elixir.bootlin.com/linux/latest/source/net/core/filter.c#L4210
>
> and virtio net sets such a field after the virtio_net_hdr:
>
> https://elixir.bootlin.com/linux/latest/source/drivers/net/virtio_net.c#L1218
> https://elixir.bootlin.com/linux/latest/source/drivers/net/virtio_net.c#L1420
>
> I don't see how the virtio hdr could be touched? Possibly even more
> important: if such a thing is possible, I think it should be somewhat
> denied (for the same reason an H/W nic should prevent XDP from
> modifying its own buffer descriptor).
Thank you for highlighting this concern. The header layout differs
slightly between small and mergeable modes. Taking mergeable mode as
an example, after calling xdp_prepare_buff the layout of the xdp_buff
is as depicted in the diagram below (a rough sketch of the
corresponding xdp_prepare_buff call follows the diagram):
                buf
                |
                v
+--------------+--------------+-------------+
| xdp headroom | virtio header| packet      |
| (256 bytes)  | (20 bytes)   | content     |
+--------------+--------------+-------------+
^                             ^
|                             |
data_hard_start               data
                              data_meta
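
For reference, the mergeable receive path sets up this buffer roughly
as below (a simplified, untested fragment loosely based on
drivers/net/virtio_net.c; the variable names and exact offsets here
are approximations, not a verbatim quote of the driver):

        xdp_init_buff(&xdp, frame_sz, &rq->xdp_rxq);
        /* data_hard_start = buf minus the 256-byte headroom; data skips
         * both the headroom and the vi->hdr_len-byte inline virtio
         * header; meta_valid = true makes data_meta start out equal to
         * data, giving the layout in the diagram above. */
        xdp_prepare_buff(&xdp, buf - VIRTIO_XDP_HEADROOM,
                         VIRTIO_XDP_HEADROOM + vi->hdr_len,
                         len - vi->hdr_len, true);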
If bpf_xdp_adjust_meta() repositions the data_meta pointer a little
towards data_hard_start, data_meta lands inside the inline virtio
header, potentially allowing the XDP program to access or modify it.
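
To make the concern concrete, an (untested) XDP program along these
lines could reach the inline header through data_meta; the -8 delta
assumes the 20-byte virtio_net_hdr_v1_hash layout shown above, so
data_meta would land on its hash_value field:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int clobber_inline_hdr(struct xdp_md *ctx)
{
        void *meta, *data;

        /* Grow the metadata area by 8 bytes: data_meta then sits 8 bytes
         * before the packet, i.e. inside the inline virtio header.  The
         * helper's bounds check only rejects crossing data_hard_start,
         * which is a full headroom (256 bytes) further back. */
        if (bpf_xdp_adjust_meta(ctx, -8))
                return XDP_PASS;

        meta = (void *)(long)ctx->data_meta;
        data = (void *)(long)ctx->data;

        /* The verifier allows writes in [data_meta, data), so this store
         * overwrites bytes of the inline header. */
        if (meta + 4 <= data)
                *(__u32 *)meta = 0;

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";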
We will take a closer look at how to prevent the inline header from
being altered, possibly by borrowing some ideas from other
xdp_metadata_ops implementations.
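
The hash itself is less of a worry here: since this patch copies it out
in virtnet_xdp_save_rx_hash() before the program runs, the RX-hash
kfunc can return the saved values rather than re-read the inline
header. Roughly along these lines (an untested sketch, not the exact
patch code; the report-to-rss-type mapping helper is hypothetical):

static int virtnet_xdp_rx_hash(const struct xdp_md *_ctx, u32 *hash,
                               enum xdp_rss_hash_type *rss_type)
{
        /* Assumes struct virtnet_xdp_buff embeds the xdp_buff as its
         * first member, so the ctx pointer can be converted back. */
        const struct virtnet_xdp_buff *virtnet_xdp = (const void *)_ctx;

        /* The hash is only saved when the feature is enabled. */
        if (!(virtnet_xdp->xdp.rxq->dev->features & NETIF_F_RXHASH))
                return -ENODATA;

        /* Assumes hash_value keeps the little-endian value copied from
         * the header. */
        *hash = le32_to_cpu(virtnet_xdp->hash_value);
        /* Hypothetical helper translating VIRTIO_NET_HASH_REPORT_*
         * values to enum xdp_rss_hash_type. */
        *rss_type = virtnet_hash_report_to_rss_type(
                        le16_to_cpu(virtnet_xdp->hash_report));
        return 0;
}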
Thanks,
Liang
>
> Cheers,
>
> Paolo
>