Message-ID: <CACGkMEsKTLfD1nz-CQdn5+ZmxyWdVDwhBOAcB9fO4TUcwzuLPA@mail.gmail.com>
Date: Mon, 16 Jun 2025 11:01:41 +0800
From: Jason Wang <jasowang@...hat.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: mst@...hat.com, eperezma@...hat.com, kvm@...r.kernel.org, 
	virtualization@...ts.linux.dev, netdev@...r.kernel.org, davem@...emloft.net, 
	andrew+netdev@...n.ch, edumazet@...gle.com, kuba@...nel.org, 
	pabeni@...hat.com
Subject: Re: [PATCH net-next 2/2] vhost-net: reduce one userspace copy when
 building XDP buff

On Fri, Jun 13, 2025 at 10:16 AM Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>
> Jason Wang wrote:
> > We used to call copy_from_iter() twice, copying the virtio-net header
> > and the packet separately. This introduces overhead for userspace
> > access hardening as well as SMAP (stac/clac on x86). So this patch
> > uses a single copy_from_iter() to copy both at once and then moves the
> > virtio-net header into place, reducing that overhead.
> >
> > Testpmd + vhost_net shows a 10% improvement, from 5.45 Mpps to 6.0 Mpps.
> >
> > Signed-off-by: Jason Wang <jasowang@...hat.com>
>
> Acked-by: Willem de Bruijn <willemb@...gle.com>
>
> > ---
> >  drivers/vhost/net.c | 13 ++++---------
> >  1 file changed, 4 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 777eb6193985..2845e0a473ea 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -690,13 +690,13 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
> >       if (unlikely(!buf))
> >               return -ENOMEM;
> >
> > -     copied = copy_from_iter(buf, sock_hlen, from);
> > -     if (copied != sock_hlen) {
> > +     copied = copy_from_iter(buf + pad - sock_hlen, len, from);
> > +     if (copied != len) {
> >               ret = -EFAULT;
> >               goto err;
> >       }
> >
> > -     gso = buf;
> > +     gso = buf + pad - sock_hlen;
> >
> >       if (!sock_hlen)
> >               memset(buf, 0, pad);
> > @@ -715,12 +715,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
> >               }
> >       }
> >
> > -     len -= sock_hlen;
> > -     copied = copy_from_iter(buf + pad, len, from);
> > -     if (copied != len) {
> > -             ret = -EFAULT;
> > -             goto err;
> > -     }
> > +     memcpy(buf, buf + pad - sock_hlen, sock_hlen);
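
A hypothetical userspace sketch of the layout this single-copy path
produces, with memcpy() standing in for copy_from_iter() and the 32B pad /
12B sock_hlen figures quoted later in this thread used in place of the
driver's real SKB_DATA_ALIGN()-derived values (an illustration only, not
the driver code):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum {
	SOCK_HLEN = 12,	/* maximal sock_hlen mentioned in this thread */
	PAD = 32,	/* minimal pad mentioned in this thread */
	PKT_LEN = 64,	/* arbitrary payload size for the illustration */
	LEN = SOCK_HLEN + PKT_LEN,	/* total bytes pulled from the iov */
};

int main(void)
{
	unsigned char from[LEN];	/* stands in for the userspace iov */
	unsigned char *buf = malloc(PAD + PKT_LEN);

	assert(buf);
	memset(from, 0xab, SOCK_HLEN);			/* "virtio-net header" */
	memset(from + SOCK_HLEN, 0xcd, PKT_LEN);	/* "packet" */

	/* One copy instead of two: header + packet land at buf + pad - sock_hlen. */
	memcpy(buf + PAD - SOCK_HLEN, from, LEN);	/* copy_from_iter() stand-in */

	/* Move the header to the front; the packet stays at buf + pad for the XDP buff. */
	memcpy(buf, buf + PAD - SOCK_HLEN, SOCK_HLEN);

	assert(buf[0] == 0xab && buf[SOCK_HLEN - 1] == 0xab);
	assert(buf[PAD] == 0xcd && buf[PAD + PKT_LEN - 1] == 0xcd);

	free(buf);
	return 0;
}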
>
> It's not trivial to see that the dst and src do not overlap, and so
> this does not need memmove.
>
> The minimal pad that I can find is 32B, and the maximal sock_hlen is 12B.
>
> So this is safe. But not obviously so. Unfortunately, these offsets
> are not all known at compile time, so a BUILD_BUG_ON is not possible.

We had this:

int pad = SKB_DATA_ALIGN(VHOST_NET_RX_PAD + headroom + nvq->sock_hlen);
int sock_hlen = nvq->sock_hlen;

So pad - sock_hlen is guaranteed to be greater than zero.

If this is not obvious, I can add a comment in the next version.
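
A hypothetical self-check of that reasoning (not proposed for the driver):
the memcpy() destination is [buf, buf + sock_hlen) and the source is
[buf + pad - sock_hlen, buf + pad), so the two regions cannot overlap
whenever pad >= 2 * sock_hlen, which the 32B/12B bounds quoted above
always satisfy:

#include <assert.h>
#include <stddef.h>

/* Does [dst_off, dst_off + n) overlap [src_off, src_off + n)? */
static int regions_overlap(size_t dst_off, size_t src_off, size_t n)
{
	return dst_off < src_off + n && src_off < dst_off + n;
}

int main(void)
{
	/* Bounds taken from the review above: pad >= 32B, sock_hlen <= 12B. */
	for (size_t sock_hlen = 0; sock_hlen <= 12; sock_hlen++)
		for (size_t pad = 32; pad <= 256; pad++)
			assert(!regions_overlap(0, pad - sock_hlen, sock_hlen));
	return 0;
}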

Thanks

>
> >       xdp_init_buff(xdp, buflen, NULL);
> >       xdp_prepare_buff(xdp, buf, pad, len, true);
> > --
> > 2.34.1
> >
>
>

