Message-ID: <449099311.10687151.1582702090890.JavaMail.zimbra@redhat.com>
Date:   Wed, 26 Feb 2020 02:28:10 -0500 (EST)
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        davem@...emloft.net, kuba@...nel.org,
        David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for
 using XDP



----- Original Message -----
> On Wed, Feb 26, 2020 at 11:00:40AM +0800, Jason Wang wrote:
> > 
> > On 2020/2/26 8:57 AM, David Ahern wrote:
> > > From: David Ahern <dahern@...italocean.com>
> > > 
> > > virtio_net currently requires extra queues to install an XDP program,
> > > with the rule being twice as many queues as vcpus. From a host
> > > perspective this means the VM needs to have 2*vcpus vhost threads
> > > for each guest NIC for which XDP is to be allowed. For example, a
> > > 16 vcpu VM with 2 tap devices needs 64 vhost threads.
> > > 
> > > The extra queues are only needed in case an XDP program wants to
> > > return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
> > > additional queues. Relax the queue requirement and allow XDP
> > > functionality based on resources. If an XDP program is loaded and
> > > there are insufficient queues, then return a warning to the user
> > > and if a program returns XDP_TX just drop the packet. This allows
> > > the use of the rest of the XDP functionality to work without
> > > putting an unreasonable burden on the host.
> > > 
> > > Cc: Jason Wang <jasowang@...hat.com>
> > > Cc: Michael S. Tsirkin <mst@...hat.com>
> > > Signed-off-by: David Ahern <dahern@...italocean.com>
> > > ---
> > >   drivers/net/virtio_net.c | 14 ++++++++++----
> > >   1 file changed, 10 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 2fe7a3188282..2f4c5b2e674d 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -190,6 +190,8 @@ struct virtnet_info {
> > >   	/* # of XDP queue pairs currently used by the driver */
> > >   	u16 xdp_queue_pairs;
> > > +	bool can_do_xdp_tx;
> > > +
> > >   	/* I like... big packets and I cannot lie! */
> > >   	bool big_packets;
> > > @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > >   			len = xdp.data_end - xdp.data;
> > >   			break;
> > >   		case XDP_TX:
> > > +			if (!vi->can_do_xdp_tx)
> > > +				goto err_xdp;
> > 
> > 
> > I wonder if using a spinlock to synchronize XDP_TX would be better
> > than dropping here?
> > 
> > Thanks
> 
> I think it's less a problem with locking, and more a problem
> with the queue being potentially full and XDP being unable to
> transmit.

I'm not sure we need to care about this. Even XDP_TX with a dedicated
queue can hit a full ring, and generic XDP already works this way.
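
To make the idea concrete, a rough, untested sketch of the spinlock
variant (sq->xdp_tx_lock is a made-up field for the example;
__virtnet_xdp_xmit_one() is the existing helper in virtio_net.c):

/* Hypothetical sketch only, not against any tree: XDP_TX shares a
 * normal TX queue under a per-queue spinlock and drops only when the
 * ring is actually full.  Called from NAPI context, so a plain
 * spin_lock is enough here.
 */
static int virtnet_xdp_tx_shared(struct virtnet_info *vi,
				 struct send_queue *sq,
				 struct xdp_frame *xdpf)
{
	int err = -ENOSPC;

	spin_lock(&sq->xdp_tx_lock);	/* assumed new field */
	if (sq->vq->num_free)
		err = __virtnet_xdp_xmit_one(vi, sq, xdpf);
	spin_unlock(&sq->xdp_tx_lock);

	return err;	/* on error the caller drops the frame */
}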

> 
> From that POV just sharing the queue would already be better than an
> unconditional drop, however I think this is not what XDP users came
> to expect. So at this point, partitioning the queue might be
> reasonable. When XDP attaches we could block until the queue is
> mostly empty.

This means XDP_TX would have a higher priority, which I'm not sure is
a good idea.

> However,
> how exactly to partition the queue remains open.

It would not be easy unless we have support from the virtio layer.
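
The best we could do purely in the driver is probably a soft
reservation, something like this hypothetical sketch
(virtqueue_get_vring_size() is the existing API; keeping half the
ring free is an arbitrary policy for the example):

/* Hypothetical sketch: only let XDP_TX take a descriptor while at
 * least half of the ring stays free for the regular stack.  This
 * only approximates partitioning; a hard split needs virtio support.
 */
static bool virtnet_xdp_tx_room(struct send_queue *sq)
{
	return sq->vq->num_free > virtqueue_get_vring_size(sq->vq) / 2;
}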


> Maybe it's reasonable
> to limit the number of RX buffers to achieve balance.
>

If I understand this correctly, this can only help to throttle
XDP_TX, since that path is driven by our own receive processing. But
we may also have XDP_REDIRECT frames coming from other devices ...
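
To illustrate: redirected frames enter through our .ndo_xdp_xmit
hook, driven by the redirecting device's receive loop, so their TX
descriptor usage does not depend on how many RX buffers we post.
A hypothetical sketch, reusing the existing virtnet_xdp_sq() and
__virtnet_xdp_xmit_one() helpers (kick/stats omitted):

/* Hypothetical sketch of the .ndo_xdp_xmit path: it runs in the
 * *redirecting* device's NAPI context, so limiting our own RX
 * buffers cannot throttle it.
 */
static int sketch_xdp_xmit(struct net_device *dev, int n,
			   struct xdp_frame **frames, u32 flags)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct send_queue *sq = virtnet_xdp_sq(vi);
	int i, sent = 0;

	for (i = 0; i < n; i++) {
		if (!sq->vq->num_free ||
		    __virtnet_xdp_xmit_one(vi, sq, frames[i])) {
			xdp_return_frame_rx_napi(frames[i]);	/* drop */
			continue;
		}
		sent++;
	}
	return sent;
}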

So, considering that either dropping or sharing is much better than
not enabling XDP at all, we may start from one of those two.

Thanks
