Date:   Wed, 26 Feb 2020 09:34:51 +0100
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     David Ahern <dsahern@...il.com>, Jason Wang <jasowang@...hat.com>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        davem@...emloft.net, kuba@...nel.org,
        David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for using XDP

"Michael S. Tsirkin" <mst@...hat.com> writes:

> On Wed, Feb 26, 2020 at 09:19:45AM +0100, Toke Høiland-Jørgensen wrote:
>> David Ahern <dsahern@...il.com> writes:
>> 
>> > On 2/25/20 8:00 PM, Jason Wang wrote:
>> >> 
>> >> On 2020/2/26 8:57 AM, David Ahern wrote:
>> >>> From: David Ahern <dahern@...italocean.com>
>> >>>
>> >>> virtio_net currently requires extra queues to install an XDP program,
>> >>> with the rule being twice as many queues as vcpus. From a host
>> >>> perspective this means the VM needs to have 2*vcpus vhost threads
>> >>> for each guest NIC for which XDP is to be allowed. For example, a
>> >>> 16 vcpu VM with 2 tap devices needs 64 vhost threads.
>> >>>
>> >>> The extra queues are only needed in case an XDP program wants to
>> >>> return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
>> >>> additional queues. Relax the queue requirement and allow XDP
>> >>> functionality based on resources. If an XDP program is loaded and
>> >>> there are insufficient queues, then emit a warning to the user
>> >>> and, if a program returns XDP_TX, just drop the packet. This
>> >>> allows the rest of the XDP functionality to work without putting
>> >>> an unreasonable burden on the host.
>> >>>
>> >>> Cc: Jason Wang <jasowang@...hat.com>
>> >>> Cc: Michael S. Tsirkin <mst@...hat.com>
>> >>> Signed-off-by: David Ahern <dahern@...italocean.com>
>> >>> ---
>> >>>   drivers/net/virtio_net.c | 14 ++++++++++----
>> >>>   1 file changed, 10 insertions(+), 4 deletions(-)
>> >>>
>> >>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>> >>> index 2fe7a3188282..2f4c5b2e674d 100644
>> >>> --- a/drivers/net/virtio_net.c
>> >>> +++ b/drivers/net/virtio_net.c
>> >>> @@ -190,6 +190,8 @@ struct virtnet_info {
>> >>>       /* # of XDP queue pairs currently used by the driver */
>> >>>       u16 xdp_queue_pairs;
>> >>>
>> >>> +    bool can_do_xdp_tx;
>> >>> +
>> >>>       /* I like... big packets and I cannot lie! */
>> >>>       bool big_packets;
>> >>>
>> >>> @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
>> >>>               len = xdp.data_end - xdp.data;
>> >>>               break;
>> >>>           case XDP_TX:
>> >>> +            if (!vi->can_do_xdp_tx)
>> >>> +                goto err_xdp;
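[The hunk that actually sets the new flag was trimmed in the quoting
above. For context, here is a minimal sketch of how it could plausibly
be computed at XDP attach time. The queue-pair fields are real
virtnet_info members, but this exact logic is illustrative, not taken
from the patch:]

    /* Illustrative sketch, not from the patch: decide at XDP attach
     * time whether enough TX queues exist to service XDP_TX.
     * curr_queue_pairs, xdp_queue_pairs and max_queue_pairs are
     * existing virtnet_info fields; the surrounding logic is a guess
     * at the trimmed hunk.
     */
    curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
    xdp_qp = prog ? nr_cpu_ids : 0;

    if (curr_qp + xdp_qp > vi->max_queue_pairs) {
            netdev_warn(dev,
                        "not enough queues for XDP_TX; XDP_TX packets will be dropped\n");
            vi->can_do_xdp_tx = false;
    } else {
            vi->can_do_xdp_tx = true;
    }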
>> >> 
>> >> 
>> >> I wonder if using a spinlock to synchronize XDP_TX is better than
>> >> dropping here?
>> >
>> > I recall you suggesting that. Sure, it makes for a friendlier user
>> > experience, but if a spinlock makes this slower, then it goes against
>> > the core idea of XDP.
>> 
>> IMO a spinlock-arbitrated TX queue is something that should be available
>> to the user if configured (using that queue abstraction Magnus is
>> working on), but not the default, since as you say that goes against the
>> "performance first" mantra of XDP.
>> 
>> -Toke
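[For reference, the opt-in, spinlock-arbitrated XDP_TX path under
discussion might look roughly like the following. This is a
hypothetical sketch: the vi->xdp_tx_lock field does not exist in the
driver, and its placement here is only indicative:]

    /* Hypothetical: serialize XDP_TX onto a shared send queue instead
     * of dropping when no dedicated per-CPU queue is available.
     * vi->xdp_tx_lock is a made-up field, not part of virtio_net.
     */
    case XDP_TX:
            spin_lock(&vi->xdp_tx_lock);
            err = __virtnet_xdp_xmit_one(vi, sq, xdpf);
            spin_unlock(&vi->xdp_tx_lock);
            if (unlikely(err))
                    goto err_xdp;
            break;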
>
> OK, so basically there would be commands to configure which TX queue is
> used by XDP. With enough resources, the default is to use dedicated
> queues. With insufficient resources, the default is to fail binding the
> XDP program unless queues are specified. Does this sound reasonable?

Yeah, that was the idea. See this talk from LPC last year for more
details: https://linuxplumbersconf.org/event/4/contributions/462/

> It remains to define how changes in the TX queue config are handled.
> Should they just be disallowed as long as there's an active XDP program?

Well, that would depend on what is possible for each device, I guess? But
we already do block some reconfiguration if an XDP program is loaded
(such as MTU changes), so there is some precedent for that :)

-Toke
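[The MTU precedent mentioned above follows a common driver pattern:
reject the reconfiguration outright while an XDP program is attached.
A generic illustration of that pattern with made-up names; this is not
code from virtio_net:]

    /* Generic pattern, not virtio_net code: refuse MTU changes while
     * an XDP program is loaded. ndo_change_mtu runs under RTNL, so
     * rtnl_dereference() is the appropriate accessor for the program
     * pointer.
     */
    static int example_change_mtu(struct net_device *dev, int new_mtu)
    {
            struct example_priv *priv = netdev_priv(dev);

            if (rtnl_dereference(priv->xdp_prog))
                    return -EBUSY;

            dev->mtu = new_mtu;
            return 0;
    }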
