Message-ID: <87wo89zroe.fsf@toke.dk>
Date: Wed, 26 Feb 2020 09:19:45 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: David Ahern <dsahern@...il.com>, Jason Wang <jasowang@...hat.com>,
David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org,
David Ahern <dahern@...italocean.com>,
"Michael S . Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for using XDP
David Ahern <dsahern@...il.com> writes:
> On 2/25/20 8:00 PM, Jason Wang wrote:
>>
>> On 2020/2/26 上午8:57, David Ahern wrote:
>>> From: David Ahern <dahern@...italocean.com>
>>>
>>> virtio_net currently requires extra queues to install an XDP program,
>>> with the rule being twice as many queues as vcpus. From a host
>>> perspective this means the VM needs to have 2*vcpus vhost threads
>>> for each guest NIC for which XDP is to be allowed. For example, a
>>> 16 vcpu VM with 2 tap devices needs 64 vhost threads.
>>>
>>> The extra queues are only needed in case an XDP program wants to
>>> return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
>>> additional queues. Relax the queue requirement and allow XDP
>>> functionality based on resources. If an XDP program is loaded and
>>> there are insufficient queues, then return a warning to the user
>>> and if a program returns XDP_TX just drop the packet. This allows
>>> the use of the rest of the XDP functionality to work without
>>> putting an unreasonable burden on the host.
>>>
>>> Cc: Jason Wang <jasowang@...hat.com>
>>> Cc: Michael S. Tsirkin <mst@...hat.com>
>>> Signed-off-by: David Ahern <dahern@...italocean.com>
>>> ---
>>> drivers/net/virtio_net.c | 14 ++++++++++----
>>> 1 file changed, 10 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index 2fe7a3188282..2f4c5b2e674d 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -190,6 +190,8 @@ struct virtnet_info {
>>>  	/* # of XDP queue pairs currently used by the driver */
>>>  	u16 xdp_queue_pairs;
>>>
>>> +	bool can_do_xdp_tx;
>>> +
>>>  	/* I like... big packets and I cannot lie! */
>>>  	bool big_packets;
>>> @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
>>>  			len = xdp.data_end - xdp.data;
>>>  			break;
>>>  		case XDP_TX:
>>> +			if (!vi->can_do_xdp_tx)
>>> +				goto err_xdp;
>>
>>
>> I wonder if using spinlock to synchronize XDP_TX is better than dropping
>> here?
>
> I recall you suggesting that. Sure, it makes for a friendlier user
> experience, but if a spinlock makes this slower, then it goes against
> the core idea of XDP.
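
For reference, here is a rough sketch of how the flag guarding that drop
might be derived when the XDP program is attached. The helper name and the
exact condition below are assumptions on my part; the quoted hunks only
show the flag itself and the receive-path check.

/* Rough sketch, not the actual patch: derive can_do_xdp_tx at XDP
 * attach time instead of failing the program load.  Helper name and
 * condition are assumptions for illustration.
 */
static void virtnet_update_xdp_tx(struct virtnet_info *vi, u16 curr_qp)
{
	/* XDP_TX wants a dedicated TX queue per CPU on top of the
	 * queue pairs the stack is already using.  If the device
	 * cannot provide them, warn and let the receive path drop
	 * XDP_TX packets instead of rejecting the program.
	 */
	vi->can_do_xdp_tx = curr_qp + nr_cpu_ids <= vi->max_queue_pairs;

	if (!vi->can_do_xdp_tx)
		netdev_warn(vi->dev,
			    "not enough queues for XDP_TX, XDP_TX packets will be dropped\n");
}
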
IMO a spinlock-arbitrated TX queue is something that should be available
to the user as an opt-in (using that queue abstraction Magnus is working
on), but not the default, since, as you say, that goes against the
"performance first" mantra of XDP.
-Toke