Message-ID: <772b6d6f-0728-c338-b541-fcf4114a1d32@redhat.com>
Date: Wed, 26 Feb 2020 11:29:44 +0800
From: Jason Wang <jasowang@...hat.com>
To: David Ahern <dsahern@...il.com>, David Ahern <dsahern@...nel.org>,
netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org,
David Ahern <dahern@...italocean.com>,
"Michael S . Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for
using XDP
On 2020/2/26 11:24 AM, David Ahern wrote:
> On 2/25/20 8:00 PM, Jason Wang wrote:
>> On 2020/2/26 8:57 AM, David Ahern wrote:
>>> From: David Ahern<dahern@...italocean.com>
>>>
>>> virtio_net currently requires extra queues to install an XDP program,
>>> with the rule being twice as many queues as vcpus. From a host
>>> perspective this means the VM needs to have 2*vcpus vhost threads
>>> for each guest NIC for which XDP is to be allowed. For example, a
>>> 16 vcpu VM with 2 tap devices needs 64 vhost threads.
>>>
>>> The extra queues are only needed in case an XDP program wants to
>>> return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
>>> additional queues. Relax the queue requirement and allow XDP
>>> functionality based on resources. If an XDP program is loaded and
>>> there are insufficient queues, then return a warning to the user
>>> and if a program returns XDP_TX just drop the packet. This allows
>>> the use of the rest of the XDP functionality to work without
>>> putting an unreasonable burden on the host.
>>>
>>> Cc: Jason Wang<jasowang@...hat.com>
>>> Cc: Michael S. Tsirkin<mst@...hat.com>
>>> Signed-off-by: David Ahern<dahern@...italocean.com>
>>> ---
>>> drivers/net/virtio_net.c | 14 ++++++++++----
>>> 1 file changed, 10 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index 2fe7a3188282..2f4c5b2e674d 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -190,6 +190,8 @@ struct virtnet_info {
>>>  	/* # of XDP queue pairs currently used by the driver */
>>>  	u16 xdp_queue_pairs;
>>>
>>> +	bool can_do_xdp_tx;
>>> +
>>>  	/* I like... big packets and I cannot lie! */
>>>  	bool big_packets;
>>> @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
>>>  		len = xdp.data_end - xdp.data;
>>>  		break;
>>>  	case XDP_TX:
>>> +		if (!vi->can_do_xdp_tx)
>>> +			goto err_xdp;
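
The two hunks quoted above cover only part of the patch's 10 added
lines. As a rough sketch of what the load-time side of such a change
could look like, assuming the flag is set in virtnet_xdp_set() (the
queue arithmetic and warning text below are illustrative, not quoted
from the patch):

/* Hypothetical sketch, not the unquoted remainder of the patch:
 * instead of rejecting the program when there are not enough queue
 * pairs for per-CPU XDP_TX, load it anyway, warn, and record that
 * XDP_TX is unavailable.
 */
static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
			   struct netlink_ext_ack *extack)
{
	struct virtnet_info *vi = netdev_priv(dev);
	u16 xdp_qp = prog ? nr_cpu_ids : 0;

	if (prog && vi->curr_queue_pairs + xdp_qp > vi->max_queue_pairs) {
		netdev_warn(dev,
			    "not enough queues for XDP_TX; XDP_TX packets will be dropped\n");
		vi->can_do_xdp_tx = false;
		xdp_qp = 0;	/* reserve no dedicated XDP_TX queues */
	} else {
		vi->can_do_xdp_tx = !!prog;
	}

	/* ... the existing attach/reset path continues unchanged ... */
	return 0;
}

The program still loads, so XDP_PASS, XDP_DROP and XDP_REDIRECT keep
working; only XDP_TX degrades to the drop added in the receive_small()
hunk above.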
>> I wonder if using a spinlock to synchronize XDP_TX is better than
>> dropping here?
> I recall you suggesting that. Sure, it makes for a friendlier user
> experience, but if a spinlock makes this slower, then it goes against
> the core idea of XDP.
>
>
Maybe we can do some benchmarks. TAP uses a spinlock for XDP_TX. If my
memory is correct, in the best case (no queue contention) it only has a
~10% PPS drop under heavy workload.
Thanks