Message-ID: <9a5391fb-1d80-43d1-5e88-902738cc2528@gmail.com>
Date: Tue, 25 Feb 2020 20:24:29 -0700
From: David Ahern <dsahern@...il.com>
To: Jason Wang <jasowang@...hat.com>, David Ahern <dsahern@...nel.org>,
netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org,
David Ahern <dahern@...italocean.com>,
"Michael S . Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for
using XDP
On 2/25/20 8:00 PM, Jason Wang wrote:
>
> On 2020/2/26 8:57 AM, David Ahern wrote:
>> From: David Ahern <dahern@...italocean.com>
>>
>> virtio_net currently requires extra queues to install an XDP program,
>> with the rule being twice as many queues as vcpus. From a host
>> perspective this means the VM needs to have 2*vcpus vhost threads
>> for each guest NIC for which XDP is to be allowed. For example, a
>> 16 vcpu VM with 2 tap devices needs 64 vhost threads.
>>
>> The extra queues are only needed if an XDP program wants to
>> return XDP_TX; XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
>> additional queues. Relax the queue requirement and allow XDP
>> functionality based on resources. If an XDP program is loaded and
>> there are insufficient queues, warn the user, and if the program
>> returns XDP_TX, just drop the packet. This allows the rest of the
>> XDP functionality to work without putting an unreasonable burden
>> on the host.
>>
>> Cc: Jason Wang <jasowang@...hat.com>
>> Cc: Michael S. Tsirkin <mst@...hat.com>
>> Signed-off-by: David Ahern <dahern@...italocean.com>
>> ---
>> drivers/net/virtio_net.c | 14 ++++++++++----
>> 1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>> index 2fe7a3188282..2f4c5b2e674d 100644
>> --- a/drivers/net/virtio_net.c
>> +++ b/drivers/net/virtio_net.c
>> @@ -190,6 +190,8 @@ struct virtnet_info {
>>  	/* # of XDP queue pairs currently used by the driver */
>>  	u16 xdp_queue_pairs;
>>
>> +	bool can_do_xdp_tx;
>> +
>>  	/* I like... big packets and I cannot lie! */
>>  	bool big_packets;
>> @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
>>  			len = xdp.data_end - xdp.data;
>>  			break;
>>  		case XDP_TX:
>> +			if (!vi->can_do_xdp_tx)
>> +				goto err_xdp;
>
>
> I wonder if using a spinlock to synchronize XDP_TX would be better
> than dropping the packet here?
I recall you suggesting that. Sure, it makes for a friendlier user
experience, but if a spinlock makes this path slower, it goes against
the core idea of XDP.
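
For reference, a rough sketch of the spinlock approach under discussion
might look like the code below. This is purely illustrative and not
part of the patch: xdp_tx_lock and virtnet_xdp_tx_locked are invented
names, and it assumes the existing __virtnet_xdp_xmit_one() helper can
be reused as-is.

	/* Hypothetical sketch only. Instead of dropping XDP_TX frames
	 * when dedicated per-CPU XDP TX queues are unavailable, share
	 * the regular TX queues and serialize access with a per-queue
	 * spinlock.
	 */
	struct send_queue {
		/* ... existing fields ... */
		spinlock_t xdp_tx_lock;	/* guards shared XDP_TX xmit */
	};

	static int virtnet_xdp_tx_locked(struct virtnet_info *vi,
					 struct xdp_frame *xdpf)
	{
		struct send_queue *sq;
		int err;

		/* Without a dedicated queue per CPU, two CPUs can map
		 * to the same TX queue, hence the lock.
		 */
		sq = &vi->sq[smp_processor_id() % vi->curr_queue_pairs];

		spin_lock(&sq->xdp_tx_lock);
		err = __virtnet_xdp_xmit_one(vi, sq, xdpf);
		spin_unlock(&sq->xdp_tx_lock);

		return err;
	}

Whether that lock is acceptable comes down to exactly the fast-path
cost question above.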
>
> Thanks
>
>
>>  			stats->xdp_tx++;
>>  			xdpf = convert_to_xdp_frame(&xdp);
>>  			if (unlikely(!xdpf))
>> @@ -870,6 +874,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>>  			}
>>  			break;
>>  		case XDP_TX:
>> +			if (!vi->can_do_xdp_tx)
>> +				goto err_xdp;
>>  			stats->xdp_tx++;
>>  			xdpf = convert_to_xdp_frame(&xdp);
>>  			if (unlikely(!xdpf))
>> @@ -2435,10 +2441,10 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
>>  	/* XDP requires extra queues for XDP_TX */
>>  	if (curr_qp + xdp_qp > vi->max_queue_pairs) {
>> -		NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
>> -		netdev_warn(dev, "request %i queues but max is %i\n",
>> -			    curr_qp + xdp_qp, vi->max_queue_pairs);
>> -		return -ENOMEM;
>> +		NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available; XDP_TX will not be allowed");
>> +		vi->can_do_xdp_tx = false;
>> +	} else {
>> +		vi->can_do_xdp_tx = true;
>>  	}
>>
>>  	old_prog = rtnl_dereference(vi->rq[0].xdp_prog);