Message-ID: <0933f13d-b52e-321e-4be1-1b0e3cfb346b@redhat.com>
Date:   Mon, 21 Jun 2021 14:21:18 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>,
        Keiichi Watanabe <keiichiw@...omium.org>
Cc:     netdev@...r.kernel.org, chirantan@...omium.org,
        "David S . Miller" <davem@...emloft.net>,
        virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] virtio_net: Enable MSI-X vector for ctrl queue


On 2021/6/18 8:38 PM, Michael S. Tsirkin wrote:
> On Fri, Jun 18, 2021 at 04:26:25PM +0900, Keiichi Watanabe wrote:
>> When we use a vhost-user backend on the host, an MSI-X vector should be
>> set so that the vmm can get an irq FD and send it to the backend device
>> process via the vhost-user protocol.
>> Since whether a vector is set for a queue is determined by whether the
>> queue has a callback, this commit sets an empty callback for
>> virtio-net's control queue.
>>
>> Signed-off-by: Keiichi Watanabe <keiichiw@...omium.org>
> I'm confused by this explanation. If the vmm wants to get
> an interrupt it can do so - why change the guest driver?


+1, it sounds like a bug in the backend, or we probably need more context
here.

Thanks


>
>> ---
>>   drivers/net/virtio_net.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>> index 11f722460513..002e3695d4b3 100644
>> --- a/drivers/net/virtio_net.c
>> +++ b/drivers/net/virtio_net.c
>> @@ -2696,6 +2696,11 @@ static void virtnet_del_vqs(struct virtnet_info *vi)
>>   	virtnet_free_queues(vi);
>>   }
>>   
>> +static void virtnet_ctrlq_done(struct virtqueue *rvq)
>> +{
>> +	/* Do nothing */
>> +}
>> +
>>   /* How large should a single buffer be so a queue full of these can fit at
>>    * least one full packet?
>>    * Logic below assumes the mergeable buffer header is used.
>> @@ -2748,7 +2753,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>>   
>>   	/* Parameters for control virtqueue, if any */
>>   	if (vi->has_cvq) {
>> -		callbacks[total_vqs - 1] = NULL;
>> +		callbacks[total_vqs - 1] = virtnet_ctrlq_done;
>>   		names[total_vqs - 1] = "control";
>>   	}
>>   
>> -- 
>> 2.32.0.288.g62a8d224e6-goog
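
For reference, the mechanism the commit message relies on: in the virtio PCI
transport, a per-queue MSI-X vector is only assigned when the queue has a
callback. Below is a simplified sketch of that decision, paraphrased from
memory of vp_find_vqs_msix() in drivers/virtio/virtio_pci_common.c around the
v5.13 timeframe; treat it as illustrative rather than a verbatim excerpt.

	for (i = 0; i < nvqs; ++i) {
		if (!names[i]) {		/* queue not used by the driver */
			vqs[i] = NULL;
			continue;
		}

		if (!callbacks[i])
			/* No callback -> no interrupt for this queue at all. */
			msix_vec = VIRTIO_MSI_NO_VECTOR;
		else if (vp_dev->per_vq_vectors)
			msix_vec = allocated_vectors++;
		else
			msix_vec = VP_MSIX_VQ_VECTOR;	/* shared vq vector */

		vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i],
				     ctx ? ctx[i] : false, msix_vec);
		if (IS_ERR(vqs[i])) {
			err = PTR_ERR(vqs[i]);
			goto error_find;
		}
	}

So with callbacks[total_vqs - 1] == NULL the control queue ends up with
VIRTIO_MSI_NO_VECTOR and the vmm is never handed an irqfd for it, which is
presumably what the vhost-user backend trips over; the question raised above
is whether that should be addressed in the guest driver or in the
vmm/backend.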
