Message-ID: <wixps4w7rnbd67t5is6wtqvuw7e3waat4no3embl3vnjimtxvz@pemiyojtmunz>
Date: Wed, 7 Feb 2024 09:39:23 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: virtualization@...ts.linux.dev, 
	Shannon Nelson <shannon.nelson@....com>, Eugenio Pérez <eperezma@...hat.com>, 
	"Michael S. Tsirkin" <mst@...hat.com>, kvm@...r.kernel.org, Kevin Wolf <kwolf@...hat.com>, 
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Zhu Lingshan <lingshan.zhu@...el.com>
Subject: Re: Re: [PATCH] vhost-vdpa: fail enabling virtqueue in certain
 conditions

On Wed, Feb 07, 2024 at 11:27:14AM +0800, Jason Wang wrote:
>On Tue, Feb 6, 2024 at 10:52 PM Stefano Garzarella <sgarzare@...hat.com> wrote:
>>
>> If VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is not negotiated, we expect
>> the driver to enable virtqueue before setting DRIVER_OK. If the driver
>> tries anyway, better to fail right away as soon as we get the ioctl.
>> Let's also update the documentation to make it clearer.
>>
>> We had a problem in QEMU for not meeting this requirement, see
>> https://lore.kernel.org/qemu-devel/20240202132521.32714-1-kwolf@redhat.com/
>
>Maybe it's better to only enable cvq when the backend supports
>VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK. Eugenio, any comment on this?
>
>>
>> Fixes: 9f09fd6171fe ("vdpa: accept VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK backend feature")
>> Cc: eperezma@...hat.com
>> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
>> ---
>>  include/uapi/linux/vhost_types.h | 3 ++-
>>  drivers/vhost/vdpa.c             | 4 ++++
>>  2 files changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/uapi/linux/vhost_types.h b/include/uapi/linux/vhost_types.h
>> index d7656908f730..5df49b6021a7 100644
>> --- a/include/uapi/linux/vhost_types.h
>> +++ b/include/uapi/linux/vhost_types.h
>> @@ -182,7 +182,8 @@ struct vhost_vdpa_iova_range {
>>  /* Device can be resumed */
>>  #define VHOST_BACKEND_F_RESUME  0x5
>>  /* Device supports the driver enabling virtqueues both before and after
>> - * DRIVER_OK
>> + * DRIVER_OK. If this feature is not negotiated, the virtqueues must be
>> + * enabled before setting DRIVER_OK.
>>   */
>>  #define VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK  0x6
>>  /* Device may expose the virtqueue's descriptor area, driver area and
>> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
>> index bc4a51e4638b..1fba305ba8c1 100644
>> --- a/drivers/vhost/vdpa.c
>> +++ b/drivers/vhost/vdpa.c
>> @@ -651,6 +651,10 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
>>         case VHOST_VDPA_SET_VRING_ENABLE:
>>                 if (copy_from_user(&s, argp, sizeof(s)))
>>                         return -EFAULT;
>> +               if (!vhost_backend_has_feature(vq,
>> +                       VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK) &&
>> +                   (ops->get_status(vdpa) & VIRTIO_CONFIG_S_DRIVER_OK))
>> +                       return -EINVAL;
>
>As discussed, without VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK, we don't
>know if parents can do vq_ready after driver_ok.
>
>So maybe we need to keep this behaviour to unbreak some "legacy" userspace?

I'm not sure that's a good idea, since "legacy" userspace is already 
broken when used with a VDUSE device. So we need to fix userspace in 
any case, and IMHO it is better to start returning an error so the 
user understands what went wrong; the problem in QEMU took us quite 
some time to figure out precisely because nothing told us that we 
couldn't enable a vq after DRIVER_OK.

Since userspace cannot tell whether a vhost-vdpa device is backed by 
VDUSE or not, I think we have only 2 options: either merge this patch 
or fix VDUSE somehow. The latter seems more complicated/intrusive to 
me.
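
Just to make the expected flow explicit, here is a minimal userspace 
sketch (not part of this patch; error handling is trimmed and the 
setup_vqs()/enable_vq() helpers are made up for illustration) of how a 
driver could negotiate the backend feature and decide when it is 
allowed to enable a virtqueue:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>
    #include <linux/vhost_types.h>

    /* Hypothetical helper: enable/disable one virtqueue on an open
     * /dev/vhost-vdpa-N file descriptor. */
    static int enable_vq(int fd, unsigned int idx, int on)
    {
            struct vhost_vring_state s = { .index = idx, .num = on };

            /* With this patch, this fails with -EINVAL if the feature
             * was not negotiated and DRIVER_OK is already set. */
            return ioctl(fd, VHOST_VDPA_SET_VRING_ENABLE, &s);
    }

    /* Hypothetical helper: negotiate backend features, then enable all
     * virtqueues, refusing to do so after DRIVER_OK when the backend
     * does not support it. */
    static int setup_vqs(int fd, unsigned int nvqs, int driver_ok_set)
    {
            uint64_t features = 0;

            if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features))
                    return -1;
            if (ioctl(fd, VHOST_SET_BACKEND_FEATURES, &features))
                    return -1;

            if (!(features &
                  (1ULL << VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)) &&
                driver_ok_set)
                    return -1;  /* vqs must be enabled before DRIVER_OK */

            for (unsigned int i = 0; i < nvqs; i++)
                    if (enable_vq(fd, i, 1))
                            return -1;

            return 0;
    }

Without the check in this patch, skipping the feature test and calling 
VHOST_VDPA_SET_VRING_ENABLE after DRIVER_OK could silently misbehave 
(as it did with VDUSE); with it, userspace gets -EINVAL right away.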

Thanks,
Stefano

>
>For example ifcvf did:
>
>static void ifcvf_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev,
>                                    u16 qid, bool ready)
>{
>  struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
>
>        ifcvf_set_vq_ready(vf, qid, ready);
>}
>
>And it did:
>
>void ifcvf_set_vq_ready(struct ifcvf_hw *hw, u16 qid, bool ready)
>{
>        struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
>
>        vp_iowrite16(qid, &cfg->queue_select);
>        vp_iowrite16(ready, &cfg->queue_enable);
>}
>
>Though it didn't advertise VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK?
>
>Adding LingShan for more thought.
>
>Thanks
>
>>                 ops->set_vq_ready(vdpa, idx, s.num);
>>                 return 0;
>>         case VHOST_VDPA_GET_VRING_GROUP:
>> --
>> 2.43.0
>>
>

