Date:   Thu, 9 Jun 2022 15:19:22 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Parav Pandit <parav@...dia.com>
Cc:     "Dawar, Gautam" <gautam.dawar@....com>,
        netdev <netdev@...r.kernel.org>,
        "linux-net-drivers (AMD-Xilinx)" <linux-net-drivers@....com>,
        "Anand, Harpreet" <harpreet.anand@....com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Zhu Lingshan <lingshan.zhu@...el.com>,
        Xie Yongji <xieyongji@...edance.com>,
        Eli Cohen <elic@...dia.com>,
        Si-Wei Liu <si-wei.liu@...cle.com>,
        Stefano Garzarella <sgarzare@...hat.com>,
        Wan Jiabing <wanjiabing@...o.com>,
        Dan Carpenter <dan.carpenter@...cle.com>,
        virtualization <virtualization@...ts.linux-foundation.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] vdpa: allow vdpa dev_del management operation to return failure

On Wed, Jun 8, 2022 at 6:43 PM Parav Pandit <parav@...dia.com> wrote:
>
>
> > From: Dawar, Gautam <gautam.dawar@....com>
> > Sent: Wednesday, June 8, 2022 6:30 AM
> > To: Jason Wang <jasowang@...hat.com>
> > Cc: netdev <netdev@...r.kernel.org>; linux-net-drivers (AMD-Xilinx) <linux-
> > net-drivers@....com>; Anand, Harpreet <harpreet.anand@....com>;
> > Michael S. Tsirkin <mst@...hat.com>; Zhu Lingshan
> > <lingshan.zhu@...el.com>; Xie Yongji <xieyongji@...edance.com>; Eli
> > Cohen <elic@...dia.com>; Parav Pandit <parav@...dia.com>; Si-Wei Liu <si-
> > wei.liu@...cle.com>; Stefano Garzarella <sgarzare@...hat.com>; Wan
> > Jiabing <wanjiabing@...o.com>; Dan Carpenter
> > <dan.carpenter@...cle.com>; virtualization <virtualization@...ts.linux-
> > foundation.org>; linux-kernel <linux-kernel@...r.kernel.org>
> > Subject: RE: [PATCH] vdpa: allow vdpa dev_del management operation to
> > return failure
> >
> > [AMD Official Use Only - General]
> >
> > Hi Gautam:
> > [GD>>] Hi Jason,
> >
> > On Fri, Jun 3, 2022 at 6:34 PM Gautam Dawar <gautam.dawar@....com>
> > wrote:
> > >
> > > Currently, the vdpa_nl_cmd_dev_del_set_doit() implementation allows
> > > returning a value to depict the operation status but the return type
> > > of dev_del() callback is void. So, any error while deleting the vdpa
> > > device in the vdpa parent driver can't be returned to the management
> > > layer.
> >
> > I wonder under which condition we can hit an error in dev_del()?
> > [GD>>] In the AMD-Xilinx vDPA driver, on receiving a vdpa device deletion
> > request, I try to identify whether the vdpa device is in use by any virtio-net
> > driver (through any vdpa bus driver) by looking at the vdpa device status value.
> > If the vdpa device status is >= VIRTIO_CONFIG_S_DRIVER, -EBUSY is returned.
> > This is to avoid the side effects noted in
> > https://bugzilla.kernel.org/show_bug.cgi?id=213179 that are caused by deleting
> > the vdpa device while it is in use.
> > >
> The user should be able to delete the device at any time.
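
For reference, the kind of check described above (returning -EBUSY while a
driver still owns the device) could look roughly like the sketch below. This
assumes dev_del() is changed to return int as the patch proposes; the
example_* names are hypothetical, not the actual AMD-Xilinx driver code.

#include <linux/vdpa.h>
#include <linux/virtio_config.h>

/* Hypothetical driver-private helper that reads the device status field. */
static u8 example_vdpa_get_status(struct vdpa_device *dev);

/* Sketch of a dev_del() that refuses deletion while the device is in use. */
static int example_vdpa_dev_del(struct vdpa_mgmt_dev *mdev,
                                struct vdpa_device *dev)
{
        u8 status = example_vdpa_get_status(dev);

        /* A virtio driver has bound to the device; deleting it now would
         * trigger the side effects described in the bugzilla entry above.
         */
        if (status >= VIRTIO_CONFIG_S_DRIVER)
                return -EBUSY;

        _vdpa_unregister_device(dev);
        return 0;
}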

Supporting deletion at any time requires a poll event to user space so that
QEMU can release the vhost-vDPA device. This is how VFIO works. We probably
need to implement something like this.

But note that, in the worst case, userspace may not respond to this event,
so there's nothing more the kernel can do except wait.
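
A very rough sketch of what such a release request might look like on the
kernel side, loosely modeled on VFIO's device-request behavior. Nothing below
exists in the vhost-vDPA UAPI today; all names are made up.

#include <linux/poll.h>
#include <linux/wait.h>

/* Hypothetical: the removal path asks user space to close the vhost-vDPA fd
 * by waking a wait queue that the fd's poll() implementation watches.
 */
struct example_vhost_vdpa {
        wait_queue_head_t wait;         /* polled via the char-dev fd */
        bool release_requested;         /* set once deletion is pending */
};

static void example_request_release(struct example_vhost_vdpa *v)
{
        WRITE_ONCE(v->release_requested, true);
        /* QEMU sees the event on poll(), tears down and closes the fd. */
        wake_up_poll(&v->wait, EPOLLHUP);
}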

We need to consider something different. I once had an idea to couple
vhost-vDPA loosely with vDPA via SRCU/RCU. We might consider implementing
that.
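
A minimal sketch of that loose-coupling idea, purely illustrative and with
made-up names: vhost-vDPA would only dereference the parent device inside an
SRCU read-side section, so the deletion path can detach and synchronize
instead of blocking on user space.

#include <linux/rcupdate.h>
#include <linux/srcu.h>
#include <linux/vdpa.h>

DEFINE_STATIC_SRCU(example_vdpa_srcu);

struct example_vhost_vdpa {
        struct vdpa_device __rcu *vdpa; /* parent, may go away at any time */
};

static int example_forward_request(struct example_vhost_vdpa *v)
{
        struct vdpa_device *vdpa;
        int idx, ret = -ENODEV;

        idx = srcu_read_lock(&example_vdpa_srcu);
        vdpa = srcu_dereference(v->vdpa, &example_vdpa_srcu);
        if (vdpa)
                ret = 0;        /* ... forward the request to the parent ... */
        srcu_read_unlock(&example_vdpa_srcu, idx);
        return ret;
}

/* Deletion path: detach the parent and wait for readers; no dependency on
 * user space responding.
 */
static void example_detach_parent(struct example_vhost_vdpa *v)
{
        rcu_assign_pointer(v->vdpa, NULL);
        synchronize_srcu(&example_vdpa_srcu);
}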

> Upper layers that are unable to perform the teardown sequence should be fixed.

I think we probably don't need to bother with failing dev_del().
We can consider fixing or working around the waiting first.

Thanks
