Message-ID: <20211213053420-mutt-send-email-mst@kernel.org>
Date:   Mon, 13 Dec 2021 05:42:55 -0500
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     Si-Wei Liu <si-wei.liu@...cle.com>, Eli Cohen <elic@...dia.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        virtualization <virtualization@...ts.linux-foundation.org>,
        netdev <netdev@...r.kernel.org>
Subject: Re: vdpa legacy guest support (was Re: [PATCH] vdpa/mlx5:
 set_features should allow reset to zero)

On Mon, Dec 13, 2021 at 04:57:38PM +0800, Jason Wang wrote:
> On Mon, Dec 13, 2021 at 4:07 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Mon, Dec 13, 2021 at 11:02:39AM +0800, Jason Wang wrote:
> > > On Sun, Dec 12, 2021 at 5:26 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > > >
> > > > On Fri, Dec 10, 2021 at 05:44:15PM -0800, Si-Wei Liu wrote:
> > > > > Sorry for reviving this ancient thread. I was somewhat lost as to the
> > > > > conclusion it ended up with. I have the following questions:
> > > > >
> > > > > 1. legacy guest support: from the past conversations it doesn't seem that
> > > > > support will be dropped from the table entirely; is my understanding correct?
> > > > > Actually, we're interested in supporting virtio v0.95 guests for x86, which
> > > > > are covered by the spec at
> > > > > https://ozlabs.org/~rusty/virtio-spec/virtio-0.9.5.pdf. Though I'm not sure
> > > > > if there's a request/need to support older legacy virtio versions earlier
> > > > > than that.
> > > >
> > > > I personally feel it's less work to add in kernel than try to
> > > > work around it in userspace. Jason feels differently.
> > > > Maybe post the patches and this will prove to Jason it's not
> > > > too terrible?
> > >
> > > That's one way. Other than the config access before setting features,
> > > we need to deal with other stuff:
> > >
> > > 1) VIRTIO_F_ORDER_PLATFORM
> > > 2) there could be a parent device that only supports 1.0 devices
> > >
> > > And a lot of other stuff summarized in spec section 7.4, which doesn't look
> > > like an easy task. Various vDPA parent drivers were written under the
> > > assumption that only modern devices are supported.
> > >
> > > Thanks
> >
> > Limiting things to x86 will likely address most issues though, won't it?
> 
> For the ordering, yes. But does that mean we need to introduce a config
> option for the legacy logic?
> And we need to deal with, as you said in another thread, kicks before DRIVER_OK:
> 
> E.g. we had things like this:
> 
>         if ((status & VIRTIO_CONFIG_S_DRIVER_OK) &&
>             !(status_old & VIRTIO_CONFIG_S_DRIVER_OK)) {
>                 ret = ifcvf_request_irq(adapter);
>                 if (ret) {
> 
> Similar issues could be found in other parents.

The DRIVER_OK thing is mostly an issue for block, where it
expects to access the disk directly during probe.
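
For illustration only, here is a minimal sketch of how a parent driver might
tolerate a kick that arrives before DRIVER_OK instead of dropping it; every
name in it (my_vdpa_*, pending_kick, MY_VDPA_MAX_VQS) is hypothetical and not
taken from any in-tree driver.

/*
 * Hypothetical parent-driver fragment (sketch only): a legacy guest may
 * kick a virtqueue before writing DRIVER_OK, so record the kick and
 * replay it once the DRIVER_OK transition is observed.
 */
#include <linux/types.h>
#include <linux/virtio_config.h>

#define MY_VDPA_MAX_VQS 2       /* assumption: a simple two-queue device */

struct my_vdpa_dev {
        bool pending_kick[MY_VDPA_MAX_VQS];
        u8 status;
};

static void my_vdpa_hw_notify(struct my_vdpa_dev *dev, u16 idx)
{
        /* write the device doorbell for queue idx here */
}

static void my_vdpa_kick_vq(struct my_vdpa_dev *dev, u16 idx)
{
        if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
                /* Device not started yet: remember the kick instead of losing it. */
                dev->pending_kick[idx] = true;
                return;
        }
        my_vdpa_hw_notify(dev, idx);
}

static void my_vdpa_set_status(struct my_vdpa_dev *dev, u8 status)
{
        bool started = (status & VIRTIO_CONFIG_S_DRIVER_OK) &&
                       !(dev->status & VIRTIO_CONFIG_S_DRIVER_OK);
        u16 i;

        dev->status = status;
        if (!started)
                return;

        /* Replay any kicks that arrived before DRIVER_OK was set. */
        for (i = 0; i < MY_VDPA_MAX_VQS; i++) {
                if (dev->pending_kick[i]) {
                        dev->pending_kick[i] = false;
                        my_vdpa_hw_notify(dev, i);
                }
        }
}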

> We also need to consider whether we should encourage the vendor to
> implement the logic.
> 
> I think we can try and see how hard it is.
> 
> Thanks

Right. My point exactly.

> >
> > > >
> > > > > 2. supposing some form of legacy guest support needs to be there, how do we
> > > > > deal with the bogus assumption below in vdpa_get_config() in the short term?
> > > > > It looks like one intuitive fix is to move the vdpa_set_features call out
> > > > > of vdpa_get_config() and into vdpa_set_config().
> > > > >
> > > > >         /*
> > > > >          * Config accesses aren't supposed to trigger before features are set.
> > > > >          * If it does happen we assume a legacy guest.
> > > > >          */
> > > > >         if (!vdev->features_valid)
> > > > >                 vdpa_set_features(vdev, 0);
> > > > >         ops->get_config(vdev, offset, buf, len);
> > > > >
> > > > > I can post a patch to fix 2) if consensus has already been reached.
> > > > >
> > > > > Thanks,
> > > > > -Siwei
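
For illustration, a rough sketch of the change Si-Wei describes above: drop
the implicit vdpa_set_features(vdev, 0) from the config read path and keep
the legacy-guest assumption only on the write path. The vdpa_set_config()
helper below just mirrors his naming and may not exist in this form in the
tree; this is a sketch shaped after the quoted snippet, not a tested patch.

#include <linux/vdpa.h>

static inline void vdpa_get_config(struct vdpa_device *vdev,
                                   unsigned int offset, void *buf,
                                   unsigned int len)
{
        const struct vdpa_config_ops *ops = vdev->config;

        /* No implicit vdpa_set_features(vdev, 0) here any more. */
        ops->get_config(vdev, offset, buf, len);
}

static inline void vdpa_set_config(struct vdpa_device *vdev,
                                   unsigned int offset, const void *buf,
                                   unsigned int len)
{
        const struct vdpa_config_ops *ops = vdev->config;

        /*
         * Config writes aren't supposed to trigger before features are set.
         * If one does happen we assume a legacy guest.
         */
        if (!vdev->features_valid)
                vdpa_set_features(vdev, 0);
        ops->set_config(vdev, offset, buf, len);
}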
> > > >
> > > > I'm not sure how important it is to change that.
> > > > In any case it only affects transitional devices, right?
> > > > Legacy only should not care ...
> > > >
> > > >
> > > > > On 3/2/2021 2:53 AM, Jason Wang wrote:
> > > > > >
> > > > > > > On 2021/3/2 5:47 PM, Michael S. Tsirkin wrote:
> > > > > > > On Mon, Mar 01, 2021 at 11:56:50AM +0800, Jason Wang wrote:
> > > > > > > > On 2021/3/1 5:34 AM, Michael S. Tsirkin wrote:
> > > > > > > > > On Wed, Feb 24, 2021 at 10:24:41AM -0800, Si-Wei Liu wrote:
> > > > > > > > > > > Detecting it isn't enough though, we will need a new ioctl to notify
> > > > > > > > > > > the kernel that it's a legacy guest. Ugh :(
> > > > > > > > > > Well, although I think adding an ioctl is doable, may I know what the
> > > > > > > > > > use case there will be for the kernel to leverage such info directly?
> > > > > > > > > > Is there a case QEMU can't handle with dedicated ioctls later, if
> > > > > > > > > > differentiation (legacy vs. modern) is indeed needed?
> > > > > > > > > BTW a good API could be
> > > > > > > > >
> > > > > > > > > #define VHOST_SET_ENDIAN _IOW(VHOST_VIRTIO, ?, int)
> > > > > > > > > #define VHOST_GET_ENDIAN _IOW(VHOST_VIRTIO, ?, int)
> > > > > > > > >
> > > > > > > > > we did it per vring but maybe that was a mistake ...
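
For reference, the per-vring variants that "we did it per vring" refers to are
the existing ioctls in include/uapi/linux/vhost.h; the device-wide
VHOST_SET_ENDIAN/VHOST_GET_ENDIAN pair above is only a suggestion, with its
ioctl numbers deliberately left as "?".

/*
 * Existing per-vring endianness ioctls (include/uapi/linux/vhost.h).
 * Valid values are 1 (big-endian) or 0 (little-endian); the suggestion
 * above would be a single device-wide int-valued pair instead.
 */
#define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state)
#define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state)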
> > > > > > > >
> > > > > > > > Actually, I wonder whether it's a good time to simply not support
> > > > > > > > legacy drivers for vDPA. Consider:
> > > > > > > >
> > > > > > > > 1) Its definition is non-normative
> > > > > > > > 2) A lot of burden in code
> > > > > > > >
> > > > > > > > So qemu can still present the legacy device, since the config space and
> > > > > > > > other stuff presented by vhost-vDPA is not expected to be accessed by the
> > > > > > > > guest directly. Qemu can do the endian conversion when necessary in this
> > > > > > > > case?
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
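
As a toy illustration of the endian conversion Jason mentions (a sketch, not
actual QEMU code; the helper name is made up): a legacy big-endian guest
reading a 16-bit config field from a backend that stores it little-endian
would need the VMM to swap it on the way out.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch only: swap a 16-bit config field for a legacy big-endian guest
 * when the vhost-vDPA backend keeps it little-endian. A modern
 * (VIRTIO_F_VERSION_1) guest always uses little-endian, so no swap.
 */
static uint16_t legacy_config_read16(uint16_t le_value, bool guest_is_big_endian)
{
        if (!guest_is_big_endian)
                return le_value;        /* legacy little-endian guest: no swap */

        /* byte-swap for a legacy big-endian guest */
        return (uint16_t)((le_value >> 8) | (le_value << 8));
}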
> > > > > > > Overall I would be fine with this approach, but we need to avoid breaking
> > > > > > > working userspace: qemu releases with vdpa support are out there and
> > > > > > > seem to work for people. Any changes need to take that into account
> > > > > > > and document compatibility concerns.
> > > > > >
> > > > > >
> > > > > > Agree, let me check.
> > > > > >
> > > > > >
> > > > > > >   I note that any hardware
> > > > > > > implementation is already broken for legacy except on platforms with
> > > > > > > strong ordering which might be helpful in reducing the scope.
> > > > > >
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> >
