Date:   Mon, 13 Dec 2021 17:13:55 -0800
From:   Si-Wei Liu <si-wei.liu@...cle.com>
To:     Jason Wang <jasowang@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Eli Cohen <elic@...dia.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        virtualization <virtualization@...ts.linux-foundation.org>,
        netdev <netdev@...r.kernel.org>
Subject: Re: vdpa legacy guest support (was Re: [PATCH] vdpa/mlx5:
 set_features should allow reset to zero)



On 12/12/2021 7:02 PM, Jason Wang wrote:
> On Sun, Dec 12, 2021 at 5:26 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>> On Fri, Dec 10, 2021 at 05:44:15PM -0800, Si-Wei Liu wrote:
>>> Sorry for reviving this ancient thread. I was somewhat lost as to the
>>> conclusion it ended up with. I have the following questions:
>>>
>>> 1. legacy guest support: from the past conversations it doesn't seem the
>>> support will be completely dropped from the table, is my understanding
>>> correct? Actually, we're interested in supporting virtio v0.95 guests on
>>> x86, which is backed by the spec at
>>> https://ozlabs.org/~rusty/virtio-spec/virtio-0.9.5.pdf . Though I'm not sure
>>> if there's a request/need to support older legacy virtio versions beyond
>>> that.
>> I personally feel it's less work to add it in the kernel than to try to
>> work around it in userspace. Jason feels differently.
>> Maybe post the patches and this will prove to Jason it's not
>> too terrible?
> That's one way. Other than the config access before setting features,
> we need to deal with other things:
>
> 1) VIRTIO_F_ORDER_PLATFORM
> 2) there could be a parent device that only supports 1.0 devices
We do want to involve the vendor's support for a legacy (or transitional)
device datapath; otherwise it'd be too difficult to emulate/translate in
software/QEMU. The above two might not be an issue if the vendor claims
0.95 support for the virtqueue and ring layout, and limiting support to
x86 (LE with weak ordering) seems to simplify a lot of these
requirements. I don't think emulating a legacy device model on top of a
1.0 vDPA parent for the dataplane would be a good idea, either.

>
> And there's a lot of other stuff, summarized in spec section 7.4, which
> doesn't seem an easy task. Various vDPA parent drivers were written under
> the assumption that only modern devices are supported.
If some of these vDPA vendors do provide 0.95 support, especially a
datapath and ring layout that satisfy the transitional device model
defined in section 7.4, I guess we can scope the initial support to those
vendor drivers and x86 only. Let me know if I missed anything else.

Thanks,
-Siwei


>
> Thanks
>
>>> 2. Supposing some form of legacy guest support needs to be there, how do we
>>> deal with the bogus assumption below in vdpa_get_config() in the short term?
>>> It looks like one of the intuitive fixes is to move the vdpa_set_features()
>>> call out of vdpa_get_config() and into vdpa_set_config(); a rough sketch
>>> follows the snippet below.
>>>
>>>          /*
>>>           * Config accesses aren't supposed to trigger before features are set.
>>>           * If it does happen we assume a legacy guest.
>>>           */
>>>          if (!vdev->features_valid)
>>>                  vdpa_set_features(vdev, 0);
>>>          ops->get_config(vdev, offset, buf, len);
>>>
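For illustration, a rough sketch of what that move could look like, reusing
the names from the snippet above (names and signatures approximate, not the
actual upstream code):

        /* Config read path: no implicit feature negotiation any more. */
        static void vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
                                    void *buf, unsigned int len)
        {
                const struct vdpa_config_ops *ops = vdev->config;

                ops->get_config(vdev, offset, buf, len);
        }

        /* Config write path: a write before features are set is taken as
         * the hint that a legacy guest is driving the device. */
        static void vdpa_set_config(struct vdpa_device *vdev, unsigned int offset,
                                    const void *buf, unsigned int len)
        {
                const struct vdpa_config_ops *ops = vdev->config;

                if (!vdev->features_valid)
                        vdpa_set_features(vdev, 0);
                ops->set_config(vdev, offset, buf, len);
        }
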
>>> I can post a patch to fix 2) if consensus has already been reached.
>>>
>>> Thanks,
>>> -Siwei
>> I'm not sure how important it is to change that.
>> In any case it only affects transitional devices, right?
>> Legacy only should not care ...
>>
>>
>>> On 3/2/2021 2:53 AM, Jason Wang wrote:
>>>> On 2021/3/2 5:47 PM, Michael S. Tsirkin wrote:
>>>>> On Mon, Mar 01, 2021 at 11:56:50AM +0800, Jason Wang wrote:
>>>>>> On 2021/3/1 5:34 AM, Michael S. Tsirkin wrote:
>>>>>>> On Wed, Feb 24, 2021 at 10:24:41AM -0800, Si-Wei Liu wrote:
>>>>>>>>> Detecting it isn't enough though, we will need a new ioctl to notify
>>>>>>>>> the kernel that it's a legacy guest. Ugh :(
>>>>>>>> Well, although I think adding an ioctl is doable, may I know what the
>>>>>>>> use case will be for the kernel to leverage such info directly? Is
>>>>>>>> there a case QEMU can't handle with dedicated ioctls later, if
>>>>>>>> differentiation (legacy vs. modern) is indeed needed?
>>>>>>> BTW a good API could be
>>>>>>>
>>>>>>> #define VHOST_SET_ENDIAN _IOW(VHOST_VIRTIO, ?, int)
>>>>>>> #define VHOST_GET_ENDIAN _IOW(VHOST_VIRTIO, ?, int)
>>>>>>>
>>>>>>> we did it per vring but maybe that was a mistake ...
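The per-vring form referred to here is presumably the existing
VHOST_SET_VRING_ENDIAN ioctl; a minimal userspace sketch of its use (the
device-wide VHOST_SET_ENDIAN/VHOST_GET_ENDIAN above is only a proposal,
hence the '?' ioctl numbers):

        #include <sys/ioctl.h>
        #include <linux/vhost.h>

        /* Force one virtqueue of a vhost device to the legacy little-endian
         * ring layout via the existing per-vring ioctl (only available when
         * the kernel is built with CONFIG_VHOST_CROSS_ENDIAN_LEGACY). */
        static int set_vring_little_endian(int vhost_fd, unsigned int vring_idx)
        {
                struct vhost_vring_state state = {
                        .index = vring_idx,
                        .num   = VHOST_VRING_LITTLE_ENDIAN,
                };

                return ioctl(vhost_fd, VHOST_SET_VRING_ENDIAN, &state);
        }
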
>>>>>> Actually, I wonder whether it's a good time to just not support legacy
>>>>>> drivers for vDPA. Consider:
>>>>>>
>>>>>> 1) Its definition is non-normative
>>>>>> 2) A lot of burden on the code
>>>>>>
>>>>>> So QEMU can still present the legacy device, since the config space or
>>>>>> other state presented by vhost-vDPA is not expected to be accessed by
>>>>>> the guest directly. QEMU can do the endian conversion when necessary in
>>>>>> this case?
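As a rough illustration of that conversion (not QEMU's actual code; the
helper name and guest_is_big_endian flag are made up): a 1.0 vDPA device
keeps its config fields little-endian, while a legacy guest expects
guest-native byte order, so the VMM would byte-swap on access for a
big-endian guest:

        #include <endian.h>
        #include <stdbool.h>
        #include <stdint.h>

        /* Convert a 16-bit config field read from a modern (little-endian)
         * device into what a legacy guest expects: guest-native byte order. */
        static uint16_t legacy_config_u16(uint16_t le_val, bool guest_is_big_endian)
        {
                uint16_t host_val = le16toh(le_val);   /* device side is LE */

                return guest_is_big_endian ? htobe16(host_val) : htole16(host_val);
        }
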
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>> Overall I would be fine with this approach, but we need to avoid breaking
>>>>> working userspace; QEMU releases with vDPA support are out there and
>>>>> seem to work for people. Any changes need to take that into account
>>>>> and document compatibility concerns.
>>>>
>>>> Agree, let me check.
>>>>
>>>>
>>>>> I note that any hardware implementation is already broken for legacy
>>>>> except on platforms with strong ordering, which might be helpful in
>>>>> reducing the scope.
>>>>
>>>> Yes.
>>>>
>>>> Thanks
>>>>
>>>>
>>>>>
