Message-ID: <e3950e19-b815-1549-72b0-12b628fa2bc1@redhat.com>
Date: Wed, 18 Sep 2019 14:15:54 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Tian, Kevin" <kevin.tian@...el.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"intel-gfx@...ts.freedesktop.org" <intel-gfx@...ts.freedesktop.org>,
"intel-gvt-dev@...ts.freedesktop.org"
<intel-gvt-dev@...ts.freedesktop.org>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>
Cc: "sebott@...ux.ibm.com" <sebott@...ux.ibm.com>,
"mst@...hat.com" <mst@...hat.com>,
"airlied@...ux.ie" <airlied@...ux.ie>,
"joonas.lahtinen@...ux.intel.com" <joonas.lahtinen@...ux.intel.com>,
"heiko.carstens@...ibm.com" <heiko.carstens@...ibm.com>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"rob.miller@...adcom.com" <rob.miller@...adcom.com>,
"pasic@...ux.ibm.com" <pasic@...ux.ibm.com>,
"borntraeger@...ibm.com" <borntraeger@...ibm.com>,
"Wang, Zhi A" <zhi.a.wang@...el.com>,
"farman@...ux.ibm.com" <farman@...ux.ibm.com>,
"idos@...lanox.com" <idos@...lanox.com>,
"gor@...ux.ibm.com" <gor@...ux.ibm.com>,
"jani.nikula@...ux.intel.com" <jani.nikula@...ux.intel.com>,
"Wang, Xiao W" <xiao.w.wang@...el.com>,
"freude@...ux.ibm.com" <freude@...ux.ibm.com>,
"zhenyuw@...ux.intel.com" <zhenyuw@...ux.intel.com>,
"Vivi, Rodrigo" <rodrigo.vivi@...el.com>,
"Zhu, Lingshan" <lingshan.zhu@...el.com>,
"akrowiak@...ux.ibm.com" <akrowiak@...ux.ibm.com>,
"pmorel@...ux.ibm.com" <pmorel@...ux.ibm.com>,
"cohuck@...hat.com" <cohuck@...hat.com>,
"oberpar@...ux.ibm.com" <oberpar@...ux.ibm.com>,
"maxime.coquelin@...hat.com" <maxime.coquelin@...hat.com>,
"daniel@...ll.ch" <daniel@...ll.ch>,
"Wang, Zhihong" <zhihong.wang@...el.com>
Subject: Re: [RFC PATCH 2/2] mdev: introduce device specific ops
On 2019/9/18 10:57 AM, Tian, Kevin wrote:
>> From: Jason Wang [mailto:jasowang@...hat.com]
>> Sent: Tuesday, September 17, 2019 6:17 PM
>>
>> On 2019/9/17 4:09 PM, Tian, Kevin wrote:
>>>> From: Jason Wang
>>>> Sent: Thursday, September 12, 2019 5:40 PM
>>>>
>>>> Currently, except for create and remove, the rest of the fields of
>>>> mdev_parent_ops are designed only for the vfio-mdev driver and may
>>>> not help a kernel mdev driver. So, following the device id support
>>>> in the previous patch, this patch introduces a pointer to device
>>>> specific ops (e.g. vfio ops). This allows future drivers like
>>>> virtio-mdev to implement their own device specific ops.
>>> Can you give an example about what ops might be required to support
>>> kernel mdev driver? I know you posted a link earlier, but putting a small
>>> example here can save time and avoid inconsistent understanding. Then
>>> it will help to judge whether the proposed split makes sense or whether
>>> there is a possibility of redefining the callbacks to meet both
>>> requirements.
>>> imo those callbacks fulfill some basic requirements when mediating
>>> a device...
>> I put it in the cover letter.
>>
>> The link is https://lkml.org/lkml/2019/9/10/135 which abuses the current
>> VFIO based mdev parent ops.
>>
>> Thanks
> So the main problem is the handling of userspace pointers vs.
> kernel space pointers. You still implement read/write/ioctl
> callbacks, which are a subset of the current parent_ops definition.
> In that regard, is it better to introduce some helper to handle
> the pointer difference in mdev core, while still keeping the
> same set of parent ops (in whatever form suitable for both)?
Pointers are one of the issues. Also, read/write/ioctl are designed as a
userspace API, not a kernel one. Technically we can use them for the
kernel, but it would not be as simple and straightforward as a set of
device specific callback functions. The link above is just an example;
e.g. we can simply pass the vring address through a dedicated API
instead of mandating an offset into a file.
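
To make this concrete, below is a rough sketch of how such a split could
look. It is only an illustration under my assumptions: the structure and
callback names on both the virtio and vfio side (virtio_mdev_device_ops,
set_vring_address, vfio_mdev_device_ops, ...) are made up here for the
example and are not the actual interface proposed in the RFC.

#include <linux/module.h>
#include <linux/types.h>
#include <linux/kobject.h>

struct mdev_device;     /* from <linux/mdev.h> */

/*
 * Hypothetical class specific ops for a kernel (virtio-mdev style)
 * driver: the callbacks take kernel values directly, so there is no
 * need to funnel everything through read/write/ioctl with userspace
 * pointer semantics.
 */
struct virtio_mdev_device_ops {
        int (*set_vring_address)(struct mdev_device *mdev, u16 index,
                                 u64 desc, u64 avail, u64 used);
        int (*set_vring_num)(struct mdev_device *mdev, u16 index, u16 num);
        int (*kick_vq)(struct mdev_device *mdev, u16 index);
        u8 (*get_status)(struct mdev_device *mdev);
        void (*set_status)(struct mdev_device *mdev, u8 status);
};

/*
 * Sketch of the split: mdev_parent_ops keeps only the common
 * create/remove, and the device id from patch 1 tells the mdev core
 * which class specific ops pointer is valid.
 */
struct mdev_parent_ops {
        struct module *owner;
        int (*create)(struct kobject *kobj, struct mdev_device *mdev);
        int (*remove)(struct mdev_device *mdev);
        union {
                const struct vfio_mdev_device_ops *vfio_ops;
                const struct virtio_mdev_device_ops *virtio_ops;
        };
};

With something like this, a virtio-mdev parent fills in virtio_ops and
the in-kernel bus driver calls them directly, while vfio-mdev keeps its
existing userspace-oriented read/write/ioctl style callbacks behind its
own ops pointer.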
Thanks
>