Message-ID: <993841ed-942e-c90b-8016-8e7dc76bf13a@redhat.com>
Date: Tue, 17 Sep 2019 11:32:03 +0800
From: Jason Wang <jasowang@...hat.com>
To: Tiwei Bie <tiwei.bie@...el.com>, mst@...hat.com,
alex.williamson@...hat.com, maxime.coquelin@...hat.com
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
dan.daly@...el.com, cunming.liang@...el.com,
zhihong.wang@...el.com, lingshan.zhu@...el.com
Subject: Re: [RFC v4 0/3] vhost: introduce mdev based hardware backend
On 2019/9/17 9:02 AM, Tiwei Bie wrote:
> This RFC is to demonstrate the ideas below:
>
> a) Build vhost-mdev on top of the same abstraction defined in
> the virtio-mdev series [1];
>
> b) Introduce /dev/vhost-mdev to do vhost ioctls and support
> setting mdev device as backend;
>
> Now the userspace API looks like this:
>
> - Userspace generates a compatible mdev device;
>
> - Userspace opens this mdev device with VFIO API (including
> doing IOMMU programming for this mdev device with VFIO's
> container/group based interface);
>
> - Userspace opens /dev/vhost-mdev and gets vhost fd;
>
> - Userspace uses vhost ioctls to setup vhost (userspace should
> do VHOST_MDEV_SET_BACKEND ioctl with VFIO group fd and device
> fd first before doing other vhost ioctls);
>
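Just to make sure I follow the flow above, a rough userspace sketch of
it would look something like this (error handling omitted; the VFIO
ioctls are the standard ones, while the group number, the UUID and the
VHOST_MDEV_SET_BACKEND argument layout below are only my guesses at
what this series defines):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/vhost.h>        /* with this series' uapi additions */

int main(void)
{
        int container, group, device, vhost;
        struct vfio_group_status status = { .argsz = sizeof(status) };

        /* Standard VFIO container/group/IOMMU setup for the mdev device. */
        container = open("/dev/vfio/vfio", O_RDWR);
        group = open("/dev/vfio/42", O_RDWR);   /* group number is an example */
        ioctl(group, VFIO_GROUP_GET_STATUS, &status);
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
        /* ... VFIO_IOMMU_MAP_DMA calls to program the IOMMU ... */

        /* Get the mdev device fd from the group (the name is the mdev UUID). */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                       "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001");

        /* Open /dev/vhost-mdev and attach the VFIO-backed mdev as backend. */
        vhost = open("/dev/vhost-mdev", O_RDWR);
        struct {
                int group_fd;   /* field names/layout assumed */
                int device_fd;
        } backend = { .group_fd = group, .device_fd = device };
        ioctl(vhost, VHOST_MDEV_SET_BACKEND, &backend);

        /* ... followed by the usual vhost ioctls (features, vrings, ...). */
        return 0;
}
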
> Only compile test has been done for this series for now.
I've been thinking hard about the architecture:
1) Create a vhost char device, pass the vfio mdev device fd to it as a
backend, and translate vhost-mdev ioctls into the virtio-mdev transport
(e.g. read/write). DMA is done through the VFIO DMA mapping on the
container the device is attached to.
We have two more choices:
2) Use vfio-mdev but do not create a vhost-mdev device; instead,
implement the vhost ioctls in vfio_device_ops and translate them into
the virtio-mdev transport, or just pass the ioctls through to the
parent (rough sketch below, after option 3).
3) Don't use vfio-mdev at all; create a new vhost-mdev driver which,
during probe, still tries to add the device to a VFIO group but talks
to the parent through device-specific ops.
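To make 2) more concrete, what I have in mind is roughly the sketch
below, written against the current vfio_device_ops. The parent-side
ops structure and its members here are just placeholders, not the
actual virtio-mdev API:

#include <linux/vfio.h>
#include <linux/mdev.h>
#include <linux/uaccess.h>
#include <uapi/linux/vhost.h>

/* Placeholder for whatever ops the vDPA parent ends up exposing. */
struct vhost_parent_ops {
        u64 (*get_features)(struct mdev_device *mdev);
        int (*set_features)(struct mdev_device *mdev, u64 features);
};

struct vhost_vfio_mdev {
        struct mdev_device *mdev;
        const struct vhost_parent_ops *ops;
};

static long vhost_vfio_mdev_ioctl(void *device_data, unsigned int cmd,
                                  unsigned long arg)
{
        struct vhost_vfio_mdev *v = device_data;
        u64 features;

        switch (cmd) {
        case VHOST_GET_FEATURES:
                features = v->ops->get_features(v->mdev);
                if (copy_to_user((void __user *)arg, &features,
                                 sizeof(features)))
                        return -EFAULT;
                return 0;
        case VHOST_SET_FEATURES:
                if (copy_from_user(&features, (void __user *)arg,
                                   sizeof(features)))
                        return -EFAULT;
                return v->ops->set_features(v->mdev, features);
        /* ... other vhost ioctls, or pass unknown ones to the parent ... */
        default:
                return -ENOTTY;
        }
}

static const struct vfio_device_ops vhost_vfio_mdev_ops = {
        .name  = "vhost-vfio-mdev",
        .ioctl = vhost_vfio_mdev_ioctl,
        /* .open/.release/.read/.write/.mmap omitted in this sketch */
};
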
So I have some questions:
1) Compared to method 2, what's the advantage of creating a new vhost
char device? I guess it's for keeping API compatibility?
2) For method 2, is there any easy way for a user/admin to distinguish,
e.g., a vfio-mdev used for vhost from an ordinary vfio-mdev? I saw you
introduced an ops matching helper, but it's not friendly to management.
3) A drawback of 1) and 2) is that they must follow vfio_device_ops,
which assumes the parameters come from userspace; this prevents
supporting kernel virtio drivers.
4) Hence the idea of method 3: since it registers a new vhost-mdev
driver, we can use device-specific ops instead of the VFIO ones, and
then have a common API between the vDPA parent and the
vhost-mdev/virtio-mdev drivers (rough skeleton below).
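And for 3)/4), the skeleton I'm thinking of is roughly the following,
where the same ops table could also be used by a virtio-mdev driver
for kernel virtio. Every name below is a placeholder rather than an
existing API:

#include <linux/module.h>
#include <linux/mdev.h>

/*
 * Placeholder for the common ops between the vDPA parent and the
 * vhost-mdev/virtio-mdev drivers.
 */
struct vdpa_parent_ops {
        u64  (*get_features)(struct device *dev);
        void (*set_status)(struct device *dev, u8 status);
        /* vq setup, config access, ... */
};

static int vhost_mdev_probe(struct device *dev)
{
        /*
         * Look up the parent's vdpa_parent_ops (however that ends up
         * being exposed), create the vhost char device for this mdev,
         * and, if we keep VFIO for DMA, add the device to its VFIO
         * group here as well.
         */
        return 0;
}

static void vhost_mdev_remove(struct device *dev)
{
        /* Destroy the vhost char device, detach from VFIO, etc. */
}

static struct mdev_driver vhost_mdev_driver = {
        .name   = "vhost_mdev",
        .probe  = vhost_mdev_probe,
        .remove = vhost_mdev_remove,
};

static int __init vhost_mdev_init(void)
{
        return mdev_register_driver(&vhost_mdev_driver, THIS_MODULE);
}
module_init(vhost_mdev_init);

static void __exit vhost_mdev_exit(void)
{
        mdev_unregister_driver(&vhost_mdev_driver);
}
module_exit(vhost_mdev_exit);

MODULE_LICENSE("GPL");
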
What are your thoughts?
Thanks
>
> RFCv3: https://patchwork.kernel.org/patch/11117785/
>
> [1] https://lkml.org/lkml/2019/9/10/135
>
> Tiwei Bie (3):
> vfio: support getting vfio device from device fd
> vfio: support checking vfio driver by device ops
> vhost: introduce mdev based hardware backend
>
> drivers/vfio/mdev/vfio_mdev.c | 3 +-
> drivers/vfio/vfio.c | 32 +++
> drivers/vhost/Kconfig | 9 +
> drivers/vhost/Makefile | 3 +
> drivers/vhost/mdev.c | 462 +++++++++++++++++++++++++++++++
> drivers/vhost/vhost.c | 39 ++-
> drivers/vhost/vhost.h | 6 +
> include/linux/vfio.h | 11 +
> include/uapi/linux/vhost.h | 10 +
> include/uapi/linux/vhost_types.h | 5 +
> 10 files changed, 573 insertions(+), 7 deletions(-)
> create mode 100644 drivers/vhost/mdev.c
>