Message-ID: <20210125180440.GR4147@nvidia.com>
Date: Mon, 25 Jan 2021 14:04:40 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Cornelia Huck <cohuck@...hat.com>
CC: Alex Williamson <alex.williamson@...hat.com>,
Max Gurtovoy <mgurtovoy@...dia.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <liranl@...dia.com>,
<oren@...dia.com>, <tzahio@...dia.com>, <leonro@...dia.com>,
<yarong@...dia.com>, <aviadye@...dia.com>, <shahafs@...dia.com>,
<artemp@...dia.com>, <kwankhede@...dia.com>, <ACurrid@...dia.com>,
<gmataev@...dia.com>, <cjia@...dia.com>
Subject: Re: [PATCH RFC v1 0/3] Introduce vfio-pci-core subsystem
On Mon, Jan 25, 2021 at 05:20:35PM +0100, Cornelia Huck wrote:
> I think you cut out an important part of Alex' comment, so let me
> repost it here:
Yes, I've already responded to this.
> I'm missing the bigger picture of how this api is supposed to work out,
> a driver with a lot of TODOs does not help much to figure out whether
> this split makes sense and is potentially useful for a number of use
> cases
The change to vfio-pci is essentially complete, ignoring some
additional cleanup. I'm not sure seeing the details of how some mlx5
driver uses it will be substantially more informative in deciding
whether it is useful for S390 or Intel.
As far as the API goes, it is clear to see:
> +/* Exported functions */
> +struct vfio_pci_device *vfio_create_pci_device(struct pci_dev *pdev,
> + const struct vfio_device_ops *vfio_pci_ops, void *dd_data);
> +void vfio_destroy_pci_device(struct pci_dev *pdev);
Called from probe/remove of the consuming driver
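To make that concrete, a consuming driver's probe/remove would look
roughly like this (a sketch only; the new_driver_* names and the
ERR_PTR return convention are my assumptions, not something the
patch defines):

static int new_driver_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	struct vfio_pci_device *vdev;

	/* new_driver_pci_ops is a vfio_device_ops, like the
	 * example further down; NULL dd_data for simplicity */
	vdev = vfio_create_pci_device(pdev, &new_driver_pci_ops, NULL);
	if (IS_ERR(vdev))
		return PTR_ERR(vdev);
	return 0;
}

static void new_driver_remove(struct pci_dev *pdev)
{
	vfio_destroy_pci_device(pdev);
}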
> +int vfio_pci_core_open(void *device_data);
> +void vfio_pci_core_release(void *device_data);
> +long vfio_pci_core_ioctl(void *device_data, unsigned int cmd,
> + unsigned long arg);
> +ssize_t vfio_pci_core_read(void *device_data, char __user *buf, size_t count,
> + loff_t *ppos);
> +ssize_t vfio_pci_core_write(void *device_data, const char __user *buf,
> + size_t count, loff_t *ppos);
> +int vfio_pci_core_mmap(void *device_data, struct vm_area_struct *vma);
> +void vfio_pci_core_request(void *device_data, unsigned int count);
> +int vfio_pci_core_match(void *device_data, char *buf);
Called from vfio_device_ops; each has the existing, well-defined API
of the matching op.
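ie a driver that changes nothing would just forward every op straight
to the core (sketch; the op names are the pre-existing
vfio_device_ops callbacks, "passthrough" is a made up name):

/* Pure passthrough: identical behaviour to today's vfio-pci */
static const struct vfio_device_ops passthrough_pci_ops = {
	.name		= "passthrough-pci",
	.open		= vfio_pci_core_open,
	.release	= vfio_pci_core_release,
	.ioctl		= vfio_pci_core_ioctl,
	.read		= vfio_pci_core_read,
	.write		= vfio_pci_core_write,
	.mmap		= vfio_pci_core_mmap,
	.request	= vfio_pci_core_request,
	.match		= vfio_pci_core_match,
};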
> +int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn);
> +pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev,
> + pci_channel_state_t state);
> +
Callbacks from the PCI core, API defined by the PCI subsystem.
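They slot into the consuming driver's struct pci_driver unchanged,
roughly (again a sketch, reusing the hypothetical new_driver names
from above, with a made up id_table):

static const struct pci_error_handlers new_driver_err_handlers = {
	.error_detected	 = vfio_pci_core_aer_err_detected,
};

static struct pci_driver new_driver_pci_driver = {
	.name		 = "new-driver",
	.id_table	 = new_driver_pci_table,
	.probe		 = new_driver_probe,
	.remove		 = new_driver_remove,
	.sriov_configure = vfio_pci_core_sriov_configure,
	.err_handler	 = &new_driver_err_handlers,
};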
Notice there is no major new API exposed from vfio-pci, nor are
vfio-pci internals any more exposed than they used to be.
Except for create/destroy, every single newly exported function was
already available via a function pointer someplace else in the
API. This is a key point, because it is very much NOT like the May
series.
Because the new driver sits in front of vfio-pci, owning the
vfio_device_ops, it introduces nearly nothing new. The May series put
the new driver after vfio-pci as some internalized sub-driver and
exposed a whole new API, with wrappers and callbacks to go along with
it.
For instance, if a new driver wants to implement some new thing under
ioctl, like migration, then it would do:

static long new_driver_ioctl(void *device_data, unsigned int cmd,
			     unsigned long arg)
{
	switch (cmd) {
	case NEW_THING:
		return new_driver_thing();
	default:
		return vfio_pci_core_ioctl(device_data, cmd, arg);
	}
}

static const struct vfio_device_ops new_driver_pci_ops = {
	[...]
	.ioctl = new_driver_ioctl,
};
Simple enough. If you understand the above, then you also understand
what direction the mlx5 driver will go in.
This is also why it is clearly useful for a wide range of cases, as a
driver can use as much or as little of the vfio-pci-core ops as it
wants. The driver doesn't get to reach into vfio-pci, but it can sit
before it and intercept the entire uAPI. That is very powerful.
> or whether mdev (even with its different lifecycle model) or a
I think it is appropriate to think of mdev as only the special
lifecycle model; it doesn't have any other functionality.
mdev's lifecycle model does nothing to solve the need to libraryize
vfio-pci.
> different vfio bus driver might be a better fit for the more
What is a "vfio bus driver"? Why would we need a bus here?
> involved cases. (For example, can s390 ISM fit here?)
I don't know what is special about ISM. What TODO do you think will
help answer that question?
Jason