Message-ID: <20210126004522.GD4147@nvidia.com>
Date: Mon, 25 Jan 2021 20:45:22 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Alex Williamson <alex.williamson@...hat.com>
CC: Cornelia Huck <cohuck@...hat.com>,
Max Gurtovoy <mgurtovoy@...dia.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <liranl@...dia.com>,
<oren@...dia.com>, <tzahio@...dia.com>, <leonro@...dia.com>,
<yarong@...dia.com>, <aviadye@...dia.com>, <shahafs@...dia.com>,
<artemp@...dia.com>, <kwankhede@...dia.com>, <ACurrid@...dia.com>,
<gmataev@...dia.com>, <cjia@...dia.com>
Subject: Re: [PATCH RFC v1 0/3] Introduce vfio-pci-core subsystem
On Mon, Jan 25, 2021 at 04:31:51PM -0700, Alex Williamson wrote:
> We're supposed to be enlightened by a vendor driver that does nothing
> more than pass the opaque device_data through to the core functions,
> but in reality this is exactly the point of concern above. At a
> minimum that vendor driver needs to look at the vdev to get the
> pdev,
The end driver already has the pdev; the RFC doesn't go into those
bits enough, it is a good comment.
The dd_data passed to vfio_create_pci_device() will be retrieved
from the ops to get back to the end driver's data. This can cleanly
include everything: the VF pci_device, PF pci_device, mlx5_core
pointer, vfio_device and vfio_pci_device.
This is why the example passes in the mvadev:
+ vdev = vfio_create_pci_device(pdev, &mlx5_vfio_pci_ops, mvadev);
The mvadev has the PF, VF, and mlx5 core driver pointer.
Getting that back out during the ops is enough to do what the mlx5
driver needs to do, which is to relay migration-related IOCTLs to the PF
function via the mlx5_core driver so the device can execute them on
behalf of the VF.
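To make that concrete, here is a rough sketch of what one of the mlx5
ops could look like. Note vfio_pci_vendor_data(), is_migration_cmd(),
mlx5_vfio_pci_migration_ioctl() and vfio_pci_core_ioctl() are all
placeholder names - the RFC doesn't spell out exactly how the dd_data
is handed back to the op, only that it is:

/* Illustrative only - the container the mlx5 driver passes as dd_data */
struct mlx5_vfio_pci_adev {
	struct pci_dev *vf_pdev;	/* the VF being assigned */
	struct pci_dev *pf_pdev;	/* its parent PF */
	struct mlx5_core_dev *mdev;	/* mlx5_core handle on the PF */
};

static long mlx5_vfio_pci_ioctl(struct vfio_pci_device *vdev,
				unsigned int cmd, unsigned long arg)
{
	/* Recover the dd_data given to vfio_create_pci_device() */
	struct mlx5_vfio_pci_adev *mvadev = vfio_pci_vendor_data(vdev);

	/* Migration commands are relayed to the PF through mlx5_core
	 * so the device executes them on behalf of the VF.
	 */
	if (is_migration_cmd(cmd))
		return mlx5_vfio_pci_migration_ioctl(mvadev->mdev, cmd, arg);

	/* Everything else goes to vfio-pci-core unchanged */
	return vfio_pci_core_ioctl(vdev, cmd, arg);
}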
> but then what else does it look at, consume, or modify. Now we have
> vendor drivers misusing the core because it's not clear which fields
> are private and how public fields can be used safely,
The kernel has never followed rigid rules for data isolation; it is
normal to have whole private structs exposed in headers so that
container_of can be used to properly compose data structures.
Look at struct device, for instance. Most of that is private to the
driver core.
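e.g. to_pci_dev() is just a container_of back to the wrapping pci_dev,
and a vendor driver can compose with vfio_pci_device the same way. The
mlx5 struct below is purely illustrative - whether the vendor driver
embeds the core struct or the core hands back a pointer is exactly the
kind of detail to settle in review:

/* What the core kernel does today: */
#define to_pci_dev(n) container_of(n, struct pci_dev, dev)

/* The same composition for a vendor driver on top of vfio-pci-core: */
struct mlx5_vfio_pci_device {
	struct vfio_pci_device core;	/* owned by vfio-pci-core */
	struct mlx5_core_dev *mdev;	/* vendor-private state */
};

static inline struct mlx5_vfio_pci_device *
to_mlx5_vdev(struct vfio_pci_device *core)
{
	return container_of(core, struct mlx5_vfio_pci_device, core);
}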
A few 'private to vfio-pci-core' comments would help; it is good
feedback to make that clearer.
> extensions potentially break vendor drivers, etc. We're only even hand
> waving that existing device specific support could be farmed out to new
> device specific drivers without even going to the effort to prove that.
This is an RFC, not a complete patch series. The RFC is to get feedback
on the general design before everyone commits a lot of resources and
positions get dug in.
Do you really think the existing device-specific support would be a
problem to lift? It already looks pretty clean with the
vfio_pci_regops and looks easy enough to lift to the parent.
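As a sketch of what "lifting" means: the igd OpRegion support already
goes through vfio_pci_register_dev_region() and a vfio_pci_regops with
a .rw callback, so the vendor driver would make the same registration
from its own init hook instead of from vfio_pci_enable(). Assuming the
core exports that helper unchanged, and with the rw body elided:

static size_t igd_opregion_rw(struct vfio_pci_device *vdev,
			      char __user *buf, size_t count,
			      loff_t *ppos, bool iswrite)
{
	/* same body as today's vfio_pci_igd_rw(), elided here */
	return -EINVAL;
}

static const struct vfio_pci_regops igd_regops = {
	.rw = igd_opregion_rw,
};

static int igd_vfio_pci_init(struct vfio_pci_device *vdev, void *base,
			     size_t size)
{
	/* the registration vfio_pci core makes today, now made from
	 * the igd vendor driver's init path
	 */
	return vfio_pci_register_dev_region(vdev,
			PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
			VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
			&igd_regops, size, VFIO_REGION_INFO_FLAG_READ, base);
}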
> So far the TODOs rather mask the dirty little secrets of the
> extension rather than showing how a vendor derived driver needs to
> root around in struct vfio_pci_device to do something useful, so
> probably porting actual device specific support rather than further
> hand waving would be more helpful.
It would be helpful to get actual feedback on the high level design -
something like this was already tried in May and didn't go anywhere -
are you surprised that we are reluctant to commit a lot of resources
to doing a complete job just to have it go nowhere again?
Thanks,
Jason