Message-ID: <20200115195959.28f33078@x1.home>
Date: Wed, 15 Jan 2020 19:59:59 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Mika Penttilä <mika.penttila@...tfour.com>
Cc: Yan Zhao <yan.y.zhao@...el.com>,
"zhenyuw@...ux.intel.com" <zhenyuw@...ux.intel.com>,
"intel-gvt-dev@...ts.freedesktop.org"
<intel-gvt-dev@...ts.freedesktop.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"peterx@...hat.com" <peterx@...hat.com>
Subject: Re: [PATCH v2 1/2] vfio: introduce vfio_dma_rw to read/write a
range of IOVAs

On Thu, 16 Jan 2020 02:30:52 +0000
Mika Penttilä <mika.penttila@...tfour.com> wrote:
> On 15.1.2020 22.06, Alex Williamson wrote:
> > On Tue, 14 Jan 2020 22:53:03 -0500
> > Yan Zhao <yan.y.zhao@...el.com> wrote:
> >
> >> vfio_dma_rw will read/write a range of user space memory pointed to by
> >> IOVA into/from a kernel buffer without pinning the user space memory.
> >>
> >> TODO: mark the IOVAs to user space memory dirty if they are written in
> >> vfio_dma_rw().
> >>
> >> Cc: Kevin Tian <kevin.tian@...el.com>
> >> Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> >> ---
> >> drivers/vfio/vfio.c | 45 +++++++++++++++++++
> >> drivers/vfio/vfio_iommu_type1.c | 76 +++++++++++++++++++++++++++++++++
> >> include/linux/vfio.h | 5 +++
> >> 3 files changed, 126 insertions(+)
> >>
> >> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> >> index c8482624ca34..8bd52bc841cf 100644
> >> --- a/drivers/vfio/vfio.c
> >> +++ b/drivers/vfio/vfio.c
> >> @@ -1961,6 +1961,51 @@ int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
> >> }
> >> EXPORT_SYMBOL(vfio_unpin_pages);
> >>
> >> +/*
> >> + * Read/Write a range of IOVAs pointing to user space memory into/from a kernel
> >> + * buffer without pinning the user space memory
> >> + * @dev [in] : device
> >> + * @iova [in] : base IOVA of a user space buffer
> >> + * @data [in] : pointer to kernel buffer
> >> + * @len [in] : kernel buffer length
> >> + * @write : indicate read or write
> >> + * Return error code on failure or 0 on success.
> >> + */
> >> +int vfio_dma_rw(struct device *dev, dma_addr_t iova, void *data,
> >> + size_t len, bool write)
> >> +{
> >> + struct vfio_container *container;
> >> + struct vfio_group *group;
> >> + struct vfio_iommu_driver *driver;
> >> + int ret = 0;
>
> Do you know that the iova given to vfio_dma_rw() is indeed a GPA and
> not an iova from an IOMMU mapping? So aren't you actually assuming
> that all of guest memory is pinned, as with device assignment?
>
> Or who adds the vfio mapping, and how, before the vfio_dma_rw()?
vfio only knows about IOVAs, not GPAs. It's possible that IOVAs are
identity mapped to the GPA space, but a VM with a vIOMMU would quickly
break any such assumption. Pinning is also not required. This access
is via the CPU, not the I/O device, so we don't require the memory to
be pinned, and it potentially won't be for a non-IOMMU-backed mediated
device. The intention here is that, via the mediation of an mdev
device, a vendor driver would already know the IOVA ranges for the
device to access, via the guest driver's programming of the device.
Thanks,
Alex