Message-Id: <20200115034132.2753-1-yan.y.zhao@intel.com>
Date: Tue, 14 Jan 2020 22:41:32 -0500
From: Yan Zhao <yan.y.zhao@...el.com>
To: alex.williamson@...hat.com, zhenyuw@...ux.intel.com
Cc: intel-gvt-dev@...ts.freedesktop.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, pbonzini@...hat.com,
kevin.tian@...el.com, peterx@...hat.com,
Yan Zhao <yan.y.zhao@...el.com>
Subject: [PATCH v2 0/2] use vfio_dma_rw to read/write IOVAs from CPU side
It is better for a device model to use IOVAs to read/write guest memory.
And because these rw operations come from the CPU side, it is not necessary
to call vfio_pin_pages() to pin the underlying pages.

Patch 1 introduces the interface vfio_dma_rw in vfio to read/write a range
of IOVAs without pinning user space pages.

Patch 2 switches gvt from the kvm-side rw interface to vfio_dma_rw.
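As a rough userspace sketch of what such a helper provides to a device model (the stub name, the IOVA base, and the backing buffer below are all made up for illustration; this is not the kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t dma_addr_t;

/* Fake "guest" memory backing one IOVA mapping, for the demo only. */
static unsigned char guest_mem[4096];
#define DEMO_IOVA_BASE 0x100000ULL

/*
 * Userspace stand-in for the proposed helper: copy len bytes between
 * data and the memory mapped at user_iova, with no page pinning.
 * Returns 0 on success, -1 if the range is not fully mapped.
 */
static int vfio_dma_rw_stub(dma_addr_t user_iova, void *data,
                            size_t len, bool write)
{
    if (user_iova < DEMO_IOVA_BASE ||
        user_iova - DEMO_IOVA_BASE + len > sizeof(guest_mem))
        return -1;

    unsigned char *vaddr = guest_mem + (user_iova - DEMO_IOVA_BASE);
    if (write)
        memcpy(vaddr, data, len);   /* CPU writes into the IOVA range */
    else
        memcpy(data, vaddr, len);   /* CPU reads from the IOVA range  */
    return 0;
}
```

A device model would call such a helper once per transfer, e.g. read with write=false, instead of pinning pages and mapping them first.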
v2 changelog:
- rename vfio_iova_rw to vfio_dma_rw, and the vfio iommu driver op
  .iova_rw to .dma_rw. (Alex)
- change the types of iova and len from unsigned long to dma_addr_t and
  size_t, respectively. (Alex)
- fix a possible overflow in dma->vaddr + iova - dma->iova + offset. (Alex)
- split the copy per vfio_dma on the max available size rather than on
  every page boundary, to eliminate redundant vfio_dma searches and mm
  switches. (Alex)
- add a check for IOMMU_WRITE permission.
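The last two fixes above can be illustrated with a small userspace sketch (every name here is hypothetical; struct demo_dma only loosely models vfio_dma): each loop iteration covers as much of the current mapping as possible instead of one page, and the host virtual address is computed by subtracting before adding, so the arithmetic cannot overflow the way dma->vaddr + iova - dma->iova + offset could.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t dma_addr_t;

/* Hypothetical per-mapping record, loosely modeled on vfio_dma. */
struct demo_dma {
    dma_addr_t iova;       /* start of the IOVA range           */
    size_t size;           /* length of the mapping             */
    unsigned char *vaddr;  /* host virtual address of the start */
};

/*
 * Read across an array of mappings, advancing by the max available
 * size within each mapping rather than page by page, so the mapping
 * lookup runs once per vfio_dma instead of once per page.
 */
static int demo_read(const struct demo_dma *dmas, size_t ndmas,
                     dma_addr_t iova, unsigned char *buf, size_t len)
{
    while (len) {
        const struct demo_dma *d = NULL;
        for (size_t i = 0; i < ndmas; i++) {
            if (iova >= dmas[i].iova &&
                iova < dmas[i].iova + dmas[i].size) {
                d = &dmas[i];
                break;
            }
        }
        if (!d)
            return -1;  /* gap in the IOVA space */

        size_t off = iova - d->iova;  /* subtract first: no overflow */
        size_t n = d->size - off;     /* max available in this dma   */
        if (n > len)
            n = len;
        memcpy(buf, d->vaddr + off, n);

        iova += n;
        buf += n;
        len -= n;
    }
    return 0;
}
```

In the kernel the per-mapping lookup would also involve switching to the owning mm, which is exactly why doing it once per vfio_dma instead of once per page matters.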
Yan Zhao (2):
vfio: introduce vfio_dma_rw to read/write a range of IOVAs
  drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw
drivers/gpu/drm/i915/gvt/kvmgt.c | 26 +++--------
drivers/vfio/vfio.c | 45 +++++++++++++++++++
drivers/vfio/vfio_iommu_type1.c | 76 ++++++++++++++++++++++++++++++++
include/linux/vfio.h | 5 +++
4 files changed, 133 insertions(+), 19 deletions(-)
--
2.17.1