Message-Id: <20250107142719.179636-8-yilun.xu@linux.intel.com>
Date: Tue, 7 Jan 2025 22:27:14 +0800
From: Xu Yilun <yilun.xu@...ux.intel.com>
To: kvm@...r.kernel.org,
dri-devel@...ts.freedesktop.org,
linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org,
sumit.semwal@...aro.org,
christian.koenig@....com,
pbonzini@...hat.com,
seanjc@...gle.com,
alex.williamson@...hat.com,
jgg@...dia.com,
vivek.kasireddy@...el.com,
dan.j.williams@...el.com,
aik@....com
Cc: yilun.xu@...el.com,
yilun.xu@...ux.intel.com,
linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org,
lukas@...ner.de,
yan.y.zhao@...el.com,
daniel.vetter@...ll.ch,
leon@...nel.org,
baolu.lu@...ux.intel.com,
zhenzhong.duan@...el.com,
tao1.su@...el.com
Subject: [RFC PATCH 07/12] KVM: x86/mmu: Handle page fault for vfio_dmabuf backed MMIO

Add support for resolving page faults on vfio_dmabuf backed MMIO. This
is needed to support private MMIO for private assigned devices (known
as TDIs in the TDISP spec).

Private MMIO regions are registered with KVM as vfio_dmabuf memory
slots, another type of can-be-private memory slot alongside the gmem
slot. As with a gmem slot, KVM must map each GFN as shared or private
according to the current state of the GFN's memory attribute. When a
page fault occurs on private MMIO but a private <-> shared conversion
is required, KVM still exits to userspace with exit reason
KVM_EXIT_MEMORY_FAULT and KVM_MEMORY_EXIT_FLAG_PRIVATE set
appropriately. Unlike a gmem slot, a vfio_dmabuf slot is backed by a
single MMIO resource, so switching the GFN's attribute does not change
how the PFN is obtained: it is always resolved the vfio_dmabuf
specific way, via kvm_vfio_dmabuf_get_pfn().
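
For illustration, below is a minimal, untested sketch of how a VMM
could react to the KVM_EXIT_MEMORY_FAULT exit described above. The
handle_memory_fault() helper and the vm_fd variable are made up for
this example; the memory_fault exit fields and the
KVM_SET_MEMORY_ATTRIBUTES ioctl are existing KVM UAPI:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Hypothetical VMM helper; error handling trimmed. */
	static int handle_memory_fault(int vm_fd, struct kvm_run *run)
	{
		struct kvm_memory_attributes attrs = {
			.address = run->memory_fault.gpa,
			.size = run->memory_fault.size,
			/*
			 * KVM_MEMORY_EXIT_FLAG_PRIVATE indicates which
			 * attribute the faulting guest access requires.
			 */
			.attributes = (run->memory_fault.flags &
				       KVM_MEMORY_EXIT_FLAG_PRIVATE) ?
				      KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
		};

		/* Convert the range, then re-enter the guest. */
		return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
	}

For a vfio_dmabuf slot no backend change is needed after such a
conversion; the next fault on the GFN simply resolves through
kvm_vfio_dmabuf_get_pfn() under the new attribute.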

Signed-off-by: Xu Yilun <yilun.xu@...ux.intel.com>
---
 arch/x86/kvm/mmu/mmu.c   | 25 +++++++++++++++++++++++--
 include/linux/kvm_host.h |  7 ++++++-
 2 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 713ca857f2c2..90ca54fee22f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4341,8 +4341,13 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &fault->refcounted_page, &max_order);
+	if (kvm_slot_is_vfio_dmabuf(fault->slot))
+		r = kvm_vfio_dmabuf_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
+					    &fault->pfn, &max_order);
+	else
+		r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
+				     &fault->pfn, &fault->refcounted_page,
+				     &max_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
@@ -4363,6 +4368,22 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	if (fault->is_private)
 		return kvm_mmu_faultin_pfn_private(vcpu, fault);
 
+	/* vfio_dmabuf slot is also applicable for shared mapping */
+	if (kvm_slot_is_vfio_dmabuf(fault->slot)) {
+		int max_order, r;
+
+		r = kvm_vfio_dmabuf_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
+					    &fault->pfn, &max_order);
+		if (r)
+			return r;
+
+		fault->max_level = min(kvm_max_level_for_order(max_order),
+				       fault->max_level);
+		fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
+
+		return RET_PF_CONTINUE;
+	}
+
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
 				       &fault->map_writable, &fault->refcounted_page);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 871d927485a5..966a5a247c6b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -614,7 +614,12 @@ struct kvm_memory_slot {
 
 static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
 {
-	return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
+	return slot && (slot->flags & (KVM_MEM_GUEST_MEMFD | KVM_MEM_VFIO_DMABUF));
+}
+
+static inline bool kvm_slot_is_vfio_dmabuf(const struct kvm_memory_slot *slot)
+{
+	return slot && (slot->flags & KVM_MEM_VFIO_DMABUF);
 }
 
 static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot)
--
2.25.1