Message-Id: <20251020130801.68356-1-fangyu.yu@linux.alibaba.com>
Date: Mon, 20 Oct 2025 21:08:01 +0800
From: fangyu.yu@...ux.alibaba.com
To: anup@...infault.org,
atish.patra@...ux.dev,
pjw@...nel.org,
palmer@...belt.com,
aou@...s.berkeley.edu,
alex@...ti.fr,
pbonzini@...hat.com,
jiangyifei@...wei.com
Cc: guoren@...nel.org,
kvm@...r.kernel.org,
kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Fangyu Yu <fangyu.yu@...ux.alibaba.com>
Subject: [PATCH] RISC-V: KVM: Remove automatic I/O mapping for VM_PFNMAP
From: Fangyu Yu <fangyu.yu@...ux.alibaba.com>
As of commit aac6db75a9fc ("vfio/pci: Use unmap_mapping_range()"),
vm_pgoff may no longer be guaranteed to hold the PFN for VM_PFNMAP
regions. Using vma->vm_pgoff to derive the HPA here may therefore
produce incorrect mappings.
Instead, I/O mappings for such regions can be established on demand
during g-stage page faults, making the upfront ioremap in this path
unnecessary.
Fixes: 9d05c1fee837 ("RISC-V: KVM: Implement stage2 page table programming")
Signed-off-by: Fangyu Yu <fangyu.yu@...ux.alibaba.com>
---
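Note (not part of the commit message): below is a stand-alone,
user-space sketch of the assumption the removed hunk relied on. Under
the classic remap_pfn_range() convention, vm_pgoff holds the base PFN
of a VM_PFNMAP mapping, so the removed code could reconstruct the HPA
with the arithmetic shown here; once PFNs are inserted at fault time
(as after commit aac6db75a9fc), that invariant no longer holds and the
same arithmetic can yield a bogus address. All addresses and the
PAGE_SHIFT value in the sketch are made-up examples for illustration
only, not kernel code.

/*
 * Stand-alone sketch (not kernel code): the PA derivation that the
 * removed hunk performed for a VM_PFNMAP VMA.  It is only correct
 * when vm_pgoff really holds the base PFN of the mapping; fault-time
 * PFN insertion gives no such guarantee.  All values are examples.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	uint64_t vma_start = 0x7f0000000000ULL; /* hypothetical vma->vm_start */
	uint64_t vm_start  = 0x7f0000004000ULL; /* start of overlap with the memslot */
	uint64_t vm_pgoff  = 0x80000ULL;        /* a base PFN only under the old convention */

	/* The removed code in kvm_arch_prepare_memory_region() did: */
	uint64_t pa = (vm_pgoff << PAGE_SHIFT) + (vm_start - vma_start);

	printf("derived PA = 0x%llx\n", (unsigned long long)pa);
	/*
	 * If vm_pgoff no longer encodes the base PFN, this PA is
	 * meaningless and must not be installed into the g-stage page
	 * table up front; resolving the PFN at g-stage fault time
	 * avoids relying on this invariant.
	 */
	return 0;
}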
arch/riscv/kvm/mmu.c | 20 +-------------------
1 file changed, 1 insertion(+), 19 deletions(-)
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 525fb5a330c0..84c04c8f0892 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -197,8 +197,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
/*
* A memory region could potentially cover multiple VMAs, and
- * any holes between them, so iterate over all of them to find
- * out if we can map any of them right now.
+ * any holes between them, so iterate over all of them.
*
* +--------------------------------------------+
* +---------------+----------------+ +----------------+
@@ -229,32 +228,15 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
vm_end = min(reg_end, vma->vm_end);
if (vma->vm_flags & VM_PFNMAP) {
- gpa_t gpa = base_gpa + (vm_start - hva);
- phys_addr_t pa;
-
- pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
- pa += vm_start - vma->vm_start;
-
/* IO region dirty page logging not allowed */
if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
ret = -EINVAL;
goto out;
}
-
- ret = kvm_riscv_mmu_ioremap(kvm, gpa, pa, vm_end - vm_start,
- writable, false);
- if (ret)
- break;
}
hva = vm_end;
} while (hva < reg_end);
- if (change == KVM_MR_FLAGS_ONLY)
- goto out;
-
- if (ret)
- kvm_riscv_mmu_iounmap(kvm, base_gpa, size);
-
out:
mmap_read_unlock(current->mm);
return ret;
--
2.50.1