Message-Id: <20180904163937.12759-1-koomi@moshbit.net>
Date: Tue, 4 Sep 2018 18:39:36 +0200
From: Lukas Braun <koomi@moshbit.net>
To: Christoffer Dall <christoffer.dall@arm.com>,
	Marc Zyngier <marc.zyngier@arm.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org
Cc: Ralph Palutke <ralph.palutke@....de>,
	Lukas Braun <koomi@moshbit.net>
Subject: [PATCH] KVM: arm/arm64: Check memslot bounds before mapping hugepages

Userspace can create a memslot with memory backed by (transparent)
hugepages, but with bounds that do not align with hugepage boundaries.
In that case, we cannot map the entire region in the guest as hugepages
without exposing additional host memory to the guest and potentially
interfering with other memslots.

Consequently, this patch adds a bounds check when populating guest page
tables and forces the creation of regular PTEs if mapping an entire
hugepage would violate the memslot's bounds.

Signed-off-by: Lukas Braun <koomi@moshbit.net>
---
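
A minimal standalone sketch of the two conditions checked below, for
reviewers who want to play with the arithmetic. This is plain userspace
C, not kernel code: the constants and struct memslot are simplified
stand-ins for the kernel's PMD_* macros and struct kvm_memory_slot
(assuming 4K pages, so PMD-level hugepages are 2 MiB), and
can_map_hugepage() is a hypothetical helper written for this sketch,
not a function from the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21			/* 2 MiB blocks */
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))
#define PTRS_PER_PMD	(1UL << (PMD_SHIFT - PAGE_SHIFT))

struct memslot {
	uint64_t userspace_addr;	/* host VA base */
	uint64_t base_gfn;		/* guest frame number base */
	uint64_t npages;
};

static bool can_map_hugepage(const struct memslot *slot, uint64_t gfn)
{
	unsigned long pmd_fn_mask = PTRS_PER_PMD - 1;

	/* Host VA and guest IPA must share the same offset within a PMD. */
	bool aligned = (slot->userspace_addr & ~PMD_MASK) ==
		       ((slot->base_gfn << PAGE_SHIFT) & ~PMD_MASK);

	/* The whole PMD-sized block around gfn must lie inside the slot. */
	bool in_bounds = slot->base_gfn <= (gfn & ~pmd_fn_mask) &&
			 slot->base_gfn + slot->npages > (gfn | pmd_fn_mask);

	return aligned && in_bounds;
}

int main(void)
{
	/* A slot whose VA and IPA offsets agree, but which starts
	 * 1 MiB into a 2 MiB block. */
	struct memslot slot = {
		.userspace_addr	= 0x40100000,
		.base_gfn	= 0x40100000 >> PAGE_SHIFT,
		.npages		= 1024,		/* 4 MiB */
	};

	/* Fault on the first page: its PMD block starts below the slot. */
	printf("hugepage ok: %d\n", can_map_hugepage(&slot, slot.base_gfn));
	return 0;
}

In this example the host VA and guest IPA offsets agree, so the old
alignment check alone would permit a hugepage, but the slot starts 1 MiB
into a 2 MiB block; the PMD block around the first gfn reaches below
base_gfn, the new in_bounds check fails, and the program prints
"hugepage ok: 0".
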
virt/kvm/arm/mmu.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ed162a6c57c5..bdbec1d136a1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1504,6 +1504,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
+		unsigned long pmd_fn_mask = PTRS_PER_PMD - 1;
+
 		/*
 		 * Pages belonging to memslots that don't have the same
 		 * alignment for userspace and IPA cannot be mapped using
@@ -1513,8 +1515,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * unmapping, updates, and splits of the THP or other pages
 		 * in the stage-2 block range.
 		 */
-		if ((memslot->userspace_addr & ~PMD_MASK) !=
-		    ((memslot->base_gfn << PAGE_SHIFT) & ~PMD_MASK))
+		int aligned = ((memslot->userspace_addr & ~PMD_MASK) ==
+			       ((memslot->base_gfn << PAGE_SHIFT) & ~PMD_MASK));
+
+		/*
+		 * We also can't map a huge page if it would violate the bounds
+		 * of the containing memslot.
+		 */
+		int in_bounds = ((memslot->base_gfn <= (gfn & ~pmd_fn_mask)) &&
+				 ((memslot->base_gfn + memslot->npages) > (gfn | pmd_fn_mask)));
+
+		if (!aligned || !in_bounds)
 			force_pte = true;
 	}
up_read(¤t->mm->mmap_sem);
--
2.11.0