Message-Id: <20201025002739.5804-4-gshan@redhat.com>
Date: Sun, 25 Oct 2020 11:27:39 +1100
From: Gavin Shan <gshan@...hat.com>
To: kvmarm@...ts.cs.columbia.edu
Cc: linux-kernel@...r.kernel.org, will@...nel.org,
alexandru.elisei@....com, maz@...nel.org
Subject: [PATCH 3/3] KVM: arm64: Fall back on unsupported huge page sizes

A huge page can be mapped through multiple contiguous PMDs or PTEs.
These contiguous huge page sizes aren't currently supported by the
stage-2 page table walker.

Fall back from the unsupported huge page sizes to the nearest
supported ones, otherwise the guest can't boot successfully:
CONT_PMD_SHIFT falls back to PMD_SHIFT and CONT_PTE_SHIFT falls back
to PAGE_SHIFT respectively.
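
For reference, here is a minimal userspace sketch of the fallback
described above, assuming a 4KB base page size so that PAGE_SHIFT = 12,
PMD_SHIFT = 21, CONT_PTE_SHIFT = 16 and CONT_PMD_SHIFT = 25; the
fallback_shift() helper is purely illustrative and is not part of this
patch:

/* Illustrative only: mirrors the shift fallback, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12	/* 4KB base pages (assumption) */
#define PMD_SHIFT	21	/* 2MB block mapping */
#define CONT_PTE_SHIFT	16	/* 16 contiguous PTEs = 64KB */
#define CONT_PMD_SHIFT	25	/* 16 contiguous PMDs = 32MB */

static unsigned long fallback_shift(unsigned long vma_shift, bool *force_pte)
{
	/* A contiguous-PMD span falls back to a single PMD mapping */
	if (vma_shift == CONT_PMD_SHIFT)
		vma_shift = PMD_SHIFT;

	/* A contiguous-PTE span falls back to base pages */
	if (vma_shift == CONT_PTE_SHIFT) {
		*force_pte = true;
		vma_shift = PAGE_SHIFT;
	}

	return vma_shift;
}

int main(void)
{
	bool force_pte = false;

	printf("CONT_PMD_SHIFT (%d) -> %lu\n", CONT_PMD_SHIFT,
	       fallback_shift(CONT_PMD_SHIFT, &force_pte));
	printf("CONT_PTE_SHIFT (%d) -> %lu, force_pte=%d\n", CONT_PTE_SHIFT,
	       fallback_shift(CONT_PTE_SHIFT, &force_pte), force_pte);
	return 0;
}

With a 4KB granule this means a 32MB contiguous-PMD mapping degrades to
a 2MB PMD mapping and a 64KB contiguous-PTE mapping degrades to 4KB
pages; the exact values differ for 16KB and 64KB granules.
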
Signed-off-by: Gavin Shan <gshan@...hat.com>
---
arch/arm64/kvm/mmu.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0f51585adc04..81cbdc368246 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -793,12 +793,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PMD_SHIFT;
 #endif
 
+	if (vma_shift == CONT_PMD_SHIFT)
+		vma_shift = PMD_SHIFT;
+
 	if (vma_shift == PMD_SHIFT &&
 	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	}
 
+	if (vma_shift == CONT_PTE_SHIFT) {
+		force_pte = true;
+		vma_shift = PAGE_SHIFT;
+	}
+
 	vma_pagesize = 1UL << vma_shift;
 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
 		fault_ipa &= ~(vma_pagesize - 1);
--
2.23.0