Message-ID: <87y2juoa2s.wl-maz@kernel.org>
Date:   Sun, 25 Oct 2020 10:48:27 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Gavin Shan <gshan@...hat.com>
Cc:     kvmarm@...ts.cs.columbia.edu, linux-kernel@...r.kernel.org,
        will@...nel.org, alexandru.elisei@....com
Subject: Re: [PATCH 3/3] KVM: arm64: Fall back on unsupported huge page sizes

On Sun, 25 Oct 2020 01:27:39 +0100,
Gavin Shan <gshan@...hat.com> wrote:
> 
> A huge page can be mapped through multiple contiguous PMDs or PTEs,
> but the corresponding huge page sizes aren't currently supported by
> the page table walker.
> 
> Fall back from the unsupported huge page sizes to the nearest
> supported ones; otherwise the guest can't boot successfully:
> CONT_PMD_SHIFT and CONT_PTE_SHIFT fall back to PMD_SHIFT and
> PAGE_SHIFT respectively.
> 
> Signed-off-by: Gavin Shan <gshan@...hat.com>
> ---
>  arch/arm64/kvm/mmu.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 0f51585adc04..81cbdc368246 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -793,12 +793,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		vma_shift = PMD_SHIFT;
>  #endif
>  
> +	if (vma_shift == CONT_PMD_SHIFT)
> +		vma_shift = PMD_SHIFT;
> +
>  	if (vma_shift == PMD_SHIFT &&
>  	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
>  	}
>  
> +	if (vma_shift == CONT_PTE_SHIFT) {
> +		force_pte = true;
> +		vma_shift = PAGE_SHIFT;
> +	}
> +
>  	vma_pagesize = 1UL << vma_shift;
>  	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
>  		fault_ipa &= ~(vma_pagesize - 1);
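
For readers without the arm64 headers to hand, here is a minimal
userspace sketch of the fallback in the quoted hunk, assuming the
4K-granule shift values (PAGE_SHIFT = 12, CONT_PTE_SHIFT = 16,
PMD_SHIFT = 21, CONT_PMD_SHIFT = 25); it only illustrates the
demotion, it is not the kernel code:

#include <stdbool.h>
#include <stdio.h>

/* Assumed arm64 4K-granule values; the kernel derives these from config. */
#define PAGE_SHIFT	12
#define CONT_PTE_SHIFT	16	/* 16 contiguous 4K pages = 64K */
#define PMD_SHIFT	21
#define CONT_PMD_SHIFT	25	/* 16 contiguous 2M PMDs = 32M */

static unsigned long demote(unsigned long vma_shift, bool *force_pte)
{
	/* CONT_PMD isn't a block size the stage-2 walker maps; use PMD. */
	if (vma_shift == CONT_PMD_SHIFT)
		vma_shift = PMD_SHIFT;

	/* CONT_PTE likewise degrades to single pages. */
	if (vma_shift == CONT_PTE_SHIFT) {
		*force_pte = true;
		vma_shift = PAGE_SHIFT;
	}

	return vma_shift;
}

int main(void)
{
	unsigned long shifts[] = { CONT_PMD_SHIFT, PMD_SHIFT,
				   CONT_PTE_SHIFT, PAGE_SHIFT };

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		bool force_pte = false;
		unsigned long s = demote(shifts[i], &force_pte);

		printf("%2lu -> %2lu (force_pte=%d)\n",
		       shifts[i], s, force_pte);
	}
	return 0;
}

Running it prints 25 -> 21 and 16 -> 12 with force_pte set: 32M
contiguous-PMD mappings degrade to 2M blocks, and 64K contiguous-PTE
mappings degrade to 4K pages.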

Yup, nice catch. However, I think we should take this opportunity to
rationalise the logic here, and catch future discrepancies (should
someone add contiguous PUD or something similarly silly). How about
something like this (untested):

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cc323d96c9d4..d9a13a8a82e0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -787,14 +787,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PAGE_SHIFT;
 	}
 
-	if (vma_shift == PUD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-	       vma_shift = PMD_SHIFT;
+	switch (vma_shift) {
+	case PUD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+			break;
+		fallthrough;
 
-	if (vma_shift == PMD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
-		force_pte = true;
+	case CONT_PMD_SHIFT:
+		vma_shift = PMD_SHIFT;
+		fallthrough;
+
+	case PMD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
+			break;
+		fallthrough;
+
+	case CONT_PTE_SHIFT:
 		vma_shift = PAGE_SHIFT;
+		force_pte = true;
+		fallthrough;
+
+	case PAGE_SHIFT:
+		break;
+
+	default:
+		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
 	}
 
 	vma_pagesize = 1UL << vma_shift;
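
As a sanity check on that cascade, here is the same switch as a
self-contained userspace sketch, again assuming the 4K-granule shift
values and with fault_supports_stage2_huge_mapping() replaced by a
hypothetical supports_huge() stub that accepts only PMD-sized mappings:

#include <stdbool.h>
#include <stdio.h>

/* Assumed arm64 4K-granule values. */
#define PAGE_SHIFT	12
#define CONT_PTE_SHIFT	16
#define PMD_SHIFT	21
#define CONT_PMD_SHIFT	25
#define PUD_SHIFT	30

/* Hypothetical stand-in for fault_supports_stage2_huge_mapping(). */
static bool supports_huge(unsigned long size)
{
	return size == (1UL << PMD_SHIFT);
}

static unsigned long demote(unsigned long vma_shift, bool *force_pte)
{
	switch (vma_shift) {
	case PUD_SHIFT:
		if (supports_huge(1UL << PUD_SHIFT))
			break;
		/* fallthrough */
	case CONT_PMD_SHIFT:
		vma_shift = PMD_SHIFT;
		/* fallthrough */
	case PMD_SHIFT:
		if (supports_huge(1UL << PMD_SHIFT))
			break;
		/* fallthrough */
	case CONT_PTE_SHIFT:
		vma_shift = PAGE_SHIFT;
		*force_pte = true;
		/* fallthrough */
	case PAGE_SHIFT:
		break;
	default:
		fprintf(stderr, "Unknown vma_shift %lu\n", vma_shift);
	}

	return vma_shift;
}

int main(void)
{
	unsigned long shifts[] = { PUD_SHIFT, CONT_PMD_SHIFT, PMD_SHIFT,
				   CONT_PTE_SHIFT, PAGE_SHIFT };

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		bool force_pte = false;
		unsigned long s = demote(shifts[i], &force_pte);

		printf("%2lu -> %2lu (force_pte=%d)\n",
		       shifts[i], s, force_pte);
	}
	return 0;
}

With that stub, PUD_SHIFT and CONT_PMD_SHIFT both land on PMD_SHIFT,
and CONT_PTE_SHIFT degrades to PAGE_SHIFT with force_pte set, matching
the intent of the switch while flagging any unexpected shift in one
place.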


Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
