Message-ID: <d2f8c8d5-39d2-6982-a4ae-eeaf4bf42658@redhat.com>
Date: Wed, 15 Jan 2020 19:20:45 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit
KVM
On 08/01/20 01:12, Sean Christopherson wrote:
> Remove the bogus 64-bit only condition from the check that disables MMIO
> spte optimization when the system supports the max PA, i.e. doesn't have
> any reserved PA bits. 32-bit KVM always uses PAE paging for the shadow
> MMU, and per Intel's SDM:
>
> PAE paging translates 32-bit linear addresses to 52-bit physical
> addresses.
>
> The kernel's restrictions on max physical addresses are limits on how
> much memory the kernel can reasonably use, not what physical addresses
> are supported by hardware.
>
> Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support")
> Cc: stable@...r.kernel.org
> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7269130ea5e2..d9c07343d979 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6191,7 +6191,7 @@ static void kvm_set_mmio_spte_mask(void)
> 	 * If reserved bit is not supported, clear the present bit to disable
> 	 * mmio page fault.
> 	 */
> -	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
> +	if (shadow_phys_bits == 52)
> 		mask &= ~1ull;
>  
> 	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
>
Queued, thanks.
Paolo