Message-ID: <87367l669m.fsf@vitty.brq.redhat.com>
Date: Wed, 27 May 2020 12:07:01 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Set mmio_value to '0' if reserved #PF can't be generated
Sean Christopherson <sean.j.christopherson@...el.com> writes:
> Set the mmio_value to '0' instead of simply clearing the present bit to
> squash a benign warning in kvm_mmu_set_mmio_spte_mask() that complains
> about the mmio_value overlapping the lower GFN mask on systems with 52
> bits of PA space.
>
> Opportunistically clean up the code and comments.
>
> Fixes: 608831174100 ("KVM: x86: only do L1TF workaround on affected processors")
> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> ---
>
> Thanks for the excuse to clean up kvm_set_mmio_spte_mask(), been wanting a
> reason to fix that mess for a few months now :-).
>
> arch/x86/kvm/mmu/mmu.c | 27 +++++++++------------------
> 1 file changed, 9 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 2df0f347655a4..aab90f4079ea9 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6136,25 +6136,16 @@ static void kvm_set_mmio_spte_mask(void)
> u64 mask;
>
> /*
> - * Set the reserved bits and the present bit of an paging-structure
> - * entry to generate page fault with PFER.RSV = 1.
> + * Set a reserved PA bit in MMIO SPTEs to generate page faults with
> + * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT
> + * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
> + * 52-bit physical addresses then there are no reserved PA bits in the
> + * PTEs and so the reserved PA approach must be disabled.
> */
> -
> - /*
> - * Mask the uppermost physical address bit, which would be reserved as
> - * long as the supported physical address width is less than 52.
> - */
> - mask = 1ull << 51;
> -
> - /* Set the present bit. */
> - mask |= 1ull;
> -
> - /*
> - * If reserved bit is not supported, clear the present bit to disable
> - * mmio page fault.
> - */
> - if (shadow_phys_bits == 52)
> - mask &= ~1ull;
> + if (shadow_phys_bits < 52)
> + mask = BIT_ULL(51) | PT_PRESENT_MASK;
> + else
> + mask = 0;
>
> kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
> }
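
For anyone wondering why the WARN fired in the first place: with 52 bits
of PA space the GFN field of a 64-bit PTE spans bits 12:51, so the old
code's BIT(51) landed inside the GFN mask once the present bit was
cleared. A quick standalone illustration (userspace sketch of my own,
not kernel code; shadow_phys_bits and PAGE_SHIFT hardcoded for the
example):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define BIT_ULL(n)	(1ULL << (n))
#define PAGE_SHIFT	12

int main(void)
{
	unsigned int shadow_phys_bits = 52;	/* CPU with 52-bit MAXPHYADDR */

	/* GFN bits of a 64-bit PTE: PAGE_SHIFT through (phys bits - 1). */
	uint64_t gfn_mask = (BIT_ULL(shadow_phys_bits) - 1) &
			    ~(BIT_ULL(PAGE_SHIFT) - 1);

	/*
	 * Pre-patch mmio_value: BIT(51) remained set even after the
	 * present bit was cleared for the shadow_phys_bits == 52 case.
	 */
	uint64_t old_mmio_value = BIT_ULL(51);

	printf("gfn_mask       = 0x%016" PRIx64 "\n", gfn_mask);
	printf("old mmio_value = 0x%016" PRIx64 "\n", old_mmio_value);
	printf("overlap?       = %s\n",
	       (old_mmio_value & gfn_mask) ? "yes, WARN fires" : "no");
	return 0;
}

Setting mmio_value to '0' outright, as done here, sidesteps that overlap
entirely instead of leaving a stale reserved-bit candidate around.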
Nice cleanup,
Reviewed-by: Vitaly Kuznetsov <vkuznets@...hat.com>
--
Vitaly