Message-ID: <55601232-c941-74e8-f740-fd09e9e8a6ae@redhat.com>
Date: Tue, 28 Jun 2016 10:44:53 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Bandan Das <bsd@...hat.com>, kvm@...r.kernel.org
Cc: guangrong.xiao@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/5] mmu: mark spte present if the x bit is set
On 28/06/2016 06:32, Bandan Das wrote:
> This is safe because is_shadow_present_pte() is called
> on host-controlled page tables, and we know the spte is
> valid
>
> Signed-off-by: Bandan Das <bsd@...hat.com>
> ---
> arch/x86/kvm/mmu.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index def97b3..a50af79 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -304,7 +304,8 @@ static int is_nx(struct kvm_vcpu *vcpu)
>
> static int is_shadow_present_pte(u64 pte)
> {
> - return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
> + return pte & (PT_PRESENT_MASK | shadow_x_mask) &&
> + !is_mmio_spte(pte);
This should really be pte & 7 when using EPT, since bits 0-2 there are
the read/write/execute permissions and any of them makes the entry
present. But this is okay as an alternative to adding a new
shadow_present_mask.
Paolo
> }
>
> static int is_large_pte(u64 pte)
>