Message-ID: <jpgd1n1gqva.fsf@linux.bootlegged.copy>
Date: Tue, 28 Jun 2016 13:33:45 -0400
From: Bandan Das <bsd@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, guangrong.xiao@...ux.intel.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/5] mmu: mark spte present if the x bit is set
Paolo Bonzini <pbonzini@...hat.com> writes:
> On 28/06/2016 06:32, Bandan Das wrote:
>> This is safe because is_shadow_present_pte() is called
>> on host-controlled page tables, where we know the spte
>> is valid
>>
>> Signed-off-by: Bandan Das <bsd@...hat.com>
>> ---
>> arch/x86/kvm/mmu.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index def97b3..a50af79 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -304,7 +304,8 @@ static int is_nx(struct kvm_vcpu *vcpu)
>>
>> static int is_shadow_present_pte(u64 pte)
>> {
>> - return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
>> + return pte & (PT_PRESENT_MASK | shadow_x_mask) &&
>> + !is_mmio_spte(pte);
>
> This should really be pte & 7 when using EPT. But this is okay as an
> alternative to a new shadow_present_mask.
I could probably revive shadow_xonly_valid... Anyway, for now I will
add a TODO comment here.
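For reference, a rough sketch (not part of the patch) of the dedicated-mask
variant Paolo mentions: with EPT, bits 0-2 of an spte are R/W/X, so a
shadow_present_mask initialized to 7 would cover execute-only sptes, while
non-EPT setups would keep PT_PRESENT_MASK. The names and the is_mmio_spte()
stub below are simplified stand-ins for illustration only:

```c
#include <stdint.h>

typedef uint64_t u64;

#define PT_PRESENT_MASK (1ULL << 0)

/* Would be set at MMU setup time: 7 (R|W|X) when EPT is in use,
 * PT_PRESENT_MASK otherwise. Hypothetical name, for illustration. */
static u64 shadow_present_mask = PT_PRESENT_MASK;

/* Stub standing in for the real MMIO-spte check. */
static int is_mmio_spte(u64 pte)
{
	return 0;
}

static int is_shadow_present_pte(u64 pte)
{
	return (pte & shadow_present_mask) && !is_mmio_spte(pte);
}
```

This avoids overloading shadow_x_mask for presence checks, at the cost of
one more per-setup mask.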
> Paolo
>
>> }
>>
>> static int is_large_pte(u64 pte)
>>