Message-ID: <c88c77f4-83c8-2d6d-6c78-c004f7917efd@redhat.com>
Date: Sat, 18 Jan 2020 22:38:59 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Krish Sadhukhan <krish.sadhukhan@...cle.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86: Perform non-canonical checks in 32-bit KVM

On 16/01/20 16:50, Sean Christopherson wrote:
> On Wed, Jan 15, 2020 at 05:37:16PM -0800, Krish Sadhukhan wrote:
>>
>> On 01/15/2020 10:36 AM, Sean Christopherson wrote:
>>> arch/x86/kvm/x86.h | 8 --------
>>> 1 file changed, 8 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>>> index cab5e71f0f0f..3ff590ec0238 100644
>>> --- a/arch/x86/kvm/x86.h
>>> +++ b/arch/x86/kvm/x86.h
>>> @@ -166,21 +166,13 @@ static inline u64 get_canonical(u64 la, u8 vaddr_bits)
>>>
>>>   static inline bool is_noncanonical_address(u64 la, struct kvm_vcpu *vcpu)
>>>   {
>>> -#ifdef CONFIG_X86_64
>>>   	return get_canonical(la, vcpu_virt_addr_bits(vcpu)) != la;
>>> -#else
>>> -	return false;
>>> -#endif
>>>   }
>>>
>>>   static inline bool emul_is_noncanonical_address(u64 la,
>>>   						struct x86_emulate_ctxt *ctxt)
>>>   {
>>> -#ifdef CONFIG_X86_64
>>>   	return get_canonical(la, ctxt_virt_addr_bits(ctxt)) != la;
>>> -#else
>>> -	return false;
>>> -#endif
>>>   }
>>>
>>>   static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
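The helpers made unconditional above boil down to "sign-extend the address
from the implemented virtual-address width and check whether it changed".
A minimal standalone sketch of that check (get_canonical() itself lives in
x86.h and is not quoted in the hunk; the shift idiom below mirrors the usual
kernel pattern and, like it, relies on implementation-defined signed-shift
behaviour; the _sketch names are made up for illustration):

	#include <stdbool.h>
	#include <stdint.h>

	/* Sign-extend bit (vaddr_bits - 1) of 'la' through bit 63. */
	static inline uint64_t canonical_sketch(uint64_t la, uint8_t vaddr_bits)
	{
		return (uint64_t)(((int64_t)la << (64 - vaddr_bits)) >> (64 - vaddr_bits));
	}

	/* An address is non-canonical if sign extension changes its value. */
	static inline bool noncanonical_sketch(uint64_t la, uint8_t vaddr_bits)
	{
		return canonical_sketch(la, vaddr_bits) != la;
	}

With the #ifdef/#else removed, this comparison is compiled and performed on
32-bit builds as well, with the width taken from the vCPU/emulator state
(vcpu_virt_addr_bits()/ctxt_virt_addr_bits()) rather than being stubbed out
as "always canonical".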
>>
>> nested_vmx_check_host_state() still won't call it on 32-bit because it has
>> the CONFIG_X86_64 guard around the call site:
>>
>> #ifdef CONFIG_X86_64
>> 	if (CC(is_noncanonical_address(vmcs12->host_fs_base, vcpu)) ||
>> 	    CC(is_noncanonical_address(vmcs12->host_gs_base, vcpu)) ||
>> ...
>
> Doh, I was looking at an older version of nested.c. Nice catch!
>
>> Don't we need to remove these guards in the callers as well?
>
> Ya, that would be my preference.
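For illustration, the caller-side change being agreed on here is to drop that
guard so the canonical checks always run; roughly (based only on the lines
quoted above, with the matching #endif assumed but not shown in the quote):

	-#ifdef CONFIG_X86_64
	 	if (CC(is_noncanonical_address(vmcs12->host_fs_base, vcpu)) ||
	 	    CC(is_noncanonical_address(vmcs12->host_gs_base, vcpu)) ||
	 	...
	-#endif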
>
Fixed and queued, thanks.
Paolo