Date: Tue, 6 Jun 2023 12:20:03 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Zeng Guang <guang.zeng@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
H Peter Anvin <hpa@...or.com>, kvm@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 5/6] KVM: x86: LASS protection on KVM emulation
On 6/1/2023 10:23 PM, Zeng Guang wrote:
> Do LASS violation check for instructions emulated by KVM. Note that for
> instructions executed in the guest directly, hardware will perform the
> check.
>
> Not all instruction emulation leads to accesses to guest linear addresses
> because 1) some instructions like CPUID, RDMSR, don't take memory as
> operands 2) instruction fetch in most cases is already done inside the
> guest.
>
> Four cases in which KVM uses a linear address to access guest memory:
> - KVM emulates instruction fetches or data accesses
> - KVM emulates implicit data access to a system data structure
> - VMX instruction emulation
> - SGX ENCLS instruction emulation
>
> LASS violation check applies to these linear addresses so as to enforce
> mode-based protections as hardware behaves.
>
> As exceptions, the target memory address of emulation of invlpg, branch
> and call instructions doesn't require LASS violation check.
I think LASS doesn't apply to the target addresses in the descriptors of
INVPCID and INVVPID.
Although no code change is needed, IMHO it's better to describe this in
the changelog and/or comments.
>
> Signed-off-by: Zeng Guang <guang.zeng@...el.com>
> Tested-by: Xuelian Guo <xuelian.guo@...el.com>
> ---
> arch/x86/kvm/emulate.c | 30 ++++++++++++++++++++++++++++--
> arch/x86/kvm/vmx/nested.c | 3 +++
> arch/x86/kvm/vmx/sgx.c | 4 ++++
> 3 files changed, 35 insertions(+), 2 deletions(-)
>
[...]
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index e35cf0bd0df9..bb1c3fa13c13 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4986,6 +4986,9 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
> * destination for long mode!
> */
> exn = is_noncanonical_address(*ret, vcpu);
> +
> + if (!exn)
> + exn = vmx_check_lass(vcpu, 0, *ret, 0);
This can be simplified with a logical OR:
exn = is_noncanonical_address(*ret, vcpu) || vmx_check_lass(vcpu, 0, *ret, 0);
> } else {
> /*
> * When not in long mode, the virtual/linear address is
> diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
> index 2261b684a7d4..3825275827eb 100644
> --- a/arch/x86/kvm/vmx/sgx.c
> +++ b/arch/x86/kvm/vmx/sgx.c
> @@ -46,6 +46,10 @@ static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
> ((s.base != 0 || s.limit != 0xffffffff) &&
> (((u64)*gva + size - 1) > s.limit + 1));
> }
> +
> + if (!fault && is_long_mode(vcpu))
> + fault = vmx_check_lass(vcpu, 0, *gva, 0);
> +
> if (fault)
> kvm_inject_gp(vcpu, 0);
> return fault ? -EINVAL : 0;