Message-ID: <ZNwOYdy3AC12MI52@google.com>
Date: Tue, 15 Aug 2023 16:46:41 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: Zeng Guang <guang.zeng@...el.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
H Peter Anvin <hpa@...or.com>, kvm@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 6/8] KVM: VMX: Implement and apply vmx_is_lass_violation() for LASS protection
On Mon, Aug 07, 2023, Binbin Wu wrote:
>
> On 7/19/2023 10:45 AM, Zeng Guang wrote:
> > Implement and wire up vmx_is_lass_violation() in kvm_x86_ops for VMX.
> >
> > LASS violation check takes effect in KVM emulation of instruction fetch
> > and data access including implicit access when vCPU is running in long
> > mode, and also involved in emulation of VMX instruction and SGX ENCLS
> > instruction to enforce the mode-based protections before paging.
> >
> > But the target memory address of emulation of TLB invalidation and branch
> > instructions aren't subject to LASS as exceptions.
Same nit about branch instructions. And I would explicitly say "linear address"
instead of "target memory address", the "target" part makes it a bit ambiguous.
How about this?
Linear addresses used for TLB invalidation (INVLPG, INVPCID, and INVVPID) and
branch targets are not subject to LASS enforcement.
> >
> > Signed-off-by: Zeng Guang <guang.zeng@...el.com>
> > Tested-by: Xuelian Guo <xuelian.guo@...el.com>
> > ---
> >  arch/x86/kvm/vmx/nested.c |  3 ++-
> >  arch/x86/kvm/vmx/sgx.c    |  4 ++++
> >  arch/x86/kvm/vmx/vmx.c    | 35 +++++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/vmx/vmx.h    |  3 +++
> >  4 files changed, 44 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > index e35cf0bd0df9..72e78566a3b6 100644
> > --- a/arch/x86/kvm/vmx/nested.c
> > +++ b/arch/x86/kvm/vmx/nested.c
> > @@ -4985,7 +4985,8 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
> >  		 * non-canonical form. This is the only check on the memory
> >  		 * destination for long mode!
> >  		 */
> > -		exn = is_noncanonical_address(*ret, vcpu);
> > +		exn = is_noncanonical_address(*ret, vcpu) ||
> > +		      vmx_is_lass_violation(vcpu, *ret, len, 0);
> >  	} else {
> >  		/*
> >  		 * When not in long mode, the virtual/linear address is
> > diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
> > index 2261b684a7d4..f8de637ce634 100644
> > --- a/arch/x86/kvm/vmx/sgx.c
> > +++ b/arch/x86/kvm/vmx/sgx.c
> > @@ -46,6 +46,10 @@ static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
> >  			((s.base != 0 || s.limit != 0xffffffff) &&
> >  			 (((u64)*gva + size - 1) > s.limit + 1));
> >  	}
> > +
> > +	if (!fault)
> > +		fault = vmx_is_lass_violation(vcpu, *gva, size, 0);
At the risk of bleeding details where they don't need to go... LASS is Long Mode
only, so I believe this chunk can be:
	if (!IS_ALIGNED(*gva, alignment)) {
		fault = true;
	} else if (likely(is_64_bit_mode(vcpu))) {
		fault = is_noncanonical_address(*gva, vcpu) ||
			vmx_is_lass_violation(vcpu, *gva, size, 0);
	} else {
		*gva &= 0xffffffff;
		fault = (s.unusable) ||
			(s.type != 2 && s.type != 3) ||
			(*gva > s.limit) ||
			((s.base != 0 || s.limit != 0xffffffff) &&
			 (((u64)*gva + size - 1) > s.limit + 1));
	}
which IIRC matches some earlier emulator code.
> > +
> >  	if (fault)
> >  		kvm_inject_gp(vcpu, 0);
> >  	return fault ? -EINVAL : 0;
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 44fb619803b8..15a7c6e7a25d 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -8127,6 +8127,40 @@ static void vmx_vm_destroy(struct kvm *kvm)
> >  	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
> >  }
> > +bool vmx_is_lass_violation(struct kvm_vcpu *vcpu, unsigned long addr,
> > +			   unsigned int size, unsigned int flags)
> > +{
> > +	const bool is_supervisor_address = !!(addr & BIT_ULL(63));
> > +	const bool implicit = !!(flags & X86EMUL_F_IMPLICIT);
> > +	const bool fetch = !!(flags & X86EMUL_F_FETCH);
> > +	const bool is_wraparound_access = size ? (addr + size - 1) < addr : false;
Shouldn't this WARN if size==0? Ah, the "pre"-fetch fetch to get the max insn
size passes '0'. Well that's annoying.
Please don't use a local variable to track whether an access wraps. It's used
exactly once, and there's zero reason to use a ternary operator at the return.
E.g. this is much easier on the eyes:
	if (size && (addr + size - 1) < addr)
		return true;

	return !is_supervisor_address;
Hrm, and typing that out makes me go "huh?" Ah, it's the "implicit" thing that
turned me around. Can you rename "implicit" to "implicit_supervisor"? The
F_IMPLICIT flag is fine, it's just this code:
	if (!implicit && vmx_get_cpl(vcpu) == 3)
		return is_supervisor_address;
where it's easy to miss that "implicit" is "implicit supervisor".
And one more nit, rather than detect wraparound, I think it would be better to
detect that bit 63 isn't set. Functionally, they're the same, but detecting
wraparound makes it look like wraparound itself is problematic, which isn't
technically true, it's just the only case where an access can possibly straddle
user and kernel address spaces.
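To illustrate with a throwaway user-space snippet (not KVM code): for an access
that starts in the supervisor half, "the end address wraps" and "bit 63 of the
end address is clear" really are the same condition, and an access that starts
in the user half is flagged by the !is_supervisor_address check either way.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the two candidate checks being discussed. */
static bool wraps(uint64_t addr, unsigned int size)
{
	return size && (addr + size - 1) < addr;
}

static bool end_bit63_clear(uint64_t addr, unsigned int size)
{
	return size && !((addr + size - 1) & (1ULL << 63));
}

int main(void)
{
	/* Supervisor address near the top of the address space. */
	const uint64_t addr = 0xfffffffffffffff8ULL;
	unsigned int size;

	/* Sizes 1-8 stay in the supervisor half, 9+ wrap into user space. */
	for (size = 1; size <= 32; size++)
		assert(wraps(addr, size) == end_bit63_clear(addr, size));

	return 0;
}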
And I think we should call out that if LAM is supported, @addr has already been
untagged. Yeah, it's peeking ahead a bit, but I'd rather have a comment that
is a bit premature than forget to add the appropriate comment in the LAM series.
> > +
> > +	if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_LASS) || !is_long_mode(vcpu))
> > +		return false;
> > +
> > +	/*
> > +	 * INVTLB isn't subject to LASS, e.g. to allow invalidating userspace
> > +	 * addresses without toggling RFLAGS.AC. Branch targets aren't subject
> > +	 * to LASS in order to simplifiy far control transfers (the subsequent
> s/simplifiy/simplify
>
> > +	 * fetch will enforce LASS as appropriate).
> > +	 */
> > +	if (flags & (X86EMUL_F_BRANCH | X86EMUL_F_INVTLB))
> > +		return false;
> > +
> > +	if (!implicit && vmx_get_cpl(vcpu) == 3)
> > +		return is_supervisor_address;
> > +
> > +	/* LASS is enforced for supervisor-mode access iff SMAP is enabled. */
> To be more accurate, supervisor-mode data access.
> Also, "iff" here is not is a typo for "if" or it stands for "if and only
> if"?
The latter.
> It is not accurate to use "if and only if" here because besides SMAP, there
> are other conditions, i.e. implicit or RFLAGS.AC.
I was trying to avoid a multi-line comment when I suggested the above. Hmm, and
I think we could/should consolidate the two if-statements. This?
	/*
	 * LASS enforcement for supervisor-mode data accesses depends on SMAP
	 * being enabled, and like SMAP ignores explicit accesses if RFLAGS.AC=1.
	 */
	if (!fetch) {
		if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP))
			return false;

		if (!implicit && (kvm_get_rflags(vcpu) & X86_EFLAGS_AC))
			return false;
	}
> > +	if (!fetch && !kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP))
> > +		return false;
> > +
> > +	/* Like SMAP, RFLAGS.AC disables LASS checks in supervisor mode. */
> > +	if (!fetch && !implicit && (kvm_get_rflags(vcpu) & X86_EFLAGS_AC))
> > +		return false;
All in all, this? (wildly untested)
	const bool is_supervisor_address = !!(addr & BIT_ULL(63));
	const bool implicit_supervisor = !!(flags & X86EMUL_F_IMPLICIT);
	const bool fetch = !!(flags & X86EMUL_F_FETCH);

	if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_LASS) || !is_long_mode(vcpu))
		return false;

	/*
	 * INVTLB isn't subject to LASS, e.g. to allow invalidating userspace
	 * addresses without toggling RFLAGS.AC.  Branch targets aren't subject
	 * to LASS in order to simplify far control transfers (the subsequent
	 * fetch will enforce LASS as appropriate).
	 */
	if (flags & (X86EMUL_F_BRANCH | X86EMUL_F_INVTLB))
		return false;

	if (!implicit_supervisor && vmx_get_cpl(vcpu) == 3)
		return is_supervisor_address;

	/*
	 * LASS enforcement for supervisor-mode data accesses depends on SMAP
	 * being enabled, and like SMAP ignores explicit accesses if RFLAGS.AC=1.
	 */
	if (!fetch) {
		if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP))
			return false;

		if (!implicit_supervisor && (kvm_get_rflags(vcpu) & X86_EFLAGS_AC))
			return false;
	}

	/*
	 * The entire access must be in the appropriate address space.  Note,
	 * if LAM is supported, @addr has already been untagged, so barring a
	 * massive architecture change to expand the canonical address range,
	 * it's impossible for a user access to straddle user and supervisor
	 * address spaces.
	 */
	if (size && !((addr + size - 1) & BIT_ULL(63)))
		return true;

	return !is_supervisor_address;
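In case it helps review, below is a throwaway user-space harness (none of it is
KVM code; the fake_vcpu struct and the F_* bits are made up for the sketch and
are not the real X86EMUL_F_* values) that re-implements the decision table above
and sanity-checks a few interesting cases:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define F_IMPLICIT	(1u << 0)
#define F_FETCH		(1u << 1)
#define F_BRANCH	(1u << 2)
#define F_INVTLB	(1u << 3)

/* Stand-in for the vCPU state the real helpers would query. */
struct fake_vcpu {
	bool cr4_lass, cr4_smap, long_mode, rflags_ac;
	unsigned int cpl;
};

static bool is_lass_violation(const struct fake_vcpu *v, uint64_t addr,
			      unsigned int size, unsigned int flags)
{
	const bool is_supervisor_address = !!(addr & (1ULL << 63));
	const bool implicit_supervisor = !!(flags & F_IMPLICIT);
	const bool fetch = !!(flags & F_FETCH);

	if (!v->cr4_lass || !v->long_mode)
		return false;

	/* TLB invalidation and branch targets are exempt. */
	if (flags & (F_BRANCH | F_INVTLB))
		return false;

	/* Explicit user-mode accesses may only touch the user half. */
	if (!implicit_supervisor && v->cpl == 3)
		return is_supervisor_address;

	/* Supervisor-mode data accesses honor SMAP and RFLAGS.AC. */
	if (!fetch) {
		if (!v->cr4_smap)
			return false;
		if (!implicit_supervisor && v->rflags_ac)
			return false;
	}

	/* The entire access must stay in the supervisor half. */
	if (size && !((addr + size - 1) & (1ULL << 63)))
		return true;
	return !is_supervisor_address;
}

int main(void)
{
	struct fake_vcpu v = { .cr4_lass = true, .cr4_smap = true,
			       .long_mode = true, .cpl = 0 };

	/* CPL0 fetch from a user address violates LASS. */
	assert(is_lass_violation(&v, 0x1000, 16, F_FETCH));
	/* CPL0 data access to a user address needs SMAP; RFLAGS.AC=1 waives it. */
	assert(is_lass_violation(&v, 0x1000, 8, 0));
	v.rflags_ac = true;
	assert(!is_lass_violation(&v, 0x1000, 8, 0));
	/* CPL3 access to a supervisor address always violates LASS. */
	v.cpl = 3;
	assert(is_lass_violation(&v, 0xffff800000000000ULL, 8, 0));
	/* Branch targets are exempt even for supervisor addresses. */
	assert(!is_lass_violation(&v, 0xffff800000000000ULL, 8, F_BRANCH));

	return 0;
}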