Message-ID: <1a66053c-8ece-8ffd-1e33-c301c53c4d45@intel.com>
Date:   Tue, 25 Apr 2023 15:31:28 +0800
From:   Zeng Guang <guang.zeng@...el.com>
To:     "Gao, Chao" <chao.gao@...el.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        "Christopherson,, Sean" <seanjc@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        H Peter Anvin <hpa@...or.com>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/6] KVM: VMX: Add new ops in kvm_x86_ops for LASS
 violation check


On 4/25/2023 11:10 AM, Gao, Chao wrote:
> On Thu, Apr 20, 2023 at 09:37:20PM +0800, Zeng Guang wrote:
>> +/*
>> + * Determine whether an access to the linear address causes a LASS violation.
>> + * LASS protection is only effective in long mode. As a prerequisite, the
>> + * caller should ensure the VM is running in long mode before invoking this
>> + * API to do the LASS violation check.
> Could you place the comment above vmx_check_lass()?
>
> And for __vmx_check_lass(), just add:
>
> A variant of vmx_check_lass() without the check for long mode.
>
>> + */
>> +bool __vmx_check_lass(struct kvm_vcpu *vcpu, u64 access, u64 la, u64 flags)
>> +{
>> +	bool user_mode, user_as, rflags_ac;
>> +
>> +	if (!!(flags & KVM_X86_EMULFLAG_SKIP_LASS) ||
>> +	    !kvm_is_cr4_bit_set(vcpu, X86_CR4_LASS))
>> +		return false;
>> +
>> +	WARN_ON_ONCE(!is_long_mode(vcpu));
>> +
>> +	user_as = !(la >> 63);
>> +
>
>> +	/*
>> +	 * An access is a supervisor-mode access if CPL < 3 or if it implicitly
>> +	 * accesses a system data structure. For implicit accesses to system
>> +	 * data structure, the processor acts as if RFLAGS.AC is clear.
>> +	 */
>> +	if (access & PFERR_IMPLICIT_ACCESS) {
>> +		user_mode = false;
>> +		rflags_ac = false;
>> +	} else {
>> +		user_mode = vmx_get_cpl(vcpu) == 3;
>> +		if (!user_mode)
>> +			rflags_ac = !!(kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
>> +	}
>> +
>> +	if (user_mode != user_as) {
> to reduce one level of indentation, how about:
>
> 	if (user_mode == user_as)
> 		return false;
>
> 	/*
> 	 * Supervisor-mode _data_ accesses to user address space
> 	 * cause LASS violations only if SMAP is enabled.
> 	 */
> 	if (!user_mode && !(access & PFERR_FETCH_MASK))
> 		return kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP) && !rflags_ac;
>
> 	return true;
>
Looks better.
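
For completeness, the whole helper with that early return folded in would
read roughly as below (an untested sketch on top of this patch; rflags_ac
is computed unconditionally here since it is only consumed on the
supervisor-mode path):

bool __vmx_check_lass(struct kvm_vcpu *vcpu, u64 access, u64 la, u64 flags)
{
	bool user_mode, user_as, rflags_ac;

	if ((flags & KVM_X86_EMULFLAG_SKIP_LASS) ||
	    !kvm_is_cr4_bit_set(vcpu, X86_CR4_LASS))
		return false;

	WARN_ON_ONCE(!is_long_mode(vcpu));

	/* Bit 63 of the linear address selects the address space. */
	user_as = !(la >> 63);

	/*
	 * An access is a supervisor-mode access if CPL < 3 or if it
	 * implicitly accesses a system data structure; for implicit
	 * accesses the processor acts as if RFLAGS.AC is clear.
	 */
	if (access & PFERR_IMPLICIT_ACCESS) {
		user_mode = false;
		rflags_ac = false;
	} else {
		user_mode = vmx_get_cpl(vcpu) == 3;
		rflags_ac = !!(kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
	}

	if (user_mode == user_as)
		return false;

	/*
	 * Supervisor-mode _data_ accesses to user address space
	 * cause LASS violations only if SMAP is enabled.
	 */
	if (!user_mode && !(access & PFERR_FETCH_MASK))
		return kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP) && !rflags_ac;

	return true;
}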


>> +		/*
>> +		 * Supervisor-mode _data_ accesses to user address space
>> +		 * cause LASS violations only if SMAP is enabled.
>> +		 */
>> +		if (!user_mode && !(access & PFERR_FETCH_MASK)) {
>> +			return kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP) &&
>> +			       !rflags_ac;
>> +		} else {
>> +			return true;
>> +		}
>> +	}
>> +
>> +	return false;
>> +}
>> +
>> +static bool vmx_check_lass(struct kvm_vcpu *vcpu, u64 access, u64 la, u64 flags)
>> +{
>> +	return is_long_mode(vcpu) && __vmx_check_lass(vcpu, access, la, flags);
> Why not request all callers to check if vcpu is in long mode?
>
> e.g.,
> 	return is_long_mode(vcpu) && static_call(kvm_x86_check_lass)(...);
>
> then you can rename __vmx_check_lass() to vmx_check_lass() and drop the
> original one.
By design, __vmx_check_lass() is a standalone helper that performs only the
LASS violation check. In some cases the CPU mode has already been determined
before LASS protection is applied; please refer to patch 4. So we provide two
interfaces: vmx_check_lass(), which wraps the CPU-mode check for use by other
modules such as the KVM emulator, and __vmx_check_lass(), dedicated to
VMX-internal callers.
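
To illustrate the intended split (hypothetical call sites; the static_call
name assumes the standard KVM_X86_OP plumbing for the new .check_lass hook):

	/*
	 * Common code such as the emulator uses the hook, which wraps
	 * the long-mode check:
	 */
	if (static_call(kvm_x86_check_lass)(vcpu, access, la, flags))
		return emulate_gp(ctxt, 0);

	/*
	 * A VMX-internal path that has already verified long mode calls
	 * the raw helper and skips the redundant check:
	 */
	if (__vmx_check_lass(vcpu, access, la, flags))
		kvm_inject_gp(vcpu, 0);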

I will check the feasibility of reorganizing the code to make it cleaner.

Thanks.
>> +}
>> +
>> static struct kvm_x86_ops vmx_x86_ops __initdata = {
>> 	.name = "kvm_intel",
>>
>> @@ -8207,6 +8260,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
>> 	.complete_emulated_msr = kvm_complete_insn_gp,
>>
>> 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
>> +
>> +	.check_lass = vmx_check_lass,
>> };
>>
>> static unsigned int vmx_handle_intel_pt_intr(void)
>> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
>> index a3da84f4ea45..6569385a5978 100644
>> --- a/arch/x86/kvm/vmx/vmx.h
>> +++ b/arch/x86/kvm/vmx/vmx.h
>> @@ -433,6 +433,8 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
>> u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
>> u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
>>
>> +bool __vmx_check_lass(struct kvm_vcpu *vcpu, u64 access, u64 la, u64 flags);
>> +
> No one uses this function yet. You can defer exporting it until the first
> external caller is added.
>
>> static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
>> 					     int type, bool value)
>> {
>> -- 
>> 2.27.0
>>
