Message-ID: <9fb5109e-1ac0-750e-ccc5-759a04f18475@intel.com>
Date:   Thu, 25 May 2023 10:17:23 +0800
From:   Xiaoyao Li <xiaoyao.li@...el.com>
To:     Chao Gao <chao.gao@...el.com>
Cc:     kvm@...r.kernel.org, x86@...nel.org,
        Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] KVM: x86: Track supported ARCH_CAPABILITIES in
 kvm_caps

On 5/24/2023 4:32 PM, Chao Gao wrote:
> On Wed, May 24, 2023 at 04:14:10PM +0800, Xiaoyao Li wrote:
>> On 5/24/2023 2:16 PM, Chao Gao wrote:
>>> to avoid computing the supported value at runtime every time.
>>>
>>> Toggle the ARCH_CAP_SKIP_VMENTRY_L1DFLUSH bit when l1tf_vmx_mitigation
>>> is modified to achieve the same result as runtime computing.
>>
>> It's not the same result.
> 
> it is because ...
> 
>>
>> In kvm_get_arch_capabilities(), the host's value is honored, i.e., when the
>> host supports ARCH_CAP_SKIP_VMENTRY_L1DFLUSH, l1tf_vmx_mitigation doesn't
>> make any difference to the result.
> 
> ... l1tf_vmx_mitigation should be VMENTER_L1D_FLUSH_NOT_REQUIRED in this
> case. l1tf_vmx_mitigation cannot be VMENTER_L1D_FLUSH_NEVER.

Yes, you are right. Maybe we can clarify it in the changelog.
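
For reference, the check in question looks roughly like this; a simplified
sketch of the relevant part of kvm_get_arch_capabilities(), not the exact
upstream code:

u64 kvm_get_arch_capabilities(void)
{
	/* Start from the host's ARCH_CAPABILITIES, masked to the bits KVM supports. */
	u64 data = x86_read_arch_cap_msr() & KVM_SUPPORTED_ARCH_CAP;

	/*
	 * If KVM flushes the L1D cache on (nested) VM-entry, or no flush is
	 * required at all, L1 can safely skip its own flush, so the bit is
	 * set whenever the mitigation state is anything but "never".
	 */
	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;

	return data;
}

So when the host already advertises ARCH_CAP_SKIP_VMENTRY_L1DFLUSH, the
mitigation state ends up as VMENTER_L1D_FLUSH_NOT_REQUIRED rather than
_NEVER, and the bit is set either way.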

anyway,

Reviewed-by: Xiaoyao Li <xiaoyao.li@...el.com>

>>
>>> Opportunistically, add a comment to document the problem of allowing
>>> changing the supported value of ARCH_CAPABILITIES and the reason why
>>> we don't fix it.
>>>
>>> No functional change intended.
>>>
>>> Link: https://lore.kernel.org/all/ZGZhW%2Fx5OWPmx1qD@google.com/
>>> Link: https://lore.kernel.org/all/ZGeU9sYTPxqNGSqI@google.com/
>>> Signed-off-by: Chao Gao <chao.gao@...el.com>
>>> ---
>>>    arch/x86/kvm/vmx/vmx.c | 25 +++++++++++++++++++++++--
>>>    arch/x86/kvm/x86.c     |  7 ++++---
>>>    arch/x86/kvm/x86.h     |  1 +
>>>    3 files changed, 28 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>>> index 44fb619803b8..8274ef5e89e5 100644
>>> --- a/arch/x86/kvm/vmx/vmx.c
>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>> @@ -309,10 +309,31 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
>>>    	l1tf_vmx_mitigation = l1tf;
>>> -	if (l1tf != VMENTER_L1D_FLUSH_NEVER)
>>> +	/*
>>> +	 * Update static keys and supported arch capabilities according to
>>> +	 * the new mitigation state.
>>> +	 *
>>> +	 * ARCH_CAP_SKIP_VMENTRY_L1DFLUSH is toggled because if we do cache
>>> +	 * flushes for L1 guests on (nested) vmlaunch/vmresume to L2, L1
>>> +	 * guests can skip the flush; if we don't, then L1 guests need to
>>> +	 * do the flush themselves.
>>> +	 *
>>> +	 * Toggling ARCH_CAP_SKIP_VMENTRY_L1DFLUSH may present an inconsistent
>>> +	 * model to the guest, e.g., if userspace isn't careful, a VM can
>>> +	 * have vCPUs with different values for ARCH_CAPABILITIES. There is
>>> +	 * little chance of fixing the issue: to present a consistent model,
>>> +	 * KVM would essentially need to disallow changing the module param
>>> +	 * after VMs/vCPUs have been created, but that would prevent
>>> +	 * userspace from toggling the param while VMs are running, e.g.,
>>> +	 * in response to a new vulnerability.
>>> +	 */
>>> +	if (l1tf != VMENTER_L1D_FLUSH_NEVER) {
>>>    		static_branch_enable(&vmx_l1d_should_flush);
>>> -	else
>>> +		kvm_caps.supported_arch_cap |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
>>> +	} else {
>>>    		static_branch_disable(&vmx_l1d_should_flush);
>>> +		kvm_caps.supported_arch_cap &= ~ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
>>> +	}
>>>    	if (l1tf == VMENTER_L1D_FLUSH_COND)
>>>    		static_branch_enable(&vmx_l1d_flush_cond);
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index c0778ca39650..2408b5f554b7 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -1672,7 +1672,7 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
>>>    {
>>>    	switch (msr->index) {
>>>    	case MSR_IA32_ARCH_CAPABILITIES:
>>> -		msr->data = kvm_get_arch_capabilities();
>>> +		msr->data = kvm_caps.supported_arch_cap;
>>>    		break;
>>>    	case MSR_IA32_PERF_CAPABILITIES:
>>>    		msr->data = kvm_caps.supported_perf_cap;
>>> @@ -7156,7 +7156,7 @@ static void kvm_probe_msr_to_save(u32 msr_index)
>>>    			return;
>>>    		break;
>>>    	case MSR_IA32_TSX_CTRL:
>>> -		if (!(kvm_get_arch_capabilities() & ARCH_CAP_TSX_CTRL_MSR))
>>> +		if (!(kvm_caps.supported_arch_cap & ARCH_CAP_TSX_CTRL_MSR))
>>>    			return;
>>>    		break;
>>>    	default:
>>> @@ -9532,6 +9532,7 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
>>>    		kvm_caps.max_guest_tsc_khz = max;
>>>    	}
>>>    	kvm_caps.default_tsc_scaling_ratio = 1ULL << kvm_caps.tsc_scaling_ratio_frac_bits;
>>> +	kvm_caps.supported_arch_cap = kvm_get_arch_capabilities();
>>>    	kvm_init_msr_lists();
>>>    	return 0;
>>> @@ -11895,7 +11896,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>>>    	if (r)
>>>    		goto free_guest_fpu;
>>> -	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
>>> +	vcpu->arch.arch_capabilities = kvm_caps.supported_arch_cap;
>>>    	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
>>>    	kvm_xen_init_vcpu(vcpu);
>>>    	kvm_vcpu_mtrr_init(vcpu);
>>> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>>> index c544602d07a3..d3e524bcc169 100644
>>> --- a/arch/x86/kvm/x86.h
>>> +++ b/arch/x86/kvm/x86.h
>>> @@ -29,6 +29,7 @@ struct kvm_caps {
>>>    	u64 supported_xcr0;
>>>    	u64 supported_xss;
>>>    	u64 supported_perf_cap;
>>> +	u64 supported_arch_cap;
>>>    };
>>>    void kvm_spurious_fault(void);
>>
