Message-ID: <ZGc1/lwk5BAdRyOi@chao-email>
Date:   Fri, 19 May 2023 16:40:30 +0800
From:   Chao Gao <chao.gao@...el.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        <pawan.kumar.gupta@...ux.intel.com>
CC:     Xiaoyao Li <xiaoyao.li@...el.com>, <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
        Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH] KVM: x86: Track supported ARCH_CAPABILITIES in kvm_caps

+Pawan, could you share your thoughts on the FB_CLEAR questions below?

On Thu, May 18, 2023 at 10:33:15AM -0700, Sean Christopherson wrote:
>+Jim
>
>On Thu, May 18, 2023, Xiaoyao Li wrote:
>> On 5/6/2023 11:04 AM, Chao Gao wrote:
>> > to avoid computing the supported value at runtime every time.
>> > 
>> > No functional change intended.
>> 
>> the value of kvm_get_arch_capabilities() can be changed due to
>> 
>> 	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
>> 		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
>> 
>> and l1tf_vmx_mitigation can be changed at runtime via the vmentry_l1d_flush
>> module param.

Thanks for pointing this out. I noticed l1tf_vmx_mitigation and checked whether
it could change at runtime, but clearly my analysis was wrong.

>
>Nice catch!
>
>> We need a detailed analysis showing that in no real case can the
>> ARCH_CAP_SKIP_VMENTRY_L1DFLUSH bit change at runtime.
>
>No, the fact that it _can_ be modified by a writable module param is enough to
>make this patch buggy.
>
>I do like snapshotting and then updating the value, even though there's likely no
>meaningful performance benefit, as that would provide a place to document that
>the "supported" value is dynamic.  Though the fact that it's dynamic is arguably a bug
>in its own right, e.g. if userspace isn't careful, a VM can have vCPUs with different
>values for ARCH_CAPABILITIES.  But fixing that is probably a fool's errand.  So

I am not sure fixing it is a fool's errand; leaving it as-is causes another problem:

KVM enables the L1D flush and creates a guest, so ARCH_CAP_SKIP_VMENTRY_L1DFLUSH
is exposed to that guest. If the L1D flush is later disabled at runtime in KVM,
the guest doesn't see the change and won't do an L1D flush when entering L2. L2
may then use L1TF to leak secrets from L1.

>I vote to snapshot the value and toggle the ARCH_CAP_SKIP_VMENTRY_L1DFLUSH bit
>when l1tf_vmx_mitigation is modified.

Sure. Will do.
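
Roughly what I have in mind (just a sketch, untested): snapshot the supported
value once at setup and toggle the L1D-flush bit whenever the module param
changes. kvm_caps.supported_arch_capabilities is the new field this patch
would add, and the call from vmentry_l1d_flush_set() is assumed, not existing
code:

	static void kvm_update_arch_capabilities(void)
	{
		u64 data = kvm_caps.supported_arch_capabilities;

		if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
			data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
		else
			data &= ~ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;

		kvm_caps.supported_arch_capabilities = data;
	}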

>
>On a somewhat related topic, what in the absolute #$#$ is going on with FB_CLEAR_DIS!?!?
>I made the mistake of digging into why KVM doesn't advertise ARCH_CAP_FB_CLEAR_CTRL...
>
>  1. I see *nothing* in commit 027bbb884be0 ("KVM: x86/speculation: Disable Fill
>     buffer clear within guests") that justifies 1x RDMSR and 2x WRMSR on every
>     entry+exit.
>
>  2. I'm pretty sure conditioning mmio_stale_data_clear on kvm_arch_has_assigned_device()
>     is a bug.  AIUI, the vulnerability applies to _any_ MMIO accesses.  Assigning
>     a device is necessary to let the device DMA into the guest, but it's not
>     necessary to let the guest access MMIO addresses, that's done purely via
>     memslots.
>
>  3. Irrespective of whether or not there is a performance benefit, toggling the
>     MSR on every entry+exit is completely unnecessary if KVM won't do VERW before
>     VM-Enter, i.e. if (!mds_user_clear && !mmio_stale_data_clear), then the
>     toggling can be done in vmx_prepare_switch_to_{guest,host}().  This probably
>     isn't worth pursuing though, as #4 below is more likely, especially since
>     X86_BUG_MSBDS_ONLY is limited to Atom (and MIC, lol) CPUs.
>
>  4. If the host will _never_ do VERW, i.e. #3 + !X86_BUG_MSBDS_ONLY, then
>     KVM just needs to context switch the MSR between guests since the value that's
>     loaded while running in the host is irrelevant.  E.g. use a percpu cache to
>     track the current value.

Agreed.

It looks like VERW can also be used at CPL3. Should we restore the MSR on
returning to userspace, i.e., leverage the user-return (uret) MSR mechanism?
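
Something along these lines, if the uret route makes sense (sketch only;
fb_clear_uret_slot is illustrative and I haven't checked whether deferring
the restore of MCU_OPT_CTRL to the return to userspace is actually safe):

	/* At hardware setup: register MCU_OPT_CTRL as a user-return MSR. */
	fb_clear_uret_slot = kvm_add_user_return_msr(MSR_IA32_MCU_OPT_CTRL);

	/*
	 * Before VM-Enter, when FB clearing should be suppressed for this
	 * guest: write the guest value now, and let the uret machinery
	 * restore the host value lazily on the next return to userspace.
	 */
	kvm_set_user_return_msr(fb_clear_uret_slot,
				host_mcu_opt_ctrl | FB_CLEAR_DIS, -1ull);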

>
>  5. MSR_IA32_MCU_OPT_CTRL is not modified by the host after a CPU is brought up,
>     i.e. the host's desired value is effectively static post-boot, and barring
>     a buggy configuration (running KVM as a guest), the boot CPU's value will be
>     the same as every other CPU.
>
>  6. Performance aside, KVM should not be speculating (ha!) on what the guest
>     will and will not do, and should instead honor whatever behavior is presented
>     to the guest.  If the guest CPU model indicates that VERW flushes buffers,
>     then KVM damn well needs to let VERW flush buffers.
>
>  7. Why on earth did Intel provide a knob that affects both the host and guest,
>     since AFAICT the intent of the MSR is purely to suppress FB clearing for an
>     unsuspecting (or misconfigured?) guest!?!?!

I doubt it is purely for guests. Is there any chance a userspace application
uses VERW?

And I don't think the original patch is about a misconfigured guest. IIUC, it is
about migrating a guest from a vulnerable host to a non-vulnerable host.

>
>FWIW, this trainwreck is another reason why I'm not going to look at the proposed
>"Intel IA32_SPEC_CTRL Virtualization" crud until external forces dictate that I
>do so. I have zero confidence that a paravirt interface defined by hardware
>vendors to fiddle with mitigations will be sane, flexible, and extensible.
>
>Anyways, can someone from Intel do some basic performance analysis to justify
>doing RDMSR + WRMSRx2 on every transition?  Unless someone provides numbers that

Pawan, could you help answer this question?

>show a clear, meaningful benefit to the aggressive toggling, I'm inclined to have
>KVM do #4, e.g. end up with something like:
>
>	/* L1D Flush includes CPU buffer clear to mitigate MDS */
>	if (static_branch_unlikely(&vmx_l1d_should_flush)) {
>		vmx_l1d_flush(vcpu);
>	} else if (static_branch_unlikely(&mds_user_clear) ||
>		   static_branch_unlikely(&mmio_stale_data_clear)) {
>		mds_clear_cpu_buffers();
>	} else if (static_branch_unlikely(&kvm_toggle_fb_clear)) {
>		bool enable_fb_clear = !!(vcpu->arch.arch_capabilities & ARCH_CAP_FB_CLEAR);
>
>		if (this_cpu_read(kvm_fb_clear_enabled) != enable_fb_clear) {
>			u64 mcu_opt_ctrl = host_mcu_opt_ctrl;
>
>			if (enable_fb_clear)
>				mcu_opt_ctrl &= ~FB_CLEAR_DIS;
>			else
>				mcu_opt_ctrl |= FB_CLEAR_DIS;
>			native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_opt_ctrl);
>			this_cpu_write(kvm_fb_clear_enabled, enable_fb_clear);
>		}
>	}
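
The supporting state the snippet assumes could look roughly like this (again
a sketch only; all three names are taken from your snippet and don't exist in
KVM today):

	/* Host value of MSR_IA32_MCU_OPT_CTRL, read once at hardware setup. */
	static u64 host_mcu_opt_ctrl;

	/* Value currently programmed on this CPU, to skip redundant WRMSRs. */
	static DEFINE_PER_CPU(bool, kvm_fb_clear_enabled);

	/* Enabled iff the CPU enumerates FB_CLEAR_CTRL (and #3/#4 hold). */
	static DEFINE_STATIC_KEY_FALSE(kvm_toggle_fb_clear);

	static void kvm_init_fb_clear_ctrl(void)
	{
		u64 arch_cap = 0;

		if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
			rdmsrl(MSR_IA32_ARCH_CAPABILITIES, arch_cap);

		if (!(arch_cap & ARCH_CAP_FB_CLEAR_CTRL))
			return;

		rdmsrl(MSR_IA32_MCU_OPT_CTRL, host_mcu_opt_ctrl);
		static_branch_enable(&kvm_toggle_fb_clear);
	}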
