Message-ID: <cb8d8ae8-edf6-42a2-8cdc-3bd7b7e0711e@suse.com>
Date: Thu, 26 Oct 2023 19:14:18 +0300
From: Nikolay Borisov <nik.borisov@...e.com>
To: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Andy Lutomirski <luto@...nel.org>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>, tony.luck@...el.com,
ak@...ux.intel.com, tim.c.chen@...ux.intel.com
Cc: linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
kvm@...r.kernel.org,
Alyssa Milburn <alyssa.milburn@...ux.intel.com>,
Daniel Sneddon <daniel.sneddon@...ux.intel.com>,
antonio.gomez.iglesias@...ux.intel.com
Subject: Re: [PATCH v3 6/6] KVM: VMX: Move VERW closer to VMentry for MDS
mitigation
On 25.10.23 г. 23:53 ч., Pawan Gupta wrote:
> During VMentry VERW is executed to mitigate MDS. After VERW, any memory
> access like register push onto stack may put host data in MDS affected
> CPU buffers. A guest can then use MDS to sample host data.
>
> Although the likelihood of secrets surviving in registers at the current
> VERW callsite is low, it can't be ruled out. Harden the MDS mitigation
> by moving the VERW mitigation later in the VMentry path.
>
> Note that VERW for MMIO Stale Data mitigation is unchanged because of
> the complexity of per-guest conditional VERW which is not easy to handle
> that late in asm with no GPRs available. If the CPU is also affected by
> MDS, VERW is unconditionally executed late in asm regardless of guest
> having MMIO access.
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
> ---
> arch/x86/kvm/vmx/vmenter.S | 3 +++
> arch/x86/kvm/vmx/vmx.c | 10 +++++++---
> 2 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index b3b13ec04bac..139960deb736 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -161,6 +161,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> /* Load guest RAX. This kills the @regs pointer! */
> mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>
> + /* Clobbers EFLAGS.ZF */
> + CLEAR_CPU_BUFFERS
> +
> /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
> jnc .Lvmlaunch
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 24e8694b83fc..2d149589cf5b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7226,13 +7226,17 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>
> guest_state_enter_irqoff();
>
> - /* L1D Flush includes CPU buffer clear to mitigate MDS */
> +	/*
> +	 * L1D Flush includes CPU buffer clear to mitigate MDS, but the VERW
> +	 * mitigation for MDS is done late in VMentry and is still
> +	 * executed in spite of the L1D Flush. This is because an extra VERW
> +	 * should not matter much after the big hammer L1D Flush.
> +	 */
> if (static_branch_unlikely(&vmx_l1d_should_flush))
> vmx_l1d_flush(vcpu);
> - else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> - mds_clear_cpu_buffers();
> else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> kvm_arch_has_assigned_device(vcpu->kvm))
> + /* MMIO mitigation is mutually exclusive with MDS mitigation later in asm */
Mutually exclusive implies that you have one or the other but not both,
whilst I think the right word here is redundant: if the mmio mitigation
is enabled, mds_clear_cpu_buffers() will clear the buffers here and
later they'll be cleared again, no? Alternatively, you could augment
this check so it only executes iff X86_FEATURE_CLEAR_CPU_BUF is not set.
> mds_clear_cpu_buffers();
>
> vmx_disable_fb_clear(vmx);
>