Message-ID: <20251101034132.2qi5b2ysld6fi2cq@desk>
Date: Fri, 31 Oct 2025 20:41:32 -0700
From: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Borislav Petkov <bp@...en8.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Josh Poimboeuf <jpoimboe@...nel.org>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Brendan Jackman <jackmanb@...gle.com>
Subject: Re: [PATCH v4 4/8] KVM: VMX: Handle MMIO Stale Data in VM-Enter
 assembly via ALTERNATIVES_2

On Fri, Oct 31, 2025 at 04:55:37PM -0700, Pawan Gupta wrote:
> On Thu, Oct 30, 2025 at 05:30:36PM -0700, Sean Christopherson wrote:
> ...
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 1f99a98a16a2..61a809790a58 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -71,6 +71,7 @@
> >   * @regs:	unsigned long * (to guest registers)
> >   * @flags:	VMX_RUN_VMRESUME:	use VMRESUME instead of VMLAUNCH
> >   *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
> > + *		VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO: vCPU can access host MMIO
> >   *
> >   * Returns:
> >   *	0 on VM-Exit, 1 on VM-Fail
> > @@ -137,6 +138,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load @regs to RAX. */
> >  	mov (%_ASM_SP), %_ASM_AX
> >  
> > +	/* Stash "clear for MMIO" in EFLAGS.ZF (used below). */
> > +	ALTERNATIVE_2 "",								\
> > +		      __stringify(test $VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO, %ebx), 	\
> > +		      X86_FEATURE_CLEAR_CPU_BUF_MMIO,					\
> > +		      "", X86_FEATURE_CLEAR_CPU_BUF_VM
> > +
> >  	/* Check if vmlaunch or vmresume is needed */
> >  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> >  
> > @@ -161,7 +168,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >  
> >  	/* Clobbers EFLAGS.ZF */
> > -	VM_CLEAR_CPU_BUFFERS
> > +	ALTERNATIVE_2 "",							\
> > +		      __stringify(jz .Lskip_clear_cpu_buffers;			\
> > +				  CLEAR_CPU_BUFFERS_SEQ;			\
> > +				  .Lskip_clear_cpu_buffers:),			\
> > +		      X86_FEATURE_CLEAR_CPU_BUF_MMIO,				\
> > +		      __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM
> 
> Another way to write this could be:
> 
> 	ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",					\
> 		      "jz  .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUF_MMIO,	\
> 		      "",			      X86_FEATURE_CLEAR_CPU_BUF_VM
> 
> 	CLEAR_CPU_BUFFERS_SEQ
> .Lskip_clear_cpu_buffers:
> 
> With this, jmp;verw would show up in the disassembly on unaffected CPUs; I
> don't know how big a problem that is. OTOH, I find this easier to understand.

As far as execution is concerned, it basically boils down to 9 NOPs:

54:	48 8b 00             	mov    (%rax),%rax
				---
57:	90                   	nop
58:	90                   	nop
59:	90                   	nop
5a:	90                   	nop
5b:	90                   	nop
5c:	90                   	nop
5d:	90                   	nop
5e:	90                   	nop
5f:	90                   	nop
				---
60:	73 08                	jae

versus 1 near jump:

54:	48 8b 00             	mov    (%rax),%rax
				---
57:	eb 0b                	jmp    ffffffff81fa1064
59:	90                   	nop
5a:	90                   	nop
5b:	90                   	nop
5c:	90                   	nop
5d:	0f 00 2d dc ef 05 ff 	verw   -0xfa1024(%rip)
				---
64:	73 08                	jae

I can't tell which one is better.
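
FWIW, here is my reading of what each class of CPU ends up executing once
the alternatives are applied. This is only a sketch of the hunks above; it
assumes the X86_FEATURE_CLEAR_CPU_BUF_VM entry takes precedence if both
features somehow end up set:

	/* Neither feature set (unaffected CPU): both sites are NOP padding */
	mov	VCPU_RAX(%_ASM_AX), %_ASM_AX

	/*
	 * X86_FEATURE_CLEAR_CPU_BUF_MMIO: clear only if the vCPU can access
	 * host MMIO.  ZF was set by the earlier
	 * "test $VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO, %ebx" when the flag is
	 * clear, so the VERW sequence is skipped in that case.
	 */
	mov	VCPU_RAX(%_ASM_AX), %_ASM_AX
	jz	.Lskip_clear_cpu_buffers
	CLEAR_CPU_BUFFERS_SEQ
.Lskip_clear_cpu_buffers:

	/* X86_FEATURE_CLEAR_CPU_BUF_VM: unconditionally clear CPU buffers */
	mov	VCPU_RAX(%_ASM_AX), %_ASM_AX
	__CLEAR_CPU_BUFFERS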
