Message-ID: <aR0PXEyP_OKuiQOO@google.com>
Date: Tue, 18 Nov 2025 16:29:16 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Uros Bizjak <ubizjak@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...nel.org>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>, Brendan Jackman <jackmanb@...gle.com>
Subject: Re: [PATCH v5 1/9] KVM: VMX: Use on-stack copy of @flags in __vmx_vcpu_run()
On Fri, Nov 14, 2025, Uros Bizjak wrote:
> On 11/14/25 00:37, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 574159a84ee9..93cf2ca7919a 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -92,7 +92,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Save @vmx for SPEC_CTRL handling */
> > push %_ASM_ARG1
> > - /* Save @flags for SPEC_CTRL handling */
> > + /* Save @flags (used for VMLAUNCH vs. VMRESUME and mitigations). */
> > push %_ASM_ARG3
> > /*
> > @@ -101,9 +101,6 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > */
> > push %_ASM_ARG2
> > - /* Copy @flags to EBX, _ASM_ARG3 is volatile. */
> > - mov %_ASM_ARG3L, %ebx
> > -
> > lea (%_ASM_SP), %_ASM_ARG2
> > call vmx_update_host_rsp
> > @@ -147,9 +144,6 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Load @regs to RAX. */
> > mov (%_ASM_SP), %_ASM_AX
> > - /* Check if vmlaunch or vmresume is needed */
> > - bt $VMX_RUN_VMRESUME_SHIFT, %ebx
> > -
> > /* Load guest registers. Don't clobber flags. */
> > mov VCPU_RCX(%_ASM_AX), %_ASM_CX
> > mov VCPU_RDX(%_ASM_AX), %_ASM_DX
> > @@ -173,8 +167,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Clobbers EFLAGS.ZF */
> > CLEAR_CPU_BUFFERS
> > - /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
> > - jnc .Lvmlaunch
> > + /* Check @flags to see if vmlaunch or vmresume is needed. */
> > + testl $VMX_RUN_VMRESUME, WORD_SIZE(%_ASM_SP)
> > + jz .Lvmlaunch
>
>
> You could use TESTB instead of TESTL in the above code to save 3 bytes
> of code and some memory bandwidth.
>
> The assembler will report unwanted truncation if VMX_RUN_VMRESUME ever
> becomes larger than 255.
Unfortunately, gcc's assembler warning isn't escalated to an error by -Werror,
e.g. with KVM_WERROR=y. And AFAICT clang's integrated assembler doesn't warn at
all and happily generates garbage. E.g. with VMX_RUN_VMRESUME relocated to bit
10, clang generates this without a warning:
33c: f6 44 24 08 00 testb $0x0,0x8(%rsp)
341: 74 08 je 34b <__vmx_vcpu_run+0x9b>
343: 0f 01 c3 vmresume
versus the expected:
33c: f7 44 24 08 00 04 00 testl $0x400,0x8(%rsp)
343: 00
344: 74 08 je 34e <__vmx_vcpu_run+0x9e>
346: 0f 01 c3 vmresume
So for now at least, I'll stick with testl.