Message-ID: <CAFULd4ZOtj7WZkSSKqLjxCJ-yBr20AYrqzCpxj2K_=XmrX1QZg@mail.gmail.com>
Date: Tue, 19 Aug 2025 18:24:45 +0200
From: Uros Bizjak <ubizjak@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...nel.org>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH] KVM: VMX: Micro-optimize SPEC_CTRL handling in __vmx_vcpu_run()
On Tue, Aug 19, 2025 at 5:00 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Thu, Aug 07, 2025, Uros Bizjak wrote:
> > Use memory operand in CMP instruction to avoid usage of a
> > temporary register. Use %eax register to hold VMX_spec_ctrl
> > and use it directly in the follow-up WRMSR.
> >
> > The new code saves a few bytes by removing two MOV insns, from:
> >
> > 2d: 48 8b 7c 24 10 mov 0x10(%rsp),%rdi
> > 32: 8b bf 48 18 00 00 mov 0x1848(%rdi),%edi
> > 38: 65 8b 35 00 00 00 00 mov %gs:0x0(%rip),%esi
> > 3f: 39 fe cmp %edi,%esi
> > 41: 74 0b je 4e <...>
> > 43: b9 48 00 00 00 mov $0x48,%ecx
> > 48: 31 d2 xor %edx,%edx
> > 4a: 89 f8 mov %edi,%eax
> > 4c: 0f 30 wrmsr
> >
> > to:
> >
> > 2d: 48 8b 7c 24 10 mov 0x10(%rsp),%rdi
> > 32: 8b 87 48 18 00 00 mov 0x1848(%rdi),%eax
> > 38: 65 3b 05 00 00 00 00 cmp %gs:0x0(%rip),%eax
> > 3f: 74 09 je 4a <...>
> > 41: b9 48 00 00 00 mov $0x48,%ecx
> > 46: 31 d2 xor %edx,%edx
> > 48: 0f 30 wrmsr
> >
> > No functional change intended.
> >
> > Signed-off-by: Uros Bizjak <ubizjak@...il.com>
> > Cc: Sean Christopherson <seanjc@...gle.com>
> > Cc: Paolo Bonzini <pbonzini@...hat.com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Ingo Molnar <mingo@...nel.org>
> > Cc: Borislav Petkov <bp@...en8.de>
> > Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> > Cc: "H. Peter Anvin" <hpa@...or.com>
> > ---
> > arch/x86/kvm/vmx/vmenter.S | 6 ++----
> > 1 file changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 0a6cf5bff2aa..c65de5de92ab 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -118,13 +118,11 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > * and vmentry.
> > */
> > mov 2*WORD_SIZE(%_ASM_SP), %_ASM_DI
> > - movl VMX_spec_ctrl(%_ASM_DI), %edi
> > - movl PER_CPU_VAR(x86_spec_ctrl_current), %esi
> > - cmp %edi, %esi
> > + movl VMX_spec_ctrl(%_ASM_DI), %eax
> > + cmp PER_CPU_VAR(x86_spec_ctrl_current), %eax
>
> Huh. There's a pre-existing bug lurking here, and in the SVM code. SPEC_CTRL
> is an MSR, i.e. a 64-bit value, but the assembly code assumes bits 63:32 are always
> zero.
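
(Spelled out, the pattern Sean is pointing at only ever touches the low
32 bits; a sketch, assuming vmx->spec_ctrl and x86_spec_ctrl_current
are 64-bit fields on the C side:

    movl VMX_spec_ctrl(%_ASM_DI), %eax              # loads bits 31:0 only
    cmp  PER_CPU_VAR(x86_spec_ctrl_current), %eax   # 32-bit compare
    ...
    xor  %edx, %edx                                 # WRMSR writes %edx:%eax,
    wrmsr                                           # high half forced to 0

so bits 63:32 of either field would be neither compared nor written.)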
But the MSBs are zero; the MSR index is defined in arch/x86/include/asm/msr-index.h as:
#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
and "movl $..., %eax" zero-extends the value to full 64-bit width.
FWIW, MSR_IA32_PRED_CMD is handled in the same way in arch/x86/entry/entry.S:
    movl $MSR_IA32_PRED_CMD, %ecx
So the insn is OK when the MSR index fits in 32 bits. The movl insn is a bit smaller than the movq form, too:
    movl $0x01, %ecx
    movq $0x01, %rcx
assembles to:
    0:  b9 01 00 00 00          mov    $0x1,%ecx
    5:  48 c7 c1 01 00 00 00    mov    $0x1,%rcx
Rest assured that the assembler checks the immediate when 32-bit
registers are involved, e.g.:
    mov.s: Assembler messages:
    mov.s:1: Warning: 0xaaaaaaaaaa shortened to 0xaaaaaaaa
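
(For reference, the warning above comes from assembling a one-liner
along these lines; mov.s is just a scratch file name:

    movl $0xaaaaaaaaaa, %ecx

The 40-bit immediate does not fit in 32 bits, so gas shortens it to
0xaaaaaaaa and warns instead of erroring out.)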
Uros.