Message-ID: <CAFULd4argBdTBM7m7U1Q-RMJdyYtAfOD08ukGn+JsT-v4Z6NrA@mail.gmail.com>
Date: Tue, 15 Apr 2025 09:42:03 +0200
From: Uros Bizjak <ubizjak@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 2/2] KVM: VMX: Use LEAVE in vmx_do_interrupt_irqoff()
On Tue, Apr 15, 2025 at 3:05 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Mon, Apr 14, 2025, Uros Bizjak wrote:
> > Micro-optimize vmx_do_interrupt_irqoff() by substituting the
> > MOV %RBP,%RSP; POP %RBP instruction sequence with the equivalent
> > LEAVE instruction. GCC does this by default for generic tuning
> > and for all modern processors:
>
> Out of curiosity, is LEAVE actually a performance win, or is the benefit essentially
> just the few code bytes saved?
It is hard to say for out-of-order cores, especially once the stack
engine is thrown into the mix (these two instructions, plus the
following RET, all update %rsp).
The pragmatic solution was to simply follow the compiler's choice,
which is based on the tuning below.
> > DEF_TUNE (X86_TUNE_USE_LEAVE, "use_leave",
> > m_386 | m_CORE_ALL | m_K6_GEODE | m_AMD_MULTIPLE | m_ZHAOXIN
> > | m_TREMONT | m_CORE_HYBRID | m_CORE_ATOM | m_GENERIC)
The tuning is updated whenever a new target is introduced to the
compiler, based on measurements by the processor manufacturer. The
above covers the vast majority of recent processors (plus generic
tuning), so I guess we can't go wrong by following suit. OTOH, any
performance difference will be negligible.
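
FWIW, the effect is easy to see from user space. A minimal, untested
sketch (the function is made up; __builtin_alloca() is only there to
force a frame pointer so the epilogue choice becomes visible, and the
exact codegen of course varies by GCC version):

    /*
     * Compile with e.g.: gcc -O2 -mtune=generic -S epilogue.c
     *
     * With X86_TUNE_USE_LEAVE in effect, the epilogue of sum() is
     *
     *	leave			# %rsp = %rbp, then pop %rbp
     *	ret			# implicit %rsp += 8
     *
     * and otherwise
     *
     *	mov	%rbp, %rsp	# explicit %rsp write
     *	pop	%rbp		# implicit %rsp += 8
     *	ret			# implicit %rsp += 8
     */
    int sum(const int *src, int n)
    {
    	int *tmp = __builtin_alloca(n * sizeof(*tmp));
    	int i, s = 0;

    	for (i = 0; i < n; i++)
    		s += (tmp[i] = src[i]);
    	return s;
    }

Compiling the same function with -S and different -mtune values shows
which targets get LEAVE and which get the MOV/POP pair.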
> > The new code also saves a couple of bytes, from:
> >
> > 27: 48 89 ec mov %rbp,%rsp
> > 2a: 5d pop %rbp
> >
> > to:
> >
> > 27: c9 leave
Thanks,
Uros.