Message-ID: <ZwVuOcRujpzo9yTb@google.com>
Date: Tue, 8 Oct 2024 10:39:05 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Manali Shukla <manali.shukla@....com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
nikunj@....com
Subject: Re: [PATCH 5/5] KVM: x86: Add fastpath handling of HLT VM-Exits

On Tue, Oct 08, 2024, Manali Shukla wrote:
> Hi Sean,
>
> On 8/3/2024 1:21 AM, Sean Christopherson wrote:
> > Add a fastpath for HLT VM-Exits by immediately re-entering the guest if
> > it has a pending wake event. When virtual interrupt delivery is enabled,
> > i.e. when KVM doesn't need to manually inject interrupts, this allows KVM
> > to stay in the fastpath run loop when a vIRQ arrives between the guest
> > doing CLI and STI;HLT. Without AMD's Idle HLT-intercept support, the CPU
> > generates a HLT VM-Exit even though KVM will immediately resume the guest.
> >
> > Note, on bare metal, it's relatively uncommon for a modern guest kernel to
> > actually trigger this scenario, as the window between the guest checking
> > for a wake event and committing to HLT is quite small. But in a nested
> > environment, the timings change significantly, e.g. rudimentary testing
> > showed that ~50% of HLT exits where HLT-polling was successful would be
> > serviced by this fastpath, i.e. ~50% of the time that a nested vCPU gets
> > a wake event before KVM schedules out the vCPU, the wake event was pending
> > even before the VM-Exit.
> >
>
> Could you please help me with the test case that resulted in an approximately
> 50% improvement for the nested scenario?

It's not a 50% improvement; it was simply an observation that ~50% of the time
_that HLT-polling is successful_, the wake event was already pending when the
VM-Exit occurred. That is _wildly_ different than a "50% improvement".

As for the test case, it's simply running a lightly loaded VM as L2.
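
For reference, the fastpath itself is tiny. A minimal sketch of the idea in C
(illustrative only, not necessarily the exact code in the posted patch; the
helpers and fastpath return codes shown here are assumptions based on existing
KVM x86 code):

static fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu)
{
        int ret;

        kvm_vcpu_srcu_read_lock(vcpu);
        ret = kvm_emulate_halt(vcpu);
        kvm_vcpu_srcu_read_unlock(vcpu);

        /* HLT emulation wants to exit to userspace; take the slow path. */
        if (!ret)
                return EXIT_FASTPATH_EXIT_USERSPACE;

        /*
         * If the vCPU is still runnable, i.e. a wake event was already
         * pending when the HLT VM-Exit occurred, stay in the fastpath
         * run loop and immediately re-enter the guest.
         */
        if (kvm_vcpu_running(vcpu))
                return EXIT_FASTPATH_REENTER_GUEST;

        return EXIT_FASTPATH_EXIT_HANDLED;
}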