Message-ID: <20230120163734.63e62444@imammedo.users.ipa.redhat.com>
Date: Fri, 20 Jan 2023 16:37:34 +0100
From: Igor Mammedov <imammedo@...hat.com>
To: "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Cc: Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
amakhalov@...are.com, ganb@...are.com, ankitja@...are.com,
bordoloih@...are.com, keerthanak@...are.com, blamoreaux@...are.com,
namit@...are.com, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Wyes Karny <wyes.karny@....com>,
Lewis Carroll <lewis.carroll@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>, x86@...nel.org,
VMware PV-Drivers Reviewers <pv-drivers@...are.com>,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
state
On Fri, 20 Jan 2023 05:55:11 -0800
"Srivatsa S. Bhat" <srivatsa@...il.mit.edu> wrote:
> Hi Igor and Thomas,
>
> Thank you for your review!
>
> On 1/19/23 1:12 PM, Thomas Gleixner wrote:
> > On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:
> >> "Srivatsa S. Bhat" <srivatsa@...il.mit.edu> wrote:
> >>> Fix this by preventing the use of mwait idle state in the vCPU offline
> >>> play_dead() path for any hypervisor, even if mwait support is
> >>> available.
> >>
> >> if mwait is enabled, it's very likely the guest has cpuidle
> >> enabled and is using the same mwait as well. So exiting early from
> >> mwait_play_dead() might just punt the workflow further down:
> >>   native_play_dead()
> >>     ...
> >>     mwait_play_dead();
> >>     if (cpuidle_play_dead())  <- possible mwait here
> >>         hlt_play_dead();
> >>
> >> and it will end up in mwait again; only if that fails will it
> >> go the HLT route and maybe transition to the VMM.
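
For reference, the generic dispatch that can land back in mwait looks
roughly like this (a simplified sketch from memory of
drivers/cpuidle/cpuidle.c of that era, not a verbatim quote):
cpuidle_play_dead() walks the registered idle states from deepest to
shallowest and enters the first one that provides an ->enter_dead()
callback, and depending on the cpuidle driver that callback may well
park the CPU via MWAIT again:

int cpuidle_play_dead(void)
{
        struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
        struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
        int i;

        if (!drv)
                return -ENODEV;

        /* Enter the deepest idle state that can host a dead CPU */
        for (i = drv->state_count - 1; i >= 0; i--) {
                if (drv->states[i].enter_dead)
                        return drv->states[i].enter_dead(dev, i);
        }

        return -ENODEV;
}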
> >
> > Good point.
> >
> >> Instead of a workaround on the guest side, shouldn't the hypervisor
> >> force a VMEXIT on the vCPU being unplugged when it's actually
> >> hot-unplugging that vCPU? (ex: QEMU kicks the vCPU out of guest
> >> context when it is removing the vCPU, among other things)
> >
> > For a pure guest side CPU unplug operation:
> >
> > guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online
> >
> > the hypervisor is not involved at all. The vCPU is not removed in that
> > case.
> >
>
> Agreed, and this is indeed the scenario I was targeting with this patch,
> as opposed to vCPU removal from the host side. I'll add this clarification
> to the commit message.
The commit message explicitly said:
"which prevents the hypervisor from running other vCPUs or workloads on the
corresponding pCPU."
and that implies unplug on the hypervisor side as well.
Why? Because when the hypervisor exposes mwait to the guest, it has to
reserve/pin a pCPU for each present vCPU. And you can safely run other
VMs/workloads on that pCPU only once it is no longer possible for it to be
reused by the VM where it was used originally.
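
(For concreteness, "reserve/pin" above means the usual host-side
affinity setup. Below is a minimal sketch assuming a hypervisor that
backs each vCPU with a host thread; pin_vcpu_thread() is a hypothetical
name for illustration, not QEMU's actual code:)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Hypothetical helper: restrict the host thread backing a vCPU to a
 * single pCPU, which is effectively what exposing mwait to the guest
 * demands. E.g. pin_vcpu_thread(vm1_vcpu2_thread, 2) for step 1 of
 * the scenario below. */
static int pin_vcpu_thread(pthread_t vcpu_thread, int pcpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(pcpu, &set);
        return pthread_setaffinity_np(vcpu_thread, sizeof(set), &set);
}
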
Now consider the following worst (and most likely) case without unplug
on the hypervisor side:

1. vm1mwait: pin pCPU2 to vCPU2
2. vm1mwait: guest$ echo 0 >/sys/devices/system/cpu/cpu2/online
             -> HLT -> VMEXIT
--
3. vm2mwait: pin pCPU2 to vCPUx and start the VM
4. vm2mwait: guest OS onlines vCPUx and starts using it, incl.
             going into idle => mwait state
--
5. vm1mwait: it still thinks that vCPU2 is present, so it can rightfully do:
             guest$ echo 1 >/sys/devices/system/cpu/cpu2/online
--
6.1 best case: vm1mwait's online fails after a timeout
6.2 worse case: vm2mwait does a VMEXIT on vCPUx around the time-frame when
    vm1mwait onlines vCPU2; the online may succeed, and then vm2mwait's
    vCPUx will be stuck (possibly indefinitely) until, for some reason, a
    VMEXIT happens on vm1mwait's vCPU2 _and_ the host decides to schedule
    vCPUx on pCPU2, which in turn would leave vm1mwait stuck on vCPU2.
So either way the behavior is broken.
And if there is no intention to unplug the vCPU on the hypervisor side,
then a VMEXIT on play_dead is not really necessary (mwait is better
than HLT), since the hypervisor can't safely reuse the pCPU elsewhere
anyway and the vCPU goes into deep sleep within guest context.
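
(To illustrate that last point: the core of mwait_play_dead() is
roughly the following, simplified from arch/x86/kernel/smpboot.c of
that era; mwait_ptr and eax are set up earlier in that function. The
offlined CPU parks in this loop entirely in guest context, so no
VMEXIT ever happens:)

        wbinvd();

        while (1) {
                /* Arm the monitor on this CPU's flag word and drop
                 * into the deepest MWAIT C-state; only a write to
                 * *mwait_ptr (e.g. a later CPU online) wakes it. */
                __monitor(mwait_ptr, 0, 0);
                mb();
                __mwait(eax, 0);

                cond_wakeup_cpu0();
        }
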
PS:
The only case where doing HLT/VMEXIT on play_dead might work out would
be if the new workload were neither pinned to the same pCPU nor using
mwait (i.e. the host can migrate it elsewhere and schedule vCPU2 back
on pCPU2).
> > So to ensure that this ends up in HLT something like the below is
> > required.
> >
> > Note, the removal of the comment after mwait_play_dead() is intentional
> > because the comment is completely bogus. Not having MWAIT is not a
> > failure. But that wants to be a separate patch.
> >
>
> Sounds good, will do and post a new version.
>
> Thank you!
>
> Regards,
> Srivatsa
> VMware Photon OS
>
>
> > Thanks,
> >
> > tglx
> > ---
> > diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> > index 55cad72715d9..3f1f20f71ec5 100644
> > --- a/arch/x86/kernel/smpboot.c
> > +++ b/arch/x86/kernel/smpboot.c
> > @@ -1833,7 +1833,10 @@ void native_play_dead(void)
> >          play_dead_common();
> >          tboot_shutdown(TB_SHUTDOWN_WFS);
> >  
> > -        mwait_play_dead(); /* Only returns on failure */
> > +        if (this_cpu_has(X86_FEATURE_HYPERVISOR))
> > +                hlt_play_dead();
> > +
> > +        mwait_play_dead();
> >          if (cpuidle_play_dead())
> >                  hlt_play_dead();
> >  }
> >
> >
> >
> >
>