Message-ID: <39a67e7d-e5a8-734a-bfd7-ef147504950c@suse.com>
Date: Fri, 23 Sep 2022 12:52:47 +0200
From: Juergen Gross <jgross@...e.com>
To: Peter Zijlstra <peterz@...radead.org>,
"Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Cc: linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Alexey Makhalov <amakhalov@...are.com>, x86@...nel.org,
VMware PV-Drivers Reviewers <pv-drivers@...are.com>,
ganb@...are.com, sturlapati@...are.com, bordoloih@...are.com,
ankitja@...are.com, keerthanak@...are.com, namit@...are.com,
srivatsab@...are.com
Subject: Re: [PATCH] smp/hotplug, x86/vmware: Put offline vCPUs in halt
instead of mwait
On 23.09.22 09:05, Peter Zijlstra wrote:
> On Thu, Jul 21, 2022 at 01:44:33PM -0700, Srivatsa S. Bhat wrote:
>> From: Srivatsa S. Bhat (VMware) <srivatsa@...il.mit.edu>
>>
>> VMware ESXi allows enabling a passthru mwait CPU-idle state in the
>> guest using the following VMX option:
>>
>> monitor_control.mwait_in_guest = "TRUE"
>>
>> This lets a vCPU in mwait remain in guest context (instead of
>> yielding to the hypervisor via a VMEXIT), which helps speed up
>> wakeups from idle.
>>
>> However, this runs into problems with CPU hotplug, because the Linux
>> CPU offline path prefers to put the vCPU-to-be-offlined in mwait
>> state whenever mwait is available. As a result, since a vCPU in mwait
>> remains in guest context and does not yield to the hypervisor, an
>> offline vCPU *appears* to be 100% busy as viewed from ESXi, which
>> prevents the hypervisor from running other vCPUs or workloads on the
>> corresponding pCPU (particularly when vCPU - pCPU mappings are
>> statically defined by the user).
>
> I would hope vCPU pinning is a mandatory thing when MWAIT passthrough
> is set?
>
>> [ Note that such a vCPU is not
>> actually busy spinning though; it remains in mwait idle state in the
>> guest ].
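
(For reference, the mwait-based offline loop is roughly the sketch
below, simplified from mwait_play_dead() in arch/x86/kernel/smpboot.c;
the real code also sizes the monitor range and picks the deepest
C-state hint via CPUID, which is omitted here, and the helper name is
illustrative only. With mwait_in_guest enabled the MWAIT never traps to
the hypervisor, so the offlined vCPU sits in this loop entirely in
guest context:

	/* Simplified sketch, not the exact kernel code. */
	static void mwait_dead_loop(void *mwait_ptr, unsigned long cstate_hint)
	{
		for (;;) {
			mb();
			clflush(mwait_ptr);		/* flush, so MONITOR observes memory */
			mb();
			__monitor(mwait_ptr, 0, 0);	/* arm the monitor on this line */
			mb();
			__mwait(cstate_hint, 0);	/* idle until the line is written */
		}
	}
)
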
>>
>> Fix this by overriding the CPU offline play_dead() callback for the
>> VMware hypervisor to put the CPU in halt state (which actually
>> yields to the hypervisor), even if mwait support is available.
>>
>> Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@...il.mit.edu>
>> ---
>
>> +static void vmware_play_dead(void)
>> +{
>> +	play_dead_common();
>> +	tboot_shutdown(TB_SHUTDOWN_WFS);
>> +
>> +	/*
>> +	 * Put the vCPU going offline in halt instead of mwait (even
>> +	 * if mwait support is available), to make sure that the
>> +	 * offline vCPU yields to the hypervisor (which may not happen
>> +	 * with mwait, for example, if the guest's VMX is configured
>> +	 * to retain the vCPU in guest context upon mwait).
>> +	 */
>> +	hlt_play_dead();
>> +}
>> #endif
>>
>> static __init int activate_jump_labels(void)
>> @@ -349,6 +365,7 @@ static void __init vmware_paravirt_ops_setup(void)
>>  #ifdef CONFIG_SMP
>>  	smp_ops.smp_prepare_boot_cpu =
>>  					vmware_smp_prepare_boot_cpu;
>> +	smp_ops.play_dead = vmware_play_dead;
>>  	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
>>  				      "x86/vmware:online",
>>  				      vmware_cpu_online,
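
(For context: smp_ops.play_dead defaults to native_play_dead(), so the
override above replaces roughly the following sequence, paraphrased
from arch/x86/kernel/smpboot.c around this kernel version; details may
differ between releases:

	void native_play_dead(void)
	{
		play_dead_common();
		tboot_shutdown(TB_SHUTDOWN_WFS);

		mwait_play_dead();		/* preferred; returns only if mwait is unusable */
		if (cpuidle_play_dead())	/* else try a cpuidle driver's dead state */
			hlt_play_dead();	/* last resort: loop in HLT */
	}

i.e. mwait wins whenever it is available, which is exactly what the
VMware override avoids.)
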
>
> No real objection here; but would not something like the below fix the
> problem more generally? I'm thinking MWAIT passthrough for *any*
> hypervisor doesn't want play_dead to use it.
>
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index f24227bc3220..166cb3aaca8a 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1759,6 +1759,8 @@ static inline void mwait_play_dead(void)
>  		return;
>  	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
>  		return;
> +	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return;
>  	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
>  		return;
>
With my Xen hat on I agree with this approach.
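
(That single check covers every hypervisor because CPUID.01H:ECX bit 31
is the conventional "hypervisor present" bit, reserved as 0 on bare
metal, which the kernel exposes as the synthetic X86_FEATURE_HYPERVISOR
flag. As an illustrative userspace sketch of the same test, not kernel
code:

	#include <cpuid.h>
	#include <stdbool.h>

	static bool running_under_hypervisor(void)
	{
		unsigned int eax, ebx, ecx, edx;

		if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
			return false;
		/* CPUID.01H:ECX[31]: set under KVM, ESXi, Hyper-V, Xen HVM, ... */
		return !!(ecx & (1u << 31));
	}
)
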
Juergen