Message-ID: <YSa8z5vQKbFuLtew@google.com>
Date: Wed, 25 Aug 2021 21:57:35 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Lai Jiangshan <jiangshanlai+lkml@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
Venkatesh Srinivas <venkateshs@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH 2/2] KVM: Guard cpumask NULL check with
	CONFIG_CPUMASK_OFFSTACK
On Wed, Aug 25, 2021, Lai Jiangshan wrote:
> On Sat, Aug 21, 2021 at 8:09 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > @@ -277,6 +277,14 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
> >  		if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
> >  			continue;
> >
> > +		/*
> > +		 * tmp can be NULL if cpumasks are allocated off stack, as
> > +		 * allocation of the mask is deliberately not fatal and is
> > +		 * handled by falling back to kicking all online CPUs.
> > +		 */
> > +		if (IS_ENABLED(CONFIG_CPUMASK_OFFSTACK) && !tmp)
> > +			continue;
> > +
>
> Hello, Sean
>
> I don't think it is a good idea to reinvent cpumask_available().
Using cpumask_available() is waaaay better, thanks!
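
For context, cpumask_available() compiles to a real check only when
cpumasks live off-stack; when they're on-stack it's unconditionally
true. Roughly (paraphrased from include/linux/cpumask.h, not verbatim):

	#ifdef CONFIG_CPUMASK_OFFSTACK
	/* cpumask_var_t is a pointer; allocation can fail and leave it NULL. */
	typedef struct cpumask *cpumask_var_t;

	static inline bool cpumask_available(cpumask_var_t mask)
	{
		return mask != NULL;
	}
	#else
	/* cpumask_var_t is an on-stack array, always "available". */
	typedef struct cpumask cpumask_var_t[1];

	static inline bool cpumask_available(cpumask_var_t mask)
	{
		return true;
	}
	#endif

so the compiler elides both the check and the fallback entirely when
CONFIG_CPUMASK_OFFSTACK=n.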
Vitaly / Paolo, take this one instead?
From deff3e168c0612a2947d1ef29e488282631a788c Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@...gle.com>
Date: Fri, 20 Aug 2021 13:36:21 -0700
Subject: [PATCH] KVM: Use cpumask_available() to check for NULL cpumask when
kicking vCPUs
Check for a NULL cpumask_var_t when kicking multiple vCPUs via
cpumask_available(), which performs a !NULL check if and only if cpumasks
are configured to be allocated off-stack. This is a mostly meaningless
optimization, e.g. it merely avoids a TEST+Jcc and TEST+CMOV on x86, but
more importantly it helps document that the NULL check is necessary even
though all callers pass in a local variable.
No functional change intended.
Cc: Lai Jiangshan <jiangshanlai@...il.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 virt/kvm/kvm_main.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 786b914db98f..2082aceffbf6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -245,9 +245,13 @@ static void ack_flush(void *_completed)
 {
 }
 
-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
 {
-	if (unlikely(!cpus))
+	const struct cpumask *cpus;
+
+	if (likely(cpumask_available(tmp)))
+		cpus = tmp;
+	else
 		cpus = cpu_online_mask;
 
 	if (cpumask_empty(cpus))
@@ -277,6 +281,14 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 		if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
 			continue;
 
+		/*
+		 * tmp can be "unavailable" if cpumasks are allocated off stack
+		 * as allocation of the mask is deliberately not fatal and is
+		 * handled by falling back to kicking all online CPUs.
+		 */
+		if (!cpumask_available(tmp))
+			continue;
+
 		/*
 		 * Note, the vCPU could get migrated to a different pCPU at any
 		 * point after kvm_request_needs_ipi(), which could result in
@@ -288,7 +300,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 		 * were reading SPTEs _before_ any changes were finalized. See
 		 * kvm_vcpu_kick() for more details on handling requests.
 		 */
-		if (tmp != NULL && kvm_request_needs_ipi(vcpu, req)) {
+		if (kvm_request_needs_ipi(vcpu, req)) {
 			cpu = READ_ONCE(vcpu->cpu);
 			if (cpu != -1 && cpu != me)
 				__cpumask_set_cpu(cpu, tmp);
--
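
For anyone wondering why tmp can be unavailable in the first place: the
callers deliberately ignore cpumask allocation failure. A simplified
sketch of the calling pattern (loosely based on kvm_make_all_cpus_request()
in virt/kvm/kvm_main.c; arguments and details paraphrased, not verbatim):

	bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
	{
		cpumask_var_t cpus;
		bool called;

		/*
		 * Failure is tolerated: with CONFIG_CPUMASK_OFFSTACK=y, cpus
		 * is simply left NULL ("unavailable") and the request path
		 * falls back to kicking all online CPUs.
		 */
		zalloc_cpumask_var(&cpus, GFP_ATOMIC);

		/* Other parameters of the real signature omitted here. */
		called = kvm_make_vcpus_request_mask(kvm, req, NULL, cpus);

		free_cpumask_var(cpus);
		return called;
	}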