Message-Id: <1477642287-24104-2-git-send-email-xinhui.pan@linux.vnet.ibm.com>
Date: Fri, 28 Oct 2016 04:11:17 -0400
From: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
virtualization@...ts.linux-foundation.org,
linux-s390@...r.kernel.org, xen-devel-request@...ts.xenproject.org,
kvm@...r.kernel.org, xen-devel@...ts.xenproject.org, x86@...nel.org
Cc: benh@...nel.crashing.org, paulus@...ba.org, mpe@...erman.id.au,
mingo@...hat.com, peterz@...radead.org, paulmck@...ux.vnet.ibm.com,
will.deacon@....com, kernellwp@...il.com, jgross@...e.com,
pbonzini@...hat.com, bsingharora@...il.com, boqun.feng@...il.com,
borntraeger@...ibm.com, rkrcmar@...hat.com,
David.Laight@...LAB.COM, Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
Subject: [PATCH v6 01/11] kernel/sched: introduce vcpu preempted check interface

This patch introduces an interface to help fix the lock holder
preemption issue. Kernel users can call bool vcpu_is_preempted(int cpu)
to detect whether a given vCPU is preempted or not.

The default implementation is a macro that evaluates to false, so the
compiler can optimize it out when the arch does not provide such a vCPU
preemption check.
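
As a minimal sketch (not part of this patch), an optimistic spin loop
could use the interface roughly like this; the lock type and the
try_lock() and lock_owner_cpu() helpers here are hypothetical:

	/*
	 * Spin only while the owner's vCPU is actually running on a
	 * host CPU. If the host has preempted it, spinning just burns
	 * cycles, so report failure and let the caller block instead.
	 */
	while (!try_lock(lock)) {
		if (vcpu_is_preempted(lock_owner_cpu(lock)))
			return false;
		cpu_relax();
	}
	return true;
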
Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@...ibm.com>
Tested-by: Juergen Gross <jgross@...e.com>
---
 include/linux/sched.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3506,6 +3506,18 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+/*
+ * In order to deal with various lock holder preemption issues provide an
+ * interface to see if a vCPU is currently running or not.
+ *
+ * This allows us to terminate optimistic spin loops and block, analogous to
+ * the native optimistic spin heuristic of testing if the lock owner task is
+ * running or not.
+ */
+#ifndef vcpu_is_preempted
+#define vcpu_is_preempted(cpu) false
+#endif
+
 extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 
--
2.4.11
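
For reference, a minimal sketch of how an architecture could override
the default; the arch header path and the arch_vcpu_yielded() helper
are hypothetical:

	/* hypothetical arch/xyz/include/asm/spinlock.h */
	#define vcpu_is_preempted vcpu_is_preempted
	static inline bool vcpu_is_preempted(int cpu)
	{
		/* e.g. test a preemption hint shared by the hypervisor */
		return arch_vcpu_yielded(cpu);
	}

Because of the #ifndef guard added above, any definition the arch
provides takes precedence over the default false.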