Date:	Mon, 27 Jun 2016 13:41:29 -0400
From:	Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
To:	linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc:	paulmck@...ux.vnet.ibm.com, peterz@...radead.org, mingo@...hat.com,
	mpe@...erman.id.au, paulus@...ba.org, benh@...nel.crashing.org,
	Waiman.Long@....com, boqun.feng@...il.com, will.deacon@....com,
	dave@...olabs.net, Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
Subject: [PATCH 2/3] powerpc/spinlock: support vcpu preempted check

This is to fix some lock holder preemption issues. Spinning on a
vcpu which has been preempted is meaningless: the lock holder cannot
make progress until it runs again.

The kernel needs such an interface, so let's support it.
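
As a rough illustration (not part of this patch, and using hypothetical
names): a guest-side spin-wait loop could consume such an interface as
sketched below. demo_lock and demo_spin() are made up for this example;
vcpu_is_preempted() is assumed to be a generic wrapper that falls back
to "false" on architectures without arch_vcpu_is_preempted().

	/* Hypothetical lock with an owner-cpu field, illustration only. */
	struct demo_lock {
		int locked;
		int owner_cpu;
	};

	static void demo_spin(struct demo_lock *lock)
	{
		while (READ_ONCE(lock->locked)) {
			/*
			 * Spinning is wasted work while the holder's
			 * vcpu is scheduled out by the hypervisor.
			 */
			if (vcpu_is_preempted(READ_ONCE(lock->owner_cpu)))
				break;	/* block or yield instead */
			cpu_relax();
		}
	}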

We should also support both the shared and dedicated processor modes,
so add a lppaca_dedicated_proc() helper in lppaca.h.
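
For reference, the parity check in the hunk below relies on the lppaca
yield_count convention (my reading of PAPR, stated as an assumption
here): the hypervisor bumps yield_count at each preempt/dispatch
boundary, so an odd value means the vcpu is currently not dispatched.
A standalone sketch of that test, modeling be32_to_cpu() with a byte
swap as on a little-endian host:

	#include <stdbool.h>
	#include <stdint.h>

	static bool parity_says_preempted(uint32_t yield_count_be)
	{
		uint32_t yc = __builtin_bswap32(yield_count_be); /* be32_to_cpu() */
		return yc & 1;	/* odd => vcpu currently preempted */
	}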

Suggested-by: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
---
 arch/powerpc/include/asm/lppaca.h   |  6 ++++++
 arch/powerpc/include/asm/spinlock.h | 15 +++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
index d0a2a2f..0a263d3 100644
--- a/arch/powerpc/include/asm/lppaca.h
+++ b/arch/powerpc/include/asm/lppaca.h
@@ -111,12 +111,18 @@ extern struct lppaca lppaca[];
  * we will have to transition to something better.
  */
 #define LPPACA_OLD_SHARED_PROC		2
+#define LPPACA_OLD_DEDICATED_PROC	(1 << 6)
 
 static inline bool lppaca_shared_proc(struct lppaca *l)
 {
 	return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
 }
 
+static inline bool lppaca_dedicated_proc(struct lppaca *l)
+{
+	return !!(l->__old_status & LPPACA_OLD_DEDICATED_PROC);
+}
+
 /*
  * SLB shadow buffer structure as defined in the PAPR.  The save_area
  * contains adjacent ESID and VSID pairs for each shadowed SLB.  The
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 523673d..ae938ee 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -52,6 +52,21 @@
 #define SYNC_IO
 #endif
 
+/*
+ * Fix lock holder preemption issues in a guest: the kernel checks
+ * whether a vcpu is preempted during a spin loop; support that here.
+ */
+#define arch_vcpu_is_preempted arch_vcpu_is_preempted
+static inline bool arch_vcpu_is_preempted(int cpu)
+{
+	struct lppaca *lp = &lppaca_of(cpu);
+
+	if (unlikely(!(lppaca_shared_proc(lp) ||
+			lppaca_dedicated_proc(lp))))
+		return false;
+	return !!(be32_to_cpu(lp->yield_count) & 1);
+}
+
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
 	return lock.slock == 0;
-- 
2.4.11
