Message-Id: <20170901161007.2661-1-mathieu.desnoyers@efficios.com>
Date:   Fri,  1 Sep 2017 12:10:07 -0400
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrew Hunter <ahh@...gle.com>,
        Maged Michael <maged.michael@...il.com>, gromer@...gle.com,
        Avi Kivity <avi@...lladb.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Dave Watson <davejwatson@...com>,
        Andy Lutomirski <luto@...nel.org>,
        Will Deacon <will.deacon@....com>,
        Hans Boehm <hboehm@...gle.com>
Subject: [RFC PATCH v3] membarrier: provide core serialization

Add a new MEMBARRIER_FLAG_SYNC_CORE flag to the membarrier
system call. It allows membarrier to issue core serializing barriers,
in addition to memory barriers, on target threads whenever a
membarrier command is performed.

This is relevant for reclaim of JIT code, which requires issuing core
serializing barriers on all threads running on behalf of a process
after ensuring the old code is no longer visible, and before re-using
the memory for new code.
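
For illustration, here is a minimal user-space sketch (not part of this
patch) of such a reclaim sequence. It assumes only the
MEMBARRIER_CMD_PRIVATE_EXPEDITED command and the
MEMBARRIER_FLAG_SYNC_CORE flag introduced here; the JIT's own
bookkeeping is left as comments.

#include <linux/membarrier.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

static void jit_reclaim_code(void)
{
	/* 1) Make the old JIT code unreachable for any new caller
	 *    (publish new entry points, drop the old ones). */

	/*
	 * 2) Core serialize every thread running on behalf of this
	 *    process, so that none of them can still be executing (or
	 *    have speculatively fetched) the old code once this call
	 *    returns.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED,
		       MEMBARRIER_FLAG_SYNC_CORE) < 0)
		abort();	/* error handling elided */

	/* 3) The old code's memory can now be re-used for new code. */
}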

The new MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED command, used with
the MEMBARRIER_FLAG_SYNC_CORE flag, registers the current process as
requiring core serialization. It may block. It can be used to ensure
that MEMBARRIER_CMD_PRIVATE_EXPEDITED never blocks, even the first
time it is invoked by a process with the MEMBARRIER_FLAG_SYNC_CORE
flag.
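
For example (again a sketch, not part of this patch), a process would
typically register once at startup, which guarantees that the
reclaim-time call in the sketch above does not block:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/*
	 * One-time registration.  This call may block, but after it
	 * returns, MEMBARRIER_CMD_PRIVATE_EXPEDITED with
	 * MEMBARRIER_FLAG_SYNC_CORE never does.
	 */
	if (syscall(__NR_membarrier,
			MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED,
			MEMBARRIER_FLAG_SYNC_CORE) < 0)
		return 1;	/* error handling elided */

	/* ... start JIT threads and run the program ... */
	return 0;
}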

* Scheduler Overhead Benchmarks

Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Linux v4.13-rc6

Inter-thread scheduling
taskset 01 ./perf bench sched pipe -T

                       Avg. usecs/op         Std.Dev. usecs/op
Before this change:         2.55                   0.10
With this change:           2.49                   0.08
SYNC_CORE processes:        2.70                   0.10

Inter-process scheduling
taskset 01 ./perf bench sched pipe

                       Avg. usecs/op         Std.Dev. usecs/op
Before this change:         2.93                   0.13
With this change:           2.93                   0.13
SYNC_CORE processes:        3.20                   0.06

Changes since v2:
- Rename MEMBARRIER_CMD_REGISTER_SYNC_CORE to
  MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED.
- Introduce the "MEMBARRIER_FLAG_SYNC_CORE" flag.
- Introduce CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE, initially implemented
  only on x86 32/64.
- Introduce arch_membarrier_user_icache_flush, a no-op on x86 32/64,
  which can be implemented on architectures with incoherent data and
  instruction caches. It is associated with
  CONFIG_ARCH_HAS_MEMBARRIER_USER_ICACHE_FLUSH.
- Introduce the membarrier_sync_core_active counter, used by the shared
  (system-wide) membarrier when the MEMBARRIER_FLAG_SYNC_CORE flag is
  set. While it is non-zero, the scheduler issues sync_core() on
  sched_out.
- The membarrier_sync_core per-thread flag still issues a sync_core()
  on sched_out, and now also issues both a sync_core() and an icache
  flush on sched_in, but only when the mm changes between prev and next
  (prev->mm != next->mm).

Changes since v1:
- Add missing MEMBARRIER_CMD_REGISTER_SYNC_CORE header documentation.
- Add benchmarks to commit message.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@...il.com>
CC: Andrew Hunter <ahh@...gle.com>
CC: Maged Michael <maged.michael@...il.com>
CC: gromer@...gle.com
CC: Avi Kivity <avi@...lladb.com>
CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: Paul Mackerras <paulus@...ba.org>
CC: Michael Ellerman <mpe@...erman.id.au>
CC: Dave Watson <davejwatson@...com>
CC: Andy Lutomirski <luto@...nel.org>
CC: Will Deacon <will.deacon@....com>
CC: Hans Boehm <hboehm@...gle.com>
---
 arch/x86/Kconfig                |   1 +
 fs/exec.c                       |   1 +
 include/linux/sched.h           |  82 +++++++++++++++++++++++++
 include/uapi/linux/membarrier.h |  32 ++++++++--
 init/Kconfig                    |   6 ++
 kernel/fork.c                   |   2 +
 kernel/sched/core.c             |   3 +
 kernel/sched/membarrier.c       | 133 +++++++++++++++++++++++++++++++++++-----
 8 files changed, 240 insertions(+), 20 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 323cb065be5e..d39ae515632e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -62,6 +62,7 @@ config X86
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_ZONE_DEVICE		if X86_64
+	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/fs/exec.c b/fs/exec.c
index 62175cbcc801..a4ab3253bac7 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1794,6 +1794,7 @@ static int do_execveat_common(int fd, struct filename *filename,
 	/* execve succeeded */
 	current->fs->in_exec = 0;
 	current->in_execve = 0;
+	membarrier_execve(current);
 	acct_update_integrals(current);
 	task_numa_free(current);
 	free_bprm(bprm);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8337e2db0bb2..113d9c03a21c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1086,6 +1086,9 @@ struct task_struct {
 	/* Used by LSM modules for access restriction: */
 	void				*security;
 #endif
+#ifdef CONFIG_MEMBARRIER
+	int membarrier_sync_core;
+#endif
 
 	/*
 	 * New fields for task_struct should be added above here, so that
@@ -1623,4 +1626,83 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 #define TASK_SIZE_OF(tsk)	TASK_SIZE
 #endif
 
+#ifdef CONFIG_ARCH_HAS_MEMBARRIER_USER_ICACHE_FLUSH
+/*
+ * Architectures with incoherent data and instruction caches are
+ * required to implement arch_membarrier_user_icache_flush() if they
+ * want to support the MEMBARRIER_FLAG_SYNC_CORE flag.
+ */
+extern void arch_membarrier_user_icache_flush(void);
+#else
+static inline void arch_membarrier_user_icache_flush(void)
+{
+}
+#endif
+
+#if defined(CONFIG_MEMBARRIER) && defined(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE)
+extern atomic_long_t membarrier_sync_core_active;
+
+static inline void membarrier_fork(struct task_struct *t,
+		unsigned long clone_flags)
+{
+	/*
+	 * Coherence of membarrier_sync_core against thread fork is
+	 * protected by siglock. membarrier_fork is called with siglock
+	 * held.
+	 */
+	t->membarrier_sync_core = current->membarrier_sync_core;
+}
+static inline void membarrier_execve(struct task_struct *t)
+{
+	t->membarrier_sync_core = 0;
+}
+static inline void membarrier_sched_out(struct task_struct *t)
+{
+	/*
+	 * Core serialization is performed before the memory barrier
+	 * preceding the store to rq->curr. A non-zero sync_core_active
+	 * implies that a core serializing shared membarrier is in
+	 * progress.
+	 */
+	if (unlikely(READ_ONCE(t->membarrier_sync_core)
+			|| atomic_long_read(&membarrier_sync_core_active)))
+		sync_core();
+	/*
+	 * Flushing icache on each scheduler entry when a shared
+	 * membarrier requiring core serialization is in progress.
+	 */
+	if (unlikely(atomic_long_read(&membarrier_sync_core_active)))
+		arch_membarrier_user_icache_flush();
+}
+static inline void membarrier_sched_in(struct task_struct *prev,
+		struct task_struct *next)
+{
+	/*
+	 * Core serialization is performed after the memory barrier
+	 * following the store to rq->curr.
+	 */
+	if (unlikely(READ_ONCE(next->membarrier_sync_core))) {
+		if (unlikely(prev->mm != next->mm)) {
+			sync_core();
+			arch_membarrier_user_icache_flush();
+		}
+	}
+}
+#else
+static inline void membarrier_fork(struct task_struct *t,
+		unsigned long clone_flags)
+{
+}
+static inline void membarrier_execve(struct task_struct *t)
+{
+}
+static inline void membarrier_sched_out(struct task_struct *t)
+{
+}
+static inline void membarrier_sched_in(struct task_struct *prev,
+		struct task_struct *next)
+{
+}
+#endif
+
 #endif
diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index 6d47b3249d8a..4c8682026500 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -54,19 +54,41 @@
  *                          same processes as the caller thread. This
  *                          command returns 0. The "expedited" commands
  *                          complete faster than the non-expedited ones,
- *                          they never block, but have the downside of
- *                          causing extra overhead.
+ *                          they usually never block, but have the
+ *                          downside of causing extra overhead. The only
+ *                          case where it can block is the first time it
+ *                          is called by a process with the
+ *                          MEMBARRIER_FLAG_SYNC_CORE flag, if there has
+ *                          not been any prior registration of that
+ *                          process with
+ *                          MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED
+ *                          and the same flag.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
+ *                          When used with MEMBARRIER_FLAG_SYNC_CORE,
+ *                          register the current process as requiring
+ *                          core serialization when a private expedited
+ *                          membarrier is issued. It may block. It can
+ *                          be used to ensure
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED never
+ *                          blocks, even the first time it is invoked by
+ *                          a process with the MEMBARRIER_FLAG_SYNC_CORE
+ *                          flag.
  *
  * Command to be passed to the membarrier system call. The commands need to
  * be a single bit each, except for MEMBARRIER_CMD_QUERY which is assigned to
  * the value 0.
  */
 enum membarrier_cmd {
-	MEMBARRIER_CMD_QUERY			= 0,
-	MEMBARRIER_CMD_SHARED			= (1 << 0),
+	MEMBARRIER_CMD_QUERY				= 0,
+	MEMBARRIER_CMD_SHARED				= (1 << 0),
 	/* reserved for MEMBARRIER_CMD_SHARED_EXPEDITED (1 << 1) */
 	/* reserved for MEMBARRIER_CMD_PRIVATE (1 << 2) */
-	MEMBARRIER_CMD_PRIVATE_EXPEDITED	= (1 << 3),
+	MEMBARRIER_CMD_PRIVATE_EXPEDITED		= (1 << 3),
+	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED	= (1 << 4),
+};
+
+enum membarrier_flags {
+	MEMBARRIER_FLAG_SYNC_CORE			= (1 << 0),
 };
 
 #endif /* _UAPI_LINUX_MEMBARRIER_H */
diff --git a/init/Kconfig b/init/Kconfig
index 8514b25db21c..e74baef9f347 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -615,6 +615,12 @@ config ARCH_SUPPORTS_INT128
 config ARCH_WANT_NUMA_VARIABLE_LOCALITY
 	bool
 
+# For architectures implementing membarrier core synchronization,
+# required by the membarrier sync_core registration.
+#
+config ARCH_HAS_MEMBARRIER_SYNC_CORE
+	bool
+
 config NUMA_BALANCING
 	bool "Memory placement aware NUMA scheduler"
 	depends on ARCH_SUPPORTS_NUMA_BALANCING
diff --git a/kernel/fork.c b/kernel/fork.c
index e075b7780421..1d44d7250431 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1840,6 +1840,8 @@ static __latent_entropy struct task_struct *copy_process(
 	 */
 	copy_seccomp(p);
 
+	membarrier_fork(p, clone_flags);
+
 	/*
 	 * Process group and session signals need to be delivered to just the
 	 * parent before the fork or both the parent and the child after the
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d57553551ad6..98aac5a44604 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3292,6 +3292,8 @@ static void __sched notrace __schedule(bool preempt)
 	local_irq_disable();
 	rcu_note_context_switch(preempt);
 
+	membarrier_sched_out(prev);
+
 	/*
 	 * Make sure that signal_pending_state()->signal_pending() below
 	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
@@ -3364,6 +3366,7 @@ static void __sched notrace __schedule(bool preempt)
 
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
+		membarrier_sched_in(prev, next);
 	} else {
 		rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
 		rq_unlock_irq(rq, &rf);
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 7eec6914d2d2..8c8a25e17a50 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -18,6 +18,7 @@
 #include <linux/membarrier.h>
 #include <linux/tick.h>
 #include <linux/cpumask.h>
+#include <linux/atomic.h>
 
 #include "sched.h"	/* for cpu_rq(). */
 
@@ -25,22 +26,118 @@
  * Bitmask made from a "or" of all commands within enum membarrier_cmd,
  * except MEMBARRIER_CMD_QUERY.
  */
-#define MEMBARRIER_CMD_BITMASK	\
-	(MEMBARRIER_CMD_SHARED | MEMBARRIER_CMD_PRIVATE_EXPEDITED)
+#define MEMBARRIER_CMD_BITMASK			\
+	(MEMBARRIER_CMD_SHARED			\
+	| MEMBARRIER_CMD_PRIVATE_EXPEDITED	\
+	| MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED)
+
+#ifdef CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE
+atomic_long_t membarrier_sync_core_active;
+
+static void membarrier_shared_sync_core_begin(int flags)
+{
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE)
+		atomic_long_inc(&membarrier_sync_core_active);
+}
+
+static void membarrier_shared_sync_core_end(int flags)
+{
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE)
+		atomic_long_dec(&membarrier_sync_core_active);
+}
+
+static int membarrier_register_private_expedited_sync_core(void)
+{
+	struct task_struct *p = current, *t;
+
+	if (READ_ONCE(p->membarrier_sync_core))
+		return 0;
+	if (get_nr_threads(p) == 1) {
+		p->membarrier_sync_core = 1;
+		return 0;
+	}
+
+	/*
+	 * Coherence of membarrier_sync_core against thread fork is
+	 * protected by siglock.
+	 */
+	spin_lock(&p->sighand->siglock);
+	for_each_thread(p, t)
+		WRITE_ONCE(t->membarrier_sync_core, 1);
+	spin_unlock(&p->sighand->siglock);
+	/*
+	 * Ensure all future scheduler execution will observe the new
+	 * membarrier_sync_core state for this process.
+	 */
+	synchronize_sched();
+	return 0;
+}
+static void membarrier_sync_core(void)
+{
+	sync_core();
+}
+#else
+static void membarrier_shared_sync_core_begin(int flags)
+{
+}
+static void membarrier_shared_sync_core_end(int flags)
+{
+}
+static int membarrier_register_private_expedited_sync_core(void)
+{
+	return -EINVAL;
+}
+static void membarrier_sync_core(void)
+{
+}
+#endif
+
+static int membarrier_shared(int flags)
+{
+	if (unlikely(flags & ~MEMBARRIER_FLAG_SYNC_CORE))
+		return -EINVAL;
+	/* MEMBARRIER_CMD_SHARED is not compatible with nohz_full. */
+	if (tick_nohz_full_enabled())
+		return -EINVAL;
+	if (num_online_cpus() == 1)
+		return 0;
+
+	membarrier_shared_sync_core_begin(flags);
+	synchronize_sched();
+	membarrier_shared_sync_core_end(flags);
+
+	return 0;
+}
 
 static void ipi_mb(void *info)
 {
-	smp_mb();	/* IPIs should be serializing but paranoid. */
+	/* IPIs should be serializing but paranoid. */
+	smp_mb();
+	membarrier_sync_core();
+	arch_membarrier_user_icache_flush();
 }
 
-static void membarrier_private_expedited(void)
+static int membarrier_private_expedited(int flags)
 {
 	int cpu;
 	bool fallback = false;
 	cpumask_var_t tmpmask;
 
+	if (unlikely(flags & ~MEMBARRIER_FLAG_SYNC_CORE))
+		return -EINVAL;
+	/*
+	 * Do the process registration ourself if it has not been
+	 * performed by an explicit register command.
+	 */
+	if (unlikely(flags & MEMBARRIER_FLAG_SYNC_CORE)) {
+		int ret;
+
+		ret = membarrier_register_private_expedited_sync_core();
+		if (ret)
+			return ret;
+	}
 	if (num_online_cpus() == 1 || get_nr_threads(current) == 1)
-		return;
+		return 0;
 
 	/*
 	 * Matches memory barriers around rq->curr modification in
@@ -94,6 +191,16 @@ static void membarrier_private_expedited(void)
 	 * rq->curr modification in scheduler.
 	 */
 	smp_mb();	/* exit from system call is not a mb */
+	return 0;
+}
+
+static int membarrier_register_private_expedited(int flags)
+{
+	if (unlikely(flags & ~MEMBARRIER_FLAG_SYNC_CORE))
+		return -EINVAL;
+	if (flags & MEMBARRIER_FLAG_SYNC_CORE)
+		return membarrier_register_private_expedited_sync_core();
+	return 0;
 }
 
 /**
@@ -125,27 +232,23 @@ static void membarrier_private_expedited(void)
  */
 SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
 {
-	if (unlikely(flags))
-		return -EINVAL;
 	switch (cmd) {
 	case MEMBARRIER_CMD_QUERY:
 	{
 		int cmd_mask = MEMBARRIER_CMD_BITMASK;
 
+		if (unlikely(flags))
+			return -EINVAL;
 		if (tick_nohz_full_enabled())
 			cmd_mask &= ~MEMBARRIER_CMD_SHARED;
 		return cmd_mask;
 	}
 	case MEMBARRIER_CMD_SHARED:
-		/* MEMBARRIER_CMD_SHARED is not compatible with nohz_full. */
-		if (tick_nohz_full_enabled())
-			return -EINVAL;
-		if (num_online_cpus() > 1)
-			synchronize_sched();
-		return 0;
+		return membarrier_shared(flags);
 	case MEMBARRIER_CMD_PRIVATE_EXPEDITED:
-		membarrier_private_expedited();
-		return 0;
+		return membarrier_private_expedited(flags);
+	case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
+		return membarrier_register_private_expedited(flags);
 	default:
 		return -EINVAL;
 	}
-- 
2.11.0
