Message-ID: <tip-10bcc80e9dbced128e3b4aa86e4737e5486a45d0@git.kernel.org>
Date: Mon, 5 Feb 2018 13:29:34 -0800
From: tip-bot for Mathieu Desnoyers <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: paulmck@...ux.vnet.ibm.com, ahh@...gle.com, paulus@...ba.org,
sehr@...gle.com, davejwatson@...com, ghackmann@...gle.com,
mathieu.desnoyers@...icios.com, hpa@...or.com,
boqun.feng@...il.com, linux-kernel@...r.kernel.org,
mpe@...erman.id.au, avi@...lladb.com, tglx@...utronix.de,
parri.andrea@...il.com, luto@...nel.org, benh@...nel.crashing.org,
linux@...linux.org.uk, will.deacon@....com,
torvalds@...ux-foundation.org, mingo@...nel.org,
maged.michael@...il.com, peterz@...radead.org
Subject: [tip:sched/urgent] membarrier/x86: Provide core serializing command
Commit-ID: 10bcc80e9dbced128e3b4aa86e4737e5486a45d0
Gitweb: https://git.kernel.org/tip/10bcc80e9dbced128e3b4aa86e4737e5486a45d0
Author: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
AuthorDate: Mon, 29 Jan 2018 15:20:18 -0500
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 5 Feb 2018 21:35:11 +0100
membarrier/x86: Provide core serializing command
There are two places where core serialization is needed by membarrier (the
user-space side is sketched below):

1) When returning from the membarrier IPI,
2) After the scheduler updates curr to a thread with a different mm, before
   going back to user-space, since curr->mm is used by membarrier to
   check whether it needs to send an IPI to that CPU.
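
For example, the user-space side of this guarantee looks roughly like the
sketch below. It is illustrative only and not part of this patch: it assumes
the MEMBARRIER_CMD_*_SYNC_CORE commands and the registration requirement added
by the membarrier UAPI patch in this series, and the membarrier() wrapper is a
local helper, not a libc function.

	#include <linux/membarrier.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdio.h>

	static int membarrier(int cmd, int flags)
	{
		return syscall(__NR_membarrier, cmd, flags);
	}

	int main(void)
	{
		/* A process must register before using the private expedited command. */
		if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0) != 0)
			perror("membarrier register");

		/* ... e.g. a JIT rewrites code that other threads may run ... */

		/*
		 * Every running thread of this process is core serialized
		 * before it next executes user-space instructions, via the
		 * IPI + IRET path or the scheduler path described above.
		 */
		if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0) != 0)
			perror("membarrier sync-core");

		return 0;
	}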

x86-32 uses IRET to return from interrupts, and both IRET and SYSEXIT to go
back to user-space. The IRET instruction is core serializing, but SYSEXIT
is not.

x86-64 uses IRET to return from interrupts, which takes care of the IPI.
However, it can return to user-space through either SYSRETL (compat
code), SYSRETQ, or IRET. Given that SYSRET{L,Q} is not core serializing,
we rely instead on the write_cr3() performed by switch_mm() to provide
core serialization after changing the current mm, and we handle the
special case of kthread -> uthread (which temporarily keeps the current
mm in active_mm) by issuing a sync_core() in that specific case.

Use the new sync_core_before_usermode() helper to guarantee this.
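
To make "core serializing" concrete, the sketch below (illustrative only, not
part of this patch) executes CPUID, which is architecturally defined as a
serializing instruction on x86; SYSRET{L,Q} and SYSEXIT give no such
guarantee, which is why the IRET, write_cr3() and sync_core_before_usermode()
paths above are needed. The serialize_core() name is made up for the example.

	#include <stdio.h>

	/* Execute a core serializing instruction (CPUID) from C on x86. */
	static void serialize_core(void)
	{
		unsigned int eax = 0, ebx, ecx, edx;

		__asm__ __volatile__("cpuid"
				     : "+a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
				     : : "memory");
	}

	int main(void)
	{
		serialize_core();
		printf("core serializing instruction executed\n");
		return 0;
	}
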
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Acked-by: Thomas Gleixner <tglx@...utronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrea Parri <parri.andrea@...il.com>
Cc: Andrew Hunter <ahh@...gle.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Avi Kivity <avi@...lladb.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Dave Watson <davejwatson@...com>
Cc: David Sehr <sehr@...gle.com>
Cc: Greg Hackmann <ghackmann@...gle.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Maged Michael <maged.michael@...il.com>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Russell King <linux@...linux.org.uk>
Cc: Will Deacon <will.deacon@....com>
Cc: linux-api@...r.kernel.org
Cc: linux-arch@...r.kernel.org
Link: http://lkml.kernel.org/r/20180129202020.8515-10-mathieu.desnoyers@efficios.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/Kconfig          | 1 +
 arch/x86/entry/entry_32.S | 5 +++++
 arch/x86/entry/entry_64.S | 4 ++++
 arch/x86/mm/tlb.c         | 7 ++++---
 4 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 31030ad..e095bdb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -54,6 +54,7 @@ config X86
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV			if X86_64
+	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_REFCOUNT
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 2a35b1e..abee6d2 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -566,6 +566,11 @@ restore_all:
 .Lrestore_nocheck:
 	RESTORE_REGS 4				# skip orig_eax/error_code
 .Lirq_return:
+	/*
+	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
+	 * when returning from IPI handler and when returning from
+	 * scheduler to user-space.
+	 */
 	INTERRUPT_RETURN

 .section .fixup, "ax"
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a835704..5816858 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -804,6 +804,10 @@ GLOBAL(restore_regs_and_return_to_kernel)
 	POP_EXTRA_REGS
 	POP_C_REGS
 	addq	$8, %rsp	/* skip regs->orig_ax */
+	/*
+	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
+	 * when returning from IPI handler.
+	 */
 	INTERRUPT_RETURN

 ENTRY(native_iret)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9fa7d2e..9b34121 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -229,9 +229,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.is_lazy, false);

 	/*
-	 * The membarrier system call requires a full memory barrier
-	 * before returning to user-space, after storing to rq->curr.
-	 * Writing to CR3 provides that full memory barrier.
+	 * The membarrier system call requires a full memory barrier and
+	 * core serialization before returning to user-space, after
+	 * storing to rq->curr. Writing to CR3 provides that full
+	 * memory barrier and core serializing instruction.
 	 */
 	if (real_prev == next) {
 		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=