Message-Id: <20200716193820.1141936-1-palmer@dabbelt.com>
Date: Thu, 16 Jul 2020 12:38:20 -0700
From: Palmer Dabbelt <palmer@...belt.com>
To: Will Deacon <willdeacon@...gle.com>
Cc: mpe@...erman.id.au, benh@...nel.crashing.org, paulus@...ba.org,
npiggin@...il.com, msuchanek@...e.de, tglx@...utronix.de,
bigeasy@...utronix.de, jniethe5@...il.com,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com, Palmer Dabbelt <palmerdabbelt@...gle.com>
Subject: [PATCH] powerpc/64: Fix an out of date comment about MMIO ordering

From: Palmer Dabbelt <palmerdabbelt@...gle.com>

This primitive has been renamed, but because the comment spelled the old
name incorrectly in the first place it must have escaped the rename's
fixup patch. As far as I can tell the logic is still correct:
smp_mb__after_spinlock() uses the default smp_mb() implementation, which
emits "sync" rather than "hwsync", but those are the same instruction on
PowerPC (though I'm not that familiar with the architecture).
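
For anyone tracing this through the headers, here is the chain of
definitions as I understand it. This is a sketch from memory rather
than a quote of any particular tree, so the file locations and the
surrounding #ifdefs are assumptions worth double-checking:

  /*
   * include/linux/spinlock.h (sketch): the generic fallback is a
   * no-op, for architectures where acquiring a lock already implies
   * a full barrier; everyone else has to override it.
   */
  #ifndef smp_mb__after_spinlock
  #define smp_mb__after_spinlock()	do { } while (0)
  #endif

  /* arch/powerpc/include/asm/spinlock.h (sketch): powerpc overrides
   * it with a plain full barrier. */
  #define smp_mb__after_spinlock()	smp_mb()

  /*
   * arch/powerpc/include/asm/barrier.h (sketch): on SMP builds
   * smp_mb() resolves to mb(), which emits "sync". "hwsync" is an
   * extended mnemonic for that same instruction, which is why the
   * comment's hwsync claim is still satisfied.
   */
  #define mb()	__asm__ __volatile__ ("sync" : : : "memory")
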
Signed-off-by: Palmer Dabbelt <palmerdabbelt@...gle.com>
---
arch/powerpc/kernel/entry_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index b3c9f15089b6..7b38b4daca93 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -357,7 +357,7 @@ _GLOBAL(_switch)
* kernel/sched/core.c).
*
* Uncacheable stores in the case of involuntary preemption must
- * be taken care of. The smp_mb__before_spin_lock() in __schedule()
+ * be taken care of. The smp_mb__after_spinlock() in __schedule()
* is implemented as hwsync on powerpc, which orders MMIO too. So
* long as there is an hwsync in the context switch path, it will
* be executed on the source CPU after the task has performed
--
2.28.0.rc0.105.gf9edc3c819-goog