Message-Id: <20220313012221.1755483-1-guoren@kernel.org>
Date: Sun, 13 Mar 2022 09:22:21 +0800
From: guoren@...nel.org
To: guoren@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-csky@...r.kernel.org, linux-riscv@...ts.infradead.org,
linux-xtensa@...ux-xtensa.org, Guo Ren <guoren@...ux.alibaba.com>,
Max Filippov <jcmvbkbc@...il.com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Palmer Dabbelt <palmer@...belt.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Chris Zankel <chris@...kel.net>, Arnd Bergmann <arnd@...db.de>
Subject: [PATCH V2] arch: patch_text: Fixup last cpu should be master
From: Guo Ren <guoren@...ux.alibaba.com>
These patch_text implementations use the stop_machine_cpuslocked
infrastructure together with an atomic cpu_count. The original idea:
while the master CPU patches the text, the other CPUs should wait for
it to finish. But the current implementations pick the first CPU to
enter the callback as the master, which cannot guarantee that the
remaining CPUs are already waiting. This patch makes the last CPU the
master instead, closing that window.
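For illustration only (not part of the patch): below is a minimal sketch
of the callback pattern shared by these implementations, using
hypothetical names (struct text_patch_sketch, text_patch_sketch_cb). It
shows why the CPU that raises the counter to num_online_cpus() is a safe
master: by the time it runs, every other online CPU has already entered
the callback and is about to spin in the wait loop, so none of them can
be executing the instructions being rewritten.

/* Illustrative sketch only; all names here are hypothetical. */
#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/cpumask.h>
#include <linux/stop_machine.h>

struct text_patch_sketch {
	u32 *addr;		/* instruction word to rewrite */
	u32 insn;		/* new instruction */
	atomic_t cpu_count;	/* CPUs that have entered the callback */
};

static int text_patch_sketch_cb(void *data)
{
	struct text_patch_sketch *p = data;

	if (atomic_inc_return(&p->cpu_count) == num_online_cpus()) {
		/*
		 * The last CPU to arrive becomes the master: all other
		 * online CPUs are already in the else branch below, so
		 * none of them can be running the code being replaced.
		 * Real implementations also do cache maintenance here.
		 */
		WRITE_ONCE(*p->addr, p->insn);
		/* Push the counter past num_online_cpus() to release waiters. */
		atomic_inc(&p->cpu_count);
	} else {
		/* Wait until the master has finished patching. */
		while (atomic_read(&p->cpu_count) <= num_online_cpus())
			cpu_relax();
		smp_mb();
	}

	return 0;
}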
Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
Signed-off-by: Guo Ren <guoren@...nel.org>
Reviewed-by: Max Filippov <jcmvbkbc@...il.com>
Cc: Will Deacon <will@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Palmer Dabbelt <palmer@...belt.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Chris Zankel <chris@...kel.net>
Cc: Arnd Bergmann <arnd@...db.de>
---
Changes in V2:
- Determine the last cpu via num_online_cpus(), as suggested by Max Filippov
- Fix typos found by Max Filippov
---
arch/arm64/kernel/patching.c | 4 ++--
arch/csky/kernel/probes/kprobes.c | 2 +-
arch/riscv/kernel/patch.c | 2 +-
arch/xtensa/kernel/jump_label.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
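For reference (again not part of the patch, and reusing the hypothetical
text_patch_sketch_cb() sketched above): the arch implementations drive
their callbacks through stop_machine_cpuslocked() with cpu_online_mask,
so every online CPU enters the callback and the counter can reach
num_online_cpus(); stop_machine_cpuslocked() must be called with the CPU
hotplug lock read-held, which keeps num_online_cpus() stable.

/* Illustrative caller sketch; not part of this patch. */
static int text_patch_sketch_apply(u32 *addr, u32 insn)
{
	struct text_patch_sketch p = {
		.addr		= addr,
		.insn		= insn,
		.cpu_count	= ATOMIC_INIT(0),
	};

	/*
	 * Run the callback on every online CPU; with fewer participants
	 * the counter could never reach num_online_cpus().
	 */
	return stop_machine_cpuslocked(text_patch_sketch_cb, &p, cpu_online_mask);
}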
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..33e0fabc0b79 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
- /* The first CPU becomes master */
- if (atomic_inc_return(&pp->cpu_count) == 1) {
+ /* The last CPU becomes master */
+ if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 42920f25e73c..34ba684d5962 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
struct csky_insn_patch *param = priv;
unsigned int addr = (unsigned int)param->addr;
- if (atomic_inc_return(&param->cpu_count) == 1) {
+ if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
*(u16 *) addr = cpu_to_le16(param->opcode);
dcache_wb_range(addr, addr + 2);
atomic_inc(&param->cpu_count);
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..765004b60513 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
struct patch_insn *patch = data;
int ret = 0;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
ret =
patch_text_nosync(patch->addr, &patch->insn,
GET_INSN_LENGTH(patch->insn));
diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
index 61cf6497a646..b67efcd7e32c 100644
--- a/arch/xtensa/kernel/jump_label.c
+++ b/arch/xtensa/kernel/jump_label.c
@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
{
struct patch *patch = data;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
local_patch_text(patch->addr, patch->data, patch->sz);
atomic_inc(&patch->cpu_count);
} else {
--
2.25.1