Message-Id: <20220312155603.1752193-1-guoren@kernel.org>
Date: Sat, 12 Mar 2022 23:56:03 +0800
From: guoren@...nel.org
To: guoren@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-csky@...r.kernel.org, linux-riscv@...ts.infradead.org,
linux-xtensa@...ux-xtensa.org, Guo Ren <guoren@...ux.alibaba.com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Palmer Dabbelt <palmer@...belt.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>,
Arnd Bergmann <arnd@...db.de>
Subject: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
From: Guo Ren <guoren@...ux.alibaba.com>
These patch_text implementations use the stop_machine_cpuslocked
infrastructure with an atomic cpu_count. The original idea is that while
the master CPU is patching the text, the other CPUs should wait for it
to finish. But the current implementations use the first CPU as the
master, which cannot guarantee that the remaining CPUs are already
waiting. This patch makes the last CPU the master instead, to close that
potential race.
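For reference, the rendezvous pattern these callbacks implement looks
roughly like the minimal, illustrative sketch below (hypothetical names
such as demo_patch and demo_patch_cb, not part of this patch; see the
hunks below for the actual per-arch changes and the exact condition):

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/processor.h>
#include <linux/stop_machine.h>
#include <linux/types.h>

struct demo_patch {
	void *addr;
	u32 insn;
	atomic_t cpu_count;
};

/* Runs on every online CPU via stop_machine_cpuslocked(). */
static int demo_patch_cb(void *data)
{
	struct demo_patch *p = data;

	/* The last CPU to arrive becomes the master. */
	if (atomic_inc_return(&p->cpu_count) == num_online_cpus()) {
		/* ... write the new instruction at p->addr here ... */
		/* The extra increment releases the waiting CPUs below. */
		atomic_inc(&p->cpu_count);
	} else {
		/* The other CPUs spin until the master has finished. */
		while (atomic_read(&p->cpu_count) <= num_online_cpus())
			cpu_relax();
	}

	return 0;
}

A caller that already holds the CPU hotplug lock would then run it on
all online CPUs with stop_machine_cpuslocked(demo_patch_cb, &patch,
cpu_online_mask).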
Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
Signed-off-by: Guo Ren <guoren@...nel.org>
Cc: Will Deacon <will@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Palmer Dabbelt <palmer@...belt.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Chris Zankel <chris@...kel.net>
Cc: Max Filippov <jcmvbkbc@...il.com>
Cc: Arnd Bergmann <arnd@...db.de>
---
arch/arm64/kernel/patching.c | 4 ++--
arch/csky/kernel/probes/kprobes.c | 2 +-
arch/riscv/kernel/patch.c | 2 +-
arch/xtensa/kernel/jump_label.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..6cfea9650e65 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
- /* The first CPU becomes master */
- if (atomic_inc_return(&pp->cpu_count) == 1) {
+ /* The last CPU becomes master */
+ if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 42920f25e73c..19821a06a991 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
struct csky_insn_patch *param = priv;
unsigned int addr = (unsigned int)param->addr;
- if (atomic_inc_return(&param->cpu_count) == 1) {
+ if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
*(u16 *) addr = cpu_to_le16(param->opcode);
dcache_wb_range(addr, addr + 2);
atomic_inc(&param->cpu_count);
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..cca72a9388e3 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
struct patch_insn *patch = data;
int ret = 0;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
ret =
patch_text_nosync(patch->addr, &patch->insn,
GET_INSN_LENGTH(patch->insn));
diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
index 61cf6497a646..7e1d3f952eb3 100644
--- a/arch/xtensa/kernel/jump_label.c
+++ b/arch/xtensa/kernel/jump_label.c
@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
{
struct patch *patch = data;
- if (atomic_inc_return(&patch->cpu_count) == 1) {
+ if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
local_patch_text(patch->addr, patch->data, patch->sz);
atomic_inc(&patch->cpu_count);
} else {
--
2.25.1