Message-ID: <CAJF2gTT44yEp5VySOGd-1+MAy3=xLw3-8hCPQ74w_zHJ1fJ3ww@mail.gmail.com>
Date: Sun, 13 Mar 2022 09:04:26 +0800
From: Guo Ren <guoren@...nel.org>
To: Max Filippov <jcmvbkbc@...il.com>
Cc: Linux ARM <linux-arm-kernel@...ts.infradead.org>,
LKML <linux-kernel@...r.kernel.org>, linux-csky@...r.kernel.org,
linux-riscv <linux-riscv@...ts.infradead.org>,
"open list:TENSILICA XTENSA PORT (xtensa)"
<linux-xtensa@...ux-xtensa.org>,
Guo Ren <guoren@...ux.alibaba.com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Palmer Dabbelt <palmer@...belt.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Chris Zankel <chris@...kel.net>, Arnd Bergmann <arnd@...db.de>
Subject: Re: [RFC PATCH] arch: patch_text: Fixup last cpu should be master
On Sun, Mar 13, 2022 at 7:57 AM Max Filippov <jcmvbkbc@...il.com> wrote:
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@...nel.org> wrote:
> >
> > From: Guo Ren <guoren@...ux.alibaba.com>
> >
> > These patch_text implementations use the stop_machine_cpuslocked
> > infrastructure with an atomic cpu_count. The original idea is that
> > while the master CPU is patching text, the others should wait for
> > it. But the current implementation uses the first CPU as the master,
> > which cannot guarantee that the remaining CPUs are all waiting. This
> > patch makes the last CPU the master to eliminate the potential risk.
> >
> > Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@...nel.org>
> > Cc: Will Deacon <will@...nel.org>
> > Cc: Catalin Marinas <catalin.marinas@....com>
> > Cc: Palmer Dabbelt <palmer@...belt.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Masami Hiramatsu <mhiramat@...nel.org>
> > Cc: Chris Zankel <chris@...kel.net>
> > Cc: Max Filippov <jcmvbkbc@...il.com>
> > Cc: Arnd Bergmann <arnd@...db.de>
> > ---
> > arch/arm64/kernel/patching.c | 4 ++--
> > arch/csky/kernel/probes/kprobes.c | 2 +-
> > arch/riscv/kernel/patch.c | 2 +-
> > arch/xtensa/kernel/jump_label.c | 2 +-
> > 4 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> > index 771f543464e0..6cfea9650e65 100644
> > --- a/arch/arm64/kernel/patching.c
> > +++ b/arch/arm64/kernel/patching.c
> > @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> > int i, ret = 0;
> > struct aarch64_insn_patch *pp = arg;
> >
> > - /* The first CPU becomes master */
> > - if (atomic_inc_return(&pp->cpu_count) == 1) {
> > + /* The last CPU becomes master */
> > + if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
>
> atomic_inc_return returns the incremented value, so the last CPU gets
> num_online_cpus(), not (num_online_cpus() - 1).
Oops! You are right, thx.
>
> > for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> > ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> > pp->new_insns[i]);
> > diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> > index 42920f25e73c..19821a06a991 100644
> > --- a/arch/csky/kernel/probes/kprobes.c
> > +++ b/arch/csky/kernel/probes/kprobes.c
> > @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> > struct csky_insn_patch *param = priv;
> > unsigned int addr = (unsigned int)param->addr;
> >
> > - if (atomic_inc_return(¶m->cpu_count) == 1) {
> > + if (atomic_inc_return(¶m->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > *(u16 *) addr = cpu_to_le16(param->opcode);
> > dcache_wb_range(addr, addr + 2);
> > atomic_inc(¶m->cpu_count);
> > diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> > index 0b552873a577..cca72a9388e3 100644
> > --- a/arch/riscv/kernel/patch.c
> > +++ b/arch/riscv/kernel/patch.c
> > @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> > struct patch_insn *patch = data;
> > int ret = 0;
> >
> > - if (atomic_inc_return(&patch->cpu_count) == 1) {
> > + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > ret =
> > patch_text_nosync(patch->addr, &patch->insn,
> > GET_INSN_LENGTH(patch->insn));
> > diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> > index 61cf6497a646..7e1d3f952eb3 100644
> > --- a/arch/xtensa/kernel/jump_label.c
> > +++ b/arch/xtensa/kernel/jump_label.c
> > @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> > {
> > struct patch *patch = data;
> >
> > - if (atomic_inc_return(&patch->cpu_count) == 1) {
> > + if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> > local_patch_text(patch->addr, patch->data, patch->sz);
> > atomic_inc(&patch->cpu_count);
> > } else {
> > --
> > 2.25.1
> >
>
>
> --
> Thanks.
> -- Max
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/