Message-ID: <mhng-d2e23c07-fd6f-4ae8-a2c7-fc1825e50503@palmer-ri-x1c9>
Date: Fri, 22 Apr 2022 09:02:00 -0700 (PDT)
From: Palmer Dabbelt <palmer@...belt.com>
To: guoren@...nel.org
CC: guoren@...nel.org, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org, guoren@...ux.alibaba.com,
mhiramat@...nel.org, stable@...r.kernel.org
Subject: Re: [PATCH V3] riscv: patch_text: Fixup last cpu should be master
On Thu, 21 Apr 2022 15:57:32 PDT (-0700), Palmer Dabbelt wrote:
> On Wed, 06 Apr 2022 07:16:49 PDT (-0700), guoren@...nel.org wrote:
>> From: Guo Ren <guoren@...ux.alibaba.com>
>>
>> These patch_text implementations use the stop_machine_cpuslocked
>> infrastructure with an atomic cpu_count. The original idea: while the
>> master CPU runs patch_text, the other CPUs should wait for it. But the
>> current implementation uses the first CPU as the master, which cannot
>> guarantee that the remaining CPUs have reached the wait loop yet. This
>> patch makes the last CPU the master instead, closing that race.
>>
>> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
>> Signed-off-by: Guo Ren <guoren@...nel.org>
>> Acked-by: Palmer Dabbelt <palmer@...osinc.com>
>> Reviewed-by: Masami Hiramatsu <mhiramat@...nel.org>
>> Cc: <stable@...r.kernel.org>
>> ---
>> arch/riscv/kernel/patch.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
>> index 0b552873a577..765004b60513 100644
>> --- a/arch/riscv/kernel/patch.c
>> +++ b/arch/riscv/kernel/patch.c
>> @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
>>  	struct patch_insn *patch = data;
>>  	int ret = 0;
>>
>> -	if (atomic_inc_return(&patch->cpu_count) == 1) {
>> +	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
>>  		ret =
>>  		    patch_text_nosync(patch->addr, &patch->insn,
>>  				      GET_INSN_LENGTH(patch->insn));
>
> Thanks, this is on fixes.
Sorry, I forgot to add the Fixes and stable tags. I just fixed that up,
but I'm going to hold off on this one until next week's PR to make sure
it has time to go through linux-next.