Message-ID: <tencent_3501158CD78C510DC4628352A1E9AC98CB07@qq.com>
Date: Fri, 27 Oct 2023 19:49:43 +0800
From: Rong Tao <rtoax@...mail.com>
To: Mark Rutland <mark.rutland@....com>
Cc: elver@...gle.com, linux-kernel@...r.kernel.org,
peterz@...radead.org, rongtao@...tc.cn, tglx@...utronix.de
Subject: Re: [PATCH 1/2] stop_machine: Use non-atomic read
multi_stop_data::state clearly
On 10/24/23 6:46 PM, Mark Rutland wrote:
> On Fri, Oct 20, 2023 at 10:43:33PM +0800, Rong Tao wrote:
>> From: Rong Tao <rongtao@...tc.cn>
>>
>> commit b1fc58333575 ("stop_machine: Avoid potential race behaviour")
>> solved the race behaviour problem, to better show that race behaviour
>> does not exist, pass the 'curstate' directly to ack_state() instead of
>> refetching msdata->state in ack_state().
>>
> I'd prefer if we make this:
>
> | stop_machine: pass curstate to ack_state()
> |
> | The multi_cpu_stop() state machine uses multi_stop_data::state to hold
> | the current state, and this is read and written atomically except in
> | ack_state(), which performs a non-atomic read.
> |
> | As ack_state() only performs this non-atomic read when there is a single
> | writer, this is benign, but it makes reasoning about the state machine a
> | little harder.
> |
> | Remove the non-atomic read and pass the (atomically read) curstate in
> | instead. This makes it clear that we do not expect any racy writes, and
> | avoids a redundant load.
>
> With that wording:
>
> Acked-by: Mark Rutland <mark.rutland@....com>
>
> Mark.
Hi Mark, I have just submitted a single patch [0] individually, not as part of a patchset.
Please review. Thank you.
Rong Tao
[0]
https://lore.kernel.org/lkml/tencent_FB1D31CEC045E837ABE5B25CC5E37575F405@qq.com/
>
>> Signed-off-by: Rong Tao <rongtao@...tc.cn>
>> ---
>> kernel/stop_machine.c | 7 ++++---
>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
>> index cedb17ba158a..268c2e581698 100644
>> --- a/kernel/stop_machine.c
>> +++ b/kernel/stop_machine.c
>> @@ -188,10 +188,11 @@ static void set_state(struct multi_stop_data *msdata,
>> }
>>
>> /* Last one to ack a state moves to the next state. */
>> -static void ack_state(struct multi_stop_data *msdata)
>> +static void ack_state(struct multi_stop_data *msdata,
>> + enum multi_stop_state curstate)
>> {
>> if (atomic_dec_and_test(&msdata->thread_ack))
>> - set_state(msdata, msdata->state + 1);
>> + set_state(msdata, curstate + 1);
>> }
>>
>> notrace void __weak stop_machine_yield(const struct cpumask *cpumask)
>> @@ -242,7 +243,7 @@ static int multi_cpu_stop(void *data)
>> default:
>> break;
>> }
>> - ack_state(msdata);
>> + ack_state(msdata, curstate);
>> } else if (curstate > MULTI_STOP_PREPARE) {
>> /*
>> * At this stage all other CPUs we depend on must spin
>> --
>> 2.41.0
>>