Message-ID: <tencent_49AFDBA31F885906234219591097D42ABE08@qq.com>
Date: Fri, 20 Oct 2023 22:43:33 +0800
From: Rong Tao <rtoax@...mail.com>
To: mark.rutland@....com, elver@...gle.com,
linux-kernel@...r.kernel.org, peterz@...radead.org,
rongtao@...tc.cn, rtoax@...mail.com, tglx@...utronix.de
Subject: [PATCH 1/2] stop_machine: Avoid re-reading multi_stop_data::state in ack_state()
From: Rong Tao <rongtao@...tc.cn>
Commit b1fc58333575 ("stop_machine: Avoid potential race behaviour")
fixed the potential race when reading multi_stop_data::state. To make
it clearer that no such race exists, pass 'curstate' directly to
ack_state() instead of re-reading msdata->state there.
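For reference, a simplified sketch (not the exact upstream code) of the
multi_cpu_stop() loop after this change: the state is loaded once with
READ_ONCE() and that same value is handed to ack_state(), so ack_state()
never dereferences msdata->state a second time.

	/*
	 * Simplified sketch of the multi_cpu_stop() loop after this change;
	 * the handling of each state is elided.
	 */
	do {
		stop_machine_yield(cpumask);
		newstate = READ_ONCE(msdata->state);
		if (newstate != curstate) {
			curstate = newstate;
			/* ... act on curstate ... */
			/* Pass the value already read; no second load. */
			ack_state(msdata, curstate);
		}
	} while (curstate != MULTI_STOP_EXIT);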
Signed-off-by: Rong Tao <rongtao@...tc.cn>
---
kernel/stop_machine.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index cedb17ba158a..268c2e581698 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -188,10 +188,11 @@ static void set_state(struct multi_stop_data *msdata,
}
/* Last one to ack a state moves to the next state. */
-static void ack_state(struct multi_stop_data *msdata)
+static void ack_state(struct multi_stop_data *msdata,
+ enum multi_stop_state curstate)
{
if (atomic_dec_and_test(&msdata->thread_ack))
- set_state(msdata, msdata->state + 1);
+ set_state(msdata, curstate + 1);
}
notrace void __weak stop_machine_yield(const struct cpumask *cpumask)
@@ -242,7 +243,7 @@ static int multi_cpu_stop(void *data)
default:
break;
}
- ack_state(msdata);
+ ack_state(msdata, curstate);
} else if (curstate > MULTI_STOP_PREPARE) {
/*
* At this stage all other CPUs we depend on must spin
--
2.41.0