Message-Id: <1531845017-19935-1-git-send-email-isaacm@codeaurora.org>
Date: Tue, 17 Jul 2018 09:30:17 -0700
From: "Isaac J. Manjarres" <isaacm@...eaurora.org>
To: peterz@...radead.org, matt@...eblueprint.co.uk, mingo@...nel.org,
tglx@...utronix.de, bigeasy@...utronix.de
Cc: "Isaac J. Manjarres" <isaacm@...eaurora.org>,
linux-kernel@...r.kernel.org, psodagud@...eaurora.org,
gregkh@...uxfoundation.org, pkondeti@...eaurora.org,
stable@...r.kernel.org
Subject: [PATCH v4] stop_machine: Disable preemption after queueing stopper threads

After cpu_stop_queue_two_works() queues the cpu_stop works for the
stopper threads, it releases the locks held for both threads, which
re-enables preemption and allows the following race to occur:

On one CPU, call it CPU 3, thread 1 invokes
cpu_stop_queue_two_works(2, 3,...); it queues the works for
migration/2 and migration/3, and is preempted after releasing the
locks for migration/2 and migration/3, but before waking those
threads.

Then, on CPU 2, a kworker, call it thread 2, is running, and it
invokes cpu_stop_queue_two_works(1, 2,...), queueing the works for
migration/1 and migration/2.

Meanwhile, on CPU 3, thread 1 resumes execution and wakes migration/2
and migration/3. This means that when thread 2, running on CPU 2,
releases the locks for migration/1 and migration/2, but before it
wakes those threads, it can be preempted by migration/2.

If thread 2 is preempted by migration/2, then migration/2 executes
the first work item successfully, since migration/3 was woken up by
CPU 3. But when migration/2 goes on to execute the second work item,
it disables preemption and calls multi_cpu_stop(), so CPU 2 waits
forever for migration/1, which should have been woken up by thread 2.
However, migration/1 can never be woken up by thread 2: thread 2 is a
kworker, so it is affine to CPU 2, and CPU 2 is busy running
migration/2 with preemption disabled, so thread 2 will never run.
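
The window is visible in the tail of cpu_stop_queue_two_works()
before this patch (abridged sketch, not verbatim source; the inline
comments are illustrative):

	err = 0;
	__cpu_stop_queue_work(stopper1, work1, &wakeq);
	__cpu_stop_queue_work(stopper2, work2, &wakeq);
unlock:
	raw_spin_unlock(&stopper2->lock);
	raw_spin_unlock_irq(&stopper1->lock);	/* preemption possible again */
	...
	wake_up_q(&wakeq);	/* if a stopper we just queued preempts us
				 * before this line, the other stopper is
				 * never woken */
	return err;
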
Disable preemption after queueing works for stopper threads
to ensure that the operation of queueing the works and waking
the stopper threads is atomic.
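
With the change applied, the resulting flow is roughly the following
(condensed sketch of the patched function, not the verbatim source;
see the diff below for the actual change):

	err = 0;
	__cpu_stop_queue_work(stopper1, work1, &wakeq);
	__cpu_stop_queue_work(stopper2, work2, &wakeq);
	preempt_disable();		/* only reached on the success path */
unlock:
	raw_spin_unlock(&stopper2->lock);
	raw_spin_unlock_irq(&stopper1->lock);

	if (unlikely(err == -EDEADLK)) {
		...			/* wait for stop_cpus_in_progress */
		goto retry;
	}

	if (!err) {
		wake_up_q(&wakeq);	/* both stoppers are woken ...      */
		preempt_enable();	/* ... before anything can preempt us */
	}
	return err;

Because the error paths jump to the unlock label before
preempt_disable(), the preempt_enable() is guarded by !err so that
the disable/enable pair stays balanced.
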
Fixes: 0b26351b910f ("stop_machine, sched: Fix migrate_swap() vs. active_balance() deadlock")
Co-Developed-by: Prasad Sodagudi <psodagud@...eaurora.org>
Co-Developed-by: Pavankumar Kondeti <pkondeti@...eaurora.org>
Signed-off-by: Isaac J. Manjarres <isaacm@...eaurora.org>
Signed-off-by: Prasad Sodagudi <psodagud@...eaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@...eaurora.org>
Cc: stable@...r.kernel.org
---
kernel/stop_machine.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index f89014a..e190d1e 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -260,6 +260,15 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
 	err = 0;
 	__cpu_stop_queue_work(stopper1, work1, &wakeq);
 	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+	/*
+	 * The waking up of stopper threads has to happen
+	 * in the same scheduling context as the queueing.
+	 * Otherwise, there is a possibility of one of the
+	 * above stoppers being woken up by another CPU,
+	 * and preempting us. This will cause us to not
+	 * wake up the other stopper forever.
+	 */
+	preempt_disable();
 unlock:
 	raw_spin_unlock(&stopper2->lock);
 	raw_spin_unlock_irq(&stopper1->lock);
@@ -270,7 +279,10 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
 		goto retry;
 	}
 
-	wake_up_q(&wakeq);
+	if (!err) {
+		wake_up_q(&wakeq);
+		preempt_enable();
+	}
 
 	return err;
 }
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project