Date:   Mon, 23 Jul 2018 18:13:48 -0700
From:   isaacm@...eaurora.org
To:     peterz@...radead.org, matt@...eblueprint.co.uk, mingo@...nel.org,
        tglx@...utronix.de, bigeasy@...utronix.de
Cc:     linux-kernel@...r.kernel.org, psodagud@...eaurora.org,
        gregkh@...uxfoundation.org, pkondeti@...eaurora.org,
        stable@...r.kernel.org
Subject: Re: [PATCH] stop_machine: Disable preemption after queueing stopper
 threads

Hi all,

Are there any comments about this patch?

Thanks,
Isaac Manjarres
On 2018-07-17 12:35, Isaac J. Manjarres wrote:
> This commit:
> 
> 9fb8d5dc4b64 ("stop_machine, Disable preemption when
> waking two stopper threads")
> 
> does not fully address the race condition that can occur
> as follows:
> 
> On one CPU, call it CPU 3, thread 1 invokes
> cpu_stop_queue_two_works(2, 3, ...): it queues the works for
> migration/2 and migration/3, and is preempted after releasing
> the locks for migration/2 and migration/3, but before waking
> those threads.
> 
> Then, on CPU 2, a kworker, call it thread 2, invokes
> cpu_stop_queue_two_works(1, 2, ...) and queues the works for
> migration/1 and migration/2. Meanwhile, on CPU 3, thread 1
> resumes execution and wakes migration/2 and migration/3. As a
> result, after thread 2 releases the locks for migration/1 and
> migration/2, but before it wakes those threads, it can be
> preempted by the newly woken migration/2.
> 
> If thread 2 is preempted by migration/2, then migration/2
> executes its first work item successfully, since migration/3
> was woken up by CPU 3. But when it goes to execute the second
> work item, it disables preemption and calls multi_cpu_stop(),
> so CPU 2 waits forever for migration/1, which should have been
> woken up by thread 2. However, thread 2 can never wake
> migration/1: thread 2 is a kworker, so it is affine to CPU 2,
> and CPU 2 is busy running migration/2 with preemption disabled,
> so thread 2 never runs again.
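> 
> To make the interleaving concrete (CPU and thread numbers as in
> the example above):
> 
>   CPU 3 (thread 1)                     CPU 2 (kworker, thread 2)
>   ----------------                     -------------------------
>   cpu_stop_queue_two_works(2, 3, ...)
>     queue migration/2, migration/3
>     drop locks, get preempted
>                                        cpu_stop_queue_two_works(1, 2, ...)
>                                          queue migration/1, migration/2
>   wake migration/2, migration/3
>                                          drop locks
>                                          preempted by migration/2 before
>                                            wake_up_q(), so migration/1
>                                            is never woken
>                                        migration/2 completes the (2, 3)
>                                        work, then spins in
>                                        multi_cpu_stop() waiting for
>                                        migration/1 -> deadlock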
> 
> Disable preemption after queueing works for stopper threads
> to ensure that the operation of queueing the works and waking
> the stopper threads is atomic.
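> 
> A simplified sketch of the resulting cpu_stop_queue_two_works()
> flow (error handling and the retry path omitted; not the exact
> kernel code):
> 
>     raw_spin_lock_irq(&stopper1->lock);
>     raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
> 
>     __cpu_stop_queue_work(stopper1, work1, &wakeq);
>     __cpu_stop_queue_work(stopper2, work2, &wakeq);
>     preempt_disable();      /* taken before the locks are dropped */
> 
>     raw_spin_unlock(&stopper2->lock);
>     raw_spin_unlock_irq(&stopper1->lock);
> 
>     /*
>      * Another CPU may already have woken one of the stoppers queued
>      * above, but it can no longer preempt us before we issue our own
>      * wakeups below.
>      */
>     wake_up_q(&wakeq);
>     preempt_enable();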
> 
> Fixes: 9fb8d5dc4b64 ("stop_machine, Disable preemption when waking two stopper threads")
> Co-developed-by: Prasad Sodagudi <psodagud@...eaurora.org>
> Co-developed-by: Pavankumar Kondeti <pkondeti@...eaurora.org>
> Signed-off-by: Isaac J. Manjarres <isaacm@...eaurora.org>
> Signed-off-by: Prasad Sodagudi <psodagud@...eaurora.org>
> Signed-off-by: Pavankumar Kondeti <pkondeti@...eaurora.org>
> Cc: stable@...r.kernel.org
> ---
>  kernel/stop_machine.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index 1ff523d..e190d1e 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -260,6 +260,15 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>  	err = 0;
>  	__cpu_stop_queue_work(stopper1, work1, &wakeq);
>  	__cpu_stop_queue_work(stopper2, work2, &wakeq);
> +	/*
> +	 * The waking up of stopper threads has to happen
> +	 * in the same scheduling context as the queueing.
> +	 * Otherwise, there is a possibility of one of the
> +	 * above stoppers being woken up by another CPU,
> +	 * and preempting us. This will cause us to not
> +	 * wake up the other stopper forever.
> +	 */
> +	preempt_disable();
>  unlock:
>  	raw_spin_unlock(&stopper2->lock);
>  	raw_spin_unlock_irq(&stopper1->lock);
> @@ -271,7 +280,6 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>  	}
> 
>  	if (!err) {
> -		preempt_disable();
>  		wake_up_q(&wakeq);
>  		preempt_enable();
>  	}
