Message-ID: <318fac36-66cd-7f90-df61-44042119ee2e@codeaurora.org>
Date:   Mon, 26 Jun 2017 16:03:27 -0700
From:   Vikram Mulukutla <markivx@...eaurora.org>
To:     rusty@...tcorp.com.au, tj@...nel.org, tglx@...utronix.de,
        akpm@...ux-foundation.org
Cc:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] kthread: Atomically set completion and perform dequeue in
 __kthread_parkme


Correcting Thomas Gleixner's email address: s/linuxtronix/linutronix/

On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
> kthread_park() waits, via a completion variable, for the target kthread
> to park itself in __kthread_parkme(). __kthread_parkme() - which is
> invoked by the target kthread - sets the completion variable before
> calling schedule() to voluntarily take itself off the runqueue.
> 
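> To make the window concrete, here's a minimal userspace model of that
> handshake (a sketch with hypothetical names; pthreads stand in for
> kthreads and a condition variable for the completion - not kernel code):
> 
> 	#include <pthread.h>
> 	#include <sched.h>
> 	#include <stdio.h>
> 
> 	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
> 	static pthread_cond_t parked_cv = PTHREAD_COND_INITIALIZER;
> 	static int parked;
> 
> 	/* Target thread: the analogue of __kthread_parkme(). */
> 	static void *parkme(void *unused)
> 	{
> 		pthread_mutex_lock(&lock);
> 		parked = 1;                      /* complete(&self->parked) */
> 		pthread_cond_signal(&parked_cv);
> 		pthread_mutex_unlock(&lock);
> 		/*
> 		 * The window: "parked" is already visible to the waiter,
> 		 * but this thread is still runnable. Preemption here is
> 		 * the race described below.
> 		 */
> 		sched_yield();                   /* schedule() */
> 		return NULL;
> 	}
> 
> 	int main(void)
> 	{
> 		pthread_t t;
> 
> 		pthread_create(&t, NULL, parkme, NULL);
> 
> 		/* Waiter: the analogue of kthread_park(). */
> 		pthread_mutex_lock(&lock);
> 		while (!parked)
> 			pthread_cond_wait(&parked_cv, &lock);
> 		pthread_mutex_unlock(&lock);
> 		puts("completion seen; target may still be on a runqueue");
> 
> 		pthread_join(t, NULL);
> 		return 0;
> 	}
> 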
> This causes an interesting race in the hotplug path. takedown_cpu()
> invoked for CPU_X attempts to park the cpuhp/X hotplug kthread before
> running the stopper thread on CPU_X. kthread_park() doesn't guarantee
> that cpuhp/X is off of X's runqueue, only that the thread has executed
> __kthread_parkme() and set the completion; cpuhp/X may have been
> preempted before it could call schedule() to voluntarily sleep.
> takedown_cpu() then runs the stopper thread on CPU_X, which promptly
> migrates the still-on-rq cpuhp/X thread to another CPU, CPU_Y, setting
> its affinity mask to something other than CPU_X alone.
> 
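> My reading of the sequence (CPU numbers illustrative):
> 
> 	cpuhp/X on CPU_X:  __kthread_parkme() sets KTHREAD_IS_PARKED and
> 	                   completes &self->parked
> 	cpuhp/X on CPU_X:  preempted before schedule(); still on X's rq
> 	takedown_cpu():    kthread_park() returns; stopper runs on CPU_X
> 	stopper on CPU_X:  migrates the still-on-rq cpuhp/X to CPU_Y and
> 	                   sets its affinity to something other than CPU_X
> 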
> This is OK - cpuhp/X may finally get itself off of CPU_Y's runqueue at
> some later point. But if that doesn't happen (for example, if there's
> an RT thread on CPU_Y), the kthread_unpark() in a subsequent cpu_up()
> call for CPU_X will race with the still-on-rq condition. Even then we're
> functionally OK, because there is a wait_task_inactive() in the
> kthread_unpark() path, BUT the following happens:
> 
> [   12.472745] BUG: scheduling while atomic: swapper/7/0/0x00000002
> [   12.472749] Modules linked in:
> [   12.472756] CPU: 7 PID: 0 Comm: swapper/7 Not tainted 4.9.32-perf+ #680
> [   12.472758] Hardware name: XXXXX
> [   12.472760] Call trace:
> [   12.472773] [<ffffff8eb4e87928>] dump_backtrace+0x0/0x198
> [   12.472777] [<ffffff8eb4e87ad4>] show_stack+0x14/0x1c
> [   12.472781] [<ffffff8eb516c998>] dump_stack+0x8c/0xac
> [   12.472786] [<ffffff8eb4ecea28>] __schedule_bug+0x54/0x70
> [   12.472792] [<ffffff8eb5bbf478>] __schedule+0x6b4/0x928
> [   12.472794] [<ffffff8eb5bbf728>] schedule+0x3c/0xa0
> [   12.472797] [<ffffff8eb5bc2950>] schedule_hrtimeout_range_clock+0x80/0xec
> [   12.472799] [<ffffff8eb5bc29ec>] schedule_hrtimeout+0x18/0x20
> [   12.472803] [<ffffff8eb4ed3b30>] wait_task_inactive+0x1a0/0x1a4
> [   12.472806] [<ffffff8eb4ec1b88>] __kthread_bind_mask+0x20/0x7c
> [   12.472809] [<ffffff8eb4ec1c0c>] __kthread_bind+0x28/0x30
> [   12.472811] [<ffffff8eb4ec1c88>] __kthread_unpark+0x5c/0x60
> [   12.472814] [<ffffff8eb4ec1cb0>] kthread_unpark+0x24/0x2c
> [   12.472818] [<ffffff8eb4ea4a7c>] cpuhp_online_idle+0x50/0x90
> [   12.472822] [<ffffff8eb4ef2940>] cpu_startup_entry+0x3c/0x1d4
> [   12.472824] [<ffffff8eb4e8dae4>] secondary_start_kernel+0x164/0x1b4
> 
> Since kthread_unpark() is invoked here from a preemption-disabled
> context (cpuhp_online_idle, per the trace above), wait_task_inactive()'s
> call into schedule() is invalid, causing the splat. Note that
> __kthread_bind_mask() is correctly attempting to re-set the affinity
> mask, since cpuhp/X is a per-cpu smpboot thread.
> 
> Instead of adding an expensive wait_task_inactive() inside kthread_park()
> or trying to muck with the hotplug code, let's just ensure that setting
> the completion variable and calling schedule() happen atomically (with
> respect to preemption) inside __kthread_parkme(). This focuses the fix
> on the hotplug requirement alone and removes the unnecessary migration
> of cpuhp/X.
> 
> Signed-off-by: Vikram Mulukutla <markivx@...eaurora.org>
> ---
>   kernel/kthread.c | 13 ++++++++++++-
>   1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 26db528..7ad3354 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -171,9 +171,20 @@ static void __kthread_parkme(struct kthread *self)
>   {
>   	__set_current_state(TASK_PARKED);
>   	while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
> +		/*
> +		 * Why the preempt_disable?
> +		 * Hotplug needs to ensure that 'self' is off of the runqueue
> +		 * as well, before scheduling the stopper thread that will
> +		 * migrate tasks off of the runqueue that 'self' was running on.
> +		 * This avoids unnecessary migration work and also ensures that
> +		 * kthread_unpark in the cpu_up path doesn't race with
> +		 * __kthread_parkme.
> +		 */
> +		preempt_disable();
>   		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
>   			complete(&self->parked);
> -		schedule();
> +		schedule_preempt_disabled();
> +		preempt_enable();
>   		__set_current_state(TASK_PARKED);
>   	}
>   	clear_bit(KTHREAD_IS_PARKED, &self->flags);
> 
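For reference, with the hunk above applied, the parking loop reads as
follows (reconstructed from the diff and its context lines; the comment
block is elided). schedule_preempt_disabled() re-disables preemption
before returning, so the preempt_enable() balances the preempt_disable():

	__set_current_state(TASK_PARKED);
	while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
		preempt_disable();
		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
			complete(&self->parked);
		schedule_preempt_disabled();
		preempt_enable();
		__set_current_state(TASK_PARKED);
	}
	clear_bit(KTHREAD_IS_PARKED, &self->flags);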
