Message-ID: <alpine.DEB.2.20.1707042145170.2131@nanos>
Date:   Tue, 4 Jul 2017 21:49:10 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Vikram Mulukutla <markivx@...eaurora.org>
cc:     Rusty Russell <rusty@...tcorp.com.au>, Tejun Heo <tj@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Sebastian Sewior <bigeasy@...utronix.de>
Subject: Re: [PATCH] kthread: Atomically set completion and perform dequeue
 in __kthread_parkme

On Mon, 26 Jun 2017, Vikram Mulukutla wrote:
> On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
> > kthread_park waits for the target kthread to park itself with
> > __kthread_parkme using a completion variable. __kthread_parkme - which is
> > invoked by the target kthread - sets the completion variable before
> > calling schedule() to voluntarily get itself off of the runqueue (the
> > handshake is sketched just after the quoted report below).
> > 
> > This causes an interesting race in the hotplug path. takedown_cpu(),
> > invoked for CPU_X, attempts to park the cpuhp/X hotplug kthread before
> > running the stopper thread on CPU_X. kthread_park doesn't guarantee that
> > cpuhp/X is off of X's runqueue, only that the thread has executed
> > __kthread_parkme and set the completion. cpuhp/X may have been preempted
> > out before calling schedule() to voluntarily sleep. takedown_cpu proceeds
> > to run the stopper thread on CPU_X, which promptly migrates the
> > still-on-rq cpuhp/X thread off to another CPU, CPU_Y, setting its
> > affinity mask to something other than CPU_X alone (the control-side
> > sequence is also sketched below).
> > 
> > This is OK - cpuhp/X may finally get itself off of CPU_Y's runqueue at
> > some later point. But if that doesn't happen (for example, if there's
> > an RT thread on CPU_Y), the kthread_unpark in a subsequent cpu_up call
> > for CPU_X will race with the still-on-rq condition. Even then we're
> > functionally OK, because kthread_unpark() ends up calling
> > wait_task_inactive(), BUT the following happens:
> > 
> > [   12.472745] BUG: scheduling while atomic: swapper/7/0/0x00000002
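
For reference, the parking handshake described above lives in
kernel/kthread.c. Abridged from the v4.12-era source, with comments added
here to mark the window - a sketch, not the exact code:

	static void __kthread_parkme(struct kthread *self)
	{
		__set_current_state(TASK_PARKED);
		while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
			if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
				complete(&self->parked);
			/*
			 * Window: kthread_park()'s wait_for_completion()
			 * can return as soon as complete() above runs, but
			 * this task stays on the runqueue until the
			 * schedule() below dequeues it. Preemption in this
			 * window leaves a "parked" but still-on-rq kthread.
			 */
			schedule();
			__set_current_state(TASK_PARKED);
		}
		clear_bit(KTHREAD_IS_PARKED, &self->flags);
		__set_current_state(TASK_RUNNING);
	}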

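And the control-CPU side of the race, heavily abridged from kernel/cpu.c
of the same era (error handling and unrelated teardown steps elided):

	static int takedown_cpu(unsigned int cpu)
	{
		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

		/* Returns once cpuhp/X has signalled the completion... */
		kthread_park(st->thread);

		/*
		 * ...but cpuhp/X may still be on X's runqueue here. The
		 * stopper then pushes it off the dying CPU with a widened
		 * affinity mask, leaving it parked yet runnable on CPU_Y.
		 */
		return stop_machine(take_cpu_down, NULL, cpumask_of(cpu));
	}
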
That's not the worst problem. We could simply enable preemption there, but
the real issue is that this is the idle task of the upcoming CPU, which is
not supposed to schedule in the first place.

So no, your 'fix' is just papering over the underlying issue.

And yes, the moron who did not think about wait_task_inactive() being
called via kthread_unpark() -> kthread_bind() is me.
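
That chain, again abridged from the v4.12-era kernel/kthread.c
(__kthread_bind_mask() inlined, locking elided; the comments are
editorial, so take this as a sketch):

	void kthread_unpark(struct task_struct *k)
	{
		struct kthread *kthread = to_kthread(k);

		clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
		if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
			/* Per-cpu kthreads get bound back to their CPU. */
			if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
				__kthread_bind(k, kthread->cpu, TASK_PARKED);
			wake_up_state(k, TASK_PARKED);
		}
	}

	static void __kthread_bind(struct task_struct *p, unsigned int cpu,
				   long state)
	{
		/*
		 * wait_task_inactive() may sleep (schedule_hrtimeout())
		 * while the target is still on a runqueue - fatal when the
		 * caller is the upcoming CPU's idle task, which must never
		 * schedule. Hence "BUG: scheduling while atomic:
		 * swapper/...".
		 */
		if (!wait_task_inactive(p, state)) {
			WARN_ON(1);
			return;
		}
		do_set_cpus_allowed(p, cpumask_of(cpu));
		p->flags |= PF_NO_SETAFFINITY;
	}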

I'm testing a proper fix for it right now. Will post later.

Thanks,

	tglx
