Message-ID: <20180424133325.GA3179@codeblueprint.co.uk>
Date:   Tue, 24 Apr 2018 14:33:25 +0100
From:   Matt Fleming <matt@...eblueprint.co.uk>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...e.com>,
        Mike Galbraith <umgwanakikbuti@...il.com>
Subject: Re: cpu stopper threads and load balancing leads to deadlock

On Fri, 20 Apr, at 11:50:05AM, Peter Zijlstra wrote:
> On Tue, Apr 17, 2018 at 03:21:19PM +0100, Matt Fleming wrote:
> > Hi guys,
> > 
> > We've seen a bug in one of our SLE kernels where the cpu stopper
> > thread ("migration/15") is entering idle balance. This then triggers
> > active load balance.
> > 
> > At the same time, a task on another CPU triggers a page fault and NUMA
> > balancing kicks in to try to migrate the task closer to the NUMA node
> > for that page (we're inside stop_two_cpus()). This faulting task is
> > spinning in try_to_wake_up() (inside smp_cond_load_acquire(&p->on_cpu,
> > !VAL)), waiting for "migration/15" to context switch.
> > 
> > Unfortunately, because "migration/15" is doing active load balance,
> > it's spinning waiting for the NUMA-page-faulting CPU's stopper lock,
> > which that CPU already holds (since it's inside stop_two_cpus()).
> > 
> > Deadlock ensues.
> 
> 
> So if I read that right, something like the following happens:
> 
> CPU0					CPU1
> 
> schedule(.prev=migrate/0)		<fault>
>   pick_next_task			  ...
>     idle_balance			    migrate_swap()
>       active_balance			      stop_two_cpus()
> 						spin_lock(stopper0->lock)
> 						spin_lock(stopper1->lock)
> 						ttwu(migrate/0)
> 						  smp_cond_load_acquire() -- waits for schedule()
>         stop_one_cpu(1)
> 	  spin_lock(stopper1->lock) -- waits for stopper lock

Yep, that's exactly right.
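
For reference, the wakeup in question happens while stopper->lock is
held. Roughly (simplified from kernel/stop_machine.c as I read it; the
enabled/error checks are dropped and exact names and locking primitives
vary between kernel versions):

static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
                                  struct cpu_stop_work *work)
{
        list_add_tail(&work->list, &stopper->works);
        /* stopper thread is woken with stopper->lock still held */
        wake_up_process(stopper->thread);
}

static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
                                    int cpu2, struct cpu_stop_work *work2)
{
        struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
        struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);

        /* both stopper locks are held across both wakeups */
        spin_lock_irq(&stopper1->lock);
        spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);

        __cpu_stop_queue_work(stopper1, work1);
        __cpu_stop_queue_work(stopper2, work2);

        spin_unlock(&stopper2->lock);
        spin_unlock_irq(&stopper1->lock);

        return 0;
}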

> Fix _this_ deadlock by moving the wakeups out from under stopper->lock.
> I'm not entirely sure there aren't more dragons here, but this particular
> one seems fixable by doing that.
> 
> Is there any way you can reproduce/test this?

I'm afraid I don't have any way to test this, but I can ask the
customer who reported it whether they can.

Either way, this fix looks good to me.
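
For the archive: as I understand it, the idea is to collect the wakeups
on a wake_q while stopper->lock is held and only issue them once the
lock has been dropped, along these lines (a rough sketch of the
approach, not the actual patch; the !enabled handling is omitted):

static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
                                  struct cpu_stop_work *work,
                                  struct wake_q_head *wakeq)
{
        list_add_tail(&work->list, &stopper->works);
        /* defer the wakeup instead of calling wake_up_process() here */
        wake_q_add(wakeq, stopper->thread);
}

static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
{
        struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
        DEFINE_WAKE_Q(wakeq);
        unsigned long flags;
        bool enabled;

        spin_lock_irqsave(&stopper->lock, flags);
        enabled = stopper->enabled;
        if (enabled)
                __cpu_stop_queue_work(stopper, work, &wakeq);
        spin_unlock_irqrestore(&stopper->lock, flags);

        /* wakeups happen only after stopper->lock has been released */
        wake_up_q(&wakeq);

        return enabled;
}

That way ttwu() never runs with a stopper lock held, so the
smp_cond_load_acquire() spin can no longer be waiting on a CPU that is
itself waiting for that same lock.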
