lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <20180102180119.GA1355@linux.vnet.ibm.com>
Date:   Tue, 2 Jan 2018 10:01:19 -0800
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Prateek Sood <prsood@...eaurora.org>,
        Peter Zijlstra <peterz@...radead.org>, avagin@...il.com,
        mingo@...nel.org, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org, sramana@...eaurora.org
Subject: Re: [PATCH] cgroup/cpuset: fix circular locking dependency

On Tue, Jan 02, 2018 at 09:44:08AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 02, 2018 at 08:16:56AM -0800, Tejun Heo wrote:
> > Hello,
> > 
> > On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> > > task T is waiting for cpuset_mutex acquired
> > > by kworker/2:1
> > > 
> > > sh ==> cpuhp/2 ==> kworker/2:1 ==> sh 
> > > 
> > > kworker/2:3 ==> kthreadd ==> Task T ==> kworker/2:1
> > > 
> > > It seems that my earlier patch set should fix this scenario:
> > > 1) Inverting locking order of cpuset_mutex and cpu_hotplug_lock.
> > > 2) Make cpuset hotplug work synchronous.
> > >
> > > Could you please share your feedback.
> > 
> > Hmm... this can also be resolved by adding WQ_MEM_RECLAIM to the
> > synchronize rcu workqueue, right?  Given the wide-spread usages of
> > synchronize_rcu and friends, maybe that's the right solution, or at
> > least something we also need to do, for this particular deadlock?
> 
> To make WQ_MEM_RECLAIM work, I need to dynamically allocate RCU's
> workqueues, correct?  Or is there some way to mark a statically
> allocated workqueue as WQ_MEM_RECLAIM after the fact?
> 
> I can dynamically allocate them, but I need to carefully investigate
> boot-time use.  So if it is possible to be lazy, I do want to take
> the easy way out.  ;-)

Actually, after taking a quick look, could you please supply me with
a way of mark a statically allocated workqueue as WQ_MEM_RECLAIM after
the fact?  Otherwise, I end up having to check for the workqueue having
been allocated pretty much each time I use it, which is going to be an
open invitation for bugs.  Plus it looks like there are ways that RCU's
workqueue wakeups can be executed during very early boot, which can be
handled, but again in a rather messy fashion.

In contrast, given a way of mark a statically allocated workqueue
as WQ_MEM_RECLAIM after the fact, I simply continue initializing the
workqueue at early boot, and then add the WQ_MEM_RECLAIM marking some
arbitrarily chosen time after the scheduler has been initialized.

The required change to workqueues looks easy, just move the body of
the "if (flags & WQ_MEM_RECLAIM) {" statement in __alloc_workqueue_key()
to a separate function, right?

							Thanx, Paul

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ