Message-ID: <20140501144018.GA25369@localhost.localdomain>
Date: Thu, 1 May 2014 16:40:21 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Tejun Heo <tj@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Christoph Lameter <cl@...ux.com>,
Kevin Hilman <khilman@...aro.org>,
Lai Jiangshan <laijs@...fujitsu.com>,
Mike Galbraith <bitbucket@...ine.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Viresh Kumar <viresh.kumar@...aro.org>
Subject: Re: [PATCH 2/4] workqueue: Split apply attrs code from its locking
On Thu, Apr 24, 2014 at 10:48:32AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:34PM +0200, Frederic Weisbecker wrote:
> > +static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
> > +					const struct workqueue_attrs *attrs)
> >  {
> >  	struct workqueue_attrs *new_attrs, *tmp_attrs;
> >  	struct pool_workqueue **pwq_tbl, *dfl_pwq;
> > @@ -3976,15 +3960,6 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
> >  	copy_workqueue_attrs(tmp_attrs, new_attrs);
> > 
> >  	/*
> > -	 * CPUs should stay stable across pwq creations and installations.
> > -	 * Pin CPUs, determine the target cpumask for each node and create
> > -	 * pwqs accordingly.
> > -	 */
> > -	get_online_cpus();
> > -
> > -	mutex_lock(&wq_pool_mutex);
>
> lockdep_assert_held()
Not sure... Only small parts of the function actually need the lock, namely
those doing the pwq allocations, and they already have the lockdep_assert_held().
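
IOW, the shape after this patch is roughly the following sketch (bodies
elided, only the locking skeleton, so don't read it as the exact code):

static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
					const struct workqueue_attrs *attrs)
{
	/*
	 * Only the pwq allocation/installation steps depend on
	 * wq_pool_mutex, and those paths already carry their own
	 * lockdep_assert_held(&wq_pool_mutex).
	 */
	...
	return 0;
}

int apply_workqueue_attrs(struct workqueue_struct *wq,
			  const struct workqueue_attrs *attrs)
{
	int ret;

	/* pin CPUs so they stay stable across pwq creation/installation */
	get_online_cpus();
	mutex_lock(&wq_pool_mutex);
	ret = apply_workqueue_attrs_locked(wq, attrs);
	mutex_unlock(&wq_pool_mutex);
	put_online_cpus();

	return ret;
}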