Message-ID: <20120906172912.GJ2448@linux.vnet.ibm.com>
Date: Thu, 6 Sep 2012 10:29:13 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...ymtl.ca, josh@...htriplett.org,
niv@...ibm.com, tglx@...utronix.de, rostedt@...dmis.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com,
eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
sbw@....edu, patches@...aro.org
Subject: Re: [PATCH tip/core/rcu 03/23] rcu: Move RCU grace-period cleanup into kthread
On Thu, Sep 06, 2012 at 03:34:38PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-08-30 at 11:18 -0700, Paul E. McKenney wrote:
> > static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
> > __releases(rcu_get_root(rsp)->lock)
> > {
> > + raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
> > + wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */
> > }
>
> Could you now also clean up the locking so that the caller releases this
> lock?
>
> I so dislike asymmetric locking like that..
Or I could inline the whole thing at the two callsites...
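For reference, a minimal sketch of the two shapes being discussed, reusing
the names from the quoted patch (rcu_report_qs_rsp(), rsp->gp_wq,
rcu_get_root()); the callsites shown are hypothetical illustrations, not
the actual follow-up commit:

	/*
	 * Option 1 (symmetric locking, as Peter suggests): the caller both
	 * acquires and releases the root rcu_node lock, so the function no
	 * longer needs the __releases() annotation.
	 */
	static void rcu_report_qs_rsp(struct rcu_state *rsp)
	{
		wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */
	}

	/* Hypothetical caller: */
	raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
	rcu_report_qs_rsp(rsp);

	/*
	 * Option 2 (inlining, as Paul suggests): drop the helper and open-code
	 * its two-line body at each of the two callsites, which removes the
	 * asymmetry the same way:
	 */
	raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
	wake_up(&rsp->gp_wq); /* Memory barrier implied by wake_up() path. */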
Thanx, Paul