Message-ID: <1310545448.14978.40.camel@twins>
Date: Wed, 13 Jul 2011 10:24:08 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: paulmck@...ux.vnet.ibm.com
Cc: Dave Jones <davej@...hat.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: lockdep circular locking error (rcu_node_level_0 vs rq->lock)
On Tue, 2011-07-12 at 15:54 -0700, Paul E. McKenney wrote:
> > @@ -892,6 +892,7 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
> > * prev into current:
> > */
> > spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
> > + rcu_read_acquire();
>
> Oooh... This is a tricky one. Hmmm...
<snip>
> Does any of this make sense?
No?
> > @@ -3141,6 +3170,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
> > */
> > #ifndef __ARCH_WANT_UNLOCKED_CTXSW
> > spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
> > + rcu_read_release();
>
> My guess is that we don't need the rcu_read_release() -- the arch shouldn't
> care that we have a non-atomic field in task_struct incremented, right?
>
> Or am I confused about what this is doing?
Either that or I used the wrong primitives for what I was meaning to do.

Looking at rcupdate.h, rcu_read_{acquire,release} are the lockdep
annotations for rcu_read_lock()/rcu_read_unlock(). What we're doing above
is context switching without upsetting lockdep.
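
For reference, in the rcupdate.h of that era these are (roughly, going
from memory, exact arguments may differ) just thin wrappers around the
generic lockdep hooks, something like:

	extern struct lockdep_map rcu_lock_map;

	/* annotate entering/leaving an RCU read-side critical section */
	# define rcu_read_acquire() \
			lock_acquire(&rcu_lock_map, 0, 0, 2, 1, NULL, _THIS_IP_)
	# define rcu_read_release() \
			lock_release(&rcu_lock_map, 1, _THIS_IP_)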
The problem is that held-lock state is tracked per task, and we're, well,
switching tasks, so we need to transfer the held-lock state from the old
task to the new one: the new task is the one that will unlock rq->lock,
and if lockdep finds at that point that the task isn't actually holding
the lock, it screams bloody murder.
So what we do is release the lock (annotation only, the actual lock stays
locked) on prev right before switching, and acquire the lock on next right
after switching.
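
Concretely, the shape of that hand-over in kernel/sched.c is roughly
(paraphrased sketch, not the exact tree):

	/*
	 * context_switch(), still running as prev: drop the lockdep
	 * annotation for rq->lock; the lock itself stays held across
	 * the switch.
	 */
	#ifndef __ARCH_WANT_UNLOCKED_CTXSW
		spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
	#endif
		switch_to(prev, next, prev);

	/*
	 * finish_lock_switch(), now running as next: re-take the
	 * annotation so lockdep knows next owns rq->lock before it
	 * actually unlocks it.
	 */
		spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
		raw_spin_unlock_irq(&rq->lock);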