Message-ID: <20180412091217.GY4082@hirez.programming.kicks-ass.net>
Date: Thu, 12 Apr 2018 11:12:17 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Andrea Parri <parri.andrea@...il.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [RFC tip/locking/lockdep v6 19/20] rcu: Equip sleepable RCU with
lockdep dependency graph checks
On Thu, Apr 12, 2018 at 10:12:33AM +0800, Boqun Feng wrote:
> A trivial fix/hack would be adding local_irq_disable() and
> local_irq_enable() around srcu_lock_sync() like:
>
> static inline void srcu_lock_sync(struct lockdep_map *map)
> {
> 	local_irq_disable();
> 	lock_map_acquire(map);
> 	lock_map_release(map);
> 	local_irq_enable();
> }
>
> However, it might be better if lockdep could provide some annotation
> API for such an empty critical section to say the grab-and-drop is
> atomic. Something like:
>
> /*
>  * Annotate a wait point for all previous critical sections to
>  * exit.
>  *
>  * This won't make @map an irq-unsafe lock, no matter whether
>  * it's called with or without irqs disabled.
>  */
> lock_wait_unlock(struct lockdep_map *map, ..)
>
> And in this primitive, we do something similar to
> lock_acquire()+lock_release(). This primitive could be used elsewhere,
> as I believe we have several empty grab-and-drop critical sections for
> lockdep annotations, e.g. in start_flush_work().
>
> Thoughts?
>
> This certainly requires a bit more work; in the meanwhile, I will add
> another self-testcase which has srcu_read_lock() called in irq context.
Yeah, I've never really bothered to clean those things up, but I don't
see any reason to stop you from doing it ;-)
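
For reference, the existing empty grab-and-drop annotation is roughly this
shape (simplified from start_flush_work(); the helper name below is made up
and this is only a sketch, not the actual workqueue code):

	/*
	 * Empty critical section: acquire and immediately release the
	 * workqueue's lockdep_map, so lockdep sees the flusher as waiting
	 * on whatever else holds that map.
	 */
	static void annotate_flush_wait(struct workqueue_struct *wq)
	{
		lock_map_acquire(&wq->lockdep_map);
		lock_map_release(&wq->lockdep_map);
	}

Wrapping sites like that in one lock_wait_unlock()-style helper would at
least give them a single name and a single place to fix.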
As to the initial pattern with disabling IRQs, I think I've seen code
like that before, and in general performance isn't a top priority
(within reason) when you're running lockdep kernels, so I've usually let
it be.
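
If we do keep the IRQ-disabling hack in the meantime, it probably wants
local_irq_save()/local_irq_restore() rather than plain disable/enable, so
it stays correct when called with interrupts already disabled; a sketch
along those lines:

	static inline void srcu_lock_sync(struct lockdep_map *map)
	{
		unsigned long flags;

		/*
		 * save/restore rather than disable/enable, so a caller
		 * that already has interrupts off doesn't get them
		 * spuriously re-enabled afterwards.
		 */
		local_irq_save(flags);
		lock_map_acquire(map);
		lock_map_release(map);
		local_irq_restore(flags);
	}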