Message-ID: <20160114181846.GZ3818@linux.vnet.ibm.com>
Date: Thu, 14 Jan 2016 10:18:46 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Sasha Levin <sasha.levin@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> On Wed, 13 Jan 2016, Paul E. McKenney wrote:
> > On Wed, Jan 13, 2016 at 10:05:49AM +0100, Thomas Gleixner wrote:
> > > We can fix that particular issue in the posix-timer code by making the
> > > locking symmetric:
> > >
> > > rcu_read_lock();
> > > spin_lock_irq(timer->lock);
> > >
> > > ...
> > >
> > > spin_unlock_irq(timer->lock);
> > > rcu_read_unlock();
> > >
> > > instead of:
> > >
> > > rcu_read_lock();
> > > spin_lock_irq(timer->lock);
> > > rcu_read_unlock();
> > >
> > > ...
> > >
> > > spin_unlock_irq(timer->lock);
> > >
> > > But the question is whether this is the only offending code path in the
> > > tree. We can avoid the hassle by making rtmutex->wait_lock irq-safe.
> > >
> > > Thoughts?
> >
> > Given that the lock is disabling irq, I don't see a problem with
> > extending the RCU read-side critical section to cover the entire
> > irq-disabled region.
>
> I cannot follow you here. What would be different if the lock did not
> disable irqs? I mean, you can get preempted right after rcu_read_lock(),
> before acquiring the spinlock.
I was thinking in terms of the fact that disabling irqs would block the
grace period for the current implementation of RCU (but -not- SRCU, just
for the record). You are right that the new version can be preempted
just after the rcu_read_lock(), but the same is true of the old pattern
as well. To avoid this possibility of preemption, the code would need
to look something like this:
local_irq_disable();
rcu_read_lock();
spin_lock(timer->lock);
...
spin_unlock(timer->lock);
rcu_read_unlock();
local_irq_enable();
> > Your point about the hassle of finding and fixing all the other instances of
> > this sort is well taken, however.
>
> Right. We have the pattern
>
> rcu_read_lock();
> x = lookup();
> if (x)
> keep_hold(x)
> rcu_read_unlock();
> return x;
>
> all over the place. Note that keep_hold() can be anything from a refcount
> to a spinlock, and I'm not sure that we can force stuff depending on the
> mechanism to be completely symmetric. So we are probably better off making
> that rcu unlock machinery more robust.
OK. If I read the lockdep reports correctly, the issue occurs
when rcu_read_unlock_special() finds that it needs to unboost,
which means doing an rt_mutex_unlock(). This is done outside of
rcu_read_unlock_special()'s irq-disabled region, but of course the caller
might have disabled irqs.
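In the style of the patterns above, my reading of the offending ordering
(a simplified sketch, not the exact call chain):

spin_lock_irq(timer->lock);       /* caller has irqs disabled */
...
rcu_read_unlock();
  -> rcu_read_unlock_special()    /* finds it must unboost */
    -> rt_mutex_unlock()          /* takes rtmutex->wait_lock, */
                                  /* which is not HARDIRQ-safe */
spin_unlock_irq(timer->lock);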
If I remember correctly, disabling irqs across rt_mutex_unlock() gets
me lockdep splats.
I could imagine having a per-CPU pointer to rt_mutex that
rcu_read_unlock() sets, and that is checked at every point that irqs
are enabled, with a call to rt_mutex_unlock() if that pointer is non-NULL.
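Something like the following, with hypothetical names (the real thing
would of course also need to handle nesting and CPU migration):

static DEFINE_PER_CPU(struct rt_mutex *, rcu_deferred_boost_mtx);

/* Called from the rcu_read_unlock() path instead of unlocking
 * directly while irqs are disabled (hypothetical helper). */
static void rcu_defer_unboost(struct rt_mutex *mtx)
{
	__this_cpu_write(rcu_deferred_boost_mtx, mtx);
}

/* Hooked in at every point where irqs are re-enabled
 * (hypothetical helper). */
static void rcu_do_deferred_unboost(void)
{
	struct rt_mutex *mtx = __this_cpu_read(rcu_deferred_boost_mtx);

	if (mtx) {
		__this_cpu_write(rcu_deferred_boost_mtx, NULL);
		rt_mutex_unlock(mtx);
	}
}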
But perhaps you had something else in mind?
Thanx, Paul