Message-ID: <20160115014232.GQ3818@linux.vnet.ibm.com>
Date: Thu, 14 Jan 2016 17:42:32 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Sasha Levin <sasha.levin@...cle.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
On Thu, Jan 14, 2016 at 08:47:41PM +0100, Thomas Gleixner wrote:
> On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > On Thu, Jan 14, 2016 at 06:43:16PM +0100, Thomas Gleixner wrote:
> > > Right. We have the pattern
> > >
> > >      rcu_read_lock();
> > >      x = lookup();
> > >      if (x)
> > >      	keep_hold(x)
> > >      rcu_read_unlock();
> > >      return x;
> > >
> > > all over the place. Now, keep_hold() can be anything from a refcount to
> > > a spinlock, and I'm not sure that we can force everything that depends on
> > > the mechanism to be completely symmetric. So we are probably better off
> > > making the RCU unlock machinery more robust.
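
As a concrete instance of that pattern, a minimal refcount-flavored sketch
(struct foo, foo_lookup(), and foo_tree are made-up names, not from any
patch in this thread):

	#include <linux/kref.h>
	#include <linux/radix-tree.h>
	#include <linux/rcupdate.h>

	struct foo {
		struct kref ref;
		/* payload */
	};

	static RADIX_TREE(foo_tree, GFP_ATOMIC);	/* hypothetical table */

	static struct foo *foo_lookup(unsigned long key)
	{
		struct foo *x;

		rcu_read_lock();
		x = radix_tree_lookup(&foo_tree, key);
		/* keep_hold(): pin the object before leaving the RCU
		 * read-side critical section. */
		if (x && !kref_get_unless_zero(&x->ref))
			x = NULL;	/* raced with the final put */
		rcu_read_unlock();
		return x;
	}

The spinlock flavor would instead take a lock in keep_hold() and release it
only after rcu_read_unlock(), which is the kind of asymmetry in question.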
> >
> > OK. If I read the lockdep reports correctly, the issue occurs
> > when rcu_read_unlock_special() finds that it needs to unboost,
> > which means doing an rt_mutex_unlock(). This is done outside of
> > rcu_read_unlock_special()'s irq-disabled region, but of course the caller
> > might have disabled irqs.
> >
> > If I remember correctly, disabling irqs across rt_mutex_unlock() gets
> > me lockdep splats.
>
> That shouldn't be the case. The splats come from this scenario:
>
> CPU0                               CPU1
> rtmutex_lock(rcu)
>   raw_spin_lock(&rcu->lock)
>                                    rcu_read_lock()
> Interrupt                          spin_lock_irq(some->lock);
>                                    rcu_read_unlock()
>                                      rcu_read_unlock_special()
>                                        rtmutex_unlock(rcu)
>                                          raw_spin_lock(&rcu->lock)
>   spin_lock(some->lock)
>
> Now we are deadlocked.
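
Spelled out, the CPU1 side of that trace is just an RCU reader ending its
critical section while holding an irq-disabled lock (a sketch; some->lock is
the placeholder name from the trace above):

	rcu_read_lock();
	spin_lock_irq(&some->lock);	/* irqs off, some->lock held */
	rcu_read_unlock();		/* the reader was boosted, so this
					 * calls rcu_read_unlock_special() ->
					 * rt_mutex_unlock(rcu) ->
					 * raw_spin_lock(&rcu->lock), which
					 * CPU0 holds while its interrupt
					 * handler spins on some->lock */
	spin_unlock_irq(&some->lock);	/* never reached */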
>
> > I could imagine having a per-CPU pointer to rt_mutex that
> > rcu_read_unlock() sets, and that is checked at every point that irqs
> > are enabled, with a call to rt_mutex_unlock() if that pointer is non-NULL.
> >
> > But perhaps you had something else in mind?
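
Roughly, that idea might look like the following (all of these names are
hypothetical: rcu_deferred_boost, rcu_defer_unboost(),
rcu_check_deferred_unboost()):

	static DEFINE_PER_CPU(struct rt_mutex *, rcu_deferred_boost);

	/* Called from rcu_read_unlock_special() when irqs are disabled. */
	static void rcu_defer_unboost(struct rt_mutex *mtx)
	{
		this_cpu_write(rcu_deferred_boost, mtx);
	}

	/* Hooked into every point where irqs get re-enabled. */
	static void rcu_check_deferred_unboost(void)
	{
		struct rt_mutex *mtx = this_cpu_read(rcu_deferred_boost);

		if (unlikely(mtx)) {
			this_cpu_write(rcu_deferred_boost, NULL);
			rt_mutex_unlock(mtx);
		}
	}

The cost would be a check on every irq-enable path, which is presumably why
the thread settles on fixing the rtmutex side instead.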
>
> We can solve that issue by taking rtmutex->wait_lock with irqsave. So the
> above becomes:
>
> CPU0                               CPU1
> rtmutex_lock(rcu)
>   raw_spin_lock_irq(&rcu->lock)
>                                    rcu_read_lock()
>                                    spin_lock_irq(some->lock);
>                                    rcu_read_unlock()
>                                      rcu_read_unlock_special()
>                                        rtmutex_unlock(rcu)
>                                          raw_spin_lock_irqsave(&rcu->lock, flags)
>   raw_spin_unlock_irq(&rcu->lock)
> Interrupt                                ...
>   spin_lock(some->lock)                  raw_spin_unlock_irqrestore(&rcu->lock, flags)
>                                    ...
>                                    spin_unlock_irq(some->lock);
>
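
The property being relied on: the _irqsave/_irqrestore variants preserve the
caller's interrupt state, while the plain _irq variants unconditionally
re-enable interrupts on unlock. Illustrative only, mirroring the conversion
in the patch below:

	void sketch_irq(raw_spinlock_t *lock)
	{
		raw_spin_lock_irq(lock);
		raw_spin_unlock_irq(lock);	/* always re-enables irqs:
						 * wrong if the caller had
						 * them disabled */
	}

	void sketch_irqsave(raw_spinlock_t *lock)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(lock, flags);
		raw_spin_unlock_irqrestore(lock, flags);	/* restores the
								 * caller's state */
	}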
> Untested patch below.
One small fix to make it build is below. I started rcutorture, somewhat
pointlessly given that the splat doesn't appear on my setup.

							Thanx, Paul
------------------------------------------------------------------------
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 18e8b41ff796..1fdf6470dfd1 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -645,7 +645,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 	rt_mutex_enqueue(lock, waiter);
 
 	/* [8] Release the task */
-	raw_spin_unlock(&task->pi_lock, flags);
+	raw_spin_unlock(&task->pi_lock);
 	put_task_struct(task);
 
 	/*
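
For reference, the leftover flags argument is what broke the build:
raw_spin_unlock() takes only the lock, unlike the _irqrestore variant that
Thomas's patch replaced here. A minimal sketch (some_lock and flags assumed
declared):

	raw_spin_lock_irqsave(&some_lock, flags);
	raw_spin_unlock_irqrestore(&some_lock, flags);	/* flags required */

	raw_spin_lock(&some_lock);
	raw_spin_unlock(&some_lock);			/* no flags argument */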