Message-ID: <20160128060530.GC1834@swordfish>
Date: Thu, 28 Jan 2016 15:05:30 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: "byungchul.park" <byungchul.park@....com>
Cc: 'Sergey Senozhatsky' <sergey.senozhatsky.work@...il.com>,
akpm@...ux-foundation.org, mingo@...nel.org,
linux-kernel@...r.kernel.org, akinobu.mita@...il.com, jack@...e.cz,
torvalds@...ux-foundation.org, peter@...leysoftware.com,
sergey.senozhatsky@...il.com
Subject: Re: [PATCH v4] lib/spinlock_debug.c: prevent a recursive cycle in
the debug code
On (01/28/16 13:36), byungchul.park wrote:
[..]
> > the thing is, it's really-really hard to lockup in console_trylock()...
> >
> > int down_trylock(struct semaphore *sem)
> > {
> >     unsigned long flags;
> >     int count;
> >
> >     raw_spin_lock_irqsave(&sem->lock, flags); <<<<<< um...
>
> I also think it's hard, but a backtrace said the lockup happened here.
what was the state of `struct semaphore *sem' and especially of `sem->lock'?
what was the lock->owner_cpu doing? (addr2line of its pc register, for example).
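(for reference, with CONFIG_DEBUG_SPINLOCK the lock itself records its last
owner -- trimmed from include/linux/spinlock_types.h, modulo kernel version:

    typedef struct raw_spinlock {
            arch_spinlock_t raw_lock;
    #ifdef CONFIG_DEBUG_SPINLOCK
            unsigned int magic, owner_cpu;
            void *owner;
    #endif
    } raw_spinlock_t;

so spin_dump() can report which CPU/task took sem->lock last.)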
> >     count = sem->count - 1;
> >     if (likely(count >= 0))
> >             sem->count = count;
> >     raw_spin_unlock_irqrestore(&sem->lock, flags);
> >
> >     return (count < 0);
> > }
> >
> > was not able to dereference sem->count? `*sem' was corrupted? or because
> > sem->lock was corrupted? in any of those cases you solve the problem from
> > the wrong side. if you have a memory corruption then it can corrupt
>
> What I solved here is to make it possible to print what the problem is, by
> the spinlock debug code instead of system lockup while printing a debug
> message. I think it's not wrong.
none of the CPUs will print anything anymore. it's done.
your CPUa - CPUx are spinning in down_trylock()
    vprintk_emit()
     down_trylock()
      raw_spin_lock_irqsave()    << spin here
and they are spinning because CPUz is keeping the sem->lock and is
_not_ going to release it. and this is the root cause, not spin_dump().
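to spell the path out (a condensed sketch of the printk code of that era,
not the verbatim source):

    asmlinkage int vprintk_emit(...)
    {
            raw_spin_lock(&logbuf_lock);
            /* log_store() the message */
            raw_spin_unlock(&logbuf_lock);

            /* now try to flush it to the consoles */
            if (console_trylock())          /* down_trylock(&console_sem),
                                             * which does
                                             * raw_spin_lock_irqsave(&sem->lock) */
                    console_unlock();
            ...
    }

so any printk() from any CPU must take console_sem's raw spinlock, even
when the semaphore itself is busy.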
CPUz was expected to do a quick thing in down_trylock()
    raw_spin_lock_irqsave(&sem->lock, flags);
    count = sem->count - 1;
    if (likely(count >= 0))
            sem->count = count;
    raw_spin_unlock_irqrestore(&sem->lock, flags);
or down()/down_common()
    raw_spin_lock_irqsave(&sem->lock, flags);
    if (likely(sem->count > 0))
            sem->count--;
    else
            down_common()
            {
                    ...
                    for (;;) {
                            if (signal_pending_state(state, task))
                                    goto interrupted;
                            if (unlikely(timeout <= 0))
                                    goto timed_out;
                            __set_task_state(task, state);
                            raw_spin_unlock_irq(&sem->lock);
                            timeout = schedule_timeout(timeout);
                            raw_spin_lock_irq(&sem->lock);
                            if (waiter.up)
                                    return 0;
                    }
                    ...
            }
    raw_spin_unlock_irqrestore(&sem->lock, flags);
I can't see how it's even possible to keep that spin_lock locked
longer than `loops_per_jiffy * HZ'.
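that `loops_per_jiffy * HZ' bound is what the debug spinlock spins for
before it complains (trimmed from lib/spinlock_debug.c, may differ by
version):

    static void __spin_lock_debug(raw_spinlock_t *lock)
    {
            u64 i;
            u64 loops = loops_per_jiffy * HZ;

            for (i = 0; i < loops; i++) {
                    if (arch_spin_trylock(&lock->raw_lock))
                            return;
                    __delay(1);
            }
            /* lockup suspected: */
            spin_dump(lock, "lockup suspected");    /* printk()s -> recursion */
    #ifdef CONFIG_SMP
            trigger_all_cpu_backtrace();
    #endif
            arch_spin_lock(&lock->raw_lock);
    }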
and the fact that you saw N recursive

    printk()->down_trylock()->spin_lock()->spin_dump()->printk()->...

iterations sounds like good proof that your CPUz was either dead or had
gone crazy, and took the spin_lock with it. but this is not
spinlock_debug's business.
console_flush_on_panic() _probably_ would help in this particular
case -- it does not care about console_sem state and goes to
console_unlock() directly -- but it still locks the logbuf_lock.
so if CPUz died with logbuf_lock locked, then nothing will
save your day.
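(for completeness, the whole function is tiny -- from kernel/printk/printk.c,
modulo version:

    void console_flush_on_panic(void)
    {
            console_trylock();              /* may fail; proceed anyway */
            console_may_schedule = 0;
            console_unlock();               /* still takes logbuf_lock to
                                             * pull out pending messages */
    }

console_unlock() grabs logbuf_lock for every message it reads from the
log buffer, hence the caveat.)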
do you have a reproducer, or have you seen it just once?
-ss