Message-ID: <20160318054913.GN5220@X58A-UD3R>
Date: Fri, 18 Mar 2016 14:49:13 +0900
From: Byungchul Park <byungchul.park@....com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.com>, Petr Mladek <pmladek@...e.com>,
Tejun Heo <tj@...nel.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH v4 1/2] printk: Make printk() completely async
On Thu, Mar 17, 2016 at 09:34:50AM +0900, Sergey Senozhatsky wrote:
> > I am curious about how you make the wake_up_process() call and I may want
> > to talk about it at the next spin. Anyway, then we will lose the last
> > message when "if (logbuf_cpu == this_cpu)" acts. Is it acceptable?
>
> yes, this is how it is. "BUG: recent printk recursion!" will be printed
> instead of the message.
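For reference, my (simplified) reading of that guard in vprintk_emit() is
roughly the following; the details may differ a bit in your tree:

	this_cpu = smp_processor_id();

	/*
	 * printk() recursed into itself on a CPU that already holds
	 * logbuf_lock (logbuf_cpu is set while the lock is held).
	 * The new message is dropped and recursion_bug is set, so that
	 * "BUG: recent printk recursion!" is printed later instead of
	 * the lost message.
	 */
	if (unlikely(logbuf_cpu == this_cpu)) {
		recursion_bug = true;
		local_irq_restore(flags);
		return 0;
	}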
I am not sure that is the best way. For example, when a lockup is suspected
on rq->lock, we cannot report the rq's "lockup suspected" message while a
printk() is already printing something, whatever it is, asynchronously. We
can avoid the infinite recursion with the patch I attached below, even
though wake_up() and friends are used outside the section protected by
logbuf_lock.
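Concretely, the call chain I am worried about looks roughly like this,
assuming the async path wakes a printing kthread from vprintk_emit():

	spin_dump()                   /* "lockup suspected" on rq->lock */
	  printk()
	    vprintk_emit()
	      wake_up_process()       /* wake the printing kthread */
	        try_to_wake_up()
	          /* needs the very rq->lock that is suspected stuck */
	          __spin_lock_debug()
	            spin_dump()       /* and we are back where we started */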
> > IMHO it's not a good choice to use wake_up() and friends within a printk()
> > since it can additionally cause another recursion. Of course, it does not
> > happen if the condition (logbuf_cpu == this_cpu) acts. But I don't think
> > it's good to rely on the condition with losing a message. Anyway I really
> > really want to see your next spin and talk.
>
> the alternative is NOT significantly better. pending bit is checked in
> IRQ, so one simply can do
>
> local_irq_save();
> while (xxx) printk();
> local_irq_restore();
>
> and _in the worst case_ nothing will be printed to console until IRQs
> are enabled again.
Yes, you are right about that case. But I am not yet sure which approach is
better overall.
> I'd probably prefer to add wake_up_process() to vprintk_emit() and do it
> under the logbuf lock. first, we don't suffer from disabled IRQs on current
> CPU, second we have somewhat better chances to break printk() recursion
> *in some cases*.
I do not think logbuf_cpu is meant for that; it is a kind of last resort.
It would be better to avoid relying on it if we can, and we can.
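If I understand your suggestion correctly, it would be something like the
following (only a sketch; printk_kthread and the sync flag are illustrative
names, guessed from your async patch):

	raw_spin_lock(&logbuf_lock);
	logbuf_cpu = this_cpu;

	log_store(...);                         /* queue the message */
	if (!sync_printk)
		wake_up_process(printk_kthread);

	logbuf_cpu = UINT_MAX;
	raw_spin_unlock(&logbuf_lock);

A recursive printk() from the wakeup path is then caught by the logbuf_cpu
check, but the recursive message itself is still lost, and that is exactly
what I would like to avoid.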
> > This cannot happen. console_lock() cannot continue because the prior
> > console_unlock() does not release console_sem.lock yet when
> > wake_up_process() is called. Only a deadlock exists. And my patch solves
> > the problem so that the deadlock cannot happen.
>
> ah, we lost in patches. I was talking about yet another patch
> (you probably not aware of. you were not Cc'd. Sorry!) that
> makes console_unlock() asynchronous:
>
> http://marc.info/?l=linux-kernel&m=145750373530161
I checked it now. Do you mean the wake_up_process() introduced in
console_unlock() by the new patch? If so, I also think it cannot cause a
deadlock; it can only cause a recursion in the worst case. I thought you
meant the wake_up_process() in up(), which is eventually called from
console_unlock(). A deadlock can happen with the wake_up_process() in
up(). :-)
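Just to be explicit, the deadlock I have in mind is roughly:

	/* caller already holds some rq->lock, e.g. deep in the scheduler */
	printk()
	  console_trylock()              /* direct printing path */
	  console_unlock()
	    up(&console_sem)             /* a waiter is queued on the sem */
	      wake_up_process(waiter)
	        try_to_wake_up()
	          raw_spin_lock(&rq->lock)   /* deadlock if it is the same rq */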
Thanks,
Byungchul
-----8<-----
From 81f06a6f9c7f2e782267a2539c6c869d4214354c Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Fri, 18 Mar 2016 11:35:24 +0900
Subject: [PATCH] lib/spinlock_debug: Prevent an unnecessary recursive
spin_dump()
Printing "lockup suspected" for the same lock more than once is
meaningless. Furtheremore, it can cause an infinite recursion if it's
on the way printing something by printk().
Signed-off-by: Byungchul Park <byungchul.park@....com>
---
kernel/locking/spinlock_debug.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index fd24588..30559c6 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -138,14 +138,25 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
 {
 	u64 i;
 	u64 loops = loops_per_jiffy * HZ;
+	static raw_spinlock_t *suspected_lock = NULL;
 
 	for (i = 0; i < loops; i++) {
 		if (arch_spin_trylock(&lock->raw_lock))
 			return;
 		__delay(1);
 	}
-	/* lockup suspected: */
-	spin_dump(lock, "lockup suspected");
+
+	/*
+	 * When we suspect a lockup, it is enough to report it once for
+	 * the same lock. Otherwise it could cause an infinite recursion
+	 * if this happens within printk().
+	 */
+	if (suspected_lock != lock) {
+		suspected_lock = lock;
+		/* lockup suspected: */
+		spin_dump(lock, "lockup suspected");
+		suspected_lock = NULL;
+	}
 #ifdef CONFIG_SMP
 	trigger_all_cpu_backtrace();
 #endif
--
1.9.1