Message-ID: <20160318071130.GA19655@swordfish>
Date: Fri, 18 Mar 2016 16:11:30 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Byungchul Park <byungchul.park@....com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.com>, Petr Mladek <pmladek@...e.com>,
Tejun Heo <tj@...nel.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH v4 1/2] printk: Make printk() completely async
On (03/18/16 14:49), Byungchul Park wrote:
[..]
> > http://marc.info/?l=linux-kernel&m=145750373530161
>
> I checked it now. Do you mean the wake_up_process() introduced in the new
> patch in console_unlock()? If so, I also think it cannot cause a deadlock,
> only a recursion in the worst case. I thought it was the wake_up_process()
> in up(), which is eventually called from console_unlock(). A deadlock can
> happen with the wake_up_process() in up(). :-)
I'm not addressing already existing problems here. I'm trying to
minimise the impact of new code only.
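
For context, the recursion/deadlock path being discussed is, as far as I can
tell: printk() -> console_unlock() -> up(&console_sem) -> wake_up_process(),
where the last step takes scheduler locks; a printk() issued while one of
those locks is already held on the same CPU can therefore deadlock on itself.
A minimal userspace analogy of that pattern (not kernel code; sched_lock,
log_msg() and wake_waiter() are made-up names):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for wake_up_process(): it needs the "scheduler" lock. */
static void wake_waiter(void)
{
	if (pthread_mutex_trylock(&sched_lock)) {
		/* The kernel would spin here forever; we just report it. */
		printf("would deadlock: sched_lock already held by this context\n");
		return;
	}
	/* ... wake a waiter up ... */
	pthread_mutex_unlock(&sched_lock);
}

/* Stands in for printk() -> console_unlock() -> up() -> wake_up_process(). */
static void log_msg(const char *msg)
{
	printf("%s\n", msg);
	wake_waiter();
}

int main(void)
{
	/* "Scheduler" code holds its lock and then decides to log something. */
	pthread_mutex_lock(&sched_lock);
	log_msg("warning issued under sched_lock");
	pthread_mutex_unlock(&sched_lock);
	return 0;
}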
[..]
> diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
> index fd24588..30559c6 100644
> --- a/kernel/locking/spinlock_debug.c
> +++ b/kernel/locking/spinlock_debug.c
> @@ -138,14 +138,25 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
> {
> u64 i;
> u64 loops = loops_per_jiffy * HZ;
> + static raw_spinlock_t *suspected_lock = NULL;
this has no chance of surviving on SMP systems where at least two different
spin locks have locked up: the CPUs simply keep overwriting each other's
suspected_lock (a small userspace sketch of that interleaving follows the
quoted patch below).
I'd really prefer not to mix spin_dump()/printk recursion problems into this
patch set. it makes sense not to make printk recursion detection worse because
of the spin locks newly added to vprintk_emit(), but that's it; this patch set
is fixing other things in the first place.
-ss
> for (i = 0; i < loops; i++) {
> if (arch_spin_trylock(&lock->raw_lock))
> return;
> __delay(1);
> }
> - /* lockup suspected: */
> - spin_dump(lock, "lockup suspected");
> +
> + /*
> + * When we suspect a lockup, it is enough to report it once for the
> + * same lock. Otherwise it could cause infinite recursion if the
> + * lockup happens within printk().
> + */
> + if (suspected_lock != lock) {
> + suspected_lock = lock;
> + /* lockup suspected: */
> + spin_dump(lock, "lockup suspected");
> + suspected_lock = NULL;
> + }
> #ifdef CONFIG_SMP
> trigger_all_cpu_backtrace();
> #endif
> --
> 1.9.1
>
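
To make the SMP objection above concrete: the guard only helps as long as
suspected_lock still points at the lock this CPU started dumping; another CPU
timing out on a different lock overwrites it, so a recursive spin_dump() on
the first lock is no longer suppressed. A small deterministic userspace
simulation of that interleaving (hypothetical names, not kernel code):

#include <stdbool.h>
#include <stdio.h>

static int lock_a, lock_b;		/* stand-ins for two locked-up spinlocks */
static void *suspected_lock;		/* the single shared guard from the patch */

/*
 * The check from the quoted patch: dump unless we are already dumping this
 * lock. (The reset to NULL after spin_dump() is omitted here; it does not
 * change the outcome of this interleaving.)
 */
static bool should_dump(void *lock)
{
	if (suspected_lock != lock) {
		suspected_lock = lock;
		return true;
	}
	return false;
}

int main(void)
{
	/* CPU0 suspects a lockup on lock_a (say, a lock taken inside printk). */
	printf("CPU0 dumps lock_a: %s\n", should_dump(&lock_a) ? "yes" : "no");

	/* Meanwhile CPU1 suspects a lockup on lock_b and clobbers the guard. */
	printf("CPU1 dumps lock_b: %s\n", should_dump(&lock_b) ? "yes" : "no");

	/*
	 * CPU0's spin_dump() recurses back into __spin_lock_debug() on lock_a
	 * via printk(). The guard no longer remembers lock_a, so the dump,
	 * and with it the recursion, is not suppressed.
	 */
	printf("CPU0 recursive dump of lock_a suppressed: %s\n",
	       should_dump(&lock_a) ? "no" : "yes");
	return 0;
}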