[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-Id: <20161130125113.a5f520aa5e660514c423683e@linux-foundation.org>
Date: Wed, 30 Nov 2016 12:51:13 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: <linyongting@...wei.com>
Cc: <kejinling@...wei.com>, <pmladek@...e.com>,
<sergey.senozhatsky@...il.com>, <bp@...e.de>, <tj@...nel.org>,
<treding@...dia.com>, <linux-kernel@...r.kernel.org>,
<leisure.wang@...wei.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH] printk: Fix spinlock deadlock in printk reenty
On Wed, 30 Nov 2016 15:15:19 +0800 <linyongting@...wei.com> wrote:
> From: Jinling Ke <kejinling@...wei.com>
>
> when Oops in printk, printk will call zap_locks() to reinitialize
> spinlock to prevent deadlock. In arm, arm64, x86 or other
> architecture smp cpu, race condition will occur in printk spinlock
> logbuf_lock and then it will result other cpu that is waiting printk
> spinlock in deadlock(in function raw_spin_lock). Because the cpus
> deadlock, you can see the error printk log:
>
> "SMP: failed to stop secondary CPUs"
>
> In arm, arm64, x86 or other architecture, spinlock variable
> is divided into 2 parts, for example they are 'owner' and 'next' in arm.
> When get a spinlock, the 'next' part will add 1 and wait 'next' being
> equal to 'owner'. However, at this moment, the 'next' part is local
> variable, but 'owner' part value is get from global variable logbuf_lock.
> However,raw_spin_lock_init(&logbuf_lock) will set 'owner' part and
> 'next' part to zero, the result is that cpu deadlock in function
> raw_spin_lock( while loop in function arch_spin_lock ).
>
> struct of arm spinlock
> union {
> u32 slock;
> struct __raw_tickets {
> u16 owner;
> u16 next;
> } tickets;
> };
> } arch_spinlock_t;
> static inline void arch_spin_lock(arch_spinlock_t *lock)
> {...
> <--- At the moment, other cpu call zap_locks()->spin_lock_init(),
> <--- set the 'owner' part to zero, but lockval.tickets.next is a
> <--- local variable
> while (lockval.tickets.next != lockval.tickets.owner) {
> lockval.tickets.owner = ACCESS_ONCE(lock->tickets.owner);
> }
> ...
> }
>
> The solution is that In function zap_locks(), replace
> raw_spin_lock_init(&logbuf_lock) with raw_spin_unlock(&logbuf_lock),
> to let spin_lock stay in unlocked.
>
> ...
>
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1603,7 +1603,7 @@ static void zap_locks(void)
>
> debug_locks_off();
> /* If a crash is occurring, make sure we can't deadlock */
> - raw_spin_lock_init(&logbuf_lock);
> + raw_spin_unlock(&logbuf_lock);
> /* And make sure that we print immediately */
> sema_init(&console_sem, 1);
OK, so it's a race between raw_spin_lock() and raw_spin_lock_init()?
I wonder if there's a more general way of preventing this, within
raw_spin_lock_init()?
Of course, printk is special and the situation is unlikely to occur
elsewhere.
I guess the raw_spin_unlock() is OK - lockdep would have warned about
unlock-of-unlocked-lock but we did a debug_locks_off() to prevent that.
Powered by blists - more mailing lists