Message-ID: <20161219102024.GC3107@twins.programming.kicks-ass.net>
Date: Mon, 19 Dec 2016 11:20:24 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [RFC][PATCH] spinlock_debug: report spinlock lockup from unlock
On Sun, Dec 18, 2016 at 01:19:11AM +0900, Sergey Senozhatsky wrote:
> There is a race window between the point when __spin_lock_debug()
> detects a spinlock lockup and the time when the CPU that caused the
> lockup receives its backtrace interrupt.
>
> Before __spin_lock_debug() triggers all_cpu_backtrace() it calls
> spin_dump() to printk() the current state of the lock and a CPU
> backtrace. These printk() calls can take some time to print the
> messages to a serial console, for instance (we are not talking
> about the console_unlock() loop and a flood of messages from other
> CPUs, but just the spin_dump() printk() and the serial console).
>
> All those preparation steps can give the CPU that caused the lockup
> enough time to run away, so when it receives the backtrace interrupt
> it can look completely innocent.
>
> The patch extends `struct raw_spinlock' with an additional variable
> that stores the jiffies value of a successful do_raw_spin_lock() and
> checks in debug_spin_unlock() whether the spin_lock has been held
> for too long. So we will have a reliable backtrace from the CPU that
> locked up and a reliable backtrace from the CPU that caused the
> lockup.
But why? Also, why jiffies? That's a horrible source of time.