Message-ID: <20161219105008.GA24780@jagdpanzerIV.localdomain>
Date:   Mon, 19 Dec 2016 19:50:08 +0900
From:   Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Ingo Molnar <mingo@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-kernel@...r.kernel.org,
        Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [RFC][PATCH] spinlock_debug: report spinlock lockup from unlock

Hello,

On (12/19/16 11:20), Peter Zijlstra wrote:
> On Sun, Dec 18, 2016 at 01:19:11AM +0900, Sergey Senozhatsky wrote:
> > There is a race window between the point when __spin_lock_debug()
> > detects spinlock lockup and the time when CPU that caused the
> > lockup receives its backtrace interrupt.
> > 
> > Before __spin_lock_debug() triggers all_cpu_backtrace() it calls
> > spin_dump() to printk() the current state of the lock and a CPU
> > backtrace. These printk() calls can take some time to print the
> > messages to a serial console, for instance (we are not talking
> > about the console_unlock() loop and a flood of messages from other
> > CPUs, just spin_dump()'s printk() over a serial console).
> > 
> > All those preparation steps can give the CPU that caused the lockup
> > enough time to run away, so when it receives the backtrace interrupt
> > it can look completely innocent.
> > 
> > The patch extends `struct raw_spinlock' with an additional variable
> > that stores the jiffies value of a successful do_raw_spin_lock() and
> > checks in debug_spin_unlock() whether the spinlock has been held for
> > too long. So we will have a reliable backtrace from the CPU that
> > locked up and a reliable backtrace from the CPU that caused the
> > lockup.
> 
> But why?

my setups printk() to a slow serial console, so spin_dump() on the
locked-up CPU spends enough time printing to render the all-CPU backtrace
useless: the CPU that caused the lockup may be in user space or somewhere
else entirely by the time the locked-up CPU finally asks it for an
IPI/NMI backtrace.

CPU1					CPU2
					spin_lock(&lock)
spin_lock(&lock)
 spin_dump()
  printk()
  printk()				spin_unlock(&lock)
  printk()
  ...
  trigger_all_cpu_backtrace()
   send out IPI/NMI
  					-> backtrace IPI/NMI.
					   may be irrelevant at this
					   point.

which complicates the analysis when the spinlock can be taken from
multiple paths.

the amount of time spent in spin_dump() also depends on dump_stack():
the deeper the call stack, the more time we give to the CPU that
caused the lockup.


> Also, why jiffies, that's a horrible source of time.

well, __spin_lock_debug() sort of depends on jiffies already (it spins
for `loops_per_jiffy * HZ' iterations), so I used jiffies too. I can use
something better, like actual time (?):

	lock()
		lock->acquired = running_clock() >> 30LL
	
	unlock()
		(running_clock() >> 30LL) - lock->acquired >= 1
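
something like this, perhaps (a rough, untested sketch against
kernel/locking/spinlock_debug.c; the `acquired' field name and the
~1 second threshold are made up for illustration, not from an actual
patch):

/*
 * Sketch only. Assumes an `unsigned long acquired' field added to
 * struct raw_spinlock under CONFIG_DEBUG_SPINLOCK; running_clock()
 * returns nanoseconds, so >> 30 gives roughly 1-second granularity.
 */
static inline void debug_spin_lock_after(raw_spinlock_t *lock)
{
	lock->owner_cpu = raw_smp_processor_id();
	lock->owner = current;
	/* remember (approximately) when we took the lock */
	lock->acquired = running_clock() >> 30LL;
}

static inline void debug_spin_unlock(raw_spinlock_t *lock)
{
	SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
	SPIN_BUG_ON(!raw_spin_is_locked(lock), lock, "already unlocked");
	SPIN_BUG_ON(lock->owner != current, lock, "wrong owner");
	SPIN_BUG_ON(lock->owner_cpu != raw_smp_processor_id(),
						lock, "wrong CPU");
	/*
	 * Held for ~1 second or longer: report from the CPU that
	 * caused the lockup while its backtrace is still relevant.
	 */
	if ((running_clock() >> 30LL) - lock->acquired >= 1)
		spin_dump(lock, "lockup suspected (at unlock)");
	lock->owner = SPINLOCK_OWNER_INIT;
	lock->owner_cpu = -1;
	lock->acquired = 0;
}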

	-ss
