Date:	Thu, 14 Feb 2013 15:45:10 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>,
	Alexander Viro <viro@....linux.org.uk>,
	Theodore Ts'o <tytso@....edu>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [-rc7 regression] Block IO/VFS/ext3/timer spinlock lockup?


* Thomas Gleixner <tglx@...utronix.de> wrote:

> On Wed, 13 Feb 2013, Linus Torvalds wrote:
> 
> > On Wed, Feb 13, 2013 at 3:10 AM, Ingo Molnar <mingo@...nel.org> wrote:
> > >
> > >
> > > Setting up Logical Volume Management: [   13.140000] BUG: spinlock lockup suspected on CPU#1, lvm.static/139
> > > [   13.140000] BUG: spinlock lockup suspected on CPU#1, lvm.static/139
> > > [   13.140000]  lock: 0x97fe9fc0, .magic: dead4ead, .owner: <none>/-1, .owner_cpu: -1
> > > [   13.140000] Pid: 139, comm: lvm.static Not tainted 3.8.0-rc7 #216702
> > > [   13.140000] Call Trace:
> > > [   13.140000]  [<792b5e66>] spin_dump+0x73/0x7d
> > > [   13.140000]  [<7916a347>] do_raw_spin_lock+0xb2/0xe8
> > > [   13.140000]  [<792b9412>] _raw_spin_lock_irqsave+0x35/0x3e
> > > [   13.140000]  [<790391e8>] prepare_to_wait+0x18/0x57
> > 
> > The wait-queue spinlock? That sounds *very* unlikely to deadlock due
> > to any bugs in the block layer or filesystems. There are never any
> > downcalls into those from within that spinlock, nor from any other
> > locks taken inside of it.
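
For reference, the critical section in question is tiny: prepare_to_wait()
in kernels of that era is essentially the following (a lightly simplified
sketch along the lines of kernel/wait.c in 3.8):

void prepare_to_wait(wait_queue_head_t *q, wait_queue_t *wait, int state)
{
	unsigned long flags;

	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
	spin_lock_irqsave(&q->lock, flags);
	if (list_empty(&wait->task_list))
		__add_wait_queue(q, wait);	/* link into the wait list */
	set_current_state(state);		/* mark task as sleeping */
	spin_unlock_irqrestore(&q->lock, flags);
}

Only wait-list and task-state manipulation happens under q->lock; nothing
here calls back into the block layer or any filesystem.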
> 
> The far more interesting information is:
> 
> [   13.140000]  lock: 0x97fe9fc0, .magic: dead4ead, .owner: <none>/-1, .owner_cpu: -1
> 
> That lock is not contended, which makes no sense at all. The only
> explanation for such behaviour would be a tight spin_lock/unlock loop
> on the other core that is exposed by the spinlock debugging code
> (which uses trylocks instead of queueing on the ticket lock).
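
The debug slow path being described looks schematically like this (a
condensed sketch modeled on lib/spinlock_debug.c; the timeout constant
and backtrace triggering are elided):

static void __spin_lock_debug(raw_spinlock_t *lock)
{
	u64 i;
	u64 loops = loops_per_jiffy * HZ;	/* roughly one second */

	for (i = 0; i < loops; i++) {
		if (arch_spin_trylock(&lock->raw_lock))
			return;
		__delay(1);
	}
	/* lockup suspected: report it, then queue for real */
	spin_dump(lock, "lockup suspected");
	arch_spin_lock(&lock->raw_lock);
}

void do_raw_spin_lock(raw_spinlock_t *lock)
{
	debug_spin_lock_before(lock);	/* .magic / recursion checks */
	if (unlikely(!arch_spin_trylock(&lock->raw_lock)))
		__spin_lock_debug(lock);
	debug_spin_lock_after(lock);	/* records .owner and .owner_cpu */
}

Because the waiter only ever trylocks, it never takes a ticket, so a CPU
doing rapid lock/unlock cycles can starve it indefinitely; and since the
unlock path resets the owner fields, a dump that races with the owner's
unlock window shows an apparently free lock - exactly the
.owner: <none>/-1, .owner_cpu: -1 signature above.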
> 
> Ingo, can you provide the backtrace of CPU0 please?

CPU0 appears to be idle:

[  118.510000] Call Trace:
[  118.510000]  [<7900844b>] cpu_idle+0x86/0xb4
[  118.510000]  [<792a91df>] rest_init+0x103/0x108
[  118.510000]  [<794558cc>] start_kernel+0x2c7/0x2cc
[  118.510000]  [<7945528e>] i386_start_kernel+0x44/0x46

which suggests memory corruption - but if so, it's a special 
type of memory corruption, because AFAIR I have always seen 
lockup patterns like this one, never other signs of memory 
corruption.

Thanks,

	Ingo
