Message-ID: <20090107184721.GA16193@elte.hu>
Date:	Wed, 7 Jan 2009 19:47:21 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Alexey Zaytsev <alexey.zaytsev@...il.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: linux-next: Tree for December 11


* Alexey Zaytsev <alexey.zaytsev@...il.com> wrote:

> And last time I bisected, it pointed to:
> 
> commit 7317d7b87edb41a9135e30be1ec3f7ef817c53dd
> Author: Nick Piggin <nickpiggin@...oo.com.au>
> Date:   Tue Sep 30 20:50:27 2008 +1000
> 
>    sched: improve preempt debugging
> 
>    This patch helped me out with a problem I recently had....
> 
>    Basically, when the kernel lock is held, then preempt_count underflow does not
>    get detected until it is released which may be a long time (and arbitrarily,
>    eg at different points it may be rescheduled). If the bkl is released at
>    schedule, the resulting output is actually fairly cryptic...
> 
>    With any other lock that elevates preempt_count, it is illegal to schedule
>    under it (which would get found pretty quickly). bkl allows scheduling with
>    preempt_count elevated, which makes underflows hard to debug.
> 
>    Signed-off-by: Ingo Molnar <mingo@...e.hu>
> 
> so at least a dumb bisection won't do here.

ah, sorry for being a slow starter, i missed that bit - merge window 
attention span troubles ...
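
For reference, the check that commit adds boils down to roughly this in 
sub_preempt_count() (paraphrasing from memory, not quoting the verbatim 
diff):

void __kprobes sub_preempt_count(int val)
{
#ifdef CONFIG_DEBUG_PREEMPT
	/*
	 * Underflow? While kernel_locked() is true the BKL spinlock
	 * contributes one level to preempt_count(), so subtract that
	 * level before comparing - this way an underflow gets caught
	 * while the BKL is still held, not only when it is released.
	 */
	if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
		return;

	/* ... the other debug checks are unchanged ... */
#endif
	preempt_count() -= val;
}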

I think the kernel_locked() check added here is plain buggy against IRQ 
contexts: we drop the BKL spinlock and decrement current->lock_depth 
non-atomically.

So kernel_locked() can become detached from the preempt_count().
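
I.e. kernel_locked() is just a test of current->lock_depth, while the 
BKL's contribution to preempt_count() comes from the underlying spinlock, 
and the two are updated in separate steps. Schematically (hand-waved, not 
the verbatim lib/kernel_lock.c code):

	spin_unlock(&kernel_flag);	/* preempt_count() drops here ...      */
	current->lock_depth--;		/* ... kernel_locked() only flips here */

An IRQ hitting between those two steps runs sub_preempt_count() with 
kernel_locked() still true but the BKL's level already gone from 
preempt_count(), so the new DEBUG_LOCKS_WARN_ON() can trigger even though 
nothing underflowed.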

Nick, can you think of a better way to save this debug check, or should 
we revert it?

Although it is a bit weird how consistently you are able to trigger it, 
as this looks like a narrow race. Is there an IRQ storm there perhaps, or 
does something widen the window enough for Qemu to inject an IRQ right 
there?

	Ingo
