Date:	Wed, 31 Mar 2010 14:12:54 -0700
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Yinghai Lu <yinghai@...nel.org>, Rabin Vincent <rabin@....in>,
	lkml <linux-kernel@...r.kernel.org>, penberg@...helsinki.fi,
	cl@...ux-foundation.org,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	linux-arch@...r.kernel.org, David Howells <dhowells@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: start_kernel(): bug: interrupts were enabled early

On 03/31/2010 01:52 PM, Andrew Morton wrote:
> On Wed, 31 Mar 2010 13:47:23 -0700
> Yinghai Lu <yinghai@...nel.org> wrote:
> 
>> spin_unlock_irq from arm is different from other archs?
> 
> No, spin_unlock_irq() unconditionally enables interrupts on all
> architectures.
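
For reference, on an architecture using the generic spinlock wrappers
that call ends in an unconditional local_irq_enable(); roughly (sketch
only, lockdep/preempt bookkeeping trimmed and exact names vary a bit
between kernel versions):

static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
{
	do_raw_spin_unlock(lock);	/* release the spinlock */
	local_irq_enable();		/* re-enable interrupts, regardless
					 * of what state the caller had
					 * them in */
	preempt_enable();
}

So any caller that reaches spin_unlock_irq() with interrupts disabled
comes out of it with interrupts enabled.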

So I found checkin 60ba96e546da45d9e22bb04b84971a25684e4d46 in the
bk-historic git tree:

[PATCH] rwsem: Make rwsems use interrupt disabling spinlocks

The attached patch makes read/write semaphores use interrupt disabling
spinlocks in the slow path, thus rendering the up functions and trylock
functions available for use in interrupt context.  This matches the
regular semaphore behaviour.

I've assumed that the normal down functions must be called with
interrupts enabled (since they might schedule), and used the
irq-disabling spinlock variants that don't save the flags.

Signed-Off-By: David Howells <dhowells@...hat.com>
Tested-by: Badari Pulavarty <pbadari@...ibm.com>
Signed-off-by: Linus Torvalds <torvalds@...l.org>
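
After that change, the read-side slow path in lib/rwsem-spinlock.c (the
generic spinlock-based rwsem implementation used by the affected
architectures) is shaped roughly like this; trimmed sketch, the real
function also sets up a waiter and sleeps in the contended case:

void __sched __down_read(struct rw_semaphore *sem)
{
	spin_lock_irq(&sem->wait_lock);		/* disables interrupts */

	if (sem->activity >= 0 && list_empty(&sem->wait_list)) {
		/* uncontended: grant the read lock */
		sem->activity++;
		spin_unlock_irq(&sem->wait_lock); /* re-enables interrupts
						   * unconditionally */
		return;
	}

	/* contended case elided: queue a waiter, drop wait_lock,
	 * schedule() until woken */
}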

What we have here is a case of that assumption being violated: the lock
is taken with interrupts disabled, on a path where contention cannot
happen (the code is still single-threaded at this point), but the lock
is taken anyway because the path reuses generic code.
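
That is what trips the sanity check in start_kernel(); from memory, the
check in init/main.c is essentially just:

	/* in start_kernel(), before interrupts are enabled for real */
	if (!irqs_disabled())
		printk(KERN_CRIT "start_kernel(): bug: interrupts were "
				 "enabled early\n");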

The obvious way to fix this would be to use
spin_lock_irqsave()..spin_unlock_irqrestore() in __down_read() as well
as in the other locations; I don't have a good feel for what the cost
of doing so would be, though.  On x86 it's fairly expensive simply
because the only way to save the interrupt state is to push the flags
onto the stack, which the compiler doesn't deal well with, but this
code isn't used on x86 anyway.
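
Concretely, for __down_read() that would be something like the
following (untested sketch, same trimming as above):

void __sched __down_read(struct rw_semaphore *sem)
{
	unsigned long flags;

	spin_lock_irqsave(&sem->wait_lock, flags);  /* saves IRQ state */

	if (sem->activity >= 0 && list_empty(&sem->wait_list)) {
		/* uncontended: grant the read lock */
		sem->activity++;
		spin_unlock_irqrestore(&sem->wait_lock, flags);
					/* restores the saved state instead
					 * of forcing interrupts back on */
		return;
	}

	/* contended case elided as before */
}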

	-hpa
