Date:	Fri, 17 Aug 2007 11:44:31 +0100
From:	"Jan Beulich" <jbeulich@...ell.com>
To:	<linux-kernel@...r.kernel.org>
Subject: recursive use of bust_spinlocks()

Various architectures may call bust_spinlocks() recursively (when calling die() in
the context of an unresolved page fault); the function itself, however, doesn't
appear to be meant to be called in this manner. Nevertheless, this isn't a
problem as long as bust_spinlocks(0) doesn't get called twice in a row
(otherwise, unblank_screen() may enter the scheduler). However, at least on
i386, die() has been capable of returning (and on other architectures this
should really be the case, too) when notify_die() returns NOTIFY_STOP.

The question now is: Should bust_spinlocks() increment/decrement
oops_in_progress (and kick klogd only when the count drops to zero), or
should we just avoid entering the scheduler by forcibly setting
oops_in_progress to 1 prior to calling unblank_screen(), or should all
architectures currently doing so avoid calling bust_spinlocks() recursively?
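
For illustration, the first option (counting rather than setting oops_in_progress)
might look roughly like the user-space model below. This is only a sketch of the
idea, not actual kernel code: the spinlock busting, unblank_screen(), and klogd
kicking are modeled by plain flags, and the counting behavior itself is the
assumption under discussion.

```c
#include <assert.h>

/* Hypothetical counting variant of bust_spinlocks(): keep a nesting
 * count in oops_in_progress instead of setting it to 0/1, so that a
 * nested bust_spinlocks(0) (e.g. from a recursive die()) does not
 * prematurely re-enable console blanking or kick klogd. */
static int oops_in_progress;
static int console_unblanked;   /* models unblank_screen()/klogd kick */

static void bust_spinlocks(int yes)
{
	if (yes) {
		oops_in_progress++;
	} else if (--oops_in_progress == 0) {
		/* Only the outermost bust_spinlocks(0) does the
		 * scheduler-unsafe cleanup work. */
		console_unblanked = 1;
	}
}
```

With this, two nested bust_spinlocks(1)/bust_spinlocks(0) pairs only perform the
cleanup once, on the final decrement to zero.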

Further, many (if not all) architectures seem to have adopted the recursive
die() invocation protection. However, the logic there unconditionally calls
spin_unlock_irqrestore() (besides allowing bust_spinlocks() to be used
recursively), instead of undoing only what the current invocation itself had
done.
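
A nesting-aware exit path could be modeled as below. Again, this is a user-space
sketch of one possible shape of the fix, not kernel code: the spinlock and saved
interrupt flags are modeled by plain variables, and the names (die_enter,
die_exit, lock_depth) are made up for illustration.

```c
#include <assert.h>

/* Model of recursion-safe die() locking: the outer invocation on a
 * given CPU takes the lock; nested invocations only bump a depth
 * counter, so the unlock path undoes exactly what the matching entry
 * did, instead of unconditionally calling spin_unlock_irqrestore(). */
static int lock_held;		/* models the die spinlock */
static int lock_owner = -1;	/* models the owning CPU id */
static int lock_depth;

static void die_enter(int cpu)
{
	if (lock_owner != cpu) {
		assert(!lock_held);	/* would spin here in the kernel */
		lock_held = 1;		/* spin_lock_irqsave() */
		lock_owner = cpu;
	}
	lock_depth++;
}

static void die_exit(int cpu)
{
	assert(lock_owner == cpu);
	if (--lock_depth == 0) {
		lock_owner = -1;
		lock_held = 0;		/* spin_unlock_irqrestore() */
	}
}
```

The point is that only the outermost die_exit() releases the lock and restores
flags; a nested exit merely decrements the depth.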

Suggestions?

Thanks, Jan

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
