Date:	Thu, 02 Dec 2010 08:37:30 +0000
From:	"Jan Beulich" <JBeulich@...ell.com>
To:	<tglx@...utronix.de>
Cc:	<linux-kernel@...r.kernel.org>
Subject: use of set_irq_chip_and_handler...() for chained handlers vs
	 sparse IRQs

Thomas,

looking (originally from a Xen perspective) at the non-platform-specific
drivers that set chained IRQ handlers (drivers/mfd/ezx-pcap.c,
drivers/gpio/langwell_gpio.c, and drivers/gpio/timbgpio.c are the ones
I could clearly identify), I wonder not only how conflicts between the
IRQ ranges they use and "normal" IRQs are avoided, but also how they
can work at all with sparse IRQs, and how races in setting up IRQs'
chips and handlers are supposed to be avoided (on x86,
alloc_irq_and_cfg_at() blindly takes the result of get_irq_chip_data()
no matter what ->chip actually points to, and the call to
set_irq_chip_data() is anything but race free).

Is it possible that the setup of chained handlers really isn't meant
to be used without precise knowledge of the platform, possibly
including the knowledge that sparse IRQs aren't in use there (and
hence the cited drivers have incomplete Kconfig dependencies)?

While on native x86 the setup of IRQ chips and handlers may be
implicitly race free (leaving aside the chained-handler situation),
under Xen and in the general case (given that set_irq_chip() and
__set_irq_handler() are exported symbols) there currently seems to
be no way to avoid collisions.

Thanks, Jan

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/