Message-ID: <20111216105101.GB30477@mudshark.cambridge.arm.com>
Date:	Fri, 16 Dec 2011 10:51:01 +0000
From:	Will Deacon <will.deacon@....com>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	"tglx@...utronix.de" <tglx@...utronix.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: IRQ migration on CPU offline path

On Fri, Dec 16, 2011 at 05:26:46AM +0000, Eric W. Biederman wrote:
> > Argh, OK. Does this mean that other architectures should just preserve the
> > interface that x86 gives (for example, not triggering IRQ affinity
> > notifiers)?
> 
> Interesting.  In this case the affinity notifier is an ugly hack for
> exactly one driver.  The affinity notifier is new (this January) and
> buggy.  Among other things there appears to be a clear reference-count
> leak on the affinity notify structure.
> 
> Honestly I don't see much to justify the existence of the affinity
> notifiers, and especially their requirement that they be called in
> process context.

One case I could see (OK, I'm clutching slightly at straws here) is
modules that want to control the affinity of an IRQ they own.
irq_set_affinity is not an exported symbol, so they could use
irq_set_affinity_hint to try to stop userspace daemons from messing with
them, and use the notifiers to keep track of where they ended up.
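
Something like this, say (purely a sketch against the current notifier
API; struct my_dev and the my_* names are invented, and error handling
is omitted):

	#include <linux/interrupt.h>
	#include <linux/cpumask.h>

	struct my_dev {
		struct irq_affinity_notify notify;
		struct cpumask cur_mask;	/* where the IRQ actually is */
		int preferred_cpu;
	};

	/* Called in process context whenever the IRQ affinity changes. */
	static void my_affinity_notify(struct irq_affinity_notify *notify,
				       const cpumask_t *mask)
	{
		struct my_dev *dev = container_of(notify, struct my_dev, notify);

		cpumask_copy(&dev->cur_mask, mask);
	}

	/* Last reference to the notify structure has been dropped. */
	static void my_affinity_release(struct kref *ref)
	{
	}

	static int my_setup_irq(struct my_dev *dev, unsigned int irq)
	{
		/* Hint to irqbalance and friends where we'd like to be... */
		irq_set_affinity_hint(irq, cpumask_of(dev->preferred_cpu));

		/* ...and track where we actually end up. */
		dev->notify.notify = my_affinity_notify;
		dev->notify.release = my_affinity_release;
		return irq_set_affinity_notifier(irq, &dev->notify);
	}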

> At a practical level since the architects of the affinity notifier
> didn't choose to add notification on migration I don't see why
> you should care.

Suits me :)

> This isn't an x86-versus-the-rest-of-the-world issue.  This is a
> Solarflare-driver-versus-the-rest-of-the-kernel issue.  When the Solarflare
> developers care, they can fix up ARM and all of the rest of the
> architectures that support CPU hot-unplug.

Sure, I just think that whatever we do, it should be consistent across
archs, even if it's a driver that is to blame.

> As for threaded interrupt handlers there is probably something
> reasonable that can be done there.  My guess is threaded interrupt
> handlers should be handled the same way any other thread is handled
> during cpu hot-unplug.  And if something needs to be done I expect the
> generic code can do it.

My first thought was that we needed to call irq_set_thread_affinity to set
the IRQTF_AFFINITY bit in the thread_flags, but actually, looking at
irq_thread_check_affinity, I think you're right. The scheduler will deal
with this for us when it migrates kernel threads off the dying CPU.
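
For reference, this is roughly what irq_thread_check_affinity does (a
trimmed-down sketch from my reading of the current code, so details may
be slightly off): the thread only chases a new mask when IRQTF_AFFINITY
has been set, which is exactly the step we can skip here.

	static void irq_thread_check_affinity(struct irq_desc *desc,
					      struct irqaction *action)
	{
		cpumask_var_t mask;

		/* Nothing to do unless an affinity change was flagged. */
		if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
			return;

		if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
			/* Retry on the next wakeup. */
			set_bit(IRQTF_AFFINITY, &action->thread_flags);
			return;
		}

		raw_spin_lock_irq(&desc->lock);
		cpumask_copy(mask, desc->irq_data.affinity);
		raw_spin_unlock_irq(&desc->lock);

		/* Move the handler thread to wherever the IRQ now lives. */
		set_cpus_allowed_ptr(current, mask);
		free_cpumask_var(mask);
	}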

So the conclusion is: ignore the IRQ affinity notifiers, update the affinity
mask in the irq_data and let the scheduler do the rest.
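
Concretely, on the ARM hot-unplug path that means something along these
lines (a sketch of the shape I have in mind for a migrate_one_irq
helper; names and details may well change):

	static bool migrate_one_irq(struct irq_desc *desc)
	{
		struct irq_data *d = irq_desc_get_irq_data(desc);
		const struct cpumask *affinity = d->affinity;
		struct irq_chip *c;
		bool ret = false;

		/* Per-CPU IRQs and IRQs not routed here: nothing to do. */
		if (irqd_is_per_cpu(d) ||
		    !cpumask_test_cpu(smp_processor_id(), affinity))
			return false;

		/* No online CPU left in the mask: fall back to any online CPU. */
		if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
			affinity = cpu_online_mask;
			ret = true;
		}

		/* Update the irq_data affinity via the chip... */
		c = irq_data_get_irq_chip(d);
		if (c->irq_set_affinity &&
		    c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
			cpumask_copy(d->affinity, affinity);

		/* ...and the scheduler takes care of any threaded handlers. */
		return ret;
	}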

Thanks for the help!

Will
