Date:	Tue, 24 Nov 2009 14:20:23 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Dimitri Sivanich <sivanich@....com>
cc:	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Ingo Molnar <mingo@...e.hu>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Yinghai Lu <yinghai@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Arjan van de Ven <arjan@...radead.org>,
	David Miller <davem@...emloft.net>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>
Subject: Re: [PATCH v6] x86/apic: limit irq affinity

On Sat, 21 Nov 2009, Dimitri Sivanich wrote:

> On Sat, Nov 21, 2009 at 10:49:50AM -0800, Eric W. Biederman wrote:
> > Dimitri Sivanich <sivanich@....com> writes:
> > 
> > > This patch allows for hard numa restrictions to irq affinity on x86 systems.
> > >
> > > Affinity is masked to allow only those cpus which the subarchitecture
> > > deems accessible by the given irq.
> > >
> > > On some UV systems, this domain will be limited to the nodes accessible
> > > to the irq's node.  Initially other X86 systems will not mask off any cpus
> > > so non-UV systems will remain unaffected.
> > 
> > Is this a hardware restriction you are trying to model?
> > If not this seems wrong.
> 
> Yes, it's a hardware restriction.

Nevertheless I think that this is the wrong approach.

What we really want is a notion in the irq descriptor which tells us:
this interrupt is restricted to numa node N.

The solution in this patch is restricted to x86 only and hides that
information deep in the arch code.

Further, the patch adds code which should live in the generic interrupt
management code, as it is useful for other purposes as well:

Driver folks are looking for a way to restrict irq balancing to a
given numa node when they have all the driver data allocated on that
node. That's not a hardware restriction as in the UV case but requires
a similar infrastructure.

One possible solution would be to have a new flag:
 IRQF_NODE_BOUND    - irq is bound to desc->node

When an interrupt is set up, the core would query a new irq_chip
callback, chip->get_node_affinity(irq), which defaults to an empty
implementation returning -1. The arch code can provide its own
implementation that returns the numa affinity expressing the hardware
restriction.
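
A rough sketch of what that interface could look like (purely
illustrative: neither IRQF_NODE_BOUND nor ->get_node_affinity() exist
today, and the flag value is made up):

#define IRQF_NODE_BOUND	0x00080000	/* irq is bound to desc->node */

/* Default implementation: no hardware restriction */
static int default_get_node_affinity(unsigned int irq)
{
	return -1;
}

/*
 * At setup time the core asks the chip (via a new get_node_affinity
 * callback in struct irq_chip) whether the irq is hard-bound to a
 * node. The arch code overrides the default to report its restriction.
 */
static void irq_query_node_affinity(struct irq_desc *desc, unsigned int irq)
{
	int node = desc->chip->get_node_affinity ?
		   desc->chip->get_node_affinity(irq) : -1;

	if (node >= 0) {
		desc->node = node;
		desc->status |= IRQF_NODE_BOUND; /* wherever the flag ends up living */
	}
}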

The core code would restrict affinity settings to the cpumask of that
node without any need for the arch code to check it further.
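
Roughly (again just a sketch, not existing code):

/*
 * Mask a requested affinity against the cpus of the node the irq is
 * bound to. Called from the generic affinity setting path, so the
 * arch code does not have to check anything.
 */
static int irq_restrict_affinity_to_node(struct irq_desc *desc,
					 const struct cpumask *req,
					 struct cpumask *result)
{
	if (!(desc->status & IRQF_NODE_BOUND)) {
		/* No node restriction, take the request as is */
		cpumask_copy(result, req);
		return 0;
	}

	cpumask_and(result, req, cpumask_of_node(desc->node));

	/* Refuse settings which leave no usable cpu on that node */
	return cpumask_empty(result) ? -EINVAL : 0;
}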

That same infrastructure could be used for the software restriction of
interrupts to the node to which the device is bound.

Having it in the core code also allows us to expose this information
to user space so that the irq balancer knows about it and does not try
to randomly move the affinity to cpus which are not in the allowed set
of the node.
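
The exposure could be as simple as an additional per-irq proc file
(the file name is made up and the /proc wiring is omitted here):

/*
 * Show the cpus which are allowed for this irq, e.g. as
 * /proc/irq/N/allowed_affinity, so irqbalance can honour it.
 */
static int irq_allowed_affinity_show(struct seq_file *m, void *v)
{
	struct irq_desc *desc = m->private;
	const struct cpumask *mask = desc->status & IRQF_NODE_BOUND ?
		cpumask_of_node(desc->node) : cpu_possible_mask;

	seq_cpumask(m, mask);
	seq_putc(m, '\n');
	return 0;
}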

Thanks,

	tglx
