Date:	Tue, 20 Oct 2009 07:56:49 -0500
From:	Dimitri Sivanich <>
To:	Yinghai Lu <>, Ingo Molnar <>
Cc:	"H. Peter Anvin" <>,
	Thomas Gleixner <>,
Subject: Re: [PATCH v2] x86/apic: limit irq affinity

On Thu, Oct 15, 2009 at 08:50:39AM -0500, Dimitri Sivanich wrote:
> On Wed, Oct 14, 2009 at 10:30:12PM -0700, Yinghai Lu wrote:
> > Dimitri Sivanich wrote:
> > > This patch allows for hard restrictions to irq affinity via a new cpumask and
> > > device node value in the irq_cfg structure.
> > > 
> > > The mask forces IRQ affinity to remain within the specified cpu domain.
> > > On some UV systems, this domain will be limited to the nodes accessible
> > > to the given node.  Other x86 systems currently have all bits in the
> > > cpumask set, so non-UV systems remain unaffected.
> > > 
> > 
> > Can you check if we can reuse target_cpus for this purpose?
> >
> The 'target_cpus' mask is in struct 'apic'.  It is a platform-level mask
> (only one mask per platform).
> The 'allowed' mask that I am adding is a per-irq mask (one mask per irq).
> Each irq might be coming from a device attached to a different node, and
> each of those nodes might require its irqs to have a different mask.
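
To make the distinction concrete, a rough sketch (field names other than
'target_cpus' are illustrative, not necessarily the exact patch):

	/* struct apic: one mask for the whole platform */
	struct apic {
		...
		const struct cpumask *(*target_cpus)(void);
		...
	};

	/* sketch of the per-irq restriction proposed here */
	struct irq_cfg {
		...
		cpumask_var_t	allowed;  /* cpus this irq may target */
		int		node;     /* node the device hangs off */
		...
	};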

Assuming that the real issue here is adding any more cpumasks to irq_cfg,
I've created another version of the patch that does not add the cpumask to
irq_cfg.  The UV-specific irq code will store these cpumasks (one per node).
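
A minimal sketch of where the masks would live instead (names here are
illustrative; the actual patch may differ):

	/* UV irq code (sketch): one allowed mask per node, kept
	 * outside of irq_cfg */
	static cpumask_var_t uv_irq_allowed[MAX_NUMNODES];

	static const struct cpumask *uv_irq_allowed_mask(int node)
	{
		return uv_irq_allowed[node];
	}

	/* an affinity request would then be checked against the
	 * mask for the irq's node, e.g.:
	 *	if (!cpumask_intersects(mask, uv_irq_allowed_mask(node)))
	 *		return -EINVAL;
	 */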

Will send this shortly.