Message-ID: <20160615200655.GB7637@localhost.localdomain>
Date:	Wed, 15 Jun 2016 16:06:55 -0400
From:	Keith Busch <keith.busch@...el.com>
To:	Bart Van Assche <bart.vanassche@...disk.com>
Cc:	Christoph Hellwig <hch@....de>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"axboe@...com" <axboe@...com>,
	"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag

On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
> Sorry that I had not yet made this clear, but my concern is about a
> system equipped with two or more adapters and with more CPU cores than the
> number of MSI-X interrupts per adapter. Consider e.g. a system with two
> adapters (A and B), 8 interrupts per adapter (A0..A7 and B0..B7), 32 CPU
> cores and two NUMA nodes. Assuming that hyperthreading is disabled, will the
> patches from this patch series generate the following interrupt assignment?
> 
> 0: A0 B0
> 1: A1 B1
> 2: A2 B2
> 3: A3 B3
> 4: A4 B4
> 5: A5 B5
> 6: A6 B6
> 7: A7 B7
> 8: (none)
> ...
> 31: (none)

I'll need to look at the follow-on patches to confirm, but that's
not what this should do. All CPUs should have a vector assigned, because
every CPU needs to be assigned a submission context using a vector. In
your example, every vector's affinity mask should cover 4 CPUs:
CPU 8 starts over with A0 B0, CPU 9 gets A1 B1, and so on.
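
To make that concrete, here's a quick userspace sketch of the wrap-around
assignment I'd expect for your example (illustrative only, with made-up
constants -- not code from this series):

/*
 * Toy model of the wrap-around spreading described above: 32 CPUs and
 * 8 vectors per adapter, so each vector's affinity mask ends up
 * covering 32 / 8 = 4 CPUs.
 */
#include <stdio.h>

#define NR_CPUS		32
#define NR_VECTORS	8	/* per adapter */

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int vec = cpu % NR_VECTORS;	/* wrap after CPU 7 */

		printf("CPU %2d: A%d B%d\n", cpu, vec, vec);
	}
	return 0;
}

That prints CPUs 0..7 paired with A0 B0 .. A7 B7, then CPU 8 wrapping
back to A0 B0, and so on up through CPU 31.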

If it's done such that all CPUs are assigned and no sharing occurs across
NUMA nodes, does that change your concern?
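
For illustration, the node-local case could look something like the
sketch below (again a toy userspace model with an assumed layout: CPUs
0-15 on node 0, CPUs 16-31 on node 1, and each adapter's 8 vectors
split 4/4 between the nodes):

/*
 * Toy model of a NUMA-local variant: split each adapter's vectors
 * evenly across the nodes so no vector's affinity mask spans a node
 * boundary.  The topology here is an assumption for illustration.
 */
#include <stdio.h>

#define NR_CPUS		32
#define NR_NODES	2
#define NR_VECTORS	8	/* per adapter */

int main(void)
{
	int cpus_per_node = NR_CPUS / NR_NODES;		/* 16 */
	int vecs_per_node = NR_VECTORS / NR_NODES;	/* 4 */
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int node = cpu / cpus_per_node;
		int vec = node * vecs_per_node +
			  (cpu % cpus_per_node) % vecs_per_node;

		printf("CPU %2d (node %d): A%d B%d\n", cpu, node, vec, vec);
	}
	return 0;
}

Each vector still covers 4 CPUs, but all of them on the same node.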
