Date:	Wed, 15 Jul 2015 21:24:29 +0300
From:	Sagi Grimberg <sagig@....mellanox.co.il>
To:	Jens Axboe <axboe@...nel.dk>, Keith Busch <keith.busch@...el.com>,
	Bart Van Assche <bart.vanassche@...disk.com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Christoph Hellwig <hch@...radead.org>,
	linux-rdma@...r.kernel.org, linux-nvme@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	ksummit-discuss@...ts.linuxfoundation.org
Subject: Re: [Ksummit-discuss] [TECH TOPIC] IRQ affinity

On 7/15/2015 8:25 PM, Jens Axboe wrote:
> On 07/15/2015 11:19 AM, Keith Busch wrote:
>> On Wed, 15 Jul 2015, Bart Van Assche wrote:
>>> * With blk-mq and scsi-mq, optimal performance can only be achieved
>>>  if the relationship between MSI-X vector and NUMA node does not
>>>  change over time. This is necessary to allow a blk-mq/scsi-mq
>>>  driver to ensure that interrupts are processed on the same NUMA
>>>  node as the one on which the data structures for a communication
>>>  channel have been allocated. However, today there is no API that
>>>  allows blk-mq/scsi-mq drivers and the irqbalance daemon to
>>>  exchange information about the relationship between MSI-X vector
>>>  ranges and NUMA nodes.
>>
>> We could have low-level drivers provide blk-mq the controller's irq
>> associated with a particular h/w context, and the block layer could
>> then provide the context's cpumask to irqbalance via the smp affinity
>> hint.
>>
>> The nvme driver already uses the hwctx cpumask to set hints, but this
>> doesn't seem like something that should be a driver responsibility. It
>> also doesn't work correctly today with CPU hotplug, since blk-mq can
>> rebalance the h/w contexts without syncing with the low-level driver.
>>
>> If we can add this to blk-mq, one additional case to consider is when
>> the same interrupt vector is used with multiple h/w contexts. Blk-mq's
>> cpu assignment needs to be aware of this to prevent sharing a vector
>> across NUMA nodes.
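
(For reference, this is roughly what nvme does today -- a simplified
sketch, with the driver-internal field names approximated rather than
the exact code: the blk-mq ->init_hctx() callback passes the hctx
cpumask to the IRQ core, which exposes it to irqbalance via
/proc/irq/<n>/affinity_hint.)

#include <linux/blk-mq.h>
#include <linux/interrupt.h>

/* Simplified sketch, not the exact nvme code. */
static int nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
			  unsigned int hctx_idx)
{
	struct nvme_dev *dev = data;
	/* queue 0 is the admin queue; io queues start at 1 */
	struct nvme_queue *nvmeq = dev->queues[hctx_idx + 1];

	hctx->driver_data = nvmeq;
	/* dev->entry[] is the driver's msix_entry array */
	irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector,
			      hctx->cpumask);
	return 0;
}
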
>
> Exactly. I may have promised to do just that at the last LSF/MM
> conference; I just haven't done it yet. The point is to share the
> mask, and ideally I'd like to take it all the way, so that the driver
> just asks for a number of vectors through a nice API that takes care
> of all of this. There's a lot of duplicated code in drivers for this
> these days, and it's a mess.
>
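
(To make the shape of such an API concrete -- the name below is
invented for illustration, nothing like this exists in the tree today:
the driver states how many vectors it can use, and the core allocates
them and spreads their affinity across the online CPUs/nodes, keeping
the hints current across hotplug so drivers stop open-coding it.)

/*
 * Hypothetical interface, name invented for illustration only:
 * allocate between min_vecs and max_vecs MSI-X vectors for @pdev and
 * assign each one a cpumask spread over the online CPUs. Returns the
 * number of vectors actually allocated, or a negative errno.
 *
 * A driver would then collapse all its affinity code to:
 *
 *	nr_vecs = pci_request_irq_vectors(pdev, 1, num_online_cpus());
 *
 * and bind one hardware queue per returned vector.
 */
int pci_request_irq_vectors(struct pci_dev *pdev,
			    unsigned int min_vecs,
			    unsigned int max_vecs);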

These are all good points.

But I'm not sure the block layer is always the correct place to take
care of MSI-X vector assignments. It's probably a perfect fit for NVMe
and other storage devices, but take RDMA for example: block storage
co-exists with file storage, Ethernet traffic, and user-space
applications that do RDMA, all of which share the device's MSI-X
vectors. In that case the block layer would not be a suitable place to
set IRQ affinity, since each deployment may present different
constraints.
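
(As a concrete example of the sharing: each RDMA consumer picks a
completion vector for itself when it creates a CQ, so no single layer
owns the vector-to-consumer mapping. A rough sketch -- the signature
is approximately today's ib_create_cq(), and the handler name is a
placeholder:)

#include <rdma/ib_verbs.h>

/* Sketch: a ULP spreading its CQs round-robin over the device's
 * completion vectors; ulp_comp_handler is a placeholder.
 */
static struct ib_cq *ulp_create_cq(struct ib_device *ibdev, int idx,
				   int nr_cqe)
{
	int vec = idx % ibdev->num_comp_vectors;

	return ib_create_cq(ibdev, ulp_comp_handler, NULL, NULL,
			    nr_cqe, vec);
}
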

In any event, the irqbalance daemon is not helping here. Unfortunately,
the common practice is simply to turn it off in order to get optimal
performance.

Sagi.