Message-ID: <453338ab43d43c0bf24acf1aeba95251@mail.gmail.com>
Date:   Tue, 28 Aug 2018 12:17:33 +0530
From:   Sumit Saxena <sumit.saxena@...adcom.com>
To:     tglx@...utronix.de
Cc:     Ming Lei <ming.lei@...hat.com>, hch@....de,
        linux-kernel@...r.kernel.org
Subject: Affinity managed interrupts vs non-managed interrupts

Hi Thomas,

We are working on a next generation MegaRAID product where the requirement
is to allocate 16 additional MSI-x vectors on top of the number of MSI-x
vectors the megaraid_sas driver usually allocates.  The MegaRAID adapter
supports 128 MSI-x vectors.

To explain the requirement and solution, consider a 2 socket system (each
socket having 36 logical CPUs). The current driver allocates a total of 72
MSI-x vectors by calling pci_alloc_irq_vectors() with the PCI_IRQ_AFFINITY
flag.  All 72 MSI-x vectors have their affinity spread across the NUMA
nodes and the interrupts are affinity managed.
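
For reference, roughly what the current allocation path looks like (the
"instance" pointer and the exact flag combination below are illustrative,
not copied verbatim from the driver):

	/* One managed vector per online CPU; the IRQ core spreads the
	 * affinity masks across all NUMA nodes. */
	int nvec;

	nvec = pci_alloc_irq_vectors(instance->pdev, 1, num_online_cpus(),
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;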

If the driver calls pci_alloc_irq_vectors_affinity() with pre_vectors = 16,
it can allocate 16 + 72 MSI-x vectors.
All 16 pre_vectors will be mapped to all available online CPUs, but the
effective affinity of each of those vectors is CPU 0. Our requirement is to
have the 16 pre_vectors reply queues mapped to the local NUMA node, with
the effective CPU spread within the local node's CPU mask. Without changing
kernel code, we can achieve this by having the driver call
pci_enable_msix_range() (requesting 16 + 72 MSI-x vectors) instead of the
pci_alloc_irq_vectors() API. If we use pci_enable_msix_range(), however,
the MSI-x to CPU affinity must be handled by the driver and these
interrupts will be non-managed.
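
A rough sketch of the pre_vectors variant described above (the min/max
vector counts and the "instance" pointer are only placeholders for this
discussion):

	/* Reserve 16 pre_vectors which are excluded from the automatic
	 * affinity spreading; the remaining vectors stay kernel managed. */
	struct irq_affinity desc = { .pre_vectors = 16 };
	int nvec;

	nvec = pci_alloc_irq_vectors_affinity(instance->pdev, 17,
					      16 + num_online_cpus(),
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &desc);
	if (nvec < 0)
		return nvec;

The problem is that those 16 reserved vectors all end up with an effective
affinity of CPU 0, as noted above.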

The questions are:
Is there any restriction on, or preference for, using
pci_alloc_irq_vectors{/_affinity} vs pci_enable_msix_range() in a low level
driver?
If the driver uses non-managed interrupts, all cases are handled correctly
through irqbalance. Is there any plan to migrate entirely to managed
interrupts in the future, or is it a choice left to the driver maintainers?

Thanks,
Sumit
