Date:	Wed, 2 Jul 2008 21:59:10 -0600
From:	Matthew Wilcox <matthew@....cx>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc:	linux-pci@...r.kernel.org,
	Kenji Kaneshige <kaneshige.kenji@...fujitsu.com>,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	David Miller <davem@...emloft.net>,
	Dan Williams <dan.j.williams@...el.com>,
	Martine.Silbermann@...com, linux-kernel@...r.kernel.org,
	Michael Ellerman <michaele@....ibm.com>
Subject: Re: Multiple MSI

On Thu, Jul 03, 2008 at 01:24:29PM +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2008-07-02 at 20:44 -0600, Matthew Wilcox wrote:
> > At the moment, devices with the MSI-X capability can request multiple
> > interrupts, but devices with MSI can only request one.  This isn't an
> > inherent limitation of MSI, it's just the way that Linux currently
> > implements it.  I intend to lift that restriction, so I'm throwing out
> > some ideas that I've had while looking into it.
> 
> Interesting. I've been thinking about that one for some time but
> back then, the feedback I got left and right was that nobody cared :-)

The AHCI spec only includes MSI.  So I have a reason to care.

> I'm adding Michael Ellerman to the CC list, he's done a good part of the
> PowerPC MSI stuff.

Doh!  I was sure I added him to the CC list.  Sorry.

> > Next, MSI requires that you assign a block of interrupts that is a power
> > of two in size (between 2^0 and 2^5), and aligned to at least that power
> > of two.  I've looked at the x86 code and I think this is doable there
> > [1]. I don't know how doable it is on other architectures.  If not, just
> > ignore all this and continue to have MSI hardware use a single interrupt.
> 
> Well, it requires that for HW number. But I don't think it should
> require that at API level (ie. for driver visible irq numbers). Some
> architectures can fully remap between HW sources and "linux" visible IRQ
> numbers and thus wouldn't have that limitation from an API point of
> view.
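
The size/alignment rule quoted above is easy to model.  This is an
illustrative userspace sketch of the constraint, not kernel code; the
helper names are made up for this example:

```c
/* Sketch of the MSI block-allocation constraint: a device may be
 * granted 1, 2, 4, 8, 16, or 32 vectors, and the block of hardware
 * vector numbers must be aligned to its (power-of-two) size.
 * Helper names are illustrative, not kernel API. */
#include <assert.h>

/* Round a request up to the next power of two (MSI caps this at 32). */
static unsigned int msi_block_size(unsigned int nvec)
{
	unsigned int size = 1;

	while (size < nvec)
		size <<= 1;
	return size;
}

/* An MSI block base must be a multiple of the block size. */
static int msi_block_aligned(unsigned int base, unsigned int size)
{
	return (base & (size - 1)) == 0;
}
```

So a driver asking for 3 interrupts actually consumes a 4-vector block,
and that block can start at vector 32 but not at vector 36.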

This is true and worth considering carefully.  Are IRQ numbers a scarce
resource on PowerPC?  They are considerably less scarce than interrupt
vectors are on x86-64.  How hard is it to make IRQ numbers an abundant
resource?  Is it simply a question of increasing NR_IRQS?

This cost should be traded off against the cost of allocating something
like the msix_entry array in each driver that wants to use multiple MSIs,
passing that array around, using it properly, etc.

It would make some sense to pass nr_irqs all the way down to arch code
and let arch code take care of reserving the block of vectors (aligned
appropriately).  That would conserve IRQ numbers, though not vectors.
I think we have to treat the excess vectors as reserved.  If we don't,
we could end up in a situation where a device uses more interrupts than
the driver expects and problems ensue.
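
To make the "arch code reserves the aligned block" idea concrete, here
is a userspace model of what such an allocator might do: scan a vector
bitmap in size-aligned steps and claim the whole block.  The fixed-size
table and function name are assumptions for illustration, not the
actual x86 allocator:

```c
/* Illustrative model of reserving an aligned block of interrupt
 * vectors: walk the vector space in size-aligned strides and claim
 * the first fully free block.  NVECTORS and the names here are
 * assumptions for this sketch, not kernel code. */
#include <string.h>

#define NVECTORS 64

static unsigned char vec_used[NVECTORS];

/* Returns the base of a free, size-aligned block of 'size' vectors,
 * or -1 if no such block exists.  'size' must be a power of two. */
static int reserve_vector_block(unsigned int size)
{
	unsigned int base, i;

	for (base = 0; base + size <= NVECTORS; base += size) {
		for (i = 0; i < size; i++)
			if (vec_used[base + i])
				break;
		if (i == size) {
			memset(&vec_used[base], 1, size);
			return base;
		}
	}
	return -1;
}
```

Note that the size-aligned stride is what makes fragmentation hurt: a
single vector in use at the start of each 8-aligned block would make a
32-vector request fail even with plenty of free vectors overall.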


By the way, would people be interested in changing the MSI-X API to get
rid of the msix_entry array?  If allocating consecutive IRQs isn't a
problem, then we could switch the MSI-X code to use consecutive IRQs.
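
For contrast, here is a sketch of the two API shapes: the current one,
where the core fills an array pairing MSI-X table entries with
arbitrary irq numbers, versus a consecutive scheme where a single base
irq suffices.  The struct follows the kernel's msix_entry layout, but
the surrounding code is a userspace illustration, not driver code:

```c
/* Comparison of the two MSI-X API shapes discussed above.  The struct
 * mirrors the kernel's msix_entry; everything else is an illustrative
 * userspace model. */

struct msix_entry {
	unsigned int vector;	/* irq number, filled in by the core */
	unsigned short entry;	/* index into the device's MSI-X table */
};

/* Current style: the core may hand back arbitrary irq numbers, so the
 * driver must carry the whole array around to find its irqs. */
static void fill_entries(struct msix_entry *entries, int nvec,
			 const unsigned int *irqs)
{
	int i;

	for (i = 0; i < nvec; i++)
		entries[i].vector = irqs[i];
}

/* Consecutive style: one base irq is enough; table entry i simply
 * uses base + i, and the array disappears. */
static unsigned int irq_for_entry(unsigned int base, unsigned short entry)
{
	return base + entry;
}
```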

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
--
