Date:	Thu, 24 Apr 2008 21:19:19 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	rdreier@...co.com
Cc:	jeff@...zik.org, ebiederm@...ssion.com,
	torvalds@...ux-foundation.org, rene.herman@...access.nl,
	bunk@...nel.org, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, rmk@....linux.org.uk,
	tglx@...utronix.de, mingo@...hat.com
Subject: Re: MSI, fun for the whole family

From: Roland Dreier <rdreier@...co.com>
Date: Thu, 24 Apr 2008 20:57:48 -0700

> Now, it is true that the kernel could do something crazy and collapse
> all these interrupt vectors into a single "IRQ" and then tell the
> interrupt handler which vector it was by passing some "metadata" in, but
> why not just give each MSI message its own IRQ?

Actually, it doesn't make any sense to have more MSI, or "MSI queue",
interrupts than you have CPUs.

Non-x86 PCI-E controller implementations that I am familiar with collect
MSI and MSI-X interrupts into "queues"; a queue becoming non-empty
is what actually triggers an interrupt to the CPU.  And there are
enough MSI queue instances that you can direct each one to a
unique CPU.

The MSI queue interrupt handler simply scans the ring buffer of pending
MSI interrupts and dispatches each one to the appropriate device handler.

You can handle PCI-E fabric error messages the same way, and in fact
that's what the controllers I am familiar with do.

A Linux implementation of support for this kind of setup can be seen
in arch/sparc64/kernel/pci_msi.c:sparc64_msiq_interrupt().  It's
very generic and doesn't care whether it's talking to real PCI
controller hardware or a hypervisor based interface.

Aside from the obvious cost of the extra indirection, our IRQ layer is
very much capable of supporting multi-level dispatch like this correctly.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
