Date:	Thu, 24 May 2007 23:20:51 -0600
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Greg Kroah-Hartman <gregkh@...e.de>,
	Jay Cliburn <jacliburn@...lsouth.net>,
	Grzegorz Krzystek <ninex@...eX.eu.org>,
	Andi Kleen <ak@...e.de>, ninex@...pl,
	<linux-kernel@...r.kernel.org>,
	<linux-pci@...ey.karlin.mff.cuni.cz>,
	Michael Ellerman <michael@...erman.id.au>,
	David Miller <davem@...emloft.net>,
	Tony Luck <tony.luck@...el.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Roland Dreier <rdreier@...co.com>
Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Andrew Morton <akpm@...ux-foundation.org> writes:

> On Thu, 24 May 2007 22:19:09 -0600 ebiederm@...ssion.com (Eric W. Biederman)
> wrote:
>
>> Currently we blacklist known bad MSI configurations, which means we
>> keep getting MSI enabled on chipsets that either do not support MSI
>> or implement MSI improperly.  Since the normal IRQ routing
>> mechanism seems to work even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>> 
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known-good systems.  I am seeding that list with the set of
>> chipsets that have an enabled HyperTransport MSI mapping capability,
>> which is as close as I can come to a generic MSI enable.  So for
>> actually using MSI this patch is a regression, but for just having
>> MSI enabled in the kernel by default, things should just work with
>> this patch applied.
>> 
>> People can still enable MSI on a per-bus level for testing by writing
>> to sysfs, so discovering chipsets that actually work (assuming we are
>> using modular drivers) should be pretty straightforward.
>
> Yup.
>
> Do we have a feel for how much performance we're losing on those 
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

I don't have any good numbers, although the difference is large enough
to measure.  I think Roland Dreier and the other IB
guys have actually made the measurements.  For low-end hardware
I expect the performance difference is currently in the noise.

However, because MSI irqs are not shared, a lot of things are
simplified.  In particular, it means you should be able to have an
irq fast path that does not read the hardware at all, and hardware
register reads can be comparatively very slow.
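
To illustrate the point, here is a rough sketch (a hypothetical driver;
the struct, function names, and register offset are made up) of the
difference: with a non-shared MSI vector the handler can simply claim
the interrupt and defer the work, while a shared INTx handler has to do
a slow MMIO read just to find out whether its device fired at all.

#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/io.h>

struct mydev {
	void __iomem *regs;
	struct tasklet_struct work;
};

/* MSI: the vector is exclusively ours, so no status read is needed. */
static irqreturn_t mydev_msi_irq(int irq, void *data)
{
	struct mydev *dev = data;

	tasklet_schedule(&dev->work);		/* defer the real work */
	return IRQ_HANDLED;
}

/* Shared INTx: must read the hardware to decide if the irq is ours. */
static irqreturn_t mydev_intx_irq(int irq, void *data)
{
	struct mydev *dev = data;
	u32 status = readl(dev->regs + 0x10);	/* hypothetical status reg */

	if (!status)
		return IRQ_NONE;		/* someone else's interrupt */

	writel(status, dev->regs + 0x10);	/* ack */
	tasklet_schedule(&dev->work);
	return IRQ_HANDLED;
}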

Further, because all MSI irqs are unshared and edge triggered, we don't
run the risk of screaming irqs or of drivers not handling a shared
irq properly.

The big performance win is most likely to be for fast I/O devices,
where having multiple irqs per card allows the irqs for different
flows of data to be directed to different cpus.
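
For a fast device that would look something like the following rough
sketch (a hypothetical four-queue device, error unwinding omitted):
each queue gets its own MSI-X vector, and the affinity of each vector
can then be tuned independently, e.g. via /proc/irq/<n>/smp_affinity.

#include <linux/pci.h>
#include <linux/interrupt.h>

#define NUM_QUEUES 4

static irqreturn_t myqueue_irq(int irq, void *data);	/* per-queue handler */

static int mydev_setup_msix(struct pci_dev *pdev, void *queues[NUM_QUEUES])
{
	struct msix_entry entries[NUM_QUEUES];
	int i, err;

	for (i = 0; i < NUM_QUEUES; i++)
		entries[i].entry = i;		/* request table entries 0..3 */

	err = pci_enable_msix(pdev, entries, NUM_QUEUES);
	if (err)
		return err;		/* fall back to INTx or fewer vectors */

	for (i = 0; i < NUM_QUEUES; i++) {
		err = request_irq(entries[i].vector, myqueue_irq, 0,
				  "mydev-queue", queues[i]);
		if (err)
			return err;	/* real code would unwind here */
	}
	return 0;
}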

So for the short term I don't think there is much downside in having
MSI disabled.  For the long term I think MSI will be quite useful.

Further, my gut feeling is that my two patches together will enable MSI
on most of the x86 chipsets where it currently works, because most
chipsets are either from Intel or are HyperTransport based.
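
As a concrete illustration of the whitelist idea (only a sketch, not
the patch itself; the PCI_BUS_FLAGS_MSI_OK flag name is made up), a bus
would be marked MSI-capable only when a device on it advertises an
enabled HyperTransport MSI mapping capability:

#include <linux/pci.h>

static void ht_check_msi_mapping(struct pci_dev *dev)
{
	int pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING);
	u8 flags;

	if (!pos)
		return;			/* no HT MSI mapping capability */

	pci_read_config_byte(dev, pos + HT_MSI_FLAGS, &flags);
	if (flags & HT_MSI_FLAGS_ENABLE)
		dev->bus->bus_flags |= PCI_BUS_FLAGS_MSI_OK;	/* hypothetical */
}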

Eric