Date:	Fri, 25 May 2007 15:06:22 -0600
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Greg KH <gregkh@...e.de>
Cc:	Jay Cliburn <jacliburn@...lsouth.net>,
	Grzegorz Krzystek <ninex@...eX.eu.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andi Kleen <ak@...e.de>, ninex@...pl,
	linux-kernel@...r.kernel.org, linux-pci@...ey.karlin.mff.cuni.cz,
	Michael Ellerman <michael@...erman.id.au>,
	David Miller <davem@...emloft.net>,
	Tony Luck <tony.luck@...el.com>
Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Greg KH <gregkh@...e.de> writes:
>> MSI appears to have enough problems that enabling it in a kernel
>> that is supposed to run lots of different hardware (like a distro
>> kernel) is a recipe for disaster.
>
> Oh, I agree it's a major pain in the ass at times...
>
> But I'm real hesitant to change things this way.  We'll get reports of
> people who used to have MSI working, and now it will not (like all AMD
> chipsets).  That's a regression...

You saw my quirk that enables MSI if the system has a HyperTransport
MSI mapping capability.  That should take care of all
HyperTransport-compliant AMD chipsets.
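The logic of that quirk, as I understand it, amounts to walking the
device's HyperTransport capabilities and whitelisting MSI when an MSI
mapping capability turns up.  A hedged userspace sketch (the struct
and function names here are made up for illustration; in the kernel
the walk would go through pci_find_ht_capability()):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* HyperTransport MSI mapping capability type, per pci_regs.h. */
#define HT_CAPTYPE_MSI_MAPPING 0xa8

/* Hypothetical flattened view of a device's HT capability types. */
struct ht_caps {
	const uint8_t *types;
	size_t ntypes;
};

/*
 * Whitelist MSI if any HyperTransport MSI mapping capability is
 * present; a HyperTransport-compliant AMD chipset advertises one.
 */
static bool ht_msi_mapping_present(const struct ht_caps *caps)
{
	for (size_t i = 0; i < caps->ntypes; i++)
		if (caps->types[i] == HT_CAPTYPE_MSI_MAPPING)
			return true;
	return false;
}
```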

So at least 90% of what we have now should work.

> Perhaps we can trigger off of the same flag that Vista uses like Andi
> suggested?  That's one way to "know" that the hardware works, right?

A little.  It redefines the problem as squarely a BIOS writer's
problem, and possibly we want to do that in the presence of PCI Express.

> For non-x86 arches, they all seem to want to always enable MSI as they
> don't have to deal with as many broken chipsets, if any at all.  So for
> them, we'd have to just whitelist the whole arch.  Does that really make
> sense?

Non-x86 (unless I'm confused) already has whitelists or some
equivalent, because it can't use the generic code for configuring MSI:
non-x86 does not have a standard MSI target window with a standard
meaning for the bits.  So each non-x86 arch has its own set of
arch-specific mechanisms to handle this, and they just don't want the
generic code getting in the way.

So it may make sense to make the defaults entirely x86-specific.

> And again, over time, like years, this list is going to grow way beyond
> a manageable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right?

I haven't a clue.  I know we are in the teething-pain stage now, which
just generally makes things difficult.

> I think our blacklist is easier to
> manage over time, while causing a problem for some users in
> determining which of their current hardware is broken.

Possibly.  It just doesn't seem straightforward to add something
safely to our blacklist.

> It's a trade-off, and I'd like to choose the one that over the long
> term causes the least amount of work and maintenance.  I think the
> current blacklist meets that goal.

A reasonable goal.  I will come back to this after the long holiday
weekend here and see what makes sense.

I think for most of Intel I can reduce my test to:
if (bus == 0 && device == 0 && function == 0 && vendor == Intel &&
    device has a PCI Express capability) {
	enable MSI on all buses;
}
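Spelled out as a standalone predicate (a sketch only; the struct and
names are hypothetical stand-ins for whatever the real quirk would
read out of the device at 00:00.0):

```c
#include <stdbool.h>
#include <stdint.h>

#define PCI_VENDOR_ID_INTEL 0x8086

/* Hypothetical summary of the fields the test needs. */
struct pci_dev_info {
	uint8_t bus, device, function;
	uint16_t vendor;
	bool has_pcie_cap;	/* PCI Express capability present */
};

/*
 * The proposed test: enable MSI everywhere when the host bridge at
 * 00:00.0 is an Intel part that carries a PCI Express capability.
 */
static bool msi_should_enable_all(const struct pci_dev_info *d)
{
	return d->bus == 0 && d->device == 0 && d->function == 0 &&
	       d->vendor == PCI_VENDOR_ID_INTEL && d->has_pcie_cap;
}
```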

At which point I don't think we will need to do much maintenance.

Eric

