lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite for Android: free password hash cracker in your pocket
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Tue, 14 Dec 2010 23:03:15 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Bjorn Helgaas <bjorn.helgaas@...com>
Cc:	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Len Brown <lenb@...nel.org>, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
	linux-acpi@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Adam Belay <abelay@....edu>
Subject: Re: [PATCH 5/5] PNP: HP nx6325 fixup: reserve unreported resources

On Tue, Dec 14, 2010 at 10:26 PM, Bjorn Helgaas <bjorn.helgaas@...com> wrote:
>
> I don't know whether the other patches in this series make you
> unhappy.  I'm really not happy with how I implemented the avoidance
> of ACPI devices when doing PCI allocation, but I do think we need
> to avoid them *somehow*, and I was looking for a minimal quick
> fix at this point in the cycle.

So the "avoid ACPI devices" part makes sense, and doesn't involved
quirks, so I don't hate it at all the same way I hated the HP quirk.

However, I hate how it makes the allocation logic opaque. You can no
longer tell from the regular non-debug dmesg and the /proc/iomem _why_
something got allocated the way it did, because there are hidden
rules. That makes things awkward, methinks.

Also, quite frankly, I wonder what happens after release when somebody
shows another machine that simply stopped working because the
allocation strategy didn't work for it. The hw coverage that -rc6 gets
is tiny compared to a real release.

IOW, what's the long-term strategy for this? The only sane long-term
strategy I can see is the one we have _always_ done, which is to try
to populate the memory resource tree with what simply matches reality.
The whole "ok, we know the hardware better than the BIOS does" is a
_stable_ strategy. In contrast, the things you propose are NOT stable
strategies, they all depend on basically "we match windows exactly
and/or trust ACPI". Both of which are *known* to be failing models.

That's why I'm somewhat upset. Your whole strategy seems to depend on
a known broken model. We _know_ ACPI tables are crap much of the time.
So we know that "avoiding ACPI resources" is inevitably insufficient.

And that's why I hate the "switch everything around" model. Yes, we
have a known way to fix things up - namely to actually detect the
hardware itself properly when firmware inevitably screws up - but the
very act of switching things around will pretty much guarantee that
all our years of effort is of dubious value, and we'll end up finding
other laptops that used to work and no longer does.

Only switching around when _CRS is used is possible, and shouldn't
cause any regressions if we continue to default to not using _CRS. But
you want to switch that default around at some point, don't you? At
which point we'll be up sh*t creek again. See what I'm saying?

Which all makes me suspect that we'd be much better off just doing the
bottom-up allocation even for _CRS. And maybe CRS works fine then when
we combine our hardware knowledge with the ACPI region avoidance.

                     Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ