Message-ID: <481A23DE.6070905@zytor.com>
Date: Thu, 01 May 2008 13:11:10 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Andi Kleen <andi@...stfloor.org>
CC: Jesse Barnes <jbarnes@...tuousgeek.org>,
linux-pci@...ey.karlin.mff.cuni.cz, TJ <linux@...orld.net>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: Why such a big difference in init-time PCI resource call-paths
(x86 vs x86_64) ?

Andi Kleen wrote:
> On Thu, May 01, 2008 at 11:16:31AM -0700, Jesse Barnes wrote:
>> On Wednesday, April 30, 2008 9:07 am TJ wrote:
>>> In preparation for writing a Windows-style PCI resource allocation
>>> strategy
>>>
>>> - use all e820 gaps for IOMEM resources; top-down allocation -
>>>
>>> and thus giving devices with large IOMEM requirements more chance of
>>> allocation in the 32-bit address space below 4GB (see bugzilla #10461),
>
> I tried that some time ago and it turned out that some systems have
> mappings in holes and don't boot anymore when you fill the holes too much.
> But that was only considering e820. If you do this, it might work if you
> do it really like Windows and consider all resources, including ACPI.

Yes, considering all possible reservation schemes is really critical
here (including the magic knowledge of the legacy region).
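
To make that concrete, here is a rough user-space sketch (untested, and
emphatically not the kernel's e820 code; the example map, addresses and
sizes below are all invented for illustration) of the proposed walk:
treat everything that is not a gap as off limits, then place a large BAR
top-down below 4GB:

#include <stdio.h>
#include <stdint.h>

struct region { uint64_t start, end; };          /* [start, end) */

/* Example "occupied" map: e820-style entries plus the extra
 * reservations (legacy area, ACPI).  All values are made up. */
static struct region busy[] = {
        { 0x00000000ULL, 0x00100000ULL },        /* legacy region + low RAM */
        { 0x00100000ULL, 0x7ff00000ULL },        /* usable RAM */
        { 0x7ff00000ULL, 0x80000000ULL },        /* ACPI tables (example) */
        { 0xfec00000ULL, 0x100000000ULL },       /* chipset/LAPIC/firmware */
};
#define NBUSY (int)(sizeof(busy) / sizeof(busy[0]))
#define LIMIT 0x100000000ULL                     /* stay below 4GB */

/* Find the highest gap that can hold 'size' bytes at 'align' alignment. */
static int place_topdown(uint64_t size, uint64_t align, uint64_t *out)
{
        uint64_t prev_end = LIMIT;
        int i;

        /* busy[] is sorted by start address; scan the gaps top-down. */
        for (i = NBUSY - 1; i >= -1; i--) {
                uint64_t gap_start = (i >= 0) ? busy[i].end : 0;
                uint64_t gap_end   = prev_end;

                if (gap_end > gap_start && gap_end - gap_start >= size) {
                        uint64_t base = (gap_end - size) & ~(align - 1);
                        if (base >= gap_start) {
                                *out = base;
                                return 0;
                        }
                }
                if (i >= 0)
                        prev_end = busy[i].start;
        }
        return -1;
}

int main(void)
{
        uint64_t base;

        /* Try to place a 256MB BAR, naturally aligned, below 4GB. */
        if (place_topdown(0x10000000ULL, 0x10000000ULL, &base) == 0)
                printf("BAR placed at 0x%llx\n", (unsigned long long)base);
        else
                printf("no gap below 4GB is large enough\n");
        return 0;
}

The real version would of course have to build the "busy" table from the
e820 map, the ACPI reservations and the known legacy ranges rather than
hard-coding it.
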
>>> So, why the big difference in implementations?
>>> What are the implications of each?
>>> Is one preferable to the other?
>
> I don't remember why it is different. Probably wasn't intentional.
> But in general x86-64 is less fragile than i386 here because it has the e820
> allocator and can deal better with conflicts.

Yes, and that's definitely the model we want on both sides. Working on
this is way high on my personal priority list at the moment.
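
For illustration only, the "register everything up front and let
conflicts fail loudly" model could be sketched like this in user space
(invented names and addresses; it only very loosely mimics what
insert_resource() does against the kernel's resource tree):

#include <stdio.h>
#include <stdint.h>

struct claim { uint64_t start, end; const char *name; };   /* [start, end) */

static struct claim claims[32];
static int nclaims;

/* Refuse any range that overlaps something already claimed. */
static int claim_range(uint64_t start, uint64_t end, const char *name)
{
        int i;

        for (i = 0; i < nclaims; i++)
                if (start < claims[i].end && claims[i].start < end) {
                        printf("%s conflicts with %s\n", name, claims[i].name);
                        return -1;
                }
        claims[nclaims++] = (struct claim){ start, end, name };
        return 0;
}

int main(void)
{
        /* Firmware/ACPI reservations go in first... */
        claim_range(0x7ff00000ULL, 0x80000000ULL, "ACPI tables");
        claim_range(0xfec00000ULL, 0xfed00000ULL, "IO-APIC");

        /* ...so a later PCI placement on top of one of them is
         * rejected instead of being applied silently. */
        claim_range(0x7ff80000ULL, 0x7ffc0000ULL, "PCI BAR (bad)");
        claim_range(0xe0000000ULL, 0xf0000000ULL, "PCI BAR (ok)");
        return 0;
}

The point is just that the bad placement gets rejected at the time it is
requested, rather than being discovered the hard way at boot.
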
-hpa