Message-Id: <200805011116.31782.jbarnes@virtuousgeek.org>
Date:	Thu, 1 May 2008 11:16:31 -0700
From:	Jesse Barnes <jbarnes@...tuousgeek.org>
To:	linux-pci@...ey.karlin.mff.cuni.cz
Cc:	TJ <linux@...orld.net>, Andi Kleen <andi@...stfloor.org>,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	"linux-kernel" <linux-kernel@...r.kernel.org>
Subject: Re: Why such a big difference in init-time PCI resource call-paths (x86 vs x86_64) ?

On Wednesday, April 30, 2008 9:07 am TJ wrote:
> In preparation for writing a Windows-style PCI resource allocation
> strategy
>
>  - use all e820 gaps for IOMEM resources; top-down allocation -
>
> thus giving devices with large IOMEM requirements a better chance of
> allocation in the 32-bit address space below 4GB (see bugzilla #10461),
> I'm mapping out the current implementation to ensure I understand the
> implications and guard against breaking any quirks or undocumented
> features.

Excellent, definitely a worthwhile change...

> There are significant differences between the x86 and x86_64
> implementations, and I've not been able to find any explanation or
> deduce a reason for them.
>
> The first difference is mainly cosmetic. Apart from one minor
> difference, the source code of:
>
>   arch/x86/kernel/e820_32.c::e820_register_memory()
>   arch/x86/kernel/e820_64.c::e820_setup_gap()
>
> is identical. The purpose is to find the single largest gap in the e820
> map and point pci_mem_start at the start of that gap (with a
> rounding-up adjustment).
>
> Why do these functions have such differing names?
> Which name is preferable?

Well, e820_setup_gap is a bit more descriptive, but if you're factoring out 
common code in the x86 tree, it may as well be even more descriptive, e.g. 
e820_find_pci_gap or somesuch.
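
For reference, the gap search the two functions share boils down to 
roughly the following.  This is a simplified sketch, not the literal 
kernel code; the e820entry layout and the find_gap() name here are 
just illustrative:

	/*
	 * Walk the e820 map (assumed sorted by address), remember the
	 * largest hole below 4GB, and round the result up.
	 */
	struct e820entry {
		unsigned long long addr;	/* start of memory segment */
		unsigned long long size;	/* size of memory segment */
	};

	static unsigned long find_gap(struct e820entry *map, int nr_map)
	{
		unsigned long long last = 0x100000000ULL; /* scan below 4GB */
		unsigned long gapstart = 0x10000000;	/* fallback start */
		unsigned long gapsize = 0x400000;	/* minimum useful gap */
		int i;

		for (i = nr_map - 1; i >= 0; i--) {
			unsigned long long start = map[i].addr;
			unsigned long long end = start + map[i].size;

			/* hole between this entry and the one above it? */
			if (last > end && last - end > gapsize) {
				gapsize = last - end;
				gapstart = end;	/* end < 4GB here, so this fits */
			}
			if (start < last)
				last = start;
		}

		/* the real code scales the rounding with the gap size;
		 * rounding to the next 1MB boundary is the simple case */
		return (gapstart + 0xfffff) & ~0xfffffUL;
	}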

> The second difference appears more profound. In the x86_64 call path,
> the code that reserves e820 resources and requests the standard I/O
> port resources is called in:
>
>   arch/x86/kernel/setup_64.c::setup_arch()
>
> immediately *before* the call to e820_setup_gap():
>
> 	e820_reserve_resources();
> 	e820_mark_nosave_regions();
>
> 	/* request I/O space for devices used on all i[345]86 PCs */
> 	for (i = 0; i < ARRAY_SIZE(standard_io_resources); i++)
> 		request_resource(&ioport_resource, &standard_io_resources[i]);
>
> 	e820_setup_gap();
>
>
> On x86 however:
>
> 	e820_register_memory();
> 	e820_mark_nosave_regions();
>
> Although e820_register_memory() and e820_mark_nosave_regions() are
> called at basically the same point in:
>
>   arch/x86/kernel/setup_32.c::setup_arch()
>
> as the 64-bit setup_arch() call to e820_setup_gap(), the equivalent of
> the 64-bit e820_reserve_resources():
>
>   arch/x86/kernel/e820_32.c::init_iomem_resources()
>
> and it, together with the loop calling request_resource(), is called
> from:
>
>   arch/x86/kernel/setup_32.c::request_standard_resources()
>
> which is declared as:
>
>   subsys_initcall(request_standard_resources);
>
>
> Now, unless my call-path tracing has gone way wrong, this initcall won't
> happen until *much* later, after a *lot* of other 'stuff' has been done.
> If my understanding of the initcall mechanism is correct, it finally
> gets called in:
>
>   init/main.c::do_initcalls()
>
> as part of the generic multi-level call-by-address init mechanism,
> where each function pointer is placed in a .initcallX.init section
> under a name of the form __initcall_functionX (where X is the level
> id).
>
> So, why the big difference in implementations?
> What are the implications of each?
> Is one preferable to the other?
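
As an aside, that call-path tracing matches my reading.  Heavily 
simplified from include/linux/init.h and init/main.c, and with the id 
suffix the real macros append to the symbol name left out, the 
mechanism amounts to this:

	typedef int (*initcall_t)(void);

	/*
	 * Each initcall macro drops a function pointer into a section
	 * named after its level; subsys_initcall() uses level 4.
	 */
	#define __define_initcall(level, fn) \
		static initcall_t __initcall_##fn \
		__attribute__((used, __section__(".initcall" level ".init"))) = fn

	#define subsys_initcall(fn)	__define_initcall("4", fn)

	/*
	 * The linker script collects every .initcallN.init section, in
	 * level order, between these two symbols...
	 */
	extern initcall_t __initcall_start[], __initcall_end[];

	/*
	 * ...and do_initcalls() just walks the resulting array of
	 * function pointers, long after setup_arch() has returned.
	 */
	static void do_initcalls(void)
	{
		initcall_t *call;

		for (call = __initcall_start; call < __initcall_end; call++)
			(*call)();
	}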

Andi may remember some of the history here.  It seems like we should be 
reserving those regions early on like the x86_64 code does, but it's possible 
that will create more conflicts later on for some platforms (I don't know of 
any offhand though).
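
To see why the ordering matters: request_resource() inserts a range into 
the given resource tree and returns -EBUSY if it overlaps an existing 
entry, so anything reserved before e820_setup_gap() runs is visible to, 
and protected from, every later allocation.  A contrived sketch with a 
made-up range:

	#include <linux/init.h>
	#include <linux/ioport.h>
	#include <linux/kernel.h>

	/* A made-up firmware-reserved range, purely for illustration. */
	static struct resource firmware_area = {
		.name	= "reserved",
		.start	= 0xfed00000,
		.end	= 0xfed3ffff,
		.flags	= IORESOURCE_MEM | IORESOURCE_BUSY,
	};

	static void __init reserve_firmware_area(void)
	{
		/*
		 * Reserve the range early and any later request that
		 * would land on top of it is refused instead of silently
		 * clobbering firmware.
		 */
		if (request_resource(&iomem_resource, &firmware_area))
			printk(KERN_WARNING "could not reserve firmware area\n");
	}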

> Any other observations or caveats?

For future patches and x86-specific questions, you should probably cc lkml 
and the x86 maintainers; they may be able to provide additional feedback.

Thanks,
Jesse