Date:	Thu, 11 Jun 2009 13:43:10 -0700
From:	Yinghai Lu <yhlu.kernel@...il.com>
To:	"H. Peter Anvin" <hpa@...ux.intel.com>
Cc:	Matthew Wilcox <matthew@....cx>,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Martin Mares <mj@....cz>, LKML <linux-kernel@...r.kernel.org>,
	linux-pci@...r.kernel.org,
	"the arch/x86 maintainers" <x86@...nel.org>,
	David Woodhouse <dwmw2@...radead.org>,
	linux-arch@...r.kernel.org
Subject: Re: RFC: x86: cap iomem_resource to addressable physical memory

On Tue, Jun 9, 2009 at 6:32 PM, H. Peter Anvin <hpa@...ux.intel.com> wrote:
> x86 cannot generate full 64-bit addresses; this patch clamps iomem
> addresses to the accessible range.
>
> I wanted to post it for review before committing it, however; comments
> would be appreciated, especially of the kind "this is done too early/too
> late/in the wrong place/incorrectly".
| --- a/arch/x86/kernel/cpu/common.c
| +++ b/arch/x86/kernel/cpu/common.c
| @@ -839,6 +839,9 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
| #if defined(CONFIG_NUMA) && defined(CONFIG_X86_64)
| 	numa_add_cpu(smp_processor_id());
| #endif
|+
|+	/* Cap the iomem address space to what is addressable on all CPUs */
|+	iomem_resource.end &= (1ULL << c->x86_phys_bits) - 1;
| }
|
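
Just to check my reading of the mask: (1ULL << x86_phys_bits) - 1 is the
highest physical address the CPU can generate, so e.g. x86_phys_bits = 36
would cap iomem_resource.end to 0xfffffffff. Quick standalone check with
made-up phys-bits values (illustrative only, not from the patch):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* stand-in for iomem_resource.end == all ones */
		uint64_t end = ~0ULL;
		unsigned int bits[] = { 36, 40, 46 };
		int i;

		for (i = 0; i < 3; i++)
			printf("x86_phys_bits=%u -> end capped to 0x%llx\n",
			       bits[i],
			       (unsigned long long)(end & ((1ULL << bits[i]) - 1)));
		return 0;
	}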


do we need to do that on every cpu?

looks like we could do that in identify_boot_cpu() instead.
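
Untested sketch of what I mean (assuming identify_cpu(&boot_cpu_data) has
already filled in x86_phys_bits by the time it runs):

	void __init identify_boot_cpu(void)
	{
		identify_cpu(&boot_cpu_data);
		...

		/* Cap the iomem address space to what the boot CPU can address */
		iomem_resource.end &= (1ULL << boot_cpu_data.x86_phys_bits) - 1;
	}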

YH