Date:	Fri, 22 Feb 2008 10:11:33 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	Andi Kleen <andi@...stfloor.org>
CC:	Nick Piggin <nickpiggin@...oo.com.au>, akpm@...l.org,
	torvalds@...l.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] Document huge memory/cache overhead of memory controller
 in Kconfig

Andi Kleen wrote:
>> 1. We could create something similar to mem_map; we would need to handle 4
> 
> 4? At least x86 mainline only has two ways now: flatmem and vmemmap.
> 
>> different ways of creating mem_map.
> 
> Well, it would be only a single way to create the "aux memory controller
> map" (or whatever it ends up being called). Basically just a call to a
> single function from a few different places.
> 
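
(For concreteness, a rough sketch of what that single function might
look like; page_cgroup_init() and page_cgroup_map are illustrative
names, not existing kernel symbols:)

	/* Hypothetical sketch (needs <linux/bootmem.h>): one flat
	 * array of per-page controller state, allocated by a single
	 * function that both the flatmem and vmemmap setup paths
	 * would call. */
	struct page_cgroup *page_cgroup_map;

	void __init page_cgroup_init(unsigned long nr_pages)
	{
		/* alloc_bootmem() returns zeroed memory and panics
		 * on failure, so no error handling is needed. */
		page_cgroup_map = alloc_bootmem(nr_pages *
					sizeof(struct page_cgroup));
	}
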
>> 2. On x86 with 64 GB RAM, 
> 
> First, i386 with 64GB just doesn't work, at least not with the default
> 3:1 split. Just calculate for yourself how much of the lowmem area is
> left after the mem_map for 64GB is allocated. The typical rule of thumb
> is that 16GB is the realistic limit for 32-bit x86 kernels; worrying
> about anything more does not make much sense.
> 
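
(Doing that calculation, assuming the usual 32-byte struct page on
32-bit x86: 64GB / 4KB = 16M pages, and 16M pages * 32 bytes = 512MB
of mem_map, against roughly 896MB of lowmem under a 3:1 split. So
mem_map alone eats more than half of lowmem before any per-page
controller state is added.)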

I understand what you say, Andi, but nothing in the kernel stops us from
supporting 64GB. Should a framework like the memory controller assume
that no more than 16GB will be configured on an x86 box?

>> if we decided to use vmalloc space, we would need 64
>> MB of vmalloc'ed memory
> 
> Yes, and if you increase mem_map you need exactly the same space in
> lowmem too, so increasing the vmalloc reservation for this is
> equivalent. Just make sure you use highmem-backed vmalloc.
> 
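
(Highmem-backed vmalloc here would look something like the sketch
below; the 64MB size is the figure from above, and aux_map is an
illustrative name:)

	/* Sketch: back the mapping with ZONE_HIGHMEM pages, so that
	 * only the page tables for it consume lowmem. */
	aux_map = __vmalloc(64UL << 20,
			    GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);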

I see two problems with using vmalloc. One, the reservation needs to be
done across architectures. Two, a big vmalloc chunk is not node-aware;
if all the pages come from the same node, we pay a penalty on a NUMA
system.
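
(One way around the second problem might be per-node chunks via
vmalloc_node(), roughly as below; nr_node_pages() stands in for
whatever computes each node's page count:)

	/* Sketch: one vmalloc area per node, so each node's slice of
	 * the map is backed by that node's own pages. */
	for_each_online_node(nid)
		node_map[nid] = vmalloc_node(nr_node_pages(nid) *
					sizeof(struct page_cgroup), nid);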

-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL
