Message-ID: <4BA9A8F8.5070206@kernel.org>
Date:	Tue, 23 Mar 2010 22:54:00 -0700
From:	Yinghai Lu <yinghai@...nel.org>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC:	Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
	David Miller <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>, hpa@...or.com,
	jbarnes@...tuousgeek.org, ebiederm@...ssion.com,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: [PATCH 06/20] early_res: seperate common memmap func from e820.c to fw_memmap.c

On 03/23/2010 09:44 PM, Benjamin Herrenschmidt wrote:
> 
>> I thought one possibility would be to have LMB regions become more lists
>> than arrays, so that the static storage only needs to cover as much as
>> is needed during really early boot (and we could probably still move the
>> BSS top point on some archs to dynamically make more ... actually we
>> could be smart arses and use LMB to allocate more LMB list heads if we
>> are reaching the table limit :-)
> 
> Actually what about that:
> 
> LMB entries are kept on linked lists. The array is just storage for those
> entry "heads".
> 
> The initial static array only needs to be big enough for very very early
> platform specific kernel bits and pieces, so it could even be sized by a
> Kconfig option. Or it could just use a klimit moving trick to pick up a
> page right after the BSS but that may need to be arch specific.
> 
> lmb_init() queues all the entries from the initial array in a freelist
> 
> lmb_alloc() and lmb_reserve() just pop entries from that freelist to
> populate the two main linked lists (memory and reserved).
> 
> When something tries to dequeue the last freelist entry, then under
> the hood, LMB uses it instead to allocate a new block of LMB entries
> that gets added to the freelist.
> 
> We never free blocks of LMB entries.
> 
> That way, we can fine-tune the static array to be as small as we can
> realistically make it be, and we have no boundary limitations on the
> amount of entries in either the memory list or the reserved list.
> 
> I'm a bit too flat out right now to write code, but if there's no
> objection, I might give that a go either later this week or next week,
> see if I can replace bootmem on powerpc.
> 
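A minimal C sketch of that freelist scheme, just to make the idea concrete; every name here (lmb_entry, lmb_free_list, lmb_add_region, LMB_INIT_ENTRIES) is made up for illustration and is not the existing lmb.c API:

#include <stddef.h>

/* Static pool of entry "heads" for very early boot; lmb_init() pushes
 * them all onto a freelist, and the two main lists (memory, reserved)
 * are built by popping heads off that freelist. */
struct lmb_entry {
        unsigned long long base;
        unsigned long long size;
        struct lmb_entry *next;
};

#define LMB_INIT_ENTRIES 64     /* small; could be a Kconfig option */

static struct lmb_entry lmb_initial[LMB_INIT_ENTRIES];
static struct lmb_entry *lmb_free_list; /* unused entry heads */
static struct lmb_entry *lmb_memory;    /* linked list of memory regions */
static struct lmb_entry *lmb_reserved;  /* linked list of reserved regions */

static void lmb_init(void)
{
        size_t i;

        /* Queue every entry from the initial static array on the freelist. */
        for (i = 0; i < LMB_INIT_ENTRIES; i++) {
                lmb_initial[i].next = lmb_free_list;
                lmb_free_list = &lmb_initial[i];
        }
}

static struct lmb_entry *lmb_get_entry(void)
{
        struct lmb_entry *e = lmb_free_list;

        /* When this would hand out the last entry, a real version would
         * first use LMB itself to allocate a fresh block of entries and
         * push them onto the freelist; such blocks are never freed. */
        if (e)
                lmb_free_list = e->next;
        return e;
}

static int lmb_add_region(struct lmb_entry **list,
                          unsigned long long base, unsigned long long size)
{
        struct lmb_entry *e = lmb_get_entry();

        if (!e)
                return -1;
        e->base = base;
        e->size = size;
        e->next = *list;        /* push onto the memory or reserved list */
        *list = e;
        return 0;
}

With something like that, lmb_add() and lmb_reserve() boil down to lmb_add_region(&lmb_memory, ...) and lmb_add_region(&lmb_reserved, ...), and there is no fixed bound on the number of regions in either list.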

If the array can be doubled, with the old one copied into the new one,
then we don't have to change lmb.c too much.
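A rough sketch of that doubling step, assuming LMB can already hand out a block for its own bookkeeping; the lmb_type/lmb_region names and the lmb_alloc() prototype below are illustrative, not the current lmb.c interface:

#include <string.h>

struct lmb_region {
        unsigned long long base;
        unsigned long long size;
};

struct lmb_type {
        unsigned long cnt;              /* regions in use */
        unsigned long max;              /* capacity of the array */
        struct lmb_region *regions;     /* points at a static array at first */
};

/* Assumed allocator carving a block out of LMB-managed memory
 * (hypothetical prototype, not the real lmb_alloc() signature). */
void *lmb_alloc(unsigned long long size, unsigned long long align);

static int lmb_double_array(struct lmb_type *type)
{
        unsigned long new_max = type->max * 2;
        struct lmb_region *new_array;

        new_array = lmb_alloc(new_max * sizeof(*new_array),
                              sizeof(*new_array));
        if (!new_array)
                return -1;

        /* Copy the old (possibly static) array over and switch to the
         * new one; the old static array is simply abandoned. */
        memcpy(new_array, type->regions, type->cnt * sizeof(*new_array));
        type->regions = new_array;
        type->max = new_max;
        return 0;
}

The callers would only need a "double the array if it is full" check before inserting a region, so the rest of lmb.c stays as it is.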

The new early_res.c extends lmb, and the other half already works on x86 to replace bootmem.

I will check whether I can produce a patch that makes powerpc reuse early_res/nobootmem.

Thanks

Yinghai
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
