Date:	Sun, 16 May 2010 23:06:56 -0700
From:	Yinghai <yinghai.lu@...cle.com>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC:	David Miller <davem@...emloft.net>, mingo@...e.hu,
	tglx@...utronix.de, hpa@...or.com, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, hannes@...xchg.org,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: lmb type features.

On 05/16/2010 05:46 PM, Benjamin Herrenschmidt wrote:
> On Fri, 2010-05-14 at 16:51 -0700, Yinghai wrote:
> 
>  .../...
> 
>> #define LMB_ADD_MERGE (1<<0) 
>> #define LMB_ARRAY_DOUBLE (1<<1)
>>
>> so before calling double_lmb_array(), we should check whether that feature
>> bit is set, and otherwise panic with a clear message.
>>
>> Usage:
>>
>> for range replacement,
>>
>> 1. at the early stage, before lmb.reserved and lmb.memory are in place,
>> so we cannot use lmb_find_base() yet.
> 
> Let me make sure I understand: You mean when doing all the memory
> lmb_add() early during boot, we haven't done the various lmb_reserve()
> for all potentially reserved areas and thus cannot rely on
> double_lmb_array() doing the right thing ?

Yes.

I'm thinking of using lmb_type to replace the struct used for MTRR trimming.
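
For the feature bits quoted above, what I have in mind is roughly this
(just a sketch -- apart from the flag names and double_lmb_array(), the
struct layout and helper name below are made up):

	#define LMB_ADD_MERGE		(1 << 0)
	#define LMB_ARRAY_DOUBLE	(1 << 1)

	/* hypothetical layout -- only the 'features' field matters here */
	struct lmb_type_sketch {
		unsigned long	cnt;		/* regions in use */
		unsigned long	max;		/* regions allocated */
		unsigned long	features;	/* LMB_* feature bits */
	};

	static void check_before_double(struct lmb_type_sketch *type)
	{
		if (type->cnt < type->max)
			return;

		/* array is full: only grow it if this type opted in */
		if (!(type->features & LMB_ARRAY_DOUBLE))
			panic("lmb: region array full and LMB_ARRAY_DOUBLE not set\n");

		/* double_lmb_array(type); */
	}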

> 
> I think this is a good point. However, a better way to do that would
> be to set the default alloc limit to 0 instead of LMB_ALLOC_ANYWHERE.
> 
> I haven't done that yet though I certainly intend to, but I'll need
> to ensure all the archs using LMB set a decent limit at some stage. You
> can in the meantime do it explicitly in x86.
> 
> Additionally, it should be possible in most cases to do all the critical
> lmb_reserve() early, before lmb_add()'s, and thus remove the problem,
> though that is indeed not the case today.

That requires the initial array size to be big enough to use lmb_reserve() early.

In my patchset for x86, we already call lmb_reserve() early, and only later
call lmb_add_memory() to fill lmb.
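
The ordering is basically this (sketch only -- the ranges below are made
up for illustration, not the real e820 ranges from the patchset):

	static void __init early_lmb_setup_sketch(void)
	{
		/* reserve first, while the static reserved array still has room */
		lmb_reserve(0x00100000ULL, 0x00800000ULL);	/* e.g. kernel image */
		lmb_reserve(0x01000000ULL, 0x00400000ULL);	/* e.g. initrd       */

		/* only afterwards feed the RAM ranges from e820 into lmb.memory */
		lmb_add(0x00000000ULL, 0x80000000ULL);
		lmb_add(0x100000000ULL, 0x80000000ULL);
	}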


> 
> It would be nice to be able to extend the array for memory addition
> since that would allow us to have much smaller static arrays in the
> first place.

Not on x86; there we could just put the static arrays in __init data.

Shouldn't hotplug memory use the resource tree instead of lmb?


> 
>> 2. for the bootmem replacement, when doing the range-set subtraction for the
>> final free-range list, we don't want to change lmb.reserved in the middle.
>> The callee should make sure it has a big enough temporary lmb_regions array
>> in the lmb_type.
> 
> Sorry, I'm not sure I grasped your explanation above. You mean when
> transitioning from LMB to the page allocator, the page freeing needs to
> be done after subtracting the reserved array from the memory, and that
> subtraction might cause the arrays to increase in size, thus affecting
> the reserved array ?

right. 

> 
> That could be solved by not doing the subtraction and doing things a
> bit differently. You could have a single function that walks both arrays
> at the same time, and calls a callback for all memory ranges it finds
> that aren't reserved. Not -that- tricky to code.

But we need to make sure lmb.reserved doesn't have overlapping entries.

I will check that later.
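
Something like this is how I read your suggestion (rough sketch only; it
assumes lmb.reserved is sorted by base with no overlaps, the field names
follow the current lmb layout, and the callback type is made up):

	typedef void (*free_range_cb_t)(u64 start, u64 end);

	static void __init walk_free_ranges(free_range_cb_t cb)
	{
		unsigned long i, j;

		for (i = 0; i < lmb.memory.cnt; i++) {
			u64 start = lmb.memory.region[i].base;
			u64 end = start + lmb.memory.region[i].size;

			for (j = 0; j < lmb.reserved.cnt && start < end; j++) {
				u64 rs = lmb.reserved.region[j].base;
				u64 re = rs + lmb.reserved.region[j].size;

				if (re <= start || rs >= end)
					continue;	/* no overlap with [start, end) */
				if (rs > start)
					cb(start, rs);	/* free gap before the reserve */
				start = re;		/* continue after the reserve */
			}
			if (start < end)
				cb(start, end);		/* free tail of this region */
		}
	}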

Thanks

Yinghai
