Date:	Mon, 17 May 2010 10:46:36 +1000
From:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
To:	Yinghai <yinghai.lu@...cle.com>
Cc:	David Miller <davem@...emloft.net>, mingo@...e.hu,
	tglx@...utronix.de, hpa@...or.com, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, hannes@...xchg.org,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: lmb type features.

On Fri, 2010-05-14 at 16:51 -0700, Yinghai wrote:

 .../...

> #define LMB_ADD_MERGE (1<<0) 
> #define LMB_ARRAY_DOUBLE (1<<1)
> 
> so before calling double_lmb_array(), we should check whether that feature bit is set,
> and otherwise panic with a clear message.
> 
> Usage:
> 
> for range replacement,
> 
> 1. At the early stage, lmb.reserved and lmb.memory are not fully in place yet,
> so lmb_find_base() cannot be used.
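
For concreteness, a minimal sketch of the feature-bit check described at the
top of the quote. LMB_ADD_MERGE, LMB_ARRAY_DOUBLE and double_lmb_array() come
from the quoted mail; the struct layout, field names and panic message are
assumptions made purely for illustration:

#include <linux/types.h>
#include <linux/kernel.h>	/* panic() */

#define LMB_ADD_MERGE		(1 << 0)
#define LMB_ARRAY_DOUBLE	(1 << 1)

struct lmb_region {
	u64 base;
	u64 size;
};

struct lmb_type {
	unsigned long cnt;		/* regions currently in use */
	unsigned long max;		/* size of the regions[] array */
	unsigned long features;		/* LMB_* feature bits */
	struct lmb_region *regions;
};

static void double_lmb_array(struct lmb_type *type)
{
	/* Refuse to grow the array unless the caller opted in. */
	if (!(type->features & LMB_ARRAY_DOUBLE))
		panic("lmb: region array full and doubling not enabled\n");

	/* ... allocate a 2 * type->max array, copy, switch over ... */
}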

Let me make sure I understand: you mean that when doing all the memory
lmb_add() calls early during boot, we haven't yet done the various
lmb_reserve() calls for all potentially reserved areas, and thus cannot
rely on double_lmb_array() doing the right thing?

I think this is a good point. However, a better way to do that would
be to set the default alloc limit to 0 instead of LMB_ALLOC_ANYWHERE.

I haven't done that yet, though I certainly intend to; first I'll need
to ensure all the archs using LMB set a decent limit at some stage. In
the meantime you can do it explicitly in x86.
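
As a rough sketch of what that could look like; the field and helper names
here (alloc_limit and lmb_set_alloc_limit()) are invented for illustration
and are not existing LMB interfaces:

#include <linux/types.h>

struct lmb {
	u64 alloc_limit;	/* 0 = early allocations not allowed yet */
	/* ... memory and reserved region arrays ... */
};

struct lmb lmb = {
	.alloc_limit = 0,	/* default, instead of LMB_ALLOC_ANYWHERE */
};

/* An arch (x86 for instance) raises the limit once it knows a safe
 * ceiling; until then allocations, and thus array doubling, simply
 * aren't possible. */
void lmb_set_alloc_limit(u64 limit)
{
	lmb.alloc_limit = limit;
}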

Additionally, it should be possible in most cases to do all the critical
lmb_reserve() calls early, before the lmb_add() calls, and thus remove
the problem, though that is indeed not the case today.

It would be nice to be able to extend the array for memory addition
since that would allow us to have much smaller static arrays in the
first place.
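
To illustrate, a sketch of the kind of bootstrap that would allow, with
invented names and an arbitrary size; the static arrays only need to cover
the regions seen before dynamic allocation works:

#include <linux/types.h>

#define INIT_LMB_REGIONS	8	/* illustrative size, not a real constant */

struct lmb_region {
	u64 base;
	u64 size;
};

struct lmb_type {
	unsigned long cnt, max;
	struct lmb_region *regions;
};

static struct lmb_region lmb_memory_init_regions[INIT_LMB_REGIONS];
static struct lmb_region lmb_reserved_init_regions[INIT_LMB_REGIONS];

struct lmb {
	struct lmb_type memory;
	struct lmb_type reserved;
} lmb = {
	/* once either array fills up, something like double_lmb_array()
	 * would replace it with a dynamically allocated one twice the size */
	.memory.regions   = lmb_memory_init_regions,
	.memory.max       = INIT_LMB_REGIONS,
	.reserved.regions = lmb_reserved_init_regions,
	.reserved.max     = INIT_LMB_REGIONS,
};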

> 2. For the bootmem replacement, when doing the range-set subtraction for the final
> free range list, we don't want to change lmb.reserved in the middle.  The callee should
> make sure to have a big enough temporary lmb_regions array in the lmb_type.

Sorry, I'm not sure I grasped your explanation above. You mean that when
transitioning from LMB to the page allocator, the page freeing needs to
be done after subtracting the reserved array from the memory, and that
subtraction might cause the arrays to increase in size, thus affecting
the reserved array?

That could be solved by not doing the subtraction and doing things a
bit differently. You could have a single function that walks both arrays
at the same time, and calls a callback for all memory ranges it finds
that aren't reserved. Not -that- tricky to code.
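
Roughly, and only as a sketch (the function and callback names are made up;
it assumes, as LMB guarantees, that both arrays are sorted by base address
and that regions within one array don't overlap):

#include <linux/types.h>

struct lmb_region {
	u64 base;
	u64 size;
};

struct lmb_type {
	unsigned long cnt;
	struct lmb_region *regions;
};

typedef void (*free_range_fn)(u64 start, u64 end);

/* Call fn() for every piece of memory that is not covered by a
 * reserved region, without modifying either array. */
static void lmb_walk_free_ranges(struct lmb_type *memory,
				 struct lmb_type *reserved,
				 free_range_fn fn)
{
	unsigned long i, j;

	for (i = 0; i < memory->cnt; i++) {
		u64 start = memory->regions[i].base;
		u64 end = start + memory->regions[i].size;

		for (j = 0; j < reserved->cnt && start < end; j++) {
			u64 rstart = reserved->regions[j].base;
			u64 rend = rstart + reserved->regions[j].size;

			if (rend <= start)
				continue;	/* reservation below us */
			if (rstart >= end)
				break;		/* reservation above us */
			if (rstart > start)
				fn(start, rstart);	/* free gap before it */
			start = rend;
		}
		if (start < end)
			fn(start, end);			/* free tail */
	}
}

The callback would be whatever hands the range to the page allocator; since
lmb.reserved is never touched during the walk, there is no array resize to
worry about.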

Cheers,
Ben.

> Thanks
> 
> Yinghai
> 


