Date:	Wed, 6 Jul 2011 12:01:45 +0300
From:	Pekka Enberg <penberg@...nel.org>
To:	Ankita Garg <ankita@...ibm.com>
Cc:	linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
	svaidy@...ux.vnet.ibm.com, thomas.abraham@...aro.org,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Matthew Garrett <mjg59@...f.ucam.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 00/10] mm: Linux VM Infrastructure to support Memory Power Management

On Wed, Jul 6, 2011 at 11:45 AM, Pekka Enberg <penberg@...nel.org> wrote:
> Hi Ankita,
>
> [ I don't really know anything about memory power management but
>  here's my two cents since you asked for it. ]
>
> On Wed, Jun 29, 2011 at 4:00 PM, Ankita Garg <ankita@...ibm.com> wrote:
>> I) Dynamic Power Transition
>>
>> The goal here is to ensure that, as far as possible, memory
>> references on an idle system do not get spread across the entire
>> RAM, a problem similar to memory fragmentation. The proposed
>> approach is as follows:
>>
>> 1) First, ensure that memory allocations do not spill over into
>> more regions than necessary. The allocator therefore needs to be
>> aware of the address boundaries of the different regions.
>
> Why does the allocator need to know about address boundaries? Why
> isn't it enough to make the page allocator and reclaim policies favor using
> memory from lower addresses as aggressively as possible? That'd mean
> we'd favor the first memory banks and could keep the remaining ones
> powered off as much as possible.
>
> IOW, why do we need to support scenarios such as this:
>
>   bank 0    bank 1    bank 2    bank 3
>  | online  | offline | online  | offline |
>
> instead of using memory compaction and possibly something like the
> SLUB defragmentation patches to turn the memory map into this:
>
>   bank 0    bank 1    bank 2    bank 3
>  | online  | online  | offline | offline |
>
>> 2) At allocation time, before spilling over to the next logical
>> region, the allocator needs to make a best-effort attempt to reclaim
>> some memory from within the current region first. The reclaim here
>> needs to be in LRU order within the region. However, if it is
>> ascertained that the reclaim would take a long time, e.g. because
>> quite a few write-backs are needed, then we can spill over to the
>> next memory region (just like our NUMA node allocation policy now).
>
> I think a much more important question is what happens _after_ we've
> allocated and freed tons of memory a few times. AFAICT, memory
> regions don't help with the kind of fragmentation that will
> eventually happen anyway.

Btw, I'd also decouple the 'memory map' required for PASR from
memory region data structure and use page allocator hooks for letting
the PASR driver know about allocated and unallocated memory. That
way the PASR driver could automatically detect if full banks are
unused and power them off. That'd make memory power management
transparent to the VM regardless of whether we're using hardware or
software poweroff.

                        Pekka
