Date:	Thu, 7 Jul 2011 21:00:44 +0300 (EEST)
From:	Pekka Enberg <penberg@...nel.org>
To:	david@...g.hm
cc:	Ankita Garg <ankita@...ibm.com>,
	linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
	svaidy@...ux.vnet.ibm.com, thomas.abraham@...aro.org,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Matthew Garrett <mjg59@...f.ucam.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 00/10] mm: Linux VM Infrastructure to support Memory Power Management

On Wed, 6 Jul 2011, Pekka Enberg wrote:
>> Why does the allocator need to know about address boundaries? Why
>> isn't it enough to make the page allocator and reclaim policies favor using
>> memory from lower addresses as aggressively as possible? That'd mean
>> we'd favor the first memory banks and could keep the remaining ones
>> powered off as much as possible.
>> 
>> IOW, why do we need to support scenarios such as this:
>>
>>   bank 0     bank 1   bank 2    bank3
>> | online  | offline | online  | offline |
>
On Wed, 6 Jul 2011, david@...g.hm wrote:
> I believe that there are memory allocations that cannot be moved after
> they are made (think of regions set aside for DMA, where the hardware
> has already been given the address range to DMA into).
>
> As a result, you may not be able to take bank 2 offline, so your options
> are either to leave banks 0-2 all online, or to support emptying bank 1
> and taking it offline.

But drivers allocate DMA memory for hardware at module load, and it stays
pinned until the driver is unloaded, no? So in practice, won't DMA buffers
end up in banks 0-1?

 				Pekka
