Date:	Mon, 11 Jun 2007 17:38:58 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>, Andi Kleen <ak@...e.de>,
	linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
	asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
	arjan@...ux.intel.com, ashok.raj@...el.com, shaohua.li@...el.com,
	davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling

On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:

> slab allocators don't reserve the memory; in other words, this memory
> can be consumed by the VM under memory pressure, which we don't want in
> the IOMMU case.

So use mempools....

> Nope, they are exact opposites.
> A mempool with GFP_ATOMIC first tries to get memory from the OS, and
> if that fails, it takes an object from the pool and returns it.
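
For reference, the mempool side is roughly this (a loose paraphrase of
mm/mempool.c, ignoring the wait/refill logic):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		/* try the underlying allocator first */
		void *element = pool->alloc(gfp_mask, pool->pool_data);
		if (!element)
			/* only then fall back to the reserved elements */
			element = remove_element(pool);
		return element;
	}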

How does the difference matter? In both cases you get the memory you want.

> Whereas the resource pool is the exact opposite of a mempool: each
> time it looks for an object in the pool, and if one exists we return
> that object; otherwise we try to get the memory from the OS while
> scheduling work to grow the pool objects. In fact, the work to grow
> the pool is scheduled when the low-threshold point is hit.
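
If I read that right, your order is the reverse, something like this
sketch (rpool_take, grow_work and the watermark fields are invented
names for illustration, not taken from your patch):

	void *rpool_alloc(struct rpool *rp, gfp_t gfp_mask)
	{
		/* take from the pre-allocated pool first */
		void *obj = rpool_take(rp);
		if (!obj)
			/* only then go to the OS */
			obj = kmalloc(rp->obj_size, gfp_mask);
		/* grow the pool asynchronously once it runs low */
		if (rp->curr_nr < rp->low_watermark)
			schedule_work(&rp->grow_work);
		return obj;
	}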

Grow the mempool when the low-threshold point is hit? Or equip mempools
with the functionality that you want?