Message-Id: <20070608120107.245eba96.akpm@linux-foundation.org>
Date:	Fri, 8 Jun 2007 12:01:07 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>
Cc:	linux-kernel@...r.kernel.org, ak@...e.de, gregkh@...e.de,
	muli@...ibm.com, asit.k.mallick@...el.com,
	suresh.b.siddha@...el.com, arjan@...ux.intel.com,
	ashok.raj@...el.com, shaohua.li@...el.com, davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool
 handling

On Fri, 8 Jun 2007 11:21:57 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com> wrote:

> On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > On Wed, 06 Jun 2007 11:57:00 -0700
> > anil.s.keshavamurthy@...el.com wrote:
> > 
> > > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>
> > 
> > That was a terse changelog.
> > 
> > Obvious question: how does this differ from mempools, and would it be
> > better to fill in any gaps in mempool functionality instead of
> > implementing something similar-looking?
> 
> Very good question. A mempool pre-allocates elements up to the
> required minimum count at initialization time. However, when
> mempool_alloc() is called it first tries to obtain the element from
> the OS, and only if that fails does it look for the element in its
> pool. If the pool has no elements and the gfp_t flags say it can
> wait, it waits until someone puts an element back into the pool;
> if the gfp_t flags say it can't wait, it returns NULL. In other
> words, a mempool acts as an *emergency* pool, i.e. the pool object
> is used only if the OS fails to allocate the required memory.
> 
> 
> In the IOMMU case, we need exactly the opposite of what mempool
> provides, i.e. we always want to look for the element in the pool
> first, and go to the OS only as a worst case when the pool has no
> elements. That is what these resource pool library routines do
> (both allocation orders are sketched in code below). In addition,
> the resource pool grows and shrinks automatically in the background
> to maintain the minimum number of pool elements. I am not sure
> whether this totally opposite functionality of mempools and
> resource pools can be merged.
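
For reference, the two allocation orders read roughly like this in
code. This is only a minimal sketch: the mempool side uses the stock
linux/mempool.h API, while the resource_pool_* side is purely
illustrative of the behaviour described above and is not the API from
the posted patch.

#include <linux/mempool.h>
#include <linux/slab.h>

/* mempool side: the stock API.  mempool_alloc() tries the normal
 * allocator first and dips into the reserved elements only when
 * that allocation fails. */
static mempool_t *obj_mempool;		/* "obj" names are illustrative */

static int obj_mempool_init(void)
{
	/* keep at least 64 emergency elements of 128 bytes around
	 * (both numbers are made up for the example) */
	obj_mempool = mempool_create_kmalloc_pool(64, 128);
	return obj_mempool ? 0 : -ENOMEM;
}

static void *obj_mempool_get(void)
{
	/* kmalloc() path first; the reserve is only the fallback,
	 * and with GFP_ATOMIC an empty reserve means NULL */
	return mempool_alloc(obj_mempool, GFP_ATOMIC);
}

/* resource-pool side as described above -- a hypothetical sketch,
 * not the API from this patch: take from the pre-allocated list
 * first, go to the OS only as the worst case; a background worker
 * is assumed to keep the list topped up to its minimum size. */
static void *resource_pool_get(struct resource_pool *rp)
{
	void *obj = resource_pool_take(rp);	/* hypothetical helper */

	if (!obj)
		obj = kmalloc(rp->obj_size, GFP_ATOMIC);  /* worst case */
	return obj;
}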

Confused.

If resource pools are not designed to provide extra robustness via an
emergency pool, then what _are_ they designed for?  (Boy this is a hard way
to write a changelog!)

> In fact the very first version of this IOMMU patch used mempools,
> and the performance was worse because mempool did not help: the
> IOMMU does very frequent alloc and free of pool objects, and every
> alloc/free call went to the OS. Andi Kleen noticed this and told
> us that using mempools for the IOMMU was wrong, and hence we came
> up with the resource pool concept.

You _seem_ to be saying that the resource pools are there purely for
alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
darned fast.

> > 
> > The changelog very much should describe all this, as well as explaining
> > what the dynamic behaviour of this new thing is, and what applications are
> > envisaged, what problems it solves, etc, etc.
> 
> I can gladly update the changelog if the resource pool concept is
> approved. I will fix all the minor comments below.
> 
> I envision that this might be useful for all vendors' (IBM, AMD,
> Intel, etc.) IOMMU drivers and for any kernel component that does
> a lot of dynamic alloc/free of objects of the same size.
> 

That's what kmem_cache_alloc() is for?!?!
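
Concretely, a dedicated slab cache already covers the frequent
fixed-size alloc/free case. A minimal sketch, assuming an
illustrative fixed-size object (the structure and names are not from
the posted patch, and the call uses the current five-argument form of
kmem_cache_create()):

#include <linux/slab.h>

/* illustrative fixed-size object; the real IOMMU structure differs */
struct iova_entry {
	unsigned long pfn_lo;
	unsigned long pfn_hi;
};

static struct kmem_cache *iova_cachep;

static int iova_cache_init(void)
{
	/* one dedicated slab cache for the object; after the first few
	 * allocations, alloc/free is served from cached slabs and only
	 * rarely falls through to the page allocator */
	iova_cachep = kmem_cache_create("iova_entry",
					sizeof(struct iova_entry),
					0, SLAB_HWCACHE_ALIGN, NULL);
	return iova_cachep ? 0 : -ENOMEM;
}

static struct iova_entry *iova_get(gfp_t gfp)
{
	return kmem_cache_alloc(iova_cachep, gfp);
}

static void iova_put(struct iova_entry *e)
{
	kmem_cache_free(iova_cachep, e);
}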