Message-Id: <20070611173001.e0355af3.akpm@linux-foundation.org>
Date: Mon, 11 Jun 2007 17:30:01 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>
Cc: Andi Kleen <ak@...e.de>, Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
arjan@...ux.intel.com, ashok.raj@...el.com, shaohua.li@...el.com,
davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocate pool
handling
On Mon, 11 Jun 2007 16:52:08 -0700 "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com> wrote:
> On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> > On Mon, 11 Jun 2007 13:44:42 -0700
> > "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com> wrote:
> >
> > > In our first implementation we used the mempool APIs to allocate
> > > memory, and we were told that mempools with GFP_ATOMIC are useless;
> > > hence in the second implementation we came up with resource pools
> > > (which are preallocated pools). Again, as I understand it, the
> > > argument is: why create another allocator when we have slab
> > > allocation, which is similar to these resource pools?
> >
> > Odd. mempool with GFP_ATOMIC is basically equivalent to your
> > resource-pools, isn't it? We'll try the slab allocator and, if that
> > fails, fall back to the reserves.
>
> Slab allocators don't reserve the memory; in other words, that memory
> can be consumed by the VM under memory pressure, which we don't want
> in the IOMMU case.
>
> Nope, the two are exactly opposite.
> A mempool with GFP_ATOMIC first tries to get memory from the OS, and
> if that fails, it looks for an object in the pool and returns it.
>
> Whereas the resource pool is the exact opposite of a mempool: each
> time, it looks for an object in the pool and, if one exists, returns
> that object; otherwise it tries to get the memory from the OS while
> scheduling work to grow the pool's objects. In fact, the work to grow
> the pool is scheduled as soon as the low-threshold point is hit.
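(For concreteness, here is a rough sketch of the two orderings described
above; the foo_* structures and helpers are invented for illustration and
are not taken from the patch.)

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Stand-in for whatever object the driver hands out. */
struct foo_obj {
	struct list_head list;
	/* ... payload ... */
};

struct foo_pool {
	spinlock_t		lock;
	struct list_head	free_list;	/* preallocated foo_objs */
	int			count;		/* objects left in free_list */
	int			low_water;	/* refill threshold */
	struct work_struct	grow_work;	/* refills free_list; set up elsewhere */
	struct kmem_cache	*cachep;
};

static struct foo_obj *foo_take_from_reserve(struct foo_pool *p)
{
	struct foo_obj *obj = NULL;
	unsigned long flags;

	spin_lock_irqsave(&p->lock, flags);
	if (!list_empty(&p->free_list)) {
		obj = list_entry(p->free_list.next, struct foo_obj, list);
		list_del(&obj->list);
		p->count--;
	}
	spin_unlock_irqrestore(&p->lock, flags);
	return obj;
}

/* mempool-style ordering: try the slab allocator first, and dip into
 * the reserve only when that fails. */
static struct foo_obj *foo_alloc_mempool_style(struct foo_pool *p)
{
	struct foo_obj *obj = kmem_cache_alloc(p->cachep, GFP_ATOMIC);

	if (!obj)
		obj = foo_take_from_reserve(p);
	return obj;
}

/* resource-pool-style ordering (as described above): hand out a
 * preallocated object first, fall back to the slab allocator only when
 * the reserve is empty, and kick the worker once the pool drops below
 * its low-water mark. */
static struct foo_obj *foo_alloc_respool_style(struct foo_pool *p)
{
	struct foo_obj *obj = foo_take_from_reserve(p);

	if (p->count < p->low_water)
		schedule_work(&p->grow_work);
	if (!obj)
		obj = kmem_cache_alloc(p->cachep, GFP_ATOMIC);
	return obj;
}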
I realise all that. But I'd have thought that the mempool approach is
actually better: use the page allocator and only deplete your reserve pool
when the page allocator fails.
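Something like the sketch below is all it would take. It is a minimal
sketch only, and the iova_sketch/iova_pool names are made up here, not
taken from your patch:

#include <linux/mempool.h>
#include <linux/slab.h>

/* Stand-in for whatever per-mapping object the driver allocates. */
struct iova_sketch {
	unsigned long pfn_lo;
	unsigned long pfn_hi;
};

static mempool_t *iova_pool;

static int iova_pool_init(void)
{
	/* Keep, say, 128 objects in reserve for atomic callers; the pool
	 * is backed by kmalloc/kfree. */
	iova_pool = mempool_create_kmalloc_pool(128, sizeof(struct iova_sketch));
	return iova_pool ? 0 : -ENOMEM;
}

/* DMA-map path: mempool_alloc() tries the normal allocator first and
 * dips into the reserve only when that fails. */
static struct iova_sketch *iova_sketch_alloc(void)
{
	return mempool_alloc(iova_pool, GFP_ATOMIC);
}

static void iova_sketch_free(struct iova_sketch *iova)
{
	mempool_free(iova, iova_pool);
}

That's the ordering I mean: the reserve is only touched once the normal
allocator has already failed, so it lasts as long as possible.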
The refill-the-pool-in-the-background feature sounds pretty worthless to
me. On a uniprocessor machine (for example), the kernel thread may not get
scheduled for tens of milliseconds (easily), which is far, far more than is
needed for that reserve pool to become fully consumed.