Message-ID: <20070612020806.GA17143@linux-os.sc.intel.com>
Date: Mon, 11 Jun 2007 19:08:06 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: Arjan van de Ven <arjan@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>,
Andi Kleen <ak@...e.de>, Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
ashok.raj@...el.com, shaohua.li@...el.com, davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocated pool handling
On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> Andrew Morton wrote:
> >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven
> ><arjan@...ux.intel.com> wrote:
> >
> >>Andrew Morton wrote:
> >>>>Whereas the resource pool is exactly the opposite of mempool: each
> >>>>time it looks for an object in the pool, and if one exists we
> >>>>return that object; else we try to get the memory from the OS while
> >>>>scheduling work to grow the pool's objects. In fact, the work
> >>>>is scheduled to grow the pool when the low-threshold point is hit.
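[A minimal sketch of the resource-pool pattern being described; every
name here (respool, respool_get, respool_grow) is illustrative, not the
actual API from this patch set:

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	struct respool_obj {
		struct list_head node;
		/* payload would go here */
	};

	struct respool {
		spinlock_t lock;
		struct list_head free_list;	/* pre-allocated objects */
		unsigned int count;		/* objects left in the pool */
		unsigned int low_thresh;	/* refill below this mark */
		struct work_struct grow_work;	/* refills in process context */
	};

	/* Work handler: runs in process context and tops the pool up
	 * with GFP_KERNEL allocations.  The unlocked peek at count is
	 * racy but good enough for a refill heuristic in a sketch. */
	static void respool_grow(struct work_struct *work)
	{
		struct respool *p = container_of(work, struct respool,
						 grow_work);
		struct respool_obj *obj;
		unsigned long flags;

		while (p->count < 2 * p->low_thresh) {
			obj = kmalloc(sizeof(*obj), GFP_KERNEL);
			if (!obj)
				break;
			spin_lock_irqsave(&p->lock, flags);
			list_add(&obj->node, &p->free_list);
			p->count++;
			spin_unlock_irqrestore(&p->lock, flags);
		}
	}

	/* Take from the pool first; fall back to the allocator only
	 * when the pool is empty. */
	static struct respool_obj *respool_get(struct respool *p, gfp_t gfp)
	{
		struct respool_obj *obj = NULL;
		unsigned long flags;

		spin_lock_irqsave(&p->lock, flags);
		if (!list_empty(&p->free_list)) {
			obj = list_entry(p->free_list.next,
					 struct respool_obj, node);
			list_del(&obj->node);
			p->count--;
		}
		/* Crossing the low watermark schedules an async refill. */
		if (p->count < p->low_thresh)
			schedule_work(&p->grow_work);
		spin_unlock_irqrestore(&p->lock, flags);

		if (!obj)
			obj = kmalloc(sizeof(*obj), gfp);
		return obj;
	}

The point is that the common path never touches the page allocator at
all; the allocator is only a fallback once the pool has been drained.]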
> >>>I realise all that. But I'd have thought that the mempool approach is
> >>>actually better: use the page allocator and only deplete your reserve
> >>>pool when the page allocator fails.
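[For comparison, a rough sketch of what Andrew is suggesting, built on
the stock mempool API; struct iova_example and the reserve size of 64
are made up for illustration:

	#include <linux/mempool.h>

	struct iova_example {
		unsigned long pfn_lo, pfn_hi;	/* illustrative payload */
	};

	static mempool_t *iova_pool;

	static int iova_pool_init(void)
	{
		/* Reserve 64 objects up front. */
		iova_pool = mempool_create_kmalloc_pool(64,
					sizeof(struct iova_example));
		return iova_pool ? 0 : -ENOMEM;
	}

	static struct iova_example *iova_example_get(void)
	{
		/* mempool_alloc() tries kmalloc() first and dips into
		 * the reserve only when that allocation fails. */
		return mempool_alloc(iova_pool, GFP_ATOMIC);
	}

	static void iova_example_put(struct iova_example *obj)
	{
		/* Freed elements top the reserve back up to min_nr
		 * first; anything beyond that is kfree()d. */
		mempool_free(obj, iova_pool);
	}

So the reserve is depleted only on allocator failure, which is the
ordering being debated below.]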
> >>the problem with that is that if anything downstream from the iommu
> >>layer ALSO needs memory, we've now eaten up the last free page and
> >>things go splat.
> >
> >If that happens, we still have the mempool reserve to fall back to.
>
> we do, except that we just ate the memory the downstream code would
> have used ... so THAT can't get any.
Then this problem can happen irrespective of the changes we are
reviewing in this patch set, isn't it?
thanks,
suresh