Message-Id: <200706132104.02884.ak@suse.de>
Date:	Wed, 13 Jun 2007 21:04:02 +0200
From:	Andi Kleen <ak@...e.de>
To:	Matt Mackall <mpm@...enic.com>
Cc:	Arjan van de Ven <arjan@...ux.intel.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>,
	Christoph Lameter <clameter@....com>,
	linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
	asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
	ashok.raj@...el.com, shaohua.li@...el.com, davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling

On Wednesday 13 June 2007 20:40:11 Matt Mackall wrote:
> On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> > Andrew Morton wrote:
> > >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven 
> > ><arjan@...ux.intel.com> wrote:
> > >
> > >>Andrew Morton wrote:
> > >>>>Whereas the resource pool is exactly the opposite of a mempool: each
> > >>>>time it looks for an object in the pool, and if one exists it returns
> > >>>>that object; otherwise it gets the memory from the OS while scheduling
> > >>>>work to grow the pool. In fact, the work to grow the pool is scheduled
> > >>>>when the low-threshold point is hit.
> > >>>I realise all that.  But I'd have thought that the mempool approach is
> > >>>actually better: use the page allocator and only deplete your reserve
> > >>>pool when the page allocator fails.
> > >>the problem with that is that if anything downstream from the iommu 
> > >>layer ALSO needs memory, we've now eaten up the last free page and 
> > >>things go splat.
> > >
> > >If that happens, we still have the mempool reserve to fall back to.
> > 
> > we do, except that we just ate the memory the downstream code would
> > have gotten ... so THAT can't get any.
> 
> Then the downstream ought to be using a mempool?
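
For concreteness, the ordering Andrew and Matt are advocating is what the
stock mempool API already provides: try the normal allocator first, and dip
into the reserve only when it fails. A minimal sketch; the pool name, object
size, and reserve depth below are made up for illustration, not taken from
the patch under discussion:

#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

#define MIN_MAPPINGS 64	/* reserve: enough objects to guarantee progress */

static mempool_t *mapping_pool;	/* hypothetical pool of mapping descriptors */

static int __init mapping_pool_init(void)
{
	/* kmalloc-backed pool that keeps MIN_MAPPINGS objects in reserve */
	mapping_pool = mempool_create_kmalloc_pool(MIN_MAPPINGS, 128);
	return mapping_pool ? 0 : -ENOMEM;
}

/*
 * mempool_alloc() asks the underlying allocator first and falls back to
 * the reserve only on failure -- Andrew's "only deplete your reserve pool
 * when the page allocator fails".
 */
static void *mapping_get(gfp_t gfp)
{
	return mempool_alloc(mapping_pool, gfp);
}

static void mapping_put(void *obj)
{
	/* replenishes the reserve before returning memory to the allocator */
	mempool_free(obj, mapping_pool);
}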

Normally there shouldn't be a downstream. PCI IO tends not to be stacked,
but to sit at the edges (unless you're talking about hypervisors with
virtual devices, but those definitely have separate memory pools). And the
drivers I'm familiar with tend to first grab whatever resources they need
and then set up their DMA mappings.
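
For contrast, the resource-pool scheme described at the top of the thread
inverts the mempool ordering: take from the pre-allocated pool first, fall
back to the allocator only when the pool is empty, and grow the pool from
scheduled work once it drops below a low watermark. A rough sketch of that
pattern; every identifier here is hypothetical, not from the actual patch:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define POOL_LOW_WATER	16	/* schedule a refill below this */
#define POOL_GROW_COUNT	32	/* objects added per refill pass */

struct obj { struct list_head node; /* payload ... */ };

static LIST_HEAD(pool_list);
static DEFINE_SPINLOCK(pool_lock);
static unsigned int pool_count;

/* runs in process context, so GFP_KERNEL is fine here */
static void pool_grow(struct work_struct *work)
{
	int i;

	for (i = 0; i < POOL_GROW_COUNT; i++) {
		struct obj *o = kmalloc(sizeof(*o), GFP_KERNEL);
		if (!o)
			break;
		spin_lock_irq(&pool_lock);
		list_add(&o->node, &pool_list);
		pool_count++;
		spin_unlock_irq(&pool_lock);
	}
}
static DECLARE_WORK(pool_grow_work, pool_grow);

/* pool first; the allocator is only a last resort when the pool is dry */
static struct obj *pool_get(gfp_t gfp)
{
	struct obj *o = NULL;
	unsigned long flags;

	spin_lock_irqsave(&pool_lock, flags);
	if (!list_empty(&pool_list)) {
		o = list_first_entry(&pool_list, struct obj, node);
		list_del(&o->node);
		pool_count--;
	}
	if (pool_count < POOL_LOW_WATER)
		schedule_work(&pool_grow_work);	/* grow in the background */
	spin_unlock_irqrestore(&pool_lock, flags);

	return o ? o : kmalloc(sizeof(*o), gfp);
}

The disagreement in the thread is visible in pool_get() versus
mempool_alloc(): the mempool consumes general memory first and its reserve
last, which is what Arjan objects to, while the resource pool consumes its
own reserve first and touches the general allocator only when the pool runs
dry, leaning on the background refill to stay ahead of demand.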

-Andi