Message-ID: <466DF290.2040503@linux.intel.com>
Date:	Mon, 11 Jun 2007 18:10:40 -0700
From:	Arjan van de Ven <arjan@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>,
	Andi Kleen <ak@...e.de>, Christoph Lameter <clameter@....com>,
	linux-kernel@...r.kernel.org, gregkh@...e.de, muli@...ibm.com,
	asit.k.mallick@...el.com, suresh.b.siddha@...el.com,
	ashok.raj@...el.com, shaohua.li@...el.com, davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling

Andrew Morton wrote:
>> Whereas the resource pool is exactly the opposite of mempool: each
>> time it looks for an object in the pool and, if one exists, returns
>> it; otherwise it gets the memory from the OS while scheduling work
>> to grow the pool's objects. In fact, the work to grow the pool is
>> scheduled when the low-threshold point is hit.
> 
> I realise all that.  But I'd have thought that the mempool approach is
> actually better: use the page allocator and only deplete your reserve pool
> when the page allocator fails.
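
The mempool-style policy Andrew describes (use the regular allocator
first, deplete the reserve only when it fails) can be sketched roughly
like this; reserve_pool, pool_alloc and the sizes are illustrative
stand-ins, not the kernel's actual mempool API:

```c
#include <stdlib.h>

/* Hedged sketch: a reserve that is only touched when the ordinary
 * allocator fails.  malloc() stands in for the page allocator. */

#define RESERVE_SLOTS 4
#define OBJ_SIZE      64

struct reserve_pool {
	void *slot[RESERVE_SLOTS];
	int   count;		/* objects currently held in reserve */
};

static void pool_init(struct reserve_pool *p)
{
	p->count = 0;
	for (int i = 0; i < RESERVE_SLOTS; i++)
		p->slot[p->count++] = malloc(OBJ_SIZE);
}

/* allocator_fails simulates page-allocator exhaustion for the demo */
static void *pool_alloc(struct reserve_pool *p, int allocator_fails)
{
	if (!allocator_fails) {
		void *obj = malloc(OBJ_SIZE);
		if (obj)
			return obj;	/* common case: reserve untouched */
	}
	/* fall back to the reserve only on allocator failure */
	return p->count > 0 ? p->slot[--p->count] : NULL;
}
```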

the problem with that is that if anything downstream from the iommu 
layer ALSO needs memory, we've now eaten up the last free page and 
things go splat.

in terms of deadlock avoidance... I wonder if we need something 
similar to the swap token: once a process dips into the emergency 
pool, it becomes the only one that gets to use this pool, so that its 
entire chain of allocations will succeed, rather than each process 
only getting halfway through...
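
That swap-token-like exclusivity could look roughly like the sketch
below: the first task to dip into the emergency pool takes a token,
and only the token holder may keep drawing from it until it releases
the token. token_owner, emergency_get and emergency_done are
hypothetical names, not existing kernel interfaces:

```c
/* Hedged sketch of token-gated access to an emergency pool, so one
 * task's whole allocation chain completes instead of several tasks
 * each getting halfway. */

static int token_owner = -1;	/* task id holding the pool token */

static int emergency_get(int task_id, int *pool_objects)
{
	if (token_owner == -1)
		token_owner = task_id;	/* first dipper takes the token */
	if (token_owner != task_id)
		return 0;		/* other tasks must wait or fail */
	if (*pool_objects == 0)
		return 0;
	(*pool_objects)--;
	return 1;
}

static void emergency_done(int task_id)
{
	if (token_owner == task_id)
		token_owner = -1;	/* release for the next task */
}
```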

But yeah, these are minute details, and you can argue that either way 
is the right approach.

You can even argue for the old highmem.c approach: dip into the first 
half of the pool before going to the VM, then try kmalloc(), and if 
that fails dip into the second half of the pool.
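
That three-stage ordering can be sketched as below; this is a toy
model of the policy, with malloc() standing in for kmalloc()/the VM
and half_pool_alloc as a made-up name:

```c
#include <stdlib.h>

/* Hedged sketch of the old highmem.c ordering: consume the first half
 * of the pool, then the regular allocator, and only then the second
 * half, which acts as the true emergency reserve. */

#define POOL_TOTAL 8

static void *half_pool_alloc(int *pool_left, int kmalloc_fails)
{
	static char objs[POOL_TOTAL][64];

	/* stage 1: first half of the pool */
	if (*pool_left > POOL_TOTAL / 2)
		return objs[--(*pool_left)];

	/* stage 2: the regular allocator (kmalloc/VM stand-in) */
	if (!kmalloc_fails) {
		void *obj = malloc(64);
		if (obj)
			return obj;
	}

	/* stage 3: second half of the pool */
	return *pool_left > 0 ? objs[--(*pool_left)] : NULL;
}
```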
