Message-Id: <20070608144207.07341ee7.akpm@linux-foundation.org>
Date:	Fri, 8 Jun 2007 14:42:07 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>
Cc:	Andreas Kleen <ak@...e.de>, linux-kernel@...r.kernel.org,
	gregkh@...e.de, muli@...ibm.com, asit.k.mallick@...el.com,
	suresh.b.siddha@...el.com, arjan@...ux.intel.com,
	ashok.raj@...el.com, shaohua.li@...el.com, davem@...emloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool
 handling

On Fri, 8 Jun 2007 14:20:54 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com> wrote:

> > This means mempools don't work for those (the previous version had
> > nonsensical constructs like GFP_ATOMIC mempool calls)
> > 
> > I haven't looked at Anil's code, but I suspect the only really robust
> > way to handle this case is to always preallocate everything. But I'm
> > not sure why that would need new library functions; it should be just
> > some simple lists that could be open coded.
> 
> Since it is practically impossible to predict how much to preallocate,
> we have a min_count+grow_count of objects allocated and we always
> allocate from this pool. If the object count goes below a certain low
> threshold (which then acts as an emergency pool), the pool grows by
> allocating new objects and adding them to the pool in the worker
> (keventd) thread.

Asking keventd to do this might be problematic: there may be code in various
dark corners of device drivers which also depends upon keventd services for
IO completion, in which case there might be obscure deadlocks, dunno.
OTOH, keventd already surely does GFP_KERNEL allocations...

But still, the whole thing seems pointless: kswapd is already doing all of
this, replenishing the page reserves.  So why not use that?
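For illustration, the kind of scheme being described looks roughly like
this (a sketch only; the names iova_pool, iova_pool_grow and low_thresh
are invented for the example, not taken from the patch):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct iova_pool {
	spinlock_t lock;
	struct list_head free;		/* pre-allocated objects */
	unsigned int count;		/* objects currently on the free list */
	unsigned int low_thresh;	/* schedule a refill below this */
	unsigned int grow_to;		/* approximate refill target */
	size_t obj_size;		/* objects must start with a struct list_head */
	struct work_struct grow_work;	/* set up elsewhere with INIT_WORK() */
};

/* Runs in keventd: may sleep, so plain GFP_KERNEL is fine here. */
static void iova_pool_grow(struct work_struct *work)
{
	struct iova_pool *pool = container_of(work, struct iova_pool, grow_work);
	unsigned long flags;

	/* pool->count is read unlocked; the refill target is only approximate. */
	while (pool->count < pool->grow_to) {
		struct list_head *obj = kmalloc(pool->obj_size, GFP_KERNEL);

		if (!obj)
			break;
		spin_lock_irqsave(&pool->lock, flags);
		list_add(obj, &pool->free);	/* list_head is at offset 0 of the object */
		pool->count++;
		spin_unlock_irqrestore(&pool->lock, flags);
	}
}

/* Callable from atomic context: only ever takes from the pre-filled list. */
static void *iova_pool_get(struct iova_pool *pool)
{
	struct list_head *obj = NULL;
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	if (!list_empty(&pool->free)) {
		obj = pool->free.next;
		list_del(obj);
		pool->count--;
	}
	if (pool->count < pool->low_thresh)
		schedule_work(&pool->grow_work);	/* ask keventd to refill */
	spin_unlock_irqrestore(&pool->lock, flags);

	return obj;
}

The shrink path Anil describes below would be the mirror image: when a
put pushes the count above a high watermark, queue work that kfree()s
the surplus back down to the minimum.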

> Again, once the IO pressure is over, the PCI driver does the unmap
> calls and we put the objects back into the preallocated pool.
> The smartness is built into the pool: as elements are put back, it
> detects that the pool count is greater than the threshold and
> automagically queues work to free objects and bring the pre-allocated
> object count back down to the minimum threshold.
> Thus this preallocated pool grows and shrinks based on demand, while
> acting as both a pre-allocated pool and an emergency pool.
> 
> Currently I have made this a set of library functions; if that is not
> correct, we can pull it out and make it part of the Intel IOMMU driver
> itself. Please do let me know your suggestions.

I'd say just remove the whole thing and use kmem_cache_alloc().

Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
there's your problem right there.
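
Concretely, the suggestion amounts to something like the following
sketch (written against the current slab API; struct iova's fields and
the function names here are illustrative, not the driver's actual
definitions):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/slab.h>

struct iova {
	unsigned long pfn_lo;	/* illustrative fields only */
	unsigned long pfn_hi;
};

static struct kmem_cache *iova_cache;

static int __init iova_cache_init(void)
{
	iova_cache = kmem_cache_create("iova", sizeof(struct iova), 0,
				       SLAB_HWCACHE_ALIGN, NULL);
	return iova_cache ? 0 : -ENOMEM;
}

/*
 * In the map/unmap paths: GFP_NOIO may sleep and wait for reclaim, but
 * will not recurse into the block layer, whereas GFP_ATOMIC cannot
 * sleep at all and so can simply fail under memory pressure.
 */
static struct iova *alloc_iova_entry(void)
{
	return kmem_cache_alloc(iova_cache, GFP_NOIO);
}

static void free_iova_entry(struct iova *iova)
{
	kmem_cache_free(iova_cache, iova);
}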

If for some reason you really can't do that (and a requirement for
allocation-in-interrupt is the only valid reason, really) and if you indeed
can demonstrate memory allocation failures with certain workloads then
let's take a look at that.  As I said, attaching a reserve pool to your
slab cache might be a suitable approach.  But none of these things are
magic: if memory allocation failures or deadlocks or livelocks are
demonstrable with the reserves absent, then they'll also be possible with
the reserves present.

Unless you use mempools, and can sleep.
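
That is, something along these lines, if the mapping paths can in fact
sleep (again a sketch: MIN_IOVA_RESERVE is an arbitrary value and
iova_cache is the slab cache from the previous snippet):

#include <linux/mempool.h>

#define MIN_IOVA_RESERVE 64	/* illustrative reserve size */

static mempool_t *iova_mempool;

static int __init iova_mempool_init(void)
{
	/* Keep MIN_IOVA_RESERVE objects in reserve on top of the slab cache. */
	iova_mempool = mempool_create_slab_pool(MIN_IOVA_RESERVE, iova_cache);
	return iova_mempool ? 0 : -ENOMEM;
}

static struct iova *alloc_iova_entry_reserved(void)
{
	/*
	 * GFP_NOIO, not GFP_ATOMIC: mempool's "never fails" guarantee
	 * only holds when the caller can sleep and wait for another
	 * user to return an element to the reserve.
	 */
	return mempool_alloc(iova_mempool, GFP_NOIO);
}

static void free_iova_entry_reserved(struct iova *iova)
{
	mempool_free(iova, iova_mempool);
}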
