Message-Id: <1182326799.21117.19.camel@twins>
Date: Wed, 20 Jun 2007 10:06:39 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
ak@...e.de, gregkh@...e.de, muli@...ibm.com,
suresh.b.siddha@...el.com, arjan@...ux.intel.com,
ashok.raj@...el.com, davem@...emloft.net, clameter@....com
Subject: Re: [Intel IOMMU 06/10] Avoid memory allocation failures in dma
map api calls
On Tue, 2007-06-19 at 14:37 -0700, Keshavamurthy, Anil S wrote:
> plain text document attachment (intel_iommu_pf_memalloc.patch)
> Intel IOMMU driver needs memory during DMA map calls to set up its internal
> page tables and other data structures. These DMA map calls are mostly made
> in interrupt context or with a spinlock held by the upper-level drivers
> (network/storage drivers), so in order to avoid memory allocation failures
> due to low-memory conditions, this patch makes its memory allocations after
> temporarily setting the PF_MEMALLOC flag for the current task.
>
> We evaluated mempools as a backup when kmem_cache_alloc() fails
> and found that mempools are really not useful here because
> 1) We don't know for sure how much to reserve in advance
So you just unleashed an unbounded allocation context on PF_MEMALLOC?
That seems like a really, really bad idea.
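For reference, the pattern being proposed looks roughly like this (my
sketch, not the actual patch; the helper name and the kmem_cache argument
are made up):

	#include <linux/sched.h>
	#include <linux/slab.h>

	/* Sketch: ignore the watermarks for one allocation by temporarily
	 * setting PF_MEMALLOC on the current task, then restoring it. */
	static void *iommu_alloc_pf_memalloc(struct kmem_cache *cachep,
					     gfp_t gfp)
	{
		/* remember whether PF_MEMALLOC was already set */
		unsigned long pflags = current->flags & PF_MEMALLOC;
		void *ptr;

		current->flags |= PF_MEMALLOC;
		ptr = kmem_cache_alloc(cachep, gfp);
		if (!pflags)
			current->flags &= ~PF_MEMALLOC;

		return ptr;
	}

Nothing here bounds how much memory such allocations can pull from the
emergency reserves, which is exactly the problem.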
> 2) And mempools are not useful for the GFP_ATOMIC case (as we call the
> memory allocation functions with GFP_ATOMIC)
Mempools work as intended with GFP_ATOMIC: the pool is filled up to the
specified number of elements using GFP_KERNEL at creation time. This
gives GFP_ATOMIC allocations nr_elements extra items to fall back on
once the regular allocation path starts failing.
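Something like this (sketch only; the names and numbers are made up)
would give the driver a guaranteed reserve without touching PF_MEMALLOC:

	#include <linux/mempool.h>
	#include <linux/slab.h>

	static mempool_t *iommu_pool;

	/* Driver init, process context: the pool is pre-filled with 128
	 * objects of 64 bytes each, allocated with GFP_KERNEL. */
	static int iommu_pool_init(void)
	{
		iommu_pool = mempool_create_kmalloc_pool(128, 64);
		return iommu_pool ? 0 : -ENOMEM;
	}

	/* DMA map path, may run in interrupt context: GFP_ATOMIC is tried
	 * first, and only when that fails does mempool_alloc hand out one
	 * of the reserved objects. */
	static void *iommu_alloc_obj(void)
	{
		return mempool_alloc(iommu_pool, GFP_ATOMIC);
	}

	static void iommu_free_obj(void *obj)
	{
		mempool_free(obj, iommu_pool);
	}

Freeing back into a depleted pool refills the reserve, so the GFP_ATOMIC
path is guaranteed to succeed as long as no more than 128 objects are in
flight at once.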
> With the PF_MEMALLOC flag set in current->flags, the VM subsystem skips
> the watermark checks before allocating memory, thus guaranteeing memory
> down to the last free page.
PF_MEMALLOC as it stands is meant to salvage the VM from the typical VM
deadlock: the allocations needed to free memory must not themselves fail
for lack of memory. Using it the way you do now is not something a driver
should ever do, and I'm afraid I will have to strongly oppose this patch.
You really, really want to calculate an upper bound on your memory
consumption and reserve that much.
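For example (the numbers are pulled out of thin air, just to show the
shape of the calculation):

	/* Assumed limits, purely illustrative: */
	#define MAX_OUTSTANDING_MAPS	1024	/* concurrent dma_map calls */
	#define PT_PAGES_PER_MAP	4	/* worst-case page-table pages per map */

	/* Worst case the driver must be able to cover from its own
	 * reserve, without help from the page allocator: */
	#define IOMMU_RESERVE_NR	(MAX_OUTSTANDING_MAPS * PT_PAGES_PER_MAP)

and then size the mempool (or whatever reserve mechanism you pick) to
IOMMU_RESERVE_NR elements.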
So, I'm afraid I'll have to..
NACK!