Message-Id: <1182407374.21117.106.camel@twins>
Date: Thu, 21 Jun 2007 08:29:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Arjan van de Ven <arjan@...ux.intel.com>
Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@...el.com>,
"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
ak@...e.de, gregkh@...e.de, muli@...ibm.com, ashok.raj@...el.com,
davem@...emloft.net, clameter@....com
Subject: Re: [Intel IOMMU 06/10] Avoid memory allocation
failures in dma map api calls
On Wed, 2007-06-20 at 23:11 -0700, Arjan van de Ven wrote:
> Peter Zijlstra wrote:
> > What I'm saying is that if you do use the reserves, you should ensure
> > the use is bounded. I'm not seeing anything like that.
>
> each mapping takes at most 3 pages
That is a start, but the thing I'm worried most about is non-reclaim-related
devices using the reserves when in dire straits.
> > This is a generic API, who is to ensure some other non-swap device will
> > not deplete memory and deadlock the reclaim process?
>
> that information is not available at this level ;(
Can we bring it there?
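To make that concrete, a hypothetical sketch (dev_is_reclaim_path() and
iommu_domain_cache are invented names; __GFP_EMERGENCY is the flag from
this series, not mainline): only devices known to sit in the reclaim
path, like the swap device, would get to touch the reserves at all.

#include <linux/device.h>
#include <linux/slab.h>

static void *iommu_pgtable_alloc(struct device *dev, gfp_t gfp)
{
	if (dev_is_reclaim_path(dev))	/* invented predicate */
		gfp |= __GFP_EMERGENCY;	/* flag from this series */
	return kmem_cache_alloc(iommu_domain_cache, gfp);
}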
> > Also, explain to me how an IOMMU is different from bounce buffers? They
> > both do the same thing, no? They both need memory in order to complete
> > DMA.
>
> bounce buffers happen in a place where you can sleep.... that makes a
> lot of difference.
Right, can't you stick part of this work there?
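The difference in a nutshell (simplified sketch; these are not the
actual call sites):

#include <linux/gfp.h>

/* block layer bounce path: process context, may sleep, so
 * reclaim can make progress before we have to give up */
static struct page *bounce_page(void)
{
	return alloc_page(GFP_NOIO);
}

/* dma_map_*() path: callers hold spinlocks or run in IRQ
 * context, so we must not sleep and instead fail fast */
static struct page *iommu_pgtable_page(void)
{
	return alloc_page(GFP_ATOMIC);
}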
> >
> > Is it just a broken API you're working against? If so, isn't the Linux
> > way to fix these things, that is why we have the source code after all.
>
> well yes and no... the other IOMMUs snuck in as well... it's not
> entirely fair to hold this one back until a 2-year, 1400-driver
> project is completed ;(
I understand, but at some point we should stop; we cannot keep taking
crap in deference to such things.
Also, the other IOMMU code you pointed me to was happy to fail; it did
not attempt to use the emergency reserves.
But you left out the mempools question again. I have read the earlier
threads, and it was said mempools are no good because they first deplete
the GFP_ATOMIC reserves and then downstream allocs could go splat.
PF_MEMALLOC/GFP_EMERGENCY has exactly the same problem...
So why no mempools?
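For reference, the shape I mean (untested sketch; MAX_INFLIGHT is a
made-up bound, sized from the 3-pages-per-mapping figure above):

#include <linux/mempool.h>
#include <linux/init.h>

#define MAX_INFLIGHT		128			/* invented for illustration */
#define PGTABLE_POOL_MIN	(3 * MAX_INFLIGHT)	/* 3 pages per mapping */

static mempool_t *pgtable_pool;

static int __init pgtable_pool_init(void)
{
	pgtable_pool = mempool_create_page_pool(PGTABLE_POOL_MIN, 0);
	return pgtable_pool ? 0 : -ENOMEM;
}

/* mempool_alloc() tries the page allocator first -- with GFP_ATOMIC
 * that already dips into the atomic reserves, which is the objection
 * above -- and only then hands out a preallocated page, refilled by
 * mempool_free() as mappings are torn down. */
static struct page *pgtable_page_alloc(void)
{
	return mempool_alloc(pgtable_pool, GFP_ATOMIC);
}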