Message-ID: <20081121171952.GL733@elte.hu>
Date: Fri, 21 Nov 2008 18:19:52 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Joerg Roedel <joerg.roedel@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
iommu@...ts.linux-foundation.org
Subject: Re: [PATCH 03/10] x86: add initialization code for DMA-API
debugging
* Joerg Roedel <joerg.roedel@....com> wrote:
> On Fri, Nov 21, 2008 at 05:56:28PM +0100, Ingo Molnar wrote:
> > > + return (entry->dev_addr >> HASH_FN_SHIFT) & HASH_FN_MASK;
> > > +}
> > > +
> > > +static struct dma_debug_entry *dma_entry_alloc(void)
> > > +{
> > > + gfp_t gfp = GFP_KERNEL | __GFP_ZERO;
> > > +
> > > + if (in_atomic())
> > > + gfp |= GFP_ATOMIC;
> > > +
> > > + return kmem_cache_alloc(dma_entry_cache, gfp);
> >
> > Hm, that slab allocation in the middle of the DMA mapping ops makes
> > me a bit nervous - the DMA mapping ops generally run in atomic
> > context.
> >
> > And the in_atomic() check there is a bug on !PREEMPT kernels, so it
> > won't fly.
>
> I am not sure I understand this correctly. You say the check for
> in_atomic() will break on !PREEMPT kernels?
Correct. On !PREEMPT kernels there is no check that can tell us whether
we are allowed to schedule, so in_atomic() can return false even in
atomic context. I.e. on !PREEMPT your patches will crash and burn.
> > We don't have a gfp flag passed in, as all the DMA mapping APIs
> > really expect all allocations to have been done in advance already.
>
> Hmm, I can change the code to pre-allocate a certain (configurable?)
> number of these entries and disable the checking if we run out of them.
Yeah, that's a good approach too - it's what lockdep does. Perhaps make
the maximum number of entries a Kconfig option, so it can be tuned
up/down depending on which IOMMU scheme is enabled.
Ingo