Message-ID: <20081123112818.GC29663@elte.hu>
Date: Sun, 23 Nov 2008 12:28:18 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Joerg Roedel <joro@...tes.org>
Cc: Joerg Roedel <joerg.roedel@....com>, netdev@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
iommu@...ts.linux-foundation.org
Subject: Re: [PATCH 03/10] x86: add initialization code for DMA-API
	debugging

* Joerg Roedel <joro@...tes.org> wrote:

> On Fri, Nov 21, 2008 at 06:43:48PM +0100, Ingo Molnar wrote:
> >
> > * Joerg Roedel <joerg.roedel@....com> wrote:
> >
> > > +static struct list_head dma_entry_hash[HASH_SIZE];
> > > +
> > > +/* A slab cache to allocate dma_map_entries fast */
> > > +static struct kmem_cache *dma_entry_cache;
> > > +
> > > +/* lock to protect the data structures */
> > > +static DEFINE_SPINLOCK(dma_lock);
> >
> > some more generic comments about the data structure: its main
> > purpose is to provide a mapping based on (dev, addr). There's little
> > if any cross-entry interaction - only same-address+same-dev DMA is
> > checked.
> >
> > 1)
> >
> > the hash:
> >
> > + return (entry->dev_addr >> HASH_FN_SHIFT) & HASH_FN_MASK;
> >
> > should mix in entry->dev as well - that way we get not just per
> > address but per device hash space separation as well.
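
i.e. something like this (untested sketch - the pointer's always-zero
low bits are shifted away before mixing):

  static int hash_fn(struct dma_debug_entry *entry)
  {
          unsigned long dev = (unsigned long)entry->dev;

          /*
           * Mix the device pointer into the hash so we get per-device
           * separation of the hash space, not just per-address:
           */
          return ((entry->dev_addr >> HASH_FN_SHIFT) ^
                  (dev >> L1_CACHE_SHIFT)) & HASH_FN_MASK;
  }
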
> >
> > 2)
> >
> > HASH_FN_SHIFT is 1MB chunks right now - that's probably fine in
> > practice, albeit perhaps a bit too coarse. There's seldom any
> > coherency between the physical addresses of DMA - we rarely have any
> > real (performance-relevant) physical co-location of DMA addresses
> > beyond 4K granularity. So using 1MB chunking here will discard a good
> > deal of random low bits we should be hashing on.
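
i.e. (sketch - assuming the 1MB figure comes from a shift of 20):

  -#define HASH_FN_SHIFT   20              /* 1MB chunks */
  +#define HASH_FN_SHIFT   PAGE_SHIFT      /* 4K chunks  */
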
> >
> > 3)
> >
> > And the most scalable locking would be per hash bucket locking - no
> > global lock is needed. The bucket hash heads should probably be
> > cacheline sized - so we'd get one lock per bucket.
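
i.e. roughly (sketch, made-up names):

  /* one lock per hash bucket, each bucket head on its own cacheline: */
  struct hash_bucket {
          struct list_head list;
          spinlock_t       lock;
  } ____cacheline_aligned_in_smp;

  static struct hash_bucket dma_entry_hash[HASH_SIZE];

and every operation then only takes the lock of its own bucket:

  struct hash_bucket *bucket = &dma_entry_hash[hash_fn(entry)];
  unsigned long flags;

  spin_lock_irqsave(&bucket->lock, flags);
  list_add_tail(&entry->list, &bucket->list);
  spin_unlock_irqrestore(&bucket->lock, flags);
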
>
> Hmm, I just had the idea of saving this data in struct device. How
> about that? The locking should scale too, and we can extend it more
> easily. For example, it simplifies a per-device disable function for
> the checking. Another future feature might be leak tracing.
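
( just to make sure we mean the same thing - something like this,
  roughly? sketch, made-up names:

  /* per-device DMA debugging state, hanging off struct device: */
  struct dma_debug_dev_state {
          struct list_head entries;       /* active mappings */
          spinlock_t       lock;          /* protects the list */
          bool             disabled;      /* per-device off switch */
  };
)
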
that will help with spreading the hash across devices, but it brings
in lifetime issues: you must be absolutely sure all DMA has drained at
the point a device is deinitialized.

Dunno ... i think it's still better to have a central hash for all DMA
ops that is aware of per-device details.

The moment we spread this out to struct device we've lost the ability
to _potentially_ do inter-device checking. (for example, to make sure
no other device is DMA-ing to the same address - which is a possible
bug pattern as well, although right now Linux does not really avoid it
actively.)
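
Something like this, say (sketch, made-up names - and it hand-waves
the hash details: with the device mixed into the hash, same-address
entries of other devices would sit in other buckets):

  /* warn if some other device has an active mapping at this address: */
  static void check_foreign_dma(struct hash_bucket *bucket,
                                struct dma_debug_entry *ref)
  {
          struct dma_debug_entry *entry;

          list_for_each_entry(entry, &bucket->list, list) {
                  if (entry->dev_addr == ref->dev_addr &&
                      entry->dev != ref->dev)
                          WARN_ONCE(1, "two devices DMA-ing to the same bus address\n");
          }
  }
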
Hm?

Btw., also have a look at lib/debugobjects.c: i think we should also
consider extending that facility and then just hook it up to the DMA
ops. The DMA checking is kind of a similar "op lifetime" scenario -
it would just need a few extras added to debugobjects. Note how it
already has pooling, a good hash, etc.
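
The hookup could be as simple as this (sketch - it keys on the
cpu-side address, because that's what debugobjects can track today;
keying on (dev, dev_addr) would be one of those extras):

  #include <linux/debugobjects.h>

  static struct debug_obj_descr dma_debug_descr = {
          .name = "dma_mapping",
  };

  /* called from the dma_map_*() paths: */
  static void dma_debug_map(void *cpu_addr)
  {
          debug_object_init(cpu_addr, &dma_debug_descr);
          debug_object_activate(cpu_addr, &dma_debug_descr);
  }

  /* called from the dma_unmap_*() paths: */
  static void dma_debug_unmap(void *cpu_addr)
  {
          debug_object_deactivate(cpu_addr, &dma_debug_descr);
          debug_object_free(cpu_addr, &dma_debug_descr);
  }
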
Ingo