Message-ID: <87r652i69e.fsf@basil.nowhere.org>
Date: Sun, 23 Nov 2008 20:36:45 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Joerg Roedel <joerg.roedel@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
iommu@...ts.linux-foundation.org
Subject: Re: [PATCH 03/10] x86: add initialization code for DMA-API debugging
Joerg Roedel <joerg.roedel@....com> writes:
> +/* Hash list to save the allocated dma addresses */
> +static struct list_head dma_entry_hash[HASH_SIZE];
Hash tables should use hlists.
> +static int hash_fn(struct dma_debug_entry *entry)
> +{
> + /*
> + * Hash function is based on the dma address.
> + * We use bits 20-27 here as the index into the hash
> + */
> + BUG_ON(entry->dev_addr == bad_dma_address);
> +
> + return (entry->dev_addr >> HASH_FN_SHIFT) & HASH_FN_MASK;
It would probably be safer to use a stronger hash like FNV.
There are a couple to reuse in include/.
> +}
> +
> +static struct dma_debug_entry *dma_entry_alloc(void)
> +{
> + gfp_t gfp = GFP_KERNEL | __GFP_ZERO;
> +
> + if (in_atomic())
> + gfp |= GFP_ATOMIC;
> +
> + return kmem_cache_alloc(dma_entry_cache, gfp);
> +}
While the basic idea is reasonable, this function is unfortunately
broken. It's not always safe to allocate memory (e.g. in the block
write-out path, which uses map_sg). You would need to use
a mempool or something.
Besides, the other problem with using GFP_ATOMIC is that it can
fail under high load, and you don't handle that case very well
(you would report a bug incorrectly). Stress tests tend to
trigger that, and reporting false positives in such a case is a very,
very bad thing: it leads to QA people putting these messages
on their blacklists.
-Andi
--
ak@...ux.intel.com