Message-ID: <84144f020811201135l7b83404etb311a7b62390dd19@mail.gmail.com>
Date: Thu, 20 Nov 2008 21:35:33 +0200
From: "Pekka Enberg" <penberg@...helsinki.fi>
To: "Catalin Marinas" <catalin.marinas@....com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
Hi Catalin,
On Thu, Nov 20, 2008 at 1:30 PM, Catalin Marinas
<catalin.marinas@....com> wrote:
> +#ifdef CONFIG_SMP
> +#define cache_line_align(x) L1_CACHE_ALIGN(x)
> +#else
> +#define cache_line_align(x) (x)
> +#endif
Maybe this should be put in <linux/cache.h> and called cache_line_align_in_smp()?
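Something along these lines, I mean (just a sketch sitting next to the existing L1_CACHE_ALIGN() definition; cache_line_align_in_smp() is only the name I'm suggesting, not an existing macro):

/* in <linux/cache.h>, next to L1_CACHE_ALIGN() -- name is only a suggestion */
#ifdef CONFIG_SMP
#define cache_line_align_in_smp(x)	L1_CACHE_ALIGN(x)
#else
#define cache_line_align_in_smp(x)	(x)
#endif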
> +/*
> + * Object allocation
> + */
> +static void *fast_cache_alloc(struct fast_cache *cache)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> + struct list_head *entry;
> + struct fast_cache_page *page;
> +
> + local_irq_save(flags);
> +
> + if (list_empty(&cache->free_list[cpu]))
> + __fast_cache_grow(cache, cpu);
> +
> + entry = cache->free_list[cpu].next;
> + page = entry_to_page(entry);
> + list_del(entry);
> + page->free_nr[cpu]--;
> + BUG_ON(page->free_nr[cpu] < 0);
> + fast_cache_dec_free(cache, cpu);
> +
> + local_irq_restore(flags);
> + put_cpu_no_resched();
> +
> + return (void *)(entry + 1);
> +}
The slab allocators are pretty fast as well. Is there a reason you
can't use kmalloc() or kmem_cache_alloc() for this? You can fix the
recursion problem by adding a new GFP_NOLEAKTRACK flag that makes sure
memleak hooks are not invoked if it's set.
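Roughly something like this (only a sketch to illustrate the idea; the flag name, the bit value, and the hook signature below are made up, not an existing interface):

/* hypothetical spare __GFP bit for the new flag */
#define GFP_NOLEAKTRACK	((__force gfp_t)0x400000u)

/* hypothetical hook called from the slab allocators after each allocation */
static inline void memleak_alloc_hook(const void *ptr, size_t size, gfp_t gfp)
{
	if (gfp & GFP_NOLEAKTRACK)
		return;	/* don't track kmemleak's own metadata allocations */

	memleak_alloc(ptr, size);	/* record the object as usual (signature assumed) */
}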