Message-ID: <1311076740.5161.3.camel@jaguar>
Date: Tue, 19 Jul 2011 14:59:00 +0300
From: Pekka Enberg <penberg@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Li Zefan <lizf@...fujitsu.com>,
Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>,
Ingo Molnar <mingo@...e.hu>,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>, cl@...ux.com
Subject: Re: [RFA][PATCH] trace/mm: Remove kmem.h from slab_def.h
On Wed, 2011-07-13 at 23:31 -0400, Steven Rostedt wrote:
> [ RFA - Request for Acks ]
>
> Having event headers in other headers can cause the necessary macro
> magic to break. For example, if we have in some C file:
>
> #define CREATE_TRACE_POINTS
> #include <trace/events/foo.h>
>
> But then trace/events/foo.h may, for some reason, need to include
> linux/slab.h, which includes linux/slab_def.h, which happens to include
> trace/events/kmem.h. That causes the TRACE_EVENT() macros in that file
> to be processed as well, and all hell breaks loose.
>
> To avoid this problem, it is better to just include trace/events/kmem.h
> directly in the C files that need it. Most of them already include it;
> only mm/slab.c needed the include added.
>
> I tested this with ktest, running 10 randconfigs each with SLAB, SLUB,
> and SLOB enabled (for a total of 30 randconfig builds). All 30 builds
> succeeded, so this change should not be an issue. SLUB and SLOB already
> include the header from their C files.
>
> Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
Acked-by: Pekka Enberg <penberg@...nel.org>
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index 83203ae..d12c70a 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -15,8 +15,6 @@
> #include <asm/cache.h> /* kmalloc_sizes.h needs L1_CACHE_BYTES */
> #include <linux/compiler.h>
>
> -#include <trace/events/kmem.h>
> -
> /*
> * Enforce a minimum alignment for the kmalloc caches.
> * Usually, the kmalloc caches are cache_line_size() aligned, except when
> diff --git a/mm/slab.c b/mm/slab.c
> index d96e223..e4bd1ec 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -121,6 +121,8 @@
> #include <asm/tlbflush.h>
> #include <asm/page.h>
>
> +#include <trace/events/kmem.h>
> +
> /*
> * DEBUG - 1 for kmem_cache_create() to honour; SLAB_RED_ZONE & SLAB_POISON.
> * 0 for faster, smaller code (especially in the critical paths).
>
>