Message-ID: <4B8BDA6E.6000902@cs.helsinki.fi>
Date: Mon, 01 Mar 2010 17:17:02 +0200
From: Pekka Enberg <penberg@...helsinki.fi>
To: Stephen Rothwell <sfr@...b.auug.org.au>
CC: Linus <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-next@...r.kernel.org,
Len Brown <lenb@...nel.org>, Dave Jones <davej@...hat.com>,
Jean Delvare <khali@...ux-fr.org>, Greg KH <greg@...ah.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
Trond Myklebust <trond.myklebust@....uio.no>,
Sage Weil <sage@...dream.net>,
Christoph Lameter <cl@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>,
Rusty Russell <rusty@...tcorp.com.au>,
Ingo Molnar <mingo@...e.hu>, Al Viro <viro@...IV.linux.org.uk>,
"" <joern@...fs.org>
Subject: Re: linux-next: current pending merge fix patches
Stephen Rothwell wrote:
> 4) The slab[7] tree adds a percpu variable usage case (commit
> 9dfc6e68bfe6ee452efb1a4e9ca26a9007f2b864 "SLUB: Use this_cpu operations in
> slub"), but the percpu[8] tree removes the prefixing of percpu variables
> (commit dd17c8f72993f9461e9c19250e3f155d6d99df22 "percpu: remove per_cpu__
> prefix"), thus the fooling patch after mergeing these trees:
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 6e34309..9e86e6b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2071,7 +2071,7 @@ static inline int alloc_kmem_cache_cpus(struct kmem_cache *s, gfp_t flags)
>  		 * Boot time creation of the kmalloc array. Use static per cpu data
>  		 * since the per cpu allocator is not available yet.
>  		 */
> -		s->cpu_slab = per_cpu_var(kmalloc_percpu) + (s - kmalloc_caches);
> +		s->cpu_slab = kmalloc_percpu + (s - kmalloc_caches);
>  	else
>  		s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
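
For context on why the fix is a pure deletion: before commit dd17c8f7
("percpu: remove per_cpu__ prefix"), DEFINE_PER_CPU() emitted symbols
named per_cpu__<name>, and per_cpu_var() existed only to paste that
prefix back on at the use site. Once the prefix is gone, the wrapper has
nothing left to do. Here is a minimal user-space sketch of the old
scheme (the OLD_DEFINE_PER_CPU macro and the "counter" variable are
illustrative stand-ins, not the kernel's exact definitions):

#include <stdio.h>

/* Sketch of the pre-dd17c8f7 scheme: per-cpu symbols carried a
 * per_cpu__ prefix, hidden behind macros. */
#define OLD_DEFINE_PER_CPU(type, name)	type per_cpu__##name
#define per_cpu_var(var)		per_cpu__##var

OLD_DEFINE_PER_CPU(int, counter);	/* real symbol: per_cpu__counter */

int main(void)
{
	/* Callers had to go through per_cpu_var() to name the symbol. */
	per_cpu_var(counter) = 42;
	printf("%d\n", per_cpu_var(counter));

	/* After the prefix removal, the symbol is plain "counter" and
	 * per_cpu_var() no longer exists, which is why the merge fix
	 * above uses kmalloc_percpu directly. */
	return 0;
}
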
Thanks for the reminder, Stephen! I'll add this to slab.git as soon as
the per-cpu changes land in Linus' tree.
Pekka