Message-ID: <20090806161552.GE7198@alberich.amd.com>
Date: Thu, 6 Aug 2009 18:15:52 +0200
From: Andreas Herrmann <andreas.herrmann3@....com>
To: Stephen Rothwell <sfr@...b.auug.org.au>
CC: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
Borislav Petkov <borislav.petkov@....com>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH 2/5] x86: Provide CPU topology information for
multi-node processors
On Thu, Aug 06, 2009 at 06:30:46PM +1000, Stephen Rothwell wrote:
> Hi Andreas,
>
> On Wed, 5 Aug 2009 17:48:11 +0200 Andreas Herrmann <andreas.herrmann3@....com> wrote:
> >
> > @@ -1061,8 +1070,10 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
> > for_each_possible_cpu(i) {
> > alloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
> > alloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
> > + alloc_cpumask_var(&per_cpu(cpu_node_map, i), GFP_KERNEL);
> > alloc_cpumask_var(&cpu_data(i).llc_shared_map, GFP_KERNEL);
> > cpumask_clear(per_cpu(cpu_core_map, i));
> > + cpumask_clear(per_cpu(cpu_node_map, i));
>
> I noticed this in linux-next ... you can use zalloc_cpumask_var() instead
> of alloc_cpumask_var() followed by cpumask_clear().
I know; there is a collision with a patch in linux-next that replaced
alloc_cpumask_var()/cpumask_clear() with the zalloc version.
(a) Either that patch should be adapted to also cover the new allocation.
(b) Or I can change all those allocations to zalloc in my patch.
Make your choice:
[ ] (a)
[ ] (b)
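
FWIW, with option (b) the allocation loop from the hunk above would end
up looking roughly like this (untested sketch, just to illustrate the
conversion; zalloc_cpumask_var() hands back an already zeroed mask, so
the separate cpumask_clear() calls become unnecessary):

	for_each_possible_cpu(i) {
		/* zalloc = alloc + clear, so no cpumask_clear() needed */
		zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
		zalloc_cpumask_var(&per_cpu(cpu_node_map, i), GFP_KERNEL);
		zalloc_cpumask_var(&cpu_data(i).llc_shared_map, GFP_KERNEL);
	}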
Thanks,
Andreas
--
Operating | Advanced Micro Devices GmbH
System | Karl-Hammerschmidt-Str. 34, 85609 Dornach b. München, Germany
Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni
Center | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
(OSRC) | Registergericht München, HRB Nr. 43632