Date:	Thu, 6 Aug 2009 19:44:33 +0200
From:	Andreas Herrmann <andreas.herrmann3@....com>
To:	Stephen Rothwell <sfr@...b.auug.org.au>,
	Ingo Molnar <mingo@...e.hu>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
	Borislav Petkov <borislav.petkov@....com>,
	Rusty Russell <rusty@...tcorp.com.au>
Subject: [PATCH] x86, smpboot: use zalloc_cpumask_var instead of alloc/clear

On Thu, Aug 06, 2009 at 06:15:52PM +0200, Andreas Herrmann wrote:
> On Thu, Aug 06, 2009 at 06:30:46PM +1000, Stephen Rothwell wrote:
> > Hi Andreas,
> > 
> > On Wed, 5 Aug 2009 17:48:11 +0200 Andreas Herrmann <andreas.herrmann3@....com> wrote:
> > >
> > > @@ -1061,8 +1070,10 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
> > >  	for_each_possible_cpu(i) {
> > >  		alloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
> > >  		alloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
> > > +		alloc_cpumask_var(&per_cpu(cpu_node_map, i), GFP_KERNEL);
> > >  		alloc_cpumask_var(&cpu_data(i).llc_shared_map, GFP_KERNEL);
> > >  		cpumask_clear(per_cpu(cpu_core_map, i));
> > > +		cpumask_clear(per_cpu(cpu_node_map, i));
> > 
> > I noticed this in linux-next ... you can use zalloc_cpumask_var() instead
> > of alloc_cpumask_var() followed by cpumask_clear().
> 
> I know; there is a collision with a patch in linux-next that replaced
> alloc_cpumask_var/cpumask_clear with the zalloc version.
> 
> (a) Either that patch should be adapted to also convert the new allocation,
> (b) or I can change all of those allocations to zalloc with my patch.
> 
> Make your choice:
> 
> [ ] (a)
> [ ] (b)


My choice is
 [X] (c) Do this in a separate patch.

See below.
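For reference, the zalloc variant just hands back a mask that is already
zeroed, which is why the separate cpumask_clear() calls become redundant.
A minimal sketch of the idea (not the exact kernel source; the name
sketch_zalloc_cpumask_var is made up for illustration):

/*
 * Roughly what zalloc_cpumask_var() buys us: one call instead of
 * alloc + clear. With CONFIG_CPUMASK_OFFSTACK=y the real helper asks
 * the allocator for zeroed memory via __GFP_ZERO; without OFFSTACK
 * the mask is a fixed-size array, so it is cleared explicitly and
 * the "allocation" trivially succeeds.
 */
static inline bool sketch_zalloc_cpumask_var(cpumask_var_t *mask,
					     gfp_t flags)
{
#ifdef CONFIG_CPUMASK_OFFSTACK
	return alloc_cpumask_var(mask, flags | __GFP_ZERO);
#else
	cpumask_clear(*mask);
	return true;
#endif
}

Either way the masks start out empty, which set_cpu_sibling_map() relies
on before it begins setting bits.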

Regards,
Andreas

-- 
From 5d7e6138d4bc2d8c5191e6bf9cfbe9d5a27d79b0 Mon Sep 17 00:00:00 2001
From: Andreas Herrmann <andreas.herrmann3@....com>
Date: Thu, 6 Aug 2009 19:31:58 +0200
Subject: [PATCH] x86, smpboot: use zalloc_cpumask_var instead of alloc/clear

Suggested-by: Stephen Rothwell <sfr@...b.auug.org.au>
Signed-off-by: Andreas Herrmann <andreas.herrmann3@....com>
---
 arch/x86/kernel/smpboot.c |   12 ++++--------
 1 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f50af56..f797214 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1068,14 +1068,10 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 #endif
 	current_thread_info()->cpu = 0;  /* needed? */
 	for_each_possible_cpu(i) {
-		alloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
-		alloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
-		alloc_cpumask_var(&per_cpu(cpu_node_map, i), GFP_KERNEL);
-		alloc_cpumask_var(&cpu_data(i).llc_shared_map, GFP_KERNEL);
-		cpumask_clear(per_cpu(cpu_core_map, i));
-		cpumask_clear(per_cpu(cpu_node_map, i));
-		cpumask_clear(per_cpu(cpu_sibling_map, i));
-		cpumask_clear(cpu_data(i).llc_shared_map);
+		zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_node_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&cpu_data(i).llc_shared_map, GFP_KERNEL);
 	}
 	set_cpu_sibling_map(0);
 
-- 
1.6.3.3


