Open Source and information security mailing list archives
Date: Fri, 12 Dec 2014 12:52:44 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Pranith Kumar <bobby.prani@...il.com>
cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
    "H. Peter Anvin" <hpa@...or.com>,
    "maintainer:X86 ARCHITECTURE..." <x86@...nel.org>,
    Toshi Kani <toshi.kani@...com>, Igor Mammedov <imammedo@...hat.com>,
    Dave Hansen <dave.hansen@...ux.intel.com>, Lan Tianyu <tianyu.lan@...el.com>,
    "open list:X86 ARCHITECTURE..." <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] smpboot: Check for successfull allocation of cpumask vars

On Tue, 9 Dec 2014, Pranith Kumar wrote:

> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 7a8f584..35bc3f1 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1083,6 +1083,7 @@ static void __init smp_cpu_index_default(void)
>  void __init native_smp_prepare_cpus(unsigned int max_cpus)
>  {
>  	unsigned int i;
> +	bool ret = true;

It's better to declare this in the scope of the for_each_possible_cpu()
loop so we don't implicitly rely on the value remaining true from the
previous iteration if the implementation subsequently changes.

> 
>  	preempt_disable();
>  	smp_cpu_index_default();
> @@ -1096,9 +1097,23 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
> 
>  	current_thread_info()->cpu = 0;  /* needed? */
>  	for_each_possible_cpu(i) {
> -		zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
> -		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
> -		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
> +		ret &= zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i), GFP_KERNEL);
> +		ret &= zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
> +		ret &= zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
> +
> +		if (!ret) {
> +			/* cpumask allocation failed, remove this and next cpus from
> +			 * possible/present/online/active masks
> +			 */

Please read Documentation/CodingStyle.
> +			pr_warn("cpumask allocation failed!\n");

This is unnecessary since zalloc_cpumask_var() failure will already report
this incident and provide a stacktrace.

> +			for (; i < nr_cpu_ids; i = cpumask_next(i, &cpu_possible_mask)) {
> +				set_cpu_possible(i, false);
> +				set_cpu_active(i, false);
> +				set_cpu_online(i, false);
> +				set_cpu_present(i, false);
> +			}
> +			break;

Looks like a memory leak of some maps that may have been successfully
allocated.

> +		}
>  	}
>  	set_cpu_sibling_map(0);