Message-Id: <66acfbf79cf6f97b0935a5830703166fae3104c1.1411724723.git.jslaby@suse.cz>
Date:	Fri, 26 Sep 2014 11:43:35 +0200
From:	Jiri Slaby <jslaby@...e.cz>
To:	stable@...r.kernel.org
Cc:	linux-kernel@...r.kernel.org, Prarit Bhargava <prarit@...hat.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Michel Lespinasse <walken@...gle.com>,
	Seiji Aguchi <seiji.aguchi@....com>,
	Yang Zhang <yang.z.zhang@...el.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Janet Morgan <janet.morgan@...el.com>,
	Tony Luck <tony.luck@...el.com>,
	Ruiv Wang <ruiv.wang@...il.com>,
	Gong Chen <gong.chen@...ux.intel.com>,
	Yinghai Lu <yinghai@...nel.org>,
	"H. Peter Anvin" <hpa@...ux.intel.com>, Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 004/142] x86, cpu hotplug: Fix stack frame warning in check_irq_vectors_for_cpu_disable()

From: Prarit Bhargava <prarit@...hat.com>

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit 39424e89d64661faa0a2e00c5ad1e6dbeebfa972 upstream.

Further discussion here: http://marc.info/?l=linux-kernel&m=139073901101034&w=2

kbuild, the 0day kernel build service, outputs the warning:

arch/x86/kernel/irq.c:333:1: warning: the frame size of 2056 bytes
is larger than 2048 bytes [-Wframe-larger-than=]

because check_irq_vectors_for_cpu_disable() allocates two cpumasks on the
stack.  Fix this by moving the two cpumasks to file scope as static variables.
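
For context, struct cpumask is a bitmap sized by CONFIG_NR_CPUS, so on a
large-CPU config the two on-stack masks by themselves can consume the whole
2048-byte frame budget.  The userspace sketch below is an illustration only,
not kernel code; it assumes NR_CPUS=8192 (the exact config used by the 0day
builder is not stated here, but 8192 lines up with the 2056-byte frame above).

#include <stdio.h>

#define NR_CPUS		8192	/* assumed large (MAXSMP-style) config */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Simplified stand-in for the kernel's struct cpumask: one bit per CPU. */
struct cpumask {
	unsigned long bits[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];
};

int main(void)
{
	/* Two masks mirror affinity_new and online_new in the function. */
	printf("one cpumask: %zu bytes, two: %zu bytes (frame limit: 2048)\n",
	       sizeof(struct cpumask), 2 * sizeof(struct cpumask));
	return 0;
}

With NR_CPUS=8192 this prints 1024 and 2048 bytes, leaving no headroom for
the function's other locals.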

Reported-by: Fengguang Wu <fengguang.wu@...el.com>
Tested-by: David Rientjes <rientjes@...gle.com>
Signed-off-by: Prarit Bhargava <prarit@...hat.com>
Link: http://lkml.kernel.org/r/1390915331-27375-1-git-send-email-prarit@redhat.com
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Michel Lespinasse <walken@...gle.com>
Cc: Seiji Aguchi <seiji.aguchi@....com>
Cc: Yang Zhang <yang.z.zhang@...el.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Cc: Janet Morgan <janet.morgan@...el.com>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Ruiv Wang <ruiv.wang@...il.com>
Cc: Gong Chen <gong.chen@...ux.intel.com>
Cc: Yinghai Lu <yinghai@...nel.org>
Signed-off-by: H. Peter Anvin <hpa@...ux.intel.com>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
 arch/x86/kernel/irq.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 4207e8d1a094..39100783cf26 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -262,6 +262,14 @@ __visible void smp_trace_x86_platform_ipi(struct pt_regs *regs)
 EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
 
 #ifdef CONFIG_HOTPLUG_CPU
+
+/* These two declarations are only used in check_irq_vectors_for_cpu_disable()
+ * below, which is protected by stop_machine().  Putting them on the stack
+ * results in a stack frame overflow.  Dynamically allocating could result in a
+ * failure so declare these two cpumasks as global.
+ */
+static struct cpumask affinity_new, online_new;
+
 /*
  * This cpu is going to be removed and its vectors migrated to the remaining
  * online cpus.  Check to see if there are enough vectors in the remaining cpus.
@@ -273,7 +281,6 @@ int check_irq_vectors_for_cpu_disable(void)
 	unsigned int this_cpu, vector, this_count, count;
 	struct irq_desc *desc;
 	struct irq_data *data;
-	struct cpumask affinity_new, online_new;
 
 	this_cpu = smp_processor_id();
 	cpumask_copy(&online_new, cpu_online_mask);
-- 
2.1.0
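
The comment added in the hunk above rules out dynamic allocation because it
can fail.  For comparison only, that rejected alternative would look roughly
like the sketch below; it uses the kernel's real alloc_cpumask_var()/
free_cpumask_var() API, but the surrounding shape (and the -ENOMEM return) is
my assumption rather than code from this patch, and it only keeps the masks
off the stack when CONFIG_CPUMASK_OFFSTACK=y.

	/* Sketch of the rejected approach -- not part of the patch. */
	cpumask_var_t affinity_new, online_new;

	if (!alloc_cpumask_var(&affinity_new, GFP_KERNEL))
		return -ENOMEM;
	if (!alloc_cpumask_var(&online_new, GFP_KERNEL)) {
		free_cpumask_var(affinity_new);
		return -ENOMEM;
	}

	/* ... existing vector-counting logic, using the two masks ... */

	free_cpumask_var(online_new);
	free_cpumask_var(affinity_new);

An allocation failure here would force check_irq_vectors_for_cpu_disable() to
abort the CPU-down operation for a reason unrelated to vector exhaustion,
which is why the patch keeps the masks as file-scope statics instead.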

