Message-Id: <20160301234538.593853257@linuxfoundation.org>
Date:	Tue, 01 Mar 2016 23:55:36 +0000
From:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:	<linux-kernel@...r.kernel.org>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	<stable@...r.kernel.org>, Thomas Gleixner <tglx@...utronix.de>,
	Borislav Petkov <bp@...en8.de>,
	Jiang Liu <jiang.liu@...ux.intel.com>,
	Jeremiah Mahler <jmmahler@...il.com>,
	<andy.shevchenko@...il.com>, Guenter Roeck <linux@...ck-us.net>
Subject: [PATCH 4.4 332/342] x86/irq: Clear move_in_progress before sending cleanup IPI

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@...utronix.de>

commit c1684f5035b60e9f98566493e869496fb5de1d89 upstream.

send_cleanup_vector() fiddles with the old_domain mask unprotected because it
relies on the protection by the move_in_progress flag. But this is fatal, as
the flag is reset only after the IPI has been sent. So a CPU which receives the
IPI can still see the flag set and therefore ignores the cleanup request. If no
other cleanup request happens, the vector stays stale on that CPU, and in case
of an irq removal the vector still persists. That can lead to a use-after-free
when the next cleanup IPI happens.
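
The broken ordering can be modelled in plain userspace C. This is only an
illustration of the ordering described above, not kernel code; the names
(cleanup_requested, cleaned_up) are made up for the example, and whether the
receiver actually observes the stale flag depends on timing:

/* Build with: gcc -pthread race.c -o race */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int move_in_progress = 1;	/* models data->move_in_progress */
static atomic_int cleanup_requested;	/* models the cleanup IPI */
static atomic_int cleaned_up;

static void *receiver(void *arg)
{
	(void)arg;
	/* Wait for the "IPI". */
	while (!atomic_load(&cleanup_requested))
		;
	/* The handler skips vectors whose move is still in progress. */
	if (atomic_load(&move_in_progress)) {
		printf("receiver: move still in progress, ignoring cleanup\n");
		return NULL;
	}
	atomic_store(&cleaned_up, 1);
	printf("receiver: vector cleaned up\n");
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, receiver, NULL);
	/* Pre-patch ordering: send the request first ... */
	atomic_store(&cleanup_requested, 1);
	/* ... and clear the flag only afterwards. */
	atomic_store(&move_in_progress, 0);
	pthread_join(t, NULL);
	if (!atomic_load(&cleaned_up))
		printf("vector left stale; a later removal can use-after-free\n");
	return 0;
}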

Protect the code with vector_lock and clear move_in_progress before sending
the IPI.
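
For reference, this is roughly how __send_cleanup_vector() reads with the
patch applied. The lines between the two hunks in the diff below are not part
of the quoted context, so the allocation-failure fallback and the
cpumask_and() path shown here are reconstructed from the surrounding 4.4 code
and may differ in detail:

static void __send_cleanup_vector(struct apic_chip_data *data)
{
	cpumask_var_t cleanup_mask;

	/* Take vector_lock so old_domain cannot change underneath us. */
	raw_spin_lock(&vector_lock);
	/* Clear the flag *before* the IPI so receivers see it cleared. */
	data->move_in_progress = 0;
	if (unlikely(!alloc_cpumask_var(&cleanup_mask, GFP_ATOMIC))) {
		unsigned int i;

		/* Allocation failed: IPI each CPU of old_domain one by one. */
		for_each_cpu_and(i, data->old_domain, cpu_online_mask)
			apic->send_IPI_mask(cpumask_of(i),
					    IRQ_MOVE_CLEANUP_VECTOR);
	} else {
		cpumask_and(cleanup_mask, data->old_domain, cpu_online_mask);
		apic->send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
		free_cpumask_var(cleanup_mask);
	}
	raw_spin_unlock(&vector_lock);
}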

This does not plug the race which Joe reported because:

CPU0                          CPU1                      CPU2
lock_vector()
data->move_in_progress=0
sendIPI()
unlock_vector()
                              set_affinity()
                              assign_irq_vector()
                              lock_vector()             handle_IPI
                              move_in_progress = 1      lock_vector()
                              unlock_vector()
                                                        move_in_progress == 1

The full fix comes with a later patch.

Reported-and-tested-by: Joe Lawrence <joe.lawrence@...atus.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Borislav Petkov <bp@...en8.de>
Cc: Jiang Liu <jiang.liu@...ux.intel.com>
Cc: Jeremiah Mahler <jmmahler@...il.com>
Cc: andy.shevchenko@...il.com
Cc: Guenter Roeck <linux@...ck-us.net>
Link: http://lkml.kernel.org/r/20151231160106.892412198@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 arch/x86/kernel/apic/vector.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -530,6 +530,8 @@ static void __send_cleanup_vector(struct
 {
 	cpumask_var_t cleanup_mask;
 
+	raw_spin_lock(&vector_lock);
+	data->move_in_progress = 0;
 	if (unlikely(!alloc_cpumask_var(&cleanup_mask, GFP_ATOMIC))) {
 		unsigned int i;
 
@@ -541,7 +543,7 @@ static void __send_cleanup_vector(struct
 		apic->send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
 		free_cpumask_var(cleanup_mask);
 	}
-	data->move_in_progress = 0;
+	raw_spin_unlock(&vector_lock);
 }
 
 void send_cleanup_vector(struct irq_cfg *cfg)
