Message-Id: <20170710171049.047440518@linuxfoundation.org>
Date:   Mon, 10 Jul 2017 19:11:01 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Andre Przywara <andre.przywara@....com>,
        Juergen Gross <jgross@...e.com>
Subject: [PATCH 4.11 34/36] xen: avoid deadlock in xenbus driver

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Juergen Gross <jgross@...e.com>

commit 1a3fc2c402810bf336882e695abd1678dbc8d279 upstream.

There has been a report about a deadlock in the xenbus driver:

[  247.979498] ======================================================
[  247.985688] WARNING: possible circular locking dependency detected
[  247.991882] 4.12.0-rc4-00022-gc4b25c0 #575 Not tainted
[  247.997040] ------------------------------------------------------
[  248.003232] xenbus/91 is trying to acquire lock:
[  248.007875]  (&u->msgbuffer_mutex){+.+.+.}, at: [<ffff00000863e904>]
xenbus_dev_queue_reply+0x3c/0x230
[  248.017163]
[  248.017163] but task is already holding lock:
[  248.023096]  (xb_write_mutex){+.+...}, at: [<ffff00000863a940>]
xenbus_thread+0x5f0/0x798
[  248.031267]
[  248.031267] which lock already depends on the new lock.
[  248.031267]
[  248.039615]
[  248.039615] the existing dependency chain (in reverse order) is:
[  248.047176]
[  248.047176] -> #1 (xb_write_mutex){+.+...}:
[  248.052943]        __lock_acquire+0x1728/0x1778
[  248.057498]        lock_acquire+0xc4/0x288
[  248.061630]        __mutex_lock+0x84/0x868
[  248.065755]        mutex_lock_nested+0x3c/0x50
[  248.070227]        xs_send+0x164/0x1f8
[  248.074015]        xenbus_dev_request_and_reply+0x6c/0x88
[  248.079427]        xenbus_file_write+0x260/0x420
[  248.084073]        __vfs_write+0x48/0x138
[  248.088113]        vfs_write+0xa8/0x1b8
[  248.091983]        SyS_write+0x54/0xb0
[  248.095768]        el0_svc_naked+0x24/0x28
[  248.099897]
[  248.099897] -> #0 (&u->msgbuffer_mutex){+.+.+.}:
[  248.106088]        print_circular_bug+0x80/0x2e0
[  248.110730]        __lock_acquire+0x1768/0x1778
[  248.115288]        lock_acquire+0xc4/0x288
[  248.119417]        __mutex_lock+0x84/0x868
[  248.123545]        mutex_lock_nested+0x3c/0x50
[  248.128016]        xenbus_dev_queue_reply+0x3c/0x230
[  248.133005]        xenbus_thread+0x788/0x798
[  248.137306]        kthread+0x110/0x140
[  248.141087]        ret_from_fork+0x10/0x40
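
Read together, the two chains describe an ABBA lock inversion: on the
write() path into the xenbus device, xb_write_mutex is acquired (in
xs_send()) while msgbuffer_mutex is already held, whereas xenbus_thread()
calls xenbus_dev_queue_reply(), which takes msgbuffer_mutex, while still
holding xb_write_mutex. Distilled from the backtraces above (a schematic
of the acquisition order, not the literal driver code):

  /* chain #1: write() path (xenbus_file_write -> ... -> xs_send) */
  mutex_lock(&u->msgbuffer_mutex);   /* already held on this path */
  mutex_lock(&xb_write_mutex);       /* acquired in xs_send() */

  /* chain #0: xenbus_thread() handling the reply */
  mutex_lock(&xb_write_mutex);       /* held while processing the reply */
  mutex_lock(&u->msgbuffer_mutex);   /* acquired in xenbus_dev_queue_reply() */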

The deadlock is easily avoided by dropping xb_write_mutex before calling
xenbus_dev_queue_reply().
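
In other words, process_msg() now only looks up and unlinks the matching
request while holding xb_write_mutex, and completes it after the mutex has
been dropped. Roughly (a simplified sketch of the hunks below, not the
literal code):

  mutex_lock(&xb_write_mutex);
  /* find the request matching the reply's req_id and unlink it */
  list_del(&req->list);
  mutex_unlock(&xb_write_mutex);

  /* copy the reply into req and complete it with xb_write_mutex no longer
   * held; for requests coming from the xenbus device file, req->cb() ends
   * up in xenbus_dev_queue_reply(), which may now take msgbuffer_mutex
   * without inverting the lock order */
  if (req->state == xb_req_state_wait_reply)
          req->cb(req);
  else
          kfree(req);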

Fixes: fd8aa9095a95c02dcc35540a263267c29b8fda9d ("xen: optimize xenbus driver for multiple concurrent xenstore accesses")

Reported-by: Andre Przywara <andre.przywara@....com>
Tested-by: Andre Przywara <andre.przywara@....com>
Signed-off-by: Juergen Gross <jgross@...e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 drivers/xen/xenbus/xenbus_comms.c |   21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -299,17 +299,7 @@ static int process_msg(void)
 		mutex_lock(&xb_write_mutex);
 		list_for_each_entry(req, &xs_reply_list, list) {
 			if (req->msg.req_id == state.msg.req_id) {
-				if (req->state == xb_req_state_wait_reply) {
-					req->msg.type = state.msg.type;
-					req->msg.len = state.msg.len;
-					req->body = state.body;
-					req->state = xb_req_state_got_reply;
-					list_del(&req->list);
-					req->cb(req);
-				} else {
-					list_del(&req->list);
-					kfree(req);
-				}
+				list_del(&req->list);
 				err = 0;
 				break;
 			}
@@ -317,6 +307,15 @@ static int process_msg(void)
 		mutex_unlock(&xb_write_mutex);
 		if (err)
 			goto out;
+
+		if (req->state == xb_req_state_wait_reply) {
+			req->msg.type = state.msg.type;
+			req->msg.len = state.msg.len;
+			req->body = state.body;
+			req->state = xb_req_state_got_reply;
+			req->cb(req);
+		} else
+			kfree(req);
 	}
 
 	mutex_unlock(&xs_response_mutex);

