Date:	Mon, 08 Feb 2016 23:53:51 +0000
From:	Ben Hutchings <ben@...adent.org.uk>
To:	linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:	akpm@...ux-foundation.org,
	"Rabin Vincent" <rabin.vincent@...s.com>,
	"Steve French" <sfrench@...alhost.localdomain>,
	"Shirish Pargaonkar" <shirishpargaonkar@...il.com>
Subject: [PATCH 3.2 51/87] cifs: fix race between call_async() and reconnect()

3.2.77-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Rabin Vincent <rabin.vincent@...s.com>

commit 820962dc700598ffe8cd21b967e30e7520c34748 upstream.

cifs_call_async() queues the MID to the pending list and calls
smb_send_rqst().  If smb_send_rqst() performs a partial send, it sets
the tcpStatus to CifsNeedReconnect and returns an error code to
cifs_call_async().  In this case, cifs_call_async() removes the MID
from the list and returns to the caller.
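
For context, the flow in cifs_call_async() before this patch is roughly
the following (a simplified sketch of the 3.2 code path, with signing
and accounting steps omitted; see the diff below for the real thing):

	mutex_lock(&server->srv_mutex);
	/* MID is already on the pending list, visible to cifs_reconnect() */
	rc = smb_sendv(server, iov, nvec);	/* may fail after a partial send */
	mutex_unlock(&server->srv_mutex);	/* <-- race window opens here */

	if (rc)
		goto out_err;
	return rc;
out_err:
	delete_mid(mid);	/* MID may already have been freed */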

However, cifs_call_async() releases the server mutex _before_ removing
the MID.  This means that cifs_reconnect() can race with this function,
remove the MID from the list, and delete the entry before
cifs_call_async() calls cifs_delete_mid().  This leads to various
crashes due to the resulting use-after-free in cifs_delete_mid().

Task1				Task2

cifs_call_async():
 - rc = -EAGAIN
 - mutex_unlock(srv_mutex)

				cifs_reconnect():
				 - mutex_lock(srv_mutex)
				 - mutex_unlock(srv_mutex)
				 - list_delete(mid)
				 - mid->callback()
				 	cifs_writev_callback():
				 		- mutex_lock(srv_mutex)
				 		- delete(mid)
				 		- mutex_unlock(srv_mutex)

 - cifs_delete_mid(mid) <---- use after free

Fix this by removing the MID in cifs_call_async() before releasing the
srv_mutex.  Also hold the srv_mutex in cifs_reconnect() until the MIDs
are moved out of the pending list.
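
Put differently, the MID must leave the pending list while srv_mutex is
still held.  A minimal sketch of the post-patch ordering in
cifs_call_async() (again simplified; the actual change is in the diff
below):

	rc = smb_sendv(server, iov, nvec);
out:
	if (rc < 0)
		delete_mid(mid);	/* removed while srv_mutex is still held */

	mutex_unlock(&server->srv_mutex);	/* reconnect can no longer see this MID */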

Signed-off-by: Rabin Vincent <rabin.vincent@...s.com>
Acked-by: Shirish Pargaonkar <shirishpargaonkar@...il.com>
Signed-off-by: Steve French <sfrench@...alhost.localdomain>
[bwh: Backported to 3.2:
 - In cifs_call_async() there are two error paths jumping to 'out_err';
   fix both of them
 - s/cifs_delete_mid/delete_mid/
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -134,7 +134,6 @@ cifs_reconnect(struct TCP_Server_Info *s
 	server->session_key.response = NULL;
 	server->session_key.len = 0;
 	server->lstrp = jiffies;
-	mutex_unlock(&server->srv_mutex);
 
 	/* mark submitted MIDs for retry and issue callback */
 	INIT_LIST_HEAD(&retry_list);
@@ -147,6 +146,7 @@ cifs_reconnect(struct TCP_Server_Info *s
 		list_move(&mid_entry->qhead, &retry_list);
 	}
 	spin_unlock(&GlobalMid_Lock);
+	mutex_unlock(&server->srv_mutex);
 
 	cFYI(1, "%s: issuing mid callbacks", __func__);
 	list_for_each_safe(tmp, tmp2, &retry_list) {
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -370,10 +370,8 @@ cifs_call_async(struct TCP_Server_Info *
 	spin_unlock(&GlobalMid_Lock);
 
 	rc = cifs_sign_smb2(iov, nvec, server, &mid->sequence_number);
-	if (rc) {
-		mutex_unlock(&server->srv_mutex);
-		goto out_err;
-	}
+	if (rc)
+		goto out;
 
 	mid->receive = receive;
 	mid->callback = callback;
@@ -384,14 +382,15 @@ cifs_call_async(struct TCP_Server_Info *
 	rc = smb_sendv(server, iov, nvec);
 	cifs_in_send_dec(server);
 	cifs_save_when_sent(mid);
+out:
+	if (rc < 0)
+		delete_mid(mid);
+
 	mutex_unlock(&server->srv_mutex);
 
-	if (rc)
-		goto out_err;
+	if (rc == 0)
+		return 0;
 
-	return rc;
-out_err:
-	delete_mid(mid);
 	atomic_dec(&server->inFlight);
 	wake_up(&server->request_q);
 	return rc;
