Message-Id: <20121122004047.498885164@linuxfoundation.org>
Date: Wed, 21 Nov 2012 16:41:27 -0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
alan@...rguk.ukuu.org.uk, Sage Weil <sage@...tank.com>,
Alex Elder <elder@...tank.com>,
Yehuda Sadeh <yehuda@...tank.com>
Subject: [ 141/171] libceph: fix mutex coverage for ceph_con_close
3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sage Weil <sage@...tank.com>

(cherry picked from commit 8c50c817566dfa4581f82373aac39f3e608a7dc8)

Hold the mutex while twiddling all of the state bits to avoid possible
races. While we're here, make note of why we cannot close the socket
directly.
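
For anyone who wants the locking pattern spelled out, here is a minimal
userspace sketch of the same idea. This is not the ceph messenger code:
it uses plain pthreads, and the struct and the names con_close(),
con_work() and work_queued are illustrative stand-ins. The point is only
that every state change happens under the mutex and that the close path
merely marks the connection and queues work, leaving the actual close(2)
of the socket to the worker.

/* Minimal sketch (assumed names, not the kernel code): defer the real
 * socket close to the worker so nothing touches the fd unlocked. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct conn {
	pthread_mutex_t mutex;
	bool closed;		/* stands in for the CLOSED state bit */
	bool work_queued;	/* stands in for queue_con() having run */
	int sock_fd;		/* -1 when no socket is open */
};

/* Runs only in the worker context; this is the one place the socket
 * is actually closed. */
static void con_work(struct conn *con)
{
	pthread_mutex_lock(&con->mutex);
	if (con->closed && con->sock_fd >= 0) {
		close(con->sock_fd);
		con->sock_fd = -1;
	}
	con->work_queued = false;
	pthread_mutex_unlock(&con->mutex);
}

/* Mirrors the patched ceph_con_close(): flip all state under the mutex,
 * then queue work instead of closing the socket here. */
static void con_close(struct conn *con)
{
	pthread_mutex_lock(&con->mutex);
	con->closed = true;
	con->work_queued = true;	/* "queue_con(con)" equivalent */
	pthread_mutex_unlock(&con->mutex);
}

int main(void)
{
	struct conn c = {
		.mutex = PTHREAD_MUTEX_INITIALIZER,
		.sock_fd = -1,
	};

	con_close(&c);
	if (c.work_queued)
		con_work(&c);	/* pretend the workqueue ran */
	printf("closed=%d fd=%d\n", c.closed, c.sock_fd);
	return 0;
}

Build with "cc -pthread" if you want to poke at it; the patch below does
the kernel-side equivalent by letting con_work() perform the teardown.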
Signed-off-by: Sage Weil <sage@...tank.com>
Reviewed-by: Alex Elder <elder@...tank.com>
Reviewed-by: Yehuda Sadeh <yehuda@...tank.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/ceph/messenger.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -503,6 +503,7 @@ static void reset_connection(struct ceph
  */
 void ceph_con_close(struct ceph_connection *con)
 {
+	mutex_lock(&con->mutex);
 	dout("con_close %p peer %s\n", con,
 	     ceph_pr_addr(&con->peer_addr.in_addr));
 	clear_bit(NEGOTIATING, &con->state);
@@ -515,11 +516,16 @@ void ceph_con_close(struct ceph_connecti
 	clear_bit(KEEPALIVE_PENDING, &con->flags);
 	clear_bit(WRITE_PENDING, &con->flags);
 
-	mutex_lock(&con->mutex);
 	reset_connection(con);
 	con->peer_global_seq = 0;
 	cancel_delayed_work(&con->work);
 	mutex_unlock(&con->mutex);
+
+	/*
+	 * We cannot close the socket directly from here because the
+	 * work threads use it without holding the mutex. Instead, let
+	 * con_work() do it.
+	 */
 	queue_con(con);
 }
 EXPORT_SYMBOL(ceph_con_close);
--