Message-Id: <20120912232455.784459448@linuxfoundation.org>
Date: Wed, 12 Sep 2012 16:28:48 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg KH <gregkh@...uxfoundation.org>,
Malahal Naineni <malahal@...ibm.com>,
"J. Bruce Fields" <bfields@...hat.com>
Subject: [ 050/108] svcrpc: sends on closed socket should stop immediately
From: Greg KH <gregkh@...uxfoundation.org>
3.5-stable review patch. If anyone has any objections, please let me know.

------------------

From: "J. Bruce Fields" <bfields@...hat.com>

commit f06f00a24d76e168ecb38d352126fd203937b601 upstream.

svc_tcp_sendto sets XPT_CLOSE if we fail to transmit the entire reply.
However, the XPT_CLOSE won't be acted on immediately. Meanwhile other
threads could send further replies before the socket is really shut
down. This can manifest as data corruption: for example, if a truncated
read reply is followed by another rpc reply, that second reply will look
to the client like further read data.
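
Roughly, the interleaving looks like this (thread labels are
illustrative):

	nfsd A: xpo_sendto() transmits only part of a reply and sets
	        XPT_CLOSE on the transport
	nfsd B: svc_send() sees XPT_DEAD still clear and writes a second,
	        complete reply onto the same TCP stream
	later:  the XPT_CLOSE is acted on and the socket is shut down

Because the stream now carries fewer bytes of reply A than its record
marker promised, the client consumes reply B's bytes as if they were
the remainder of reply A.
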
Symptoms were data corruption preceded by svc_tcp_sendto logging
something like

	kernel: rpc-srv/tcp: nfsd: sent only 963696 when sending 1048708 bytes - shutting down socket

Reported-by: Malahal Naineni <malahal@...ibm.com>
Tested-by: Malahal Naineni <malahal@...ibm.com>
Signed-off-by: J. Bruce Fields <bfields@...hat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/sunrpc/svc_xprt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -791,7 +791,8 @@ int svc_send(struct svc_rqst *rqstp)
 
 	/* Grab mutex to serialize outgoing data. */
 	mutex_lock(&xprt->xpt_mutex);
-	if (test_bit(XPT_DEAD, &xprt->xpt_flags))
+	if (test_bit(XPT_DEAD, &xprt->xpt_flags)
+	    || test_bit(XPT_CLOSE, &xprt->xpt_flags))
 		len = -ENOTCONN;
 	else
 		len = xprt->xpt_ops->xpo_sendto(rqstp);
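
For context, the send path in svc_send() after this patch looks
roughly like the following (a condensed sketch of
net/sunrpc/svc_xprt.c, not the full function):

	/* Grab mutex to serialize outgoing data. */
	mutex_lock(&xprt->xpt_mutex);
	if (test_bit(XPT_DEAD, &xprt->xpt_flags)
	    || test_bit(XPT_CLOSE, &xprt->xpt_flags))
		len = -ENOTCONN;	/* transport dead or closing: drop the reply */
	else
		len = xprt->xpt_ops->xpo_sendto(rqstp);
	mutex_unlock(&xprt->xpt_mutex);

Since svc_tcp_sendto() runs as xpo_sendto() under this same xpt_mutex
and sets XPT_CLOSE before returning, any sender serialized behind it
is guaranteed to observe the flag and fail with -ENOTCONN rather than
write onto the half-sent stream.
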
--