Message-Id: <1228090770.7112.14.camel@heimdal.trondhjem.org>
Date: Sun, 30 Nov 2008 19:19:30 -0500
From: Trond Myklebust <Trond.Myklebust@...app.com>
To: Ian Campbell <ijc@...lion.org.uk>
Cc: linux-nfs@...r.kernel.org, Max Kellermann <mk@...all.com>,
linux-kernel@...r.kernel.org, gcosta@...hat.com,
Grant Coady <grant_lkml@...o.com.au>,
"J. Bruce Fields" <bfields@...ldses.org>,
Tom Tucker <tom@...ngridcomputing.com>
Subject: Re: [PATCH 2/3] SUNRPC: We only need to call svc_delete_xprt() once...
Use XPT_DEAD to ensure that we only call xpo_detach & friends once.

Signed-off-by: Trond Myklebust <Trond.Myklebust@...app.com>
---
net/sunrpc/svc_xprt.c | 17 +++++++++++------
1 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index bf5b5cd..a417064 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -838,6 +838,10 @@ void svc_delete_xprt(struct svc_xprt *xprt)
 {
 	struct svc_serv *serv = xprt->xpt_server;
 
+	/* Only do this once */
+	if (test_and_set_bit(XPT_DEAD, &xprt->xpt_flags) != 0)
+		return;
+
 	dprintk("svc: svc_delete_xprt(%p)\n", xprt);
 	xprt->xpt_ops->xpo_detach(xprt);
 
@@ -851,13 +855,14 @@ void svc_delete_xprt(struct svc_xprt *xprt)
 	 * while still attached to a queue, the queue itself
 	 * is about to be destroyed (in svc_destroy).
 	 */
-	if (!test_and_set_bit(XPT_DEAD, &xprt->xpt_flags)) {
-		BUG_ON(atomic_read(&xprt->xpt_ref.refcount) < 2);
-		if (test_bit(XPT_TEMP, &xprt->xpt_flags))
-			serv->sv_tmpcnt--;
-		svc_xprt_put(xprt);
-	}
+	if (test_bit(XPT_TEMP, &xprt->xpt_flags))
+		serv->sv_tmpcnt--;
 	spin_unlock_bh(&serv->sv_lock);
+
+	/* FIXME: Is this really needed here? */
+	BUG_ON(atomic_read(&xprt->xpt_ref.refcount) < 2);
+
+	svc_xprt_put(xprt);
 }
 
 void svc_close_xprt(struct svc_xprt *xprt)
--
Trond Myklebust
Linux NFS client maintainer
NetApp
Trond.Myklebust@...app.com
www.netapp.com