Message-Id: <efb4f7e7637104196aa4e6825294efec838f1990.1404128998.git.jslaby@suse.cz>
Date: Mon, 30 Jun 2014 13:51:08 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, Jeff Layton <jlayton@...hat.com>,
"J. Bruce Fields" <bfields@...hat.com>, Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 047/181] nfsd: don't try to reuse an expired DRC entry off the list
From: Jeff Layton <jlayton@...hat.com>
3.12-stable review patch. If anyone has any objections, please let me know.
===============
commit a0ef5e19684f0447da9ff0654a12019c484f57ca upstream.
Currently when we are processing a request, we try to scrape an expired
or over-limit entry off the list in preference to allocating a new one
from the slab.
This is unnecessarily complicated. Just use the slab layer.
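
For context, the flow the patch leaves behind boils down to: always preallocate a
new entry from the allocator, prune expired entries, then search the cache and
either discard or insert the preallocated entry. Below is a minimal user-space
sketch of that control flow, not the kernel code itself: cache_entry,
prune_expired, lookup_cache, cache_lookup and ENTRY_TTL are made-up names, and
the real function also holds cache_lock and updates drc_mem_usage/statistics.

/* User-space analogue of the simplified nfsd_cache_lookup() flow. */
#include <stdlib.h>
#include <time.h>

struct cache_entry {
	struct cache_entry *next;	/* singly linked list for the demo */
	time_t timestamp;		/* when the entry was cached */
	unsigned int key;		/* stands in for the request checksum */
};

static struct cache_entry *lru_head;
#define ENTRY_TTL 120			/* seconds before an entry expires */

/* Walk the list and free every expired entry ("prune the cache"). */
static void prune_expired(time_t now)
{
	struct cache_entry **pp = &lru_head;

	while (*pp) {
		if (now - (*pp)->timestamp > ENTRY_TTL) {
			struct cache_entry *old = *pp;

			*pp = old->next;
			free(old);
		} else {
			pp = &(*pp)->next;
		}
	}
}

static struct cache_entry *lookup_cache(unsigned int key)
{
	struct cache_entry *e;

	for (e = lru_head; e; e = e->next)
		if (e->key == key)
			return e;
	return NULL;
}

struct cache_entry *cache_lookup(unsigned int key)
{
	/* Preallocate unconditionally from the allocator (the "slab layer"). */
	struct cache_entry *rp = malloc(sizeof(*rp));
	struct cache_entry *found;

	/* Go ahead and prune the cache. */
	prune_expired(time(NULL));

	found = lookup_cache(key);
	if (found) {
		/* Cache hit: the preallocated entry is not needed. */
		free(rp);
		return found;
	}

	/* Cache miss: insert the preallocated entry, if we got one. */
	if (rp) {
		rp->key = key;
		rp->timestamp = time(NULL);
		rp->next = lru_head;
		lru_head = rp;
	}
	return rp;
}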
Signed-off-by: Jeff Layton <jlayton@...hat.com>
Signed-off-by: J. Bruce Fields <bfields@...hat.com>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
fs/nfsd/nfscache.c | 36 ++++--------------------------------
1 file changed, 4 insertions(+), 32 deletions(-)
diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index b6af150c96b8..f8f060ffbf4f 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -132,13 +132,6 @@ nfsd_reply_cache_alloc(void)
}
static void
-nfsd_reply_cache_unhash(struct svc_cacherep *rp)
-{
- hlist_del_init(&rp->c_hash);
- list_del_init(&rp->c_lru);
-}
-
-static void
nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
{
if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) {
@@ -416,22 +409,8 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
/*
* Since the common case is a cache miss followed by an insert,
- * preallocate an entry. First, try to reuse the first entry on the LRU
- * if it works, then go ahead and prune the LRU list.
+ * preallocate an entry.
*/
- spin_lock(&cache_lock);
- if (!list_empty(&lru_head)) {
- rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
- if (nfsd_cache_entry_expired(rp) ||
- num_drc_entries >= max_drc_entries) {
- nfsd_reply_cache_unhash(rp);
- prune_cache_entries();
- goto search_cache;
- }
- }
-
- /* No expired ones available, allocate a new one. */
- spin_unlock(&cache_lock);
rp = nfsd_reply_cache_alloc();
spin_lock(&cache_lock);
if (likely(rp)) {
@@ -439,7 +418,9 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
drc_mem_usage += sizeof(*rp);
}
-search_cache:
+ /* go ahead and prune the cache */
+ prune_cache_entries();
+
found = nfsd_cache_search(rqstp, csum);
if (found) {
if (likely(rp))
@@ -453,15 +434,6 @@ search_cache:
goto out;
}
- /*
- * We're keeping the one we just allocated. Are we now over the
- * limit? Prune one off the tip of the LRU in trade for the one we
- * just allocated if so.
- */
- if (num_drc_entries >= max_drc_entries)
- nfsd_reply_cache_free_locked(list_first_entry(&lru_head,
- struct svc_cacherep, c_lru));
-
nfsdstats.rcmisses++;
rqstp->rq_cacherep = rp;
rp->c_state = RC_INPROG;
--
2.0.0