Message-Id: <1407268348-25949-1-git-send-email-steven@uplinklabs.net>
Date: Tue, 5 Aug 2014 12:52:27 -0700
From: Steven Noonan <steven@...inklabs.net>
To: lkml <linux-kernel@...r.kernel.org>, linux-fsdevel@...r.kernel.org
Cc: Steven Noonan <steven@...inklabs.net>, NeilBrown <neilb@...e.de>,
Al Viro <viro@...IV.linux.org.uk>
Subject: [PATCH 1/2] lockref: make lockref count signed
There are numerous places where the lockref count is cast to a signed value
for comparisons checking that it hasn't been set to the 'dead' value of -128.
This change makes the count a signed integer, which is how it is already being
treated anyway, and reduces the chance of developer error when making those
comparisons.
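
For illustration only (not part of the patch): a minimal standalone sketch,
using an invented struct name (fake_lockref), of the pitfall the casts were
working around. With an unsigned count, a "went negative?" test is always
false unless the value is first cast to a signed type; once the field itself
is signed, the plain comparison does what it says.

/* Hypothetical userspace example, not kernel code. */
#include <stdio.h>

struct fake_lockref {
	unsigned int count;		/* the old, unsigned layout */
};

int main(void)
{
	/* Mimic a "dead" lockref: -128 stored into an unsigned field. */
	struct fake_lockref ref = { .count = (unsigned int)-128 };

	if (ref.count < 0)		/* always false: unsigned is never < 0 */
		printf("unreachable\n");

	if ((int)ref.count < 0)		/* the cast callers had to remember */
		printf("dead: %d\n", (int)ref.count);

	return 0;
}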
Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: NeilBrown <neilb@...e.de>
Cc: Al Viro <viro@...IV.linux.org.uk>
Signed-off-by: Steven Noonan <steven@...inklabs.net>
---
 fs/dcache.c             | 6 +++---
 include/linux/lockref.h | 2 +-
 lib/lockref.c           | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 06f6585..f7a592e 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -479,7 +479,7 @@ static void __dentry_kill(struct dentry *dentry)
 	 * dentry_iput drops the locks, at which point nobody (except
 	 * transient RCU lookups) can reach this dentry.
 	 */
-	BUG_ON((int)dentry->d_lockref.count > 0);
+	BUG_ON(dentry->d_lockref.count > 0);
 	this_cpu_dec(nr_dentry);
 	if (dentry->d_op && dentry->d_op->d_release)
 		dentry->d_op->d_release(dentry);
@@ -532,7 +532,7 @@ static inline struct dentry *lock_parent(struct dentry *dentry)
 	struct dentry *parent = dentry->d_parent;
 	if (IS_ROOT(dentry))
 		return NULL;
-	if (unlikely((int)dentry->d_lockref.count < 0))
+	if (unlikely(dentry->d_lockref.count < 0))
 		return NULL;
 	if (likely(spin_trylock(&parent->d_lock)))
 		return parent;
@@ -848,7 +848,7 @@ static void shrink_dentry_list(struct list_head *list)
 		 * We found an inuse dentry which was not removed from
 		 * the LRU because of laziness during lookup. Do not free it.
 		 */
-		if ((int)dentry->d_lockref.count > 0) {
+		if (dentry->d_lockref.count > 0) {
 			spin_unlock(&dentry->d_lock);
 			if (parent)
 				spin_unlock(&parent->d_lock);
diff --git a/include/linux/lockref.h b/include/linux/lockref.h
index 4bfde0e..8558ff1 100644
--- a/include/linux/lockref.h
+++ b/include/linux/lockref.h
@@ -28,7 +28,7 @@ struct lockref {
 #endif
 		struct {
 			spinlock_t lock;
-			unsigned int count;
+			int count;
 		};
 	};
 };
diff --git a/lib/lockref.c b/lib/lockref.c
index d2233de..e4c4255 100644
--- a/lib/lockref.c
+++ b/lib/lockref.c
@@ -158,7 +158,7 @@ int lockref_get_not_dead(struct lockref *lockref)
 	CMPXCHG_LOOP(
 		new.count++;
-		if ((int)old.count < 0)
+		if (old.count < 0)
 			return 0;
 	,
 		return 1;
@@ -166,7 +166,7 @@ int lockref_get_not_dead(struct lockref *lockref)
 	spin_lock(&lockref->lock);
 	retval = 0;
-	if ((int) lockref->count >= 0) {
+	if (lockref->count >= 0) {
 		lockref->count++;
 		retval = 1;
 	}
--
2.0.4