Message-Id: <20110106121056.0758b16f.sfr@canb.auug.org.au>
Date: Thu, 6 Jan 2011 12:10:56 +1100
From: Stephen Rothwell <sfr@...b.auug.org.au>
To: Nick Piggin <npiggin@...nel.dk>
Cc: linux-next@...r.kernel.org, linux-kernel@...r.kernel.org,
Dave Chinner <dchinner@...hat.com>
Subject: linux-next: manual merge of the vfs-scale tree with the xfs tree
Hi Nick,
Today's linux-next merge of the vfs-scale tree got a conflict in
fs/xfs/xfs_iget.c between commits
d95b7aaf9ab6738bef1ebcc52ab66563085e44ac ("xfs: rcu free inodes") and
1a3e8f3da09c7082d25b512a0ffe569391e4c09a ("xfs: convert inode cache
lookups to use RCU locking") from the xfs tree and commit
bb3e8c37a0af21d0a8fe54a0b0f17aca16335a82 ("fs: icache RCU free inodes")
from the vfs-scale tree.
OK, so looking at this, the first xfs tree patch above does the same as
the vfs-scale tree patch, just using i_dentry instead of the
(union-equivalent) i_rcu.  I fixed it up (see below; the diff does not
show that __xfs_inode_free has been removed) and can carry the fix as
necessary.
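For reference, the two versions are interchangeable because the vfs-scale
patch overlays an rcu_head on i_dentry inside struct inode, so both trees
queue the same storage to RCU; they differ only in whether they reach it by
casting i_dentry or by naming the union member.  A minimal sketch of that
layout (illustrative only; field types are assumed from the patches above,
not quoted from the actual header):

/* Illustrative sketch, not the real <linux/fs.h>. */
struct inode {
        /* ... other fields ... */
        union {
                struct list_head  i_dentry;  /* alias list; unused once the inode is being freed */
                struct rcu_head   i_rcu;     /* reused to queue the RCU free callback */
        };
        /* ... */
};

/* xfs tree: reach the rcu_head by casting the other union member */
call_rcu((struct rcu_head *)&VFS_I(ip)->i_dentry, __xfs_inode_free);

/* merged result: name the union member directly via the VFS inode
 * embedded in struct xfs_inode (VFS_I(ip) returns &ip->i_vnode) */
call_rcu(&ip->i_vnode.i_rcu, xfs_inode_free_callback);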
--
Cheers,
Stephen Rothwell sfr@...b.auug.org.au
diff --cc fs/xfs/xfs_iget.c
index 3ecad00,d7de5a3..0000000
--- a/fs/xfs/xfs_iget.c
+++ b/fs/xfs/xfs_iget.c
@@@ -157,17 -145,7 +156,17 @@@ xfs_inode_free
  	ASSERT(!spin_is_locked(&ip->i_flags_lock));
  	ASSERT(completion_done(&ip->i_flush));
  
 +	/*
 +	 * Because we use RCU freeing we need to ensure the inode always
 +	 * appears to be reclaimed with an invalid inode number when in the
 +	 * free state. The ip->i_flags_lock provides the barrier against lookup
 +	 * races.
 +	 */
 +	spin_lock(&ip->i_flags_lock);
 +	ip->i_flags = XFS_IRECLAIM;
 +	ip->i_ino = 0;
 +	spin_unlock(&ip->i_flags_lock);
- 	call_rcu((struct rcu_head *)&VFS_I(ip)->i_dentry, __xfs_inode_free);
+ 	call_rcu(&ip->i_vnode.i_rcu, xfs_inode_free_callback);
  }
  
  /*
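The comment added in the hunk is about the other side of this: once inodes
are RCU freed, an RCU-protected cache lookup can find an inode that
xfs_inode_free() has already started tearing down, so the lookup has to
revalidate the inode number (and reclaim state) under i_flags_lock, pairing
with the stores above.  A rough sketch of that check, with a hypothetical
helper name, paraphrasing the logic of the "xfs: convert inode cache
lookups to use RCU locking" commit rather than quoting it:

/*
 * Hypothetical helper (not from either tree): revalidate an inode found
 * by an RCU-protected radix-tree lookup.  xfs_inode_free() zeroes i_ino
 * and sets XFS_IRECLAIM under i_flags_lock before calling call_rcu(), so
 * taking the same lock here is the barrier against lookup races.
 */
static bool xfs_iget_hit_is_valid(struct xfs_inode *ip, xfs_ino_t ino)
{
        bool valid;

        spin_lock(&ip->i_flags_lock);
        valid = ip->i_ino == ino && !(ip->i_flags & XFS_IRECLAIM);
        spin_unlock(&ip->i_flags_lock);

        return valid;   /* on false, the caller drops rcu_read_lock() and retries */
}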
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/