Message-ID: <20060803002038.GI29686@ca-server1.us.oracle.com>
Date: Wed, 2 Aug 2006 17:20:38 -0700
From: Mark Fasheh <mark.fasheh@...cle.com>
To: Dave Hansen <haveblue@...ibm.com>
Cc: linux-kernel@...r.kernel.org, viro@....linux.org.uk,
herbert@...hfloor.at, hch@...radead.org
Subject: Re: [PATCH 04/28] OCFS2 is screwy
On Tue, Aug 01, 2006 at 08:21:46PM -0700, Dave Hansen wrote:
> Please ignore that last one. It didn't correctly handle directories
> with a remaining i_nlink of 2.
Oops, one more minor thing wrong with this patch:
> @@ -823,16 +835,6 @@
> }
> }
>
> - /* There are still a few steps left until we can consider the
> - * unlink to have succeeded. Save off nlink here before
> - * modification so we can set it back in case we hit an issue
> - * before commit. */
> - saved_nlink = inode->i_nlink;
> - if (S_ISDIR(inode->i_mode))
> - inode->i_nlink = 0;
> - else
> - inode->i_nlink--;
> -
> status = ocfs2_request_unlink_vote(inode, dentry,
> (unsigned int) inode->i_nlink);
The network request above needs to send the value that nlink will be set to.
This is so that the other nodes which have an interest in the inode can
determine whether they will need to do orphan processing on their last iput.
Really, all they do is compare against zero and mark the inode with a flag
indicating that it may have been orphaned. So we could just pass the result
of inode_is_unlinkable(), but I'd rather preserve the behavior that we have
today.
The ugly method is:
- status = ocfs2_request_unlink_vote(inode, dentry,
- (unsigned int) inode->i_nlink);
+ status = ocfs2_request_unlink_vote(inode, dentry,
+ S_ISDIR(inode->i_mode) && inode->i_nlink == 2 ? 0 : inode->i_nlink - 1);
Better yet, we could store the result of the evaluation in a temporary
variable and pass that through to the request function.
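A rough, untested sketch of what I mean, against the hunk above (the
new_nlink local is just an illustrative name, and the surrounding context
is assumed from the quoted patch):

	unsigned int new_nlink;

	/* Compute the nlink value the other nodes should see after the
	 * unlink, without touching inode->i_nlink yet. A directory that
	 * is down to "." and ".." goes straight to zero. */
	if (S_ISDIR(inode->i_mode) && inode->i_nlink == 2)
		new_nlink = 0;
	else
		new_nlink = inode->i_nlink - 1;

	status = ocfs2_request_unlink_vote(inode, dentry, new_nlink);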
Thanks,
--Mark
--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh@...cle.com