Message-ID: <20080814001938.GC6119@disturbed>
Date: Thu, 14 Aug 2008 10:19:38 +1000
From: Dave Chinner <david@...morbit.com>
To: Daniel Walker <dwalker@...sta.com>
Cc: xfs@....sgi.com, linux-kernel@...r.kernel.org, matthew@....cx
Subject: Re: [PATCH 4/6] Replace inode flush semaphore with a completion
On Wed, Aug 13, 2008 at 08:34:01AM -0700, Daniel Walker wrote:
> On Wed, 2008-08-13 at 17:50 +1000, Dave Chinner wrote:
>
> > Right now we have the case where no matter what type of flush
> > is done, the caller does not have to worry about unlocking
> > the flush lock - it will be done as part of the flush. Your
> > suggestion makes that conditional based on whether we did a
> > sync flush or not.
> >
> > So, what happens when you call:
> >
> > xfs_iflush(ip, XFS_IFLUSH_DELWRI_ELSE_SYNC);
> >
> > i.e. xfs_iflush() may do a delayed flush or a sync flush depending
> > on the current state of the inode. The caller has no idea what type
> > of flush was done, so will have no idea whether to unlock or not.
>
> You wouldn't base the unlock on what iflush does, you would
> unconditionally unlock.
It's not really a flush lock at that point - it's a state lock.
We've already got one of those, and a set of state flags that it
protects.
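For context, a sketch of the shape of those existing helpers in
xfs_inode.h - written from memory, so treat the details as my
assumption rather than a quote from the XFS tree: i_flags is a flag
word protected by the i_flags_lock spinlock, and xfs_iflags_set(),
xfs_iflags_test() (and an analogous xfs_iflags_clear()) wrap it:

static inline void
xfs_iflags_set(
	xfs_inode_t	*ip,
	unsigned short	flags)
{
	spin_lock(&ip->i_flags_lock);
	ip->i_flags |= flags;
	spin_unlock(&ip->i_flags_lock);
}

static inline int
xfs_iflags_test(
	xfs_inode_t	*ip,
	unsigned short	flags)
{
	int	ret;

	spin_lock(&ip->i_flags_lock);
	ret = (ip->i_flags & flags);
	spin_unlock(&ip->i_flags_lock);
	return ret;
}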
Basically you're suggesting that we keep state external to the
completion that tracks whether a flush is in progress or not.
You can't use a mutex like you suggested to protect that state,
because the waiter would have to hold it across wait_for_completion()
while the flush side needs the same mutex to clear the state flag
before calling complete().
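Purely for illustration (the names flush_state, flush_busy and
flush_done are made up, not from the patch), this is the shape of
that mutex-plus-flag arrangement and where it deadlocks:

#include <linux/mutex.h>
#include <linux/completion.h>

struct flush_state {			/* hypothetical, illustration only */
	struct mutex		lock;
	int			flush_busy;
	struct completion	flush_done;
};

static void broken_flush_lock(struct flush_state *fs)
{
	mutex_lock(&fs->lock);
	fs->flush_busy = 1;
	/* If a flush is already in progress this sleeps until
	 * complete() is called - but the flush side is stuck below
	 * waiting for fs->lock, so we never wake up. */
	wait_for_completion(&fs->flush_done);
	mutex_unlock(&fs->lock);
}

static void broken_flush_unlock(struct flush_state *fs)
{
	mutex_lock(&fs->lock);	/* deadlocks against the waiter above */
	fs->flush_busy = 0;
	complete(&fs->flush_done);
	mutex_unlock(&fs->lock);
}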
We can use the internal inode state flags and lock to keep track of
this, i.e.:

/* Blocking flush "lock": flag the flush as in progress, then
 * consume the completion, sleeping if a flush already holds it. */
void
xfs_iflock(
	xfs_inode_t	*ip)
{
	xfs_iflags_set(ip, XFS_IFLUSH_INPROGRESS);
	wait_for_completion(ip->i_flush_wq);
}

/* Non-blocking variant: returns 1 if a flush is already in
 * progress, 0 if we obtained the flush lock. */
int
xfs_iflock_nowait(
	xfs_inode_t	*ip)
{
	if (xfs_iflags_test(ip, XFS_IFLUSH_INPROGRESS))
		return 1;
	xfs_iflags_set(ip, XFS_IFLUSH_INPROGRESS);
	wait_for_completion(ip->i_flush_wq);
	return 0;
}

/* Flush complete: clear the flag, then wake the next waiter. */
void
xfs_ifunlock(
	xfs_inode_t	*ip)
{
	xfs_iflags_clear(ip, XFS_IFLUSH_INPROGRESS);
	complete(ip->i_flush_wq);
}
*However*, given that we already have this exact state in the
completion itself, I see little reason for adding the extra locking
overhead and the race-prone complexity of keeping that external state
coherent with the completion. Modifying the completion API slightly
to export this state is the simplest, easiest solution to the
problem....
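To make that concrete, here is a minimal sketch of the kind of helper
being suggested, assuming the current struct completion layout (an
unsigned int done count protected by the embedded wait queue's
spinlock); the name completion_trywait() is illustrative, not from
the patch - mainline's try_wait_for_completion() and completion_done()
are helpers of this shape:

#include <linux/completion.h>
#include <linux/spinlock.h>

/*
 * Consume the completion only if it is currently available, without
 * blocking.  Returns true if we now "hold" the flush lock, false if
 * a flush is already in progress.
 */
static bool completion_trywait(struct completion *x)
{
	bool	ret = false;

	spin_lock_irq(&x->wait.lock);
	if (x->done) {
		x->done--;
		ret = true;
	}
	spin_unlock_irq(&x->wait.lock);
	return ret;
}

With something like this, xfs_iflock_nowait() needs no extra flag at
all - the completion's own done count is the state - and xfs_ifunlock()
stays a bare complete().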
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com