Message-ID: <20101022044812.GB6899@amd>
Date: Fri, 22 Oct 2010 15:48:12 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Nick Piggin <npiggin@...nel.dk>,
Dave Chinner <david@...morbit.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: Inode Lock Scalability V7 (was V6)
On Fri, Oct 22, 2010 at 04:12:11AM +0100, Al Viro wrote:
> On Fri, Oct 22, 2010 at 01:48:34PM +1100, Nick Piggin wrote:
> > On Fri, Oct 22, 2010 at 01:41:52PM +1100, Nick Piggin wrote:
> > > The locking in my lock break patch is ugly and wrong, yes. But it is
> > > always an intermediate step. I want to argue that with RCU inode work
> > > *anyway*, there is not much point to reducing the strength of the
> > > i_lock property because locking can be cleaned up nicely and still
> > > keep i_lock ~= inode_lock (for a single inode).
> >
> > The other thing is that with RCU, the idea of locking an object in
> > the data structure with a per object lock actually *is* much more
> > natural. It's hard to do it properly with just a big data structure
> > lock.
> >
> > If I want to take a reference to an inode from a data structure, how
> > to do it with RCU?
> >
> > rcu_read_lock();
> > list_for_each(inode) {
> > 	spin_lock(&big_lock); /* oops, might as well not even use RCU then */
> > 	if (!unhashed) {
> > 		iget();
> > 	}
> > }
>
> Huh? Why the hell does it have to be a big lock? You grab ->i_lock,
> then look at the damn thing. You also grab it on eviction from the
> list - *inside* the lock used for serializing the write access to
> your RCU list.
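Just to make sure we're reading you the same way: the lookup ends up
looking something like the sketch below (find_inode_rcu() is a made-up
name, and it assumes __iget() has been made safe to call with only
i_lock held, which is part of what the series has to do anyway):

static struct inode *find_inode_rcu(struct super_block *sb,
				    struct hlist_head *head,
				    unsigned long ino)
{
	struct hlist_node *node;
	struct inode *inode;

	rcu_read_lock();
	hlist_for_each_entry_rcu(inode, node, head, i_hash) {
		if (inode->i_ino != ino || inode->i_sb != sb)
			continue;
		/* per-object lock instead of a big data structure lock */
		spin_lock(&inode->i_lock);
		if (hlist_unhashed(&inode->i_hash) ||
		    (inode->i_state & (I_FREEING|I_WILL_FREE))) {
			/* racing with eviction, skip it */
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);	/* assumes i_lock alone is enough here */
		spin_unlock(&inode->i_lock);
		rcu_read_unlock();
		return inode;
	}
	rcu_read_unlock();
	return NULL;
}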
That sucks: it requires more acquiring and dropping of i_lock, and it
hurts single-threaded performance. I looked at that.
It also loses the i_lock = inode_lock property.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/