Message-ID: <20101015181459.GA11273@amd>
Date: Sat, 16 Oct 2010 05:14:59 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Nick Piggin <npiggin@...nel.dk>
Cc: Christoph Hellwig <hch@...radead.org>,
Dave Chinner <david@...morbit.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 17/18] fs: icache remove inode_lock

On Sat, Oct 16, 2010 at 05:02:19AM +1100, Nick Piggin wrote:
> For those lookups where you are taking the i_lock anyway, they
> will look the same, except the i_lock lock width reduction
> loses the ability to lock all icache state of the inode (like
> we can practically do today with inode_lock).
>
> This was a key consideration for maintainability for me.

Maybe you've overlooked this point. It is, in fact, very important in
my opinion. With my locking approach, every place where today we have:

spin_lock(&inode_lock);
do_something(inode);
spin_unlock(&inode_lock);

it can be replaced with

spin_lock(&inode->i_lock);
do_something(inode);
spin_unlock(&inode->i_lock);

Without worrying about the lock coverage. In fact, it is a tiny bit
stronger because you also get to hold the refcount at the same time
(doesn't really matter outside core icache though).
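
To make that last point concrete, here is a minimal sketch (a
hypothetical helper, not code from the series, and it assumes for the
sake of the sketch that i_count is a plain counter protected by i_lock
as in this approach): testing i_state and taking a reference happen in
the one critical section, because i_lock covers all of the inode's
icache state.

#include <linux/fs.h>
#include <linux/spinlock.h>

/*
 * Hypothetical sketch only: grab a reference to an inode unless it is
 * being freed.  Both the i_state test and the reference bump are
 * covered by the single i_lock critical section.
 */
static struct inode *grab_inode(struct inode *inode)
{
        spin_lock(&inode->i_lock);
        if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
                spin_unlock(&inode->i_lock);
                return NULL;
        }
        inode->i_count++;       /* assumes i_count is under i_lock */
        spin_unlock(&inode->i_lock);
        return inode;
}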

Ditto for my dcache_lock approach (it's far more important there, being
much more visible to filesystems IMO, but icache is still important).

I never totally objected to reductions in i_lock lock width if they
really are required for that last bit of performance, but I have always
maintained that I want these kinds of locking irregularities merged on
their own, on top of the base code. With RCU inodes especially, though,
I'm not sure they'll be needed.

In most of the slowpaths where that happens, the i_lock needs to be
taken somewhere anyway, so you probably don't really save anything.
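
As a rough illustration (hypothetical code again, not taken from either
series): even a reduced-width variant of such a slowpath has to take
i_lock at some point to sample i_state, so covering the adjacent icache
work with the same critical section does not add a lock acquisition.

#include <linux/fs.h>
#include <linux/spinlock.h>

/*
 * Hypothetical slowpath sketch: i_lock must be taken to look at
 * i_state no matter how narrow the locking is made, so doing the rest
 * of the icache work under the same hold costs nothing extra.
 */
static void slowpath_process(struct inode *inode)
{
        spin_lock(&inode->i_lock);
        if (inode->i_state & I_DIRTY) {
                /* ... icache work that needs a stable i_state ... */
                inode->i_state &= ~I_DIRTY;
        }
        spin_unlock(&inode->i_lock);
}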