Message-Id: <1278001306.822.8.camel@bobble.smo.corp.google.com>
Date: Thu, 01 Jul 2010 09:21:46 -0700
From: Frank Mayhar <fmayhar@...gle.com>
To: Nick Piggin <npiggin@...e.de>
Cc: Dave Chinner <david@...morbit.com>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, John Stultz <johnstul@...ibm.com>
Subject: Re: [patch 29/52] fs: icache lock i_count
On Thu, 2010-07-01 at 17:54 +1000, Nick Piggin wrote:
> On Thu, Jul 01, 2010 at 12:36:18PM +1000, Dave Chinner wrote:
> > Seriously: use a new lock for high level inode operations you are
> > optimising - don't repurpose an existing lock with different usage
> > rules just because it's convenient.
>
> That's what scalability development is all about, I'm afraid. Just
> adding more and more locks is what makes things more complex, so
> you have to juggle around or change locks when possible. If a
> difficulty with locking pops up in the future, I'd prefer to look at
> it then.
I must agree strongly with Nick here. Lock proliferation is a Bad
Thing; adding a new lock here just adds complexity without really
improving anything, since you still have to deal with lock ordering
_and_ add a new lock to that ordering. It also makes struct inode
bigger for no real gain. Since i_lock is already well understood and
its use is pretty isolated, it seems ideal to extend it to more general
use while keeping that isolation intact as much as possible. Hell, if
the code had been designed with this kind of scalability in mind,
i_lock would have been doing all the locking directly related to struct
inode from the start, much as it does after Nick's work.
My argument here is that it's not just convenient: it makes the
implementation simpler, since instead of dealing with two locks there's
just one.
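To make that concrete, here's a rough sketch (not the actual patch) of
what "one lock instead of two" looks like: the existing per-inode
spinlock also covers the reference count, so there is no second lock to
order against. The toy_* names are made up purely for illustration; the
real code works on struct inode and i_count directly.

#include <linux/spinlock.h>
#include <linux/types.h>

struct toy_inode {
	spinlock_t	i_lock;		/* one lock for inode-local state */
	unsigned int	i_count;	/* reference count, under i_lock */
};

/* Take a reference under the single per-inode lock. */
static void toy_iget(struct toy_inode *inode)
{
	spin_lock(&inode->i_lock);
	inode->i_count++;
	spin_unlock(&inode->i_lock);
}

/* Drop a reference; return true if it was the last one. */
static bool toy_iput(struct toy_inode *inode)
{
	bool last;

	spin_lock(&inode->i_lock);
	last = (--inode->i_count == 0);
	spin_unlock(&inode->i_lock);
	return last;
}

Callers only ever order against i_lock, which is already part of the
documented inode locking rules, instead of learning the rules for a
brand-new lock.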
--
Frank Mayhar <fmayhar@...gle.com>
Google, Inc.