Message-ID: <20101015132938.GA3936@amd>
Date: Sat, 16 Oct 2010 00:29:38 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Nick Piggin <npiggin@...nel.dk>
Cc: Dave Chinner <david@...morbit.com>,
Christoph Hellwig <hch@...radead.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 17/18] fs: icache remove inode_lock
On Sat, Oct 16, 2010 at 12:03:00AM +1100, Nick Piggin wrote:
> On Fri, Oct 15, 2010 at 09:59:43PM +1100, Dave Chinner wrote:
> > My series uses i_lock only to protect i_state and i_ref. It does not
> > need to protect any more of the inode than that as other locks
> > protect the other list fields. As a result, it's still the innermost
> > lock and there are no trylocks in the code at all.
> We discussed it, and I didn't think latencies would be any worse
> than they are today. I agreed it may become an issue and pointed
> out ways forward to fix it.
BTW, if a few trylocks are your biggest issue, this is a joke. I told
you how they can be fixed with incremental patches on top of the series
(patches which whittle down the lock coverage of the old inode_lock,
and so IMO need to be done in small, well-bisectable chunks with good
rationale). So why you didn't submit a couple of incremental patches
to do just that is beyond me.
I've actually had prototypes in my tree to do that for a while, but
now I'm thinking that using RCU may be a better way to go, given that
Linus has agreed to it and we have sketched a design for
slab-free-RCU.
Either way, it's much easier to compare the pros and cons of each when
they are done incrementally on top of the existing base.
--