Message-ID: <20101021021751.GH12506@dastard>
Date: Thu, 21 Oct 2010 13:17:51 +1100
From: Dave Chinner <david@...morbit.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Trond Myklebust <Trond.Myklebust@...app.com>,
Peter Zijlstra <peterz@...radead.org>,
Eric Paris <eparis@...hat.com>, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org,
linux-fsdevel@...r.kernel.org, hch@...radead.org, zohar@...ibm.com,
warthog9@...nel.org, jmorris@...ei.org, kyle@...artin.ca,
hpa@...or.com, akpm@...ux-foundation.org, mingo@...e.hu,
viro@...iv.linux.org.uk
Subject: Re: [PATCH 5/6] IMA: use rbtree instead of radix tree for inode
information cache
On Wed, Oct 20, 2010 at 05:58:19PM -0700, Linus Torvalds wrote:
> On Wed, Oct 20, 2010 at 3:47 PM, Trond Myklebust
> <Trond.Myklebust@...app.com> wrote:
> >
> > That is a really interesting alternative to traditional locking. Could
> > we perhaps document it in Documentation/rbtree.txt?
>
> Well, I'd actually suggest avoiding it unless you feel that you
> _really_ need it. So I wouldn't want to really suggest it as a generic
> locking model - you had better have looked at pretty much all other
> alternatives first. And if that seqlock starts failing a lot under
> load, it ends up being _more_ expensive than just taking the lock in
> the first place.
Thanks for pointing out that caveat. I think the XFS buffer cache
case won't have that problem - I'm seeing better than a 100:1 ratio
of tree lookups (>1M/s) to modifications (<10k inserts/s) under
workloads that stress the cache on an 8-way VM...
Still, as the only method I've heard of that allows RCU-style
lockless lookups on rbtrees, it's probably worth documenting,
along with all the caveats about when not to use it.
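
For reference, here's roughly the pattern being discussed, as a
minimal sketch only - the types and helper names are illustrative,
not the actual XFS buffer cache code, and it assumes the tree nodes
are freed in an RCU-safe manner so a racing reader never follows a
stale pointer into freed memory:

#include <linux/rbtree.h>
#include <linux/seqlock.h>

struct obj_cache {
	struct rb_root	root;
	seqlock_t	lock;	/* writers take it, readers sample it */
};

struct obj {
	struct rb_node	rb_node;
	unsigned long	key;
};

static struct obj *__obj_find(struct rb_root *root, unsigned long key)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct obj *obj = rb_entry(n, struct obj, rb_node);

		if (key < obj->key)
			n = n->rb_left;
		else if (key > obj->key)
			n = n->rb_right;
		else
			return obj;
	}
	return NULL;
}

/*
 * Lockless lookup: walk the tree without taking the lock, and retry
 * the whole walk if a writer modified the tree underneath us.
 */
static struct obj *obj_find(struct obj_cache *cache, unsigned long key)
{
	struct obj *obj;
	unsigned int seq;

	do {
		seq = read_seqbegin(&cache->lock);
		obj = __obj_find(&cache->root, key);
	} while (read_seqretry(&cache->lock, seq));

	return obj;
}

/* Insert under the write side so racing readers notice and retry. */
static void obj_insert(struct obj_cache *cache, struct obj *new)
{
	struct rb_node **p, *parent = NULL;

	write_seqlock(&cache->lock);
	p = &cache->root.rb_node;
	while (*p) {
		struct obj *obj = rb_entry(*p, struct obj, rb_node);

		parent = *p;
		if (new->key < obj->key)
			p = &(*p)->rb_left;
		else
			p = &(*p)->rb_right;
	}
	rb_link_node(&new->rb_node, parent, p);
	rb_insert_color(&new->rb_node, &cache->root);
	write_sequnlock(&cache->lock);
}

As you say, if the read_seqretry() loop starts spinning because
writers are frequent, this ends up worse than just taking the lock -
in that case the reader should fall back to a locked lookup after a
failed pass rather than retrying indefinitely. It only pays off with
a heavily read-biased workload like the one above.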
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com