Message-ID: <1287574269.3488.11.camel@twins>
Date: Wed, 20 Oct 2010 13:31:09 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: Eric Paris <eparis@...hat.com>, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org,
linux-fsdevel@...r.kernel.org, hch@...radead.org, zohar@...ibm.com,
warthog9@...nel.org, jmorris@...ei.org, kyle@...artin.ca,
hpa@...or.com, akpm@...ux-foundation.org,
torvalds@...ux-foundation.org, mingo@...e.hu,
viro@...iv.linux.org.uk
Subject: Re: [PATCH 5/6] IMA: use rbtree instead of radix tree for inode
information cache
On Wed, 2010-10-20 at 10:17 +1100, Dave Chinner wrote:
> On Tue, Oct 19, 2010 at 06:58:39PM -0400, Eric Paris wrote:
> > @@ -36,12 +63,11 @@ struct ima_iint_cache *ima_iint_find_get(struct inode *inode)
> > struct ima_iint_cache *iint;
> >
> > rcu_read_lock();
> > - iint = radix_tree_lookup(&ima_iint_store, (unsigned long)inode);
> > - if (!iint)
> > - goto out;
> > - kref_get(&iint->refcount);
> > -out:
> > + iint = __ima_iint_find(inode);
> > + if (iint)
> > + kref_get(&iint->refcount);
> > rcu_read_unlock();
> > +
>
> This is wrong - the rbtree is protected only by the ima_iint_lock(),
> not RCU. Hence you can't do lockless lookups on an rbtree in this
> manner as they will race with inserts and deletes.
Correct. What can be made to work is combining RCU with a seqlock: retry
the lookup with read_seqretry(), so the seqcount catches races with
concurrent inserts and deletes, while RCU ensures you're not stepping on
already-freed memory.
So, tree modification does:

	/* assumes ima_iint_lock is converted from a spinlock to a seqlock_t */
	write_seqlock(&ima_iint_lock);
	/* frob RB-tree, using call_rcu() for frees where needed */
	write_sequnlock(&ima_iint_lock);

Lookup does:

	unsigned seq;

	rcu_read_lock();
again:
	seq = read_seqbegin(&ima_iint_lock);
	/* RB-tree lookup */
	if (read_seqretry(&ima_iint_lock, seq))
		goto again;
	rcu_read_unlock();

	return obj;
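
To make that concrete, here is a minimal self-contained sketch of the
pattern. The names (iint_tree, iint_seqlock), the reduced struct layout,
and the use of kref_get_unless_zero() are illustrative assumptions, not
the patch's actual code:

#include <linux/fs.h>
#include <linux/kref.h>
#include <linux/rbtree.h>
#include <linux/rcupdate.h>
#include <linux/seqlock.h>

struct ima_iint_cache {
	struct rb_node rb_node;
	struct inode *inode;
	struct kref refcount;
	struct rcu_head rcu;
	/* ... */
};

static struct rb_root iint_tree = RB_ROOT;
static DEFINE_SEQLOCK(iint_seqlock);

/* Writers serialize on the seqlock; readers never take it. */
static void iint_insert(struct ima_iint_cache *iint)
{
	struct rb_node **p, *parent = NULL;

	write_seqlock(&iint_seqlock);
	p = &iint_tree.rb_node;
	while (*p) {
		struct ima_iint_cache *cur;

		parent = *p;
		cur = rb_entry(parent, struct ima_iint_cache, rb_node);
		if (iint->inode < cur->inode)
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}
	rb_link_node(&iint->rb_node, parent, p);
	rb_insert_color(&iint->rb_node, &iint_tree);
	write_sequnlock(&iint_seqlock);
}

static void iint_erase(struct ima_iint_cache *iint)
{
	write_seqlock(&iint_seqlock);
	rb_erase(&iint->rb_node, &iint_tree);
	write_sequnlock(&iint_seqlock);
	/* freeing must be deferred, e.g. via call_rcu()/kfree_rcu() */
}

static struct ima_iint_cache *iint_lookup_get(struct inode *inode)
{
	struct ima_iint_cache *iint;
	struct rb_node *n;
	unsigned seq;

	rcu_read_lock();
again:
	iint = NULL;
	seq = read_seqbegin(&iint_seqlock);
	/* single loads of each pointer, as the tree may change under us */
	n = rcu_dereference_raw(iint_tree.rb_node);
	while (n) {
		struct ima_iint_cache *cur;

		cur = rb_entry(n, struct ima_iint_cache, rb_node);
		if (inode < cur->inode)
			n = rcu_dereference_raw(n->rb_left);
		else if (inode > cur->inode)
			n = rcu_dereference_raw(n->rb_right);
		else {
			iint = cur;
			break;
		}
	}
	/* a concurrent writer may have rotated nodes under us: retry */
	if (read_seqretry(&iint_seqlock, seq))
		goto again;
	/* RCU keeps the memory valid, but the object may be on its way out */
	if (iint && !kref_get_unless_zero(&iint->refcount))
		iint = NULL;
	rcu_read_unlock();

	return iint;
}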