Message-ID: <20140110093148.GA26159@infradead.org>
Date: Fri, 10 Jan 2014 01:31:48 -0800
From: Christoph Hellwig <hch@...radead.org>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Eric Paris <eparis@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Dave Chinner <david@...morbit.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
James Morris <james.l.morris@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Stephen Smalley <sds@...ho.nsa.gov>,
Theodore Ts'o <tytso@....edu>, stable <stable@...r.kernel.org>,
Paul Moore <paul@...l-moore.com>,
LKML <linux-kernel@...r.kernel.org>,
Matthew Wilcox <matthew@....cx>, xfs@....sgi.com
Subject: Re: [PATCH] vfs: Fix possible NULL pointer dereference in
inode_permission()
On Fri, Jan 10, 2014 at 12:06:42AM +0000, Al Viro wrote:
> Check what XFS is doing ;-/ That's where those call_rcu() have come from.
> Sure, we can separate the simple "just do call_rcu(...->free_inode)" case
> and hit it whenever full ->free_inode is there and ->destroy_inode isn't.
> Not too pretty, but removal of tons of boilerplate might be worth doing
> that anyway. But ->destroy_inode() is still needed for cases where fs
> has its own idea of inode lifetime rules. Again, check what XFS is doing
> in that area...
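[Editor's note: the dispatch Al sketches above — use a simple RCU-deferred ->free_inode when the filesystem provides one, and fall back to a full ->destroy_inode only when the fs has its own lifetime rules — could look roughly like the userspace model below. All types and names here are invented stand-ins for illustration, not the actual VFS code; fake_call_rcu() merely models call_rcu() by invoking the callback directly.]

```c
#include <stdlib.h>

/* Toy model of the proposed dispatch: prefer the simple
 * "just do call_rcu(...->free_inode)" case whenever ->free_inode
 * is there and ->destroy_inode isn't; keep ->destroy_inode for
 * filesystems with their own idea of inode lifetime rules.
 * All types here are invented for illustration. */

struct inode;

struct super_operations {
	void (*free_inode)(struct inode *);     /* simple RCU-freed case */
	void (*destroy_inode)(struct inode *);  /* fs-specific lifetime rules */
};

struct inode {
	const struct super_operations *s_op;
};

/* Stand-in for call_rcu(): a real kernel would defer the callback
 * until after a grace period; here we just invoke it directly. */
static void fake_call_rcu(struct inode *inode, void (*cb)(struct inode *))
{
	cb(inode);
}

static void generic_free_inode(struct inode *inode)
{
	free(inode);
}

static void destroy_inode(struct inode *inode)
{
	const struct super_operations *op = inode->s_op;

	if (op->destroy_inode) {
		/* fs has its own idea of inode lifetime (the XFS case) */
		op->destroy_inode(inode);
		return;
	}
	if (op->free_inode) {
		/* the common boilerplate case: free after an RCU grace period */
		fake_call_rcu(inode, op->free_inode);
		return;
	}
	fake_call_rcu(inode, generic_free_inode);
}
```

The point of the split is that the many filesystems whose ->destroy_inode is nothing but call_rcu() plus kmem_cache_free() boilerplate could drop it entirely and supply only the simple callback.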
Btw, I'd really love to get rid of the XFS ->destroy_inode abuse; it's
been a long-standing thorn in our side.
What's really needed there to make XFS behave more like everyone
else is a way for the filesystem to say: "I can't actually free this
inode right now, but I'll come back to you later".  That's what we
effectively do right now, except that we pretend the VFS inode gets
freed while its memory lives on (punt intended).
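[Editor's note: the "I'll come back to you later" idea could be sketched like the toy model below, where ->destroy_inode does not free the inode's memory but only queues it for a later reclaim pass run by the filesystem. All names are invented for illustration; XFS's real reclaim machinery is far more involved.]

```c
#include <stdlib.h>

/* Toy model of deferred inode freeing: the VFS hands the inode
 * back, the fs says "not yet" and keeps the memory alive on its
 * own list until a later reclaim pass.  All names invented. */

struct toy_inode {
	struct toy_inode *next;   /* reclaim list linkage */
	int reclaimable;
};

static struct toy_inode *reclaim_list;

/* The fs's ->destroy_inode: the VFS is done with the inode, but
 * the memory is only marked reclaimable, not freed. */
static void toy_destroy_inode(struct toy_inode *inode)
{
	inode->reclaimable = 1;
	inode->next = reclaim_list;
	reclaim_list = inode;
}

/* A later pass, run when the fs is ready (e.g. after journal
 * completion), that actually frees the memory.  Returns the
 * number of inodes freed. */
static int toy_reclaim_inodes(void)
{
	int n = 0;

	while (reclaim_list) {
		struct toy_inode *inode = reclaim_list;

		reclaim_list = inode->next;
		free(inode);
		n++;
	}
	return n;
}
```

With an interface like this the VFS would know the inode's memory outlives ->destroy_inode, instead of being lied to about it.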