Message-ID: <CA+55aFwcWcyRpwv3us9BbnsjUEfEr1mRjz4RWbcy_tSnZjc_Sw@mail.gmail.com>
Date: Tue, 29 Apr 2014 16:04:11 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Dave Chinner <david@...morbit.com>,
Miklos Szeredi <miklos@...redi.hu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: dcache shrink list corruption?
On Tue, Apr 29, 2014 at 2:48 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
>
> Ummm... You mean, have d_lookup() et.al. fail on something that is on
> a shrink list?
So I tried to see if that would work: just consider the dentry dead by the
time it hits the shrink list, and if somebody does a lookup on it, fail,
allocate a new dentry, and do a real lookup.
But at a minimum, we have "d_op->d_prune()" that could now be called for
the old dentry *after* a new dentry has been allocated.
Not to mention the inode not having been dropped. So it looks like a
disaster where the filesystem now ends up having concurrent "live"
dentries for the same entry. Even if one of them is on its way out,
it's still visible to the filesystem. That makes me very
uncomfortable.
I'm starting to think that Miklos' original patch (perhaps with the
spinlock split to at least be one per superblock) is the most
straightforward approach after all. It's annoying having to add locks
here, but the whole pruning path should not be a hotpath, so maybe
it's not actually a big deal.
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/