Message-ID: <20160705174207.GN14480@ZenIV.linux.org.uk>
Date: Tue, 5 Jul 2016 18:42:08 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Oleg Drokin <green@...uxhacker.ru>
Cc: Mailing List <linux-kernel@...r.kernel.org>,
"<linux-fsdevel@...r.kernel.org>" <linux-fsdevel@...r.kernel.org>
Subject: Re: More parallel atomic_open/d_splice_alias fun with NFS and
possibly more FSes.
On Tue, Jul 05, 2016 at 11:21:32AM -0400, Oleg Drokin wrote:
> > ...
> > - if (d_unhashed(*de)) {
> > + if (d_in_lookup(*de)) {
> > struct dentry *alias;
> >
> > alias = ll_splice_alias(inode, *de);
>
> This breaks Lustre because we now might progress further in this function
> without calling into ll_splice_alias and that's the only place that we do
> ll_d_init() that later code depends on so we violently crash next time
> we call e.g. d_lustre_revalidate() further down that code.
Huh? How the hell do those conditions differ there?
> Also I still wonder what's to stop d_alloc_parallel() from returning
> a hashed dentry with d_in_lookup() still true?
The fact that such dentries do not exist at any point?
> Certainly there's a big gap between hashing the dentry and dropping the PAR
> bit in there that I imagine might allow __d_lookup_rcu() to pick it up
> in between?
WTF? Where do you see that gap? in-lookup dentries get hashed only in one
place - __d_add(). And there (besides holding ->d_lock around both) we
drop that bit in flags *before* __d_rehash(). AFAICS, the situation with
barriers is OK there, due to lockref_get_not_dead() serving as an ACQUIRE
operation; I could be missing something subtle, but a wide gap... Where?