Message-Id: <6D633478-6B94-465E-84D7-C0BA59C5E5F5@linuxhacker.ru>
Date: Tue, 5 Jul 2016 12:33:09 -0400
From: Oleg Drokin <green@...uxhacker.ru>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Mailing List <linux-kernel@...r.kernel.org>,
"<linux-fsdevel@...r.kernel.org>" <linux-fsdevel@...r.kernel.org>
Subject: Re: More parallel atomic_open/d_splice_alias fun with NFS and possibly more FSes.
On Jul 5, 2016, at 9:51 AM, Al Viro wrote:
> On Tue, Jul 05, 2016 at 01:31:10PM +0100, Al Viro wrote:
>> On Tue, Jul 05, 2016 at 02:22:48AM -0400, Oleg Drokin wrote:
>>
>>>> + if (!(open_flags & O_CREAT) && !d_unhashed(dentry)) {
>>
>> s/d_unhashed/d_in_lookup/ in that.
>>
>>> So we come racing here from multiple threads (say 3 or more - we have seen this
>>> in the older crash reports, so totally possible)
>>>
>>>> + d_drop(dentry);
>>>
>>> One lucky one does this first, before the others perform the !d_unhashed check above.
>>> This makes the others not enter here.
>>>
>>> And we are back to the original problem of multiple threads trying to instantiate
>>> the same dentry as before.
>>
>> Yep. See above - it should've been using d_in_lookup() in the first place,
>> through the entire nfs_atomic_open(). Same in the Lustre part of fixes,
>> obviously.
>
> See current #for-linus for hopefully fixed variants (both lustre and nfs)
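Just to make sure I read the d_in_lookup() version right, here is the shape of the
check as I understand it (a sketch only; the actual code in #for-linus may differ
around it):

	if (!(open_flags & O_CREAT) && !d_in_lookup(dentry)) {
		/*
		 * Unlike d_unhashed(), the DCACHE_PAR_LOOKUP flag behind
		 * d_in_lookup() is set only by d_alloc_parallel() and cannot
		 * be flipped by a racing thread's d_drop(), so every thread
		 * that was handed an already-hashed dentry takes this branch,
		 * instead of one lucky d_drop() diverting the rest back into
		 * the multiple-instantiation path.
		 */
		d_drop(dentry);
		...
	}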
So at first it looked like we just needed another d_init in the other arm
of that "if (d_in_lookup())" statement, but alas.
Also, the patch that changed the d_unhashed() check to d_in_lookup() now leaves
a stale comment:
/* Only hash *de if it is unhashed (new dentry).
* Atomic_open may passing hashed dentries for open.
*/
if (d_in_lookup(*de)) {
Since we no longer check for d_unhashed(), what would be a better choice of words here?
"Only hash *de if it is a new dentry coming from lookup"?
This also makes me question the whole thing some more. We are definitely in a lookup
when this hits, so the dentry is already new, yet it does not register as
d_in_lookup(). That also means that by skipping ll_splice_alias() we are failing
to hash it, and that causes needless lookups later?
Looking back through the commit history, d_in_lookup() is supposed to tell us
that we are in the middle of a lookup. How can we be in the middle of the lookup
path, then, and not have this set on the dentry? We know the dentry was not
substituted with anything here because we did not call into ll_splice_alias().
So what's going on then?
Here's a backtrace:
[ 146.045148] [<ffffffffa017baa6>] lbug_with_loc+0x46/0xb0 [libcfs]
[ 146.045158] [<ffffffffa05baef3>] ll_lookup_it_finish+0x713/0xaa0 [lustre]
[ 146.045160] [<ffffffff810e6dcd>] ? trace_hardirqs_on+0xd/0x10
[ 146.045167] [<ffffffffa05bb51b>] ll_lookup_it+0x29b/0x710 [lustre]
[ 146.045173] [<ffffffffa05b8830>] ? md_set_lock_data.part.25+0x60/0x60 [lustre]
[ 146.045179] [<ffffffffa05bc6a4>] ll_lookup_nd+0x84/0x190 [lustre]
[ 146.045180] [<ffffffff81276e94>] __lookup_hash+0x64/0xa0
[ 146.045181] [<ffffffff810e1b88>] ? down_write_nested+0xa8/0xc0
[ 146.045182] [<ffffffff8127d55f>] do_unlinkat+0x1bf/0x2f0
[ 146.045183] [<ffffffff8127e12b>] SyS_unlinkat+0x1b/0x30
[ 146.045185] [<ffffffff8188b3bc>] entry_SYSCALL_64_fastpath+0x1f/0xbd
__lookup_hash() does a plain d_alloc() (not d_alloc_parallel()) and falls through into
the filesystem's ->lookup(). So the dots do not connect.
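To spell out why (quoting the relevant bits from memory, so the details may be off):
d_in_lookup() just tests a flag that only d_alloc_parallel() sets:

	/* include/linux/dcache.h */
	static inline bool d_in_lookup(struct dentry *dentry)
	{
		return dentry->d_flags & DCACHE_PAR_LOOKUP;
	}

whereas __lookup_hash() allocates with plain d_alloc():

	/* fs/namei.c, roughly */
	dentry = d_alloc(base, name);
	if (unlikely(!dentry))
		return ERR_PTR(-ENOMEM);

	old = dir->i_op->lookup(dir, dentry, flags);

so a dentry that reaches ->lookup() via do_unlinkat() -> __lookup_hash() never has
DCACHE_PAR_LOOKUP set, and d_in_lookup() is false even though we are squarely in the
middle of a lookup.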
The more I look at it, the more I suspect it's wrong.
Everything else you changed in that patch was in *atomic_open(), with a well-understood
impact. ll_lookup_it_finish(), on the other hand, is on the generic lookup path,
not just for atomic opens.
I took out that part of your patch and the problems seem to have gone away.