Message-ID: <875ypf8s5m.fsf@stepbren-lnx.us.oracle.com>
Date: Tue, 15 Feb 2022 18:24:53 -0800
From: Stephen Brennan <stephen.s.brennan@...cle.com>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: linux-kernel@...r.kernel.org, Luis Chamberlain <mcgrof@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
Arnd Bergmann <arnd@...db.de>,
Amir Goldstein <amir73il@...il.com>
Subject: Re: [PATCH v2 1/4] dcache: sweep cached negative dentries to the
end of list of siblings
Hi Al,
Al Viro <viro@...iv.linux.org.uk> writes:
> On Wed, Feb 09, 2022 at 03:14:03PM -0800, Stephen Brennan wrote:
>
>> +static void sweep_negative(struct dentry *dentry)
>> +{
>> + struct dentry *parent;
>> +
>> + rcu_read_lock();
>> + parent = lock_parent(dentry);
>> + if (!parent) {
>> + rcu_read_unlock();
>> + return;
>> + }
>> +
>> + /*
>> + * If we did not hold a reference to dentry (as in the case of dput),
>> + * and dentry->d_lock was dropped in lock_parent(), then we could now be
>> + * holding onto a dead dentry. Be careful to check d_count and unlock
>> + * before dropping RCU lock, otherwise we could corrupt freed memory.
>> + */
>> + if (!d_count(dentry) && d_is_negative(dentry) &&
>> + !d_is_tail_negative(dentry)) {
>> + dentry->d_flags |= DCACHE_TAIL_NEGATIVE;
>> + list_move_tail(&dentry->d_child, &parent->d_subdirs);
>> + }
>> +
>> + spin_unlock(&parent->d_lock);
>> + spin_unlock(&dentry->d_lock);
>> + rcu_read_unlock();
>> +}
>
> I'm not sure if it came up the last time you'd posted this series
> (and I apologize if it had and I forgot the explanation), but... consider
> the comment in dentry_unlist(). What's to prevent the race described there
> making d_walk() skip a part of tree, by replacing the "lseek moving cursor
> in just the wrong moment" with "dput moving the negative dentry right next
> to the one being killed to the tail of the list"?
This did not come up previously, so thanks for pointing this out.
>
> The race in question:
> d_walk() is leaving a subdirectory. We are here:
> rcu_read_lock();
> ascend:
> if (this_parent != parent) {
>
> It isn't - we are not back to the root of tree being walked.
> At this point this_parent is the directory we'd just finished looking into.
>
> struct dentry *child = this_parent;
> this_parent = child->d_parent;
>
> ... and now child points to it, and this_parent points to its parent.
>
> spin_unlock(&child->d_lock);
>
> No locks held. Another CPU gets through successful rmdir(). child gets
> unhashed and dropped. It's off the ->d_subdirs of this_parent; its
> ->d_child.next is still pointing where it used to, and whatever it points
> to won't be physically freed until rcu_read_unlock().
>
> Moreover, in the meanwhile this next sibling (negative, pinned) got dput().
> And had been moved to the tail of the this_parent->d_subdirs. Since
> its ->d_child.prev does *NOT* point to child (which is off-list, about to
> be freed shortly, etc.), child->d_child.next is not modified - it still
> points to that (now moved) sibling.
It seems to me that if we had taken a reference on child (incrementing
its refcount) before unlocking it, then dentry_unlist could never have
been called, since we would never have made it into __dentry_kill.
child would still be on the list, and any list updates made by a cursor
(or by sweep_negative) would then be reflected in child->d_child.next.
But dput is definitely not safe while holding a lock on the parent
dentry (even more so now thanks to my patch), so that is out of the
question.
Would dput_to_list be an appropriate solution to that issue? We could
maintain a dispose list in d_walk, and any dput which would actually
drop the refcount to 0 would instead put the dentry on that list, to be
handled after d_walk is done. It shouldn't be that many dentries
anyway.
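Roughly what I have in mind -- only an untested sketch, assuming that
dput_to_list() and shrink_dentry_list() keep their current shapes as
static helpers in fs/dcache.c, and with the seqretry and sibling-scan
details elided:

	static void d_walk(struct dentry *parent, void *data,
			   enum d_walk_ret (*enter)(void *, struct dentry *))
	{
		LIST_HEAD(dispose);	/* dentries whose last ref we dropped */
		...
	ascend:
		if (this_parent != parent) {
			struct dentry *child = this_parent;
			this_parent = child->d_parent;

			dget_dlock(child);	/* pin: no __dentry_kill() now */
			spin_unlock(&child->d_lock);
			spin_lock(&this_parent->d_lock);

			/* ... rename_lock seqretry check as before ... */

			/*
			 * child stayed on ->d_subdirs while we were unlocked,
			 * so any moves of its siblings (cursors or
			 * sweep_negative) kept child->d_child.next current.
			 * Drop the pin without risking __dentry_kill() under
			 * this_parent->d_lock by deferring to the list.
			 */
			dput_to_list(child, &dispose);

			/* ... sibling scan and goto resume as before ... */
		}
		...
	out_unlock:
		spin_unlock(&this_parent->d_lock);
		done_seqretry(&rename_lock, seq);
		shrink_dentry_list(&dispose);	/* all locks dropped now */
	}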
>
> spin_lock(&this_parent->d_lock);
> Got it.
>
> /* might go back up the wrong parent if we have had a rename. */
> if (need_seqretry(&rename_lock, seq))
> goto rename_retry;
>
> Nope, hadn't happened.
>
> /* go into the first sibling still alive */
> do {
> next = child->d_child.next;
> ... and this is the moved sibling, now in the end of the ->d_subdirs.
>
> if (next == &this_parent->d_subdirs)
> goto ascend;
>
> No, it is not - it's the last element of the list, not its anchor.
>
> child = list_entry(next, struct dentry, d_child);
>
> Our moved negative dentry.
>
> } while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
>
> Not killed, that one.
> rcu_read_unlock();
> goto resume;
>
> ... and since that sucker has no children, we proceed to look at it,
> ascend and now we are at the end of this_parent->d_subdirs. And we
> ascend out of it, having entirely skipped all branches that used to
> be between the rmdir victim and the end of the parent's ->d_subdirs.
>
> What am I missing here? Unlike the trick we used with cursors (see
> dentry_unlist()) we can't predict who won't get moved in this case...
I don't think you're missing anything, unfortunately. Maybe if my above
idea pans out we could prevent this, but without it, reordering
dentries within the d_subdirs list and d_walk are fundamentally
opposing features. The cursor trick is neat, but it isn't applicable
here.
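(For reference, the trick I mean, paraphrased from dentry_unlist() in
fs/dcache.c rather than quoted verbatim: the dying dentry's stale
->d_child.next is stepped past any cursors, which suffices only as long
as cursors are the sole entries that can move behind d_walk()'s back.)

	/* paraphrased sketch of dentry_unlist(), not a verbatim quote */
	dentry->d_flags |= DCACHE_DENTRY_KILLED;
	if (unlikely(list_empty(&dentry->d_child)))
		return;
	__list_del_entry(&dentry->d_child);	/* ->d_child.next left stale */
	/*
	 * A d_walk() ascending out of us may still follow the stale
	 * ->d_child.next.  Cursors are the only siblings that can move
	 * while the parent is unlocked, so advance the stale pointer past
	 * them; sweeping negatives to the tail breaks that assumption.
	 */
	while (dentry->d_child.next != &parent->d_subdirs) {
		struct dentry *next = list_entry(dentry->d_child.next,
						 struct dentry, d_child);
		if (likely(!(next->d_flags & DCACHE_DENTRY_CURSOR)))
			break;
		dentry->d_child.next = next->d_child.next;
	}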
>
> Note that treating "child has DCACHE_DENTRY_KILLED" the same as we do
> for rename_lock mismatches would not work unless you grab the spinlock
> component of rename_lock every time dentry becomes positive. Which
> is obviously not feasible (it's a system-wide lock and cacheline
> pingpong alone would hurt us very badly, not to mention the contention
> issues due to the frequency of grabbing it going up by several orders
> of magnitude).
You won't catch me advocating for a global lock like that :P
I'm going to keep looking into this, since some of our high-uptime
customer machines have steady workloads which just keep churning out
negative dentries, usually concentrated in a particular subdirectory.
If the machine has oodles of free memory, then we just let them create
dentries like candy until something eventually topples over, and what
topples tends to be something like this.
Thanks,
Stephen