Message-ID: <87fvlm860e.fsf_-_@x220.int.ebiederm.org>
Date: Wed, 09 Apr 2014 15:58:25 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"Serge E. Hallyn" <serge@...lyn.com>,
Linux-Fsdevel <linux-fsdevel@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...capital.net>,
Rob Landley <rob@...dley.net>,
Miklos Szeredi <miklos@...redi.hu>,
Christoph Hellwig <hch@...radead.org>,
Karel Zak <kzak@...hat.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
Fengguang Wu <fengguang.wu@...el.com>
Subject: [RFC][PATCH] vfs: In mntput run deactivate_super on a shallow stack.

mntput, as part of path_put, is called from all over the vfs, sometimes,
as in the case of symlink chasing, from rather deep call chains.  During
filesystem unmount, with the right set of races, those innocuous little
mntput calls that normally take very little stack space can become
monsters calling deactivate_super, which can take up 3k or more of stack
as synchronous filesystem I/O is performed through multiple levels of
the I/O stack.
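
For illustration, one such chain might look roughly like this (an
approximate sketch of the 3.14-era VFS paths, not a captured trace):

  link_path_walk()
    put_link()
      path_put()
        mntput()
          mntput_no_expire()
            deactivate_super()
              deactivate_locked_super()
                fs->kill_sb()              /* e.g. kill_block_super() */
                  generic_shutdown_super()
                    sync_filesystem()      /* synchronous writeback */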

Avoid calling deactivate_super from a deep stack by converting mntput
to use task_work_add when the mnt_count goes to 0.  The filesystem is
still unmounted synchronously, preserving the semantics that system
calls like umount require.

Signed-off-by: "Eric W. Biederman" <ebiederm@...ssion.com>
---
This patch has only seen light testing so far, but empirically it
appears to solve the stack depth problem.  A simple umount of ext4 went
from having 5162 stack bytes untouched to having 5568 stack bytes
untouched, freeing up at least 406 bytes of stack in that simple case.
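
As background, here is a minimal standalone sketch of the task_work
pattern the patch relies on.  It is illustrative only and not part of
the patch: struct deferred_release, deferred_release_fn and
queue_deferred_release are made-up names, but init_task_work() and
task_work_add() are used with their current signatures, and passing
true for notify makes the callback run when the task next returns to
userspace, where almost none of the kernel stack is in use.

#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/task_work.h>

/* Hypothetical object whose teardown should run on a shallow stack. */
struct deferred_release {
	struct callback_head cb;
	/* ... resources to tear down ... */
};

static void deferred_release_fn(struct callback_head *head)
{
	struct deferred_release *d =
		container_of(head, struct deferred_release, cb);

	/* Runs in the context of the same task, just before it returns
	 * to userspace, so the call chain above us is very short.
	 */
	kfree(d);
}

static void queue_deferred_release(struct deferred_release *d)
{
	init_task_work(&d->cb, deferred_release_fn);
	/* task_work_add() fails (-ESRCH) once the task is exiting,
	 * in which case fall back to releasing directly.
	 */
	if (task_work_add(current, &d->cb, true))
		deferred_release_fn(&d->cb);
}

The patch below wires this same pattern into mntput_no_expire() via the
mnt_callback member added to struct mount.
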
fs/mount.h | 2 +-
fs/namespace.c | 24 ++++++++++++++++++------
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/fs/mount.h b/fs/mount.h
index aa3c0aa473df..4e78ca90467f 100644
--- a/fs/mount.h
+++ b/fs/mount.h
@@ -30,7 +30,7 @@ struct mount {
struct mount *mnt_parent;
struct dentry *mnt_mountpoint;
struct vfsmount mnt;
- struct rcu_head mnt_rcu;
+ struct callback_head mnt_callback;
#ifdef CONFIG_SMP
struct mnt_pcp __percpu *mnt_pcp;
#else
diff --git a/fs/namespace.c b/fs/namespace.c
index c809205f30df..686afe9942bc 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -24,6 +24,7 @@
#include <linux/proc_ns.h>
#include <linux/magic.h>
#include <linux/bootmem.h>
+#include <linux/task_work.h>
#include "pnode.h"
#include "internal.h"
@@ -981,7 +982,7 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
static void delayed_free(struct rcu_head *head)
{
- struct mount *mnt = container_of(head, struct mount, mnt_rcu);
+ struct mount *mnt = container_of(head, struct mount, mnt_callback);
kfree(mnt->mnt_devname);
#ifdef CONFIG_SMP
free_percpu(mnt->mnt_pcp);
@@ -989,6 +990,17 @@ static void delayed_free(struct rcu_head *head)
kmem_cache_free(mnt_cache, mnt);
}
+static void mntput_delayed(struct callback_head *head)
+{
+ struct mount *mnt = container_of(head, struct mount, mnt_callback);
+
+ fsnotify_vfsmount_delete(&mnt->mnt);
+ dput(mnt->mnt.mnt_root);
+ deactivate_super(mnt->mnt.mnt_sb);
+ mnt_free_id(mnt);
+ call_rcu(&mnt->mnt_callback, delayed_free);
+}
+
static void mntput_no_expire(struct mount *mnt)
{
put_again:
@@ -1034,11 +1046,11 @@ put_again:
* so mnt_get_writers() below is safe.
*/
WARN_ON(mnt_get_writers(mnt));
- fsnotify_vfsmount_delete(&mnt->mnt);
- dput(mnt->mnt.mnt_root);
- deactivate_super(mnt->mnt.mnt_sb);
- mnt_free_id(mnt);
- call_rcu(&mnt->mnt_rcu, delayed_free);
+ /* The stack may be deep here so perform this where the stack
+ * is guaranteed to be shallow.
+ */
+ init_task_work(&mnt->mnt_callback, mntput_delayed);
+ WARN_ON(task_work_add(current, &mnt->mnt_callback, true) != 0);
}
void mntput(struct vfsmount *mnt)
--
1.9.1