lists.openwall.net: Open Source and information security mailing list archives

Date:   Mon, 16 Oct 2017 13:52:16 -0500
From:   Leon Yang <leon.gh.yang@...il.com>
To:     Alexander Viro <viro@...iv.linux.org.uk>,
        linux-fsdevel@...r.kernel.org (open list:FILESYSTEMS (VFS and
        infrastructure)), linux-kernel@...r.kernel.org (open list)
Cc:     Leon Yang <leon.gh.yang@...il.com>
Subject: [PATCH] Batch unmount cleanup

From: Leon Yang <leon.gh.yang@...il.com>

Each time the unmounted list is cleaned up, synchronize_rcu() is
called, which is relatively costly. Scheduling the cleanup on a
workqueue, similar to what net/core/net_namespace.c:cleanup_net
does, makes unmounting faster without adding much overhead. This
is especially useful for servers running many containers, where
mounting and unmounting happen frequently.

Signed-off-by: Leon Yang <leon.gh.yang@...il.com>
---
 fs/namespace.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index 3b601f1..864ce7e 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -68,6 +68,7 @@ static int mnt_group_start = 1;
 static struct hlist_head *mount_hashtable __read_mostly;
 static struct hlist_head *mountpoint_hashtable __read_mostly;
 static struct kmem_cache *mnt_cache __read_mostly;
+static struct workqueue_struct *unmounted_wq;
 static DECLARE_RWSEM(namespace_sem);
 
 /* /sys/fs */
@@ -1409,22 +1410,29 @@ EXPORT_SYMBOL(may_umount);
 
 static HLIST_HEAD(unmounted);	/* protected by namespace_sem */
 
-static void namespace_unlock(void)
+static void cleanup_unmounted(struct work_struct *work)
 {
 	struct hlist_head head;
+	down_write(&namespace_sem);
 
 	hlist_move_list(&unmounted, &head);
 
 	up_write(&namespace_sem);
 
-	if (likely(hlist_empty(&head)))
-		return;
-
 	synchronize_rcu();
 
 	group_pin_kill(&head);
 }
 
+static DECLARE_WORK(unmounted_cleanup_work, cleanup_unmounted);
+
+static void namespace_unlock(void)
+{
+	if (unlikely(!hlist_empty(&unmounted)))
+		queue_work(unmounted_wq, &unmounted_cleanup_work);
+	up_write(&namespace_sem);
+}
+
 static inline void namespace_lock(void)
 {
 	down_write(&namespace_sem);
@@ -3276,6 +3284,17 @@ void __init mnt_init(void)
 	init_mount_tree();
 }
 
+static int __init unmounted_wq_init(void)
+{
+	/* Create workqueue for cleanup */
+	unmounted_wq = create_singlethread_workqueue("unmounted");
+	if (!unmounted_wq)
+		panic("Could not create unmounted workq");
+	return 0;
+}
+
+pure_initcall(unmounted_wq_init);
+
 void put_mnt_ns(struct mnt_namespace *ns)
 {
 	if (!atomic_dec_and_test(&ns->count))
-- 
2.7.4
