Message-ID: <20220214155536.1e0da8b6@imladris.surriel.com>
Date:   Mon, 14 Feb 2022 15:55:36 -0500
From:   Rik van Riel <riel@...riel.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Chris Mason <clm@...com>, Giuseppe Scrivano <gscrivan@...hat.com>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH RFC fs/namespace] Make kern_unmount() use
 synchronize_rcu_expedited()

On Mon, 14 Feb 2022 11:44:40 -0800
"Paul E. McKenney" <paulmck@...nel.org> wrote:
> On Mon, Feb 14, 2022 at 07:26:49PM +0000, Chris Mason wrote:

> Moving from synchronize_rcu() to synchronize_rcu_expedited() does buy
> you at least an order of magnitude.  But yes, it should be possible to
> get rid of all but one call per batch, which would be better.  Maybe
> a bit more complicated, but probably not that much.

It doesn't look too bad, except for the include of ../fs/mount.h.

I'm hoping somebody has a better idea on how to deal with that.
Do we need a kern_unmount() variant that doesn't do the RCU wait,
or should it get a parameter, or something else?
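One possible shape, sketched here purely as an illustration (kern_unmount_begin()/kern_unmount_end() are invented names, not existing kernel APIs; splitting kern_unmount() this way is an assumption, not what the patch does):

```c
/* Hypothetical sketch only: split kern_unmount() so a caller can pay
 * one RCU grace period for a whole batch of mounts instead of one per
 * unmount.  Both function names are made up for illustration. */

/* Disconnect the mount from RCU-protected lookups; no grace period yet. */
void kern_unmount_begin(struct vfsmount *mnt)
{
	if (!IS_ERR_OR_NULL(mnt))
		real_mount(mnt)->mnt_ns = NULL;
}

/* After the caller has done synchronize_rcu() once for the whole
 * batch, drop the final reference. */
void kern_unmount_end(struct vfsmount *mnt)
{
	if (!IS_ERR_OR_NULL(mnt))
		mntput(mnt);
}
```

Keeping the real_mount() poke behind a helper in fs/namespace.c would also avoid the ../fs/mount.h include in ipc/namespace.c.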

Is there an ordering requirement between the synchronize_rcu() call
and zeroing out n->mq_mnt->mnt_ns?

What other changes do we need to make everything right?

The change below also fixes the issue that to-be-freed items queued
up while the free_ipc() work function is running do not result in
the work item being enqueued again.

This patch is still totally untested because the 4-year-old is
at home today :)


diff --git a/ipc/namespace.c b/ipc/namespace.c
index 7bd0766ddc3b..321cbda17cfb 100644
--- a/ipc/namespace.c
+++ b/ipc/namespace.c
@@ -17,6 +17,7 @@
 #include <linux/proc_ns.h>
 #include <linux/sched/task.h>
 
+#include "../fs/mount.h"
 #include "util.h"
 
 static struct ucounts *inc_ipc_namespaces(struct user_namespace *ns)
@@ -117,10 +118,7 @@ void free_ipcs(struct ipc_namespace *ns, struct ipc_ids *ids,
 
 static void free_ipc_ns(struct ipc_namespace *ns)
 {
-	/* mq_put_mnt() waits for a grace period as kern_unmount()
-	 * uses synchronize_rcu().
-	 */
-	mq_put_mnt(ns);
+	mntput(ns->mq_mnt);
 	sem_exit_ns(ns);
 	msg_exit_ns(ns);
 	shm_exit_ns(ns);
@@ -134,11 +132,19 @@ static void free_ipc_ns(struct ipc_namespace *ns)
 static LLIST_HEAD(free_ipc_list);
 static void free_ipc(struct work_struct *unused)
 {
-	struct llist_node *node = llist_del_all(&free_ipc_list);
+	struct llist_node *node;
 	struct ipc_namespace *n, *t;
 
-	llist_for_each_entry_safe(n, t, node, mnt_llist)
-		free_ipc_ns(n);
+	while ((node = llist_del_all(&free_ipc_list))) {
+		llist_for_each_entry(n, node, mnt_llist)
+			real_mount(n->mq_mnt)->mnt_ns = NULL;
+
+		/* Wait for the last users to have gone away. */
+		synchronize_rcu();
+
+		llist_for_each_entry_safe(n, t, node, mnt_llist)
+			free_ipc_ns(n);
+	}
 }
 
 /*
