Message-ID: <20220815172620.5d7d4a78@imladris.surriel.com>
Date:   Mon, 15 Aug 2022 17:26:20 -0400
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     Al Viro <viro@...iv.linux.org.uk>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Alexey Gladkov <legion@...nel.org>, linux-fs@...r.kernel.org,
        kernel-team@...com
Subject: [PATCH RFC] fs,ipc: batch RCU synchronization in free_ipc

TL;DR: it runs better than it looks, and I am looking for ideas on how to make it look better.

---8<---

The following program will fail with ENOSPC sooner or later, because
ipc namespaces are currently freed at a rate of only one per RCU grace
period; the loop creates them much faster than that, so unfreed
namespaces pile up until the per-user namespace limit is hit.

#define _GNU_SOURCE
#include <errno.h>
#include <error.h>
#include <sched.h>
#include <stdlib.h>

int main(void)
{
	int i;

	for (i = 0; i < 1000000; i++) {
		if (unshare(CLONE_NEWIPC) < 0)
			error(EXIT_FAILURE, errno, "unshare");
	}

	return 0;
}
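(To reproduce: unshare(CLONE_NEWIPC) requires CAP_SYS_ADMIN, so build
the program with gcc and run it as root.)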

There are various ways to solve this issue, but they all come down to
batching the RCU synchronization, so that multiple ipc namespaces can
be freed in the same RCU grace period.
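
Conceptually (an illustration only, not part of the diff below; nr and
mounts[] are stand-ins), batching means going from

	/* One RCU grace period per ipc namespace. */
	kern_unmount(ns->mq_mnt);

to

	/* One RCU grace period shared by all nr namespaces. */
	kern_unmount_array(mounts, nr);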

Unfortunately there seems to be a tradeoff: either temporarily
allocate things on the stack, at the cost of slightly uglier code, or
add a struct rcu_work to struct vfsmount.
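
For comparison, the rcu_work variant would look something like this
(hypothetical sketch only; the free_rwork field and mnt_free_workfn
are invented names, and struct vfsmount has no such field today):

	static void mnt_free_workfn(struct work_struct *work)
	{
		struct rcu_work *rwork = to_rcu_work(work);
		struct vfsmount *mnt = container_of(rwork, struct vfsmount,
						    free_rwork);

		/* The grace period has elapsed; drop the reference. */
		mntput(mnt);
	}

	/* In kern_unmount(), instead of synchronize_rcu() + mntput(): */
	INIT_RCU_WORK(&mnt->free_rwork, mnt_free_workfn);
	queue_rcu_work(system_unbound_wq, &mnt->free_rwork);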

I am not entirely happy with the way this code looks, and I am hoping
for suggestions on how to improve it.

However, I am quite happy with how this code runs. Between batching
the kern_unmount RCU synchronization, moving to the expedited RCU
grace period in kern_unmount_array, and grabbing entries off the
llist that were added after the work item started, freeing ipc
namespaces is 2-3 orders of magnitude faster than before, and is
able to keep up with the test case above.

Signed-off-by: Rik van Riel <riel@...riel.com>
---
 ipc/namespace.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/ipc/namespace.c b/ipc/namespace.c
index e1fcaedba4fa..ba33015f1a23 100644
--- a/ipc/namespace.c
+++ b/ipc/namespace.c
@@ -127,10 +127,6 @@ void free_ipcs(struct ipc_namespace *ns, struct ipc_ids *ids,
 
 static void free_ipc_ns(struct ipc_namespace *ns)
 {
-	/* mq_put_mnt() waits for a grace period as kern_unmount()
-	 * uses synchronize_rcu().
-	 */
-	mq_put_mnt(ns);
 	sem_exit_ns(ns);
 	msg_exit_ns(ns);
 	shm_exit_ns(ns);
@@ -144,14 +140,32 @@ static void free_ipc_ns(struct ipc_namespace *ns)
 	kfree(ns);
 }
 
+#define FREE_IPC_BATCH 64
 static LLIST_HEAD(free_ipc_list);
 static void free_ipc(struct work_struct *unused)
 {
-	struct llist_node *node = llist_del_all(&free_ipc_list);
-	struct ipc_namespace *n, *t;
+	struct ipc_namespace *ipc_nses[FREE_IPC_BATCH];
+	struct vfsmount *mounts[FREE_IPC_BATCH];
+	int i, j;
+
+ next_batch:
+	for (i = 0; !llist_empty(&free_ipc_list) && i < FREE_IPC_BATCH; i++) {
+		struct llist_node *node = llist_del_first(&free_ipc_list);
+		struct ipc_namespace *n = llist_entry(node,
+						      struct ipc_namespace,
+						      mnt_llist);
+		ipc_nses[i] = n;
+		mounts[i] = n->mq_mnt;
+	}
+
+	/* Consolidate the RCU synchronization across the whole batch. */
+	kern_unmount_array(mounts, i);
+
+	for (j = 0; j < i; j++)
+		free_ipc_ns(ipc_nses[j]);
 
-	llist_for_each_entry_safe(n, t, node, mnt_llist)
-		free_ipc_ns(n);
+	if (i == FREE_IPC_BATCH)
+		goto next_batch;
 }
 
 /*
-- 
2.24.1
