Date:   Mon, 30 Oct 2023 20:14:18 -0400
From:   Waiman Long <longman@...hat.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>
Cc:     linux-kernel@...r.kernel.org, Phil Auld <pauld@...hat.com>,
        kernel test robot <oliver.sang@...el.com>,
        aubrey.li@...ux.intel.com, yu.c.chen@...el.com,
        Waiman Long <longman@...hat.com>
Subject: [PATCH] sched: Don't call any kfree*() API in do_set_cpus_allowed()

Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in
do_set_cpus_allowed()") added a kfree() call to free any user-provided
affinity mask, if present. This was later changed to use kfree_rcu()
in commit 9a5418bc48ba ("sched/core: Use kfree_rcu() in
do_set_cpus_allowed()") to avoid a circular locking dependency
problem.

It turns out that even kfree_rcu() is not safe for avoiding the
circular locking problem. As reported by the kernel test robot, the
following circular locking dependency still exists:

  &rdp->nocb_lock --> rcu_node_0 --> &rq->__lock

So no kfree*() API can be used in do_set_cpus_allowed(). To prevent a
memory leak, the unused user-provided affinity mask is now saved in a
lockless list to be reused by subsequent sched_setaffinity() calls.

With kfree_rcu() gone, the internal cpumask_rcuhead union can be
removed as well, since a lockless list entry only needs to hold a
single pointer.
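
For illustration only (not part of the patch): below is a minimal
userspace sketch of the same recycling idea, using a Treiber-stack-style
lockless free list built on C11 atomics. The names free_head, mask_get()
and mask_put() are invented for the sketch; the kernel code uses the
llist_add()/llist_del_first() API shown in the diff.

  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define MASK_BYTES 128          /* stand-in for cpumask_size() */

  struct free_node {
          struct free_node *next;
  };

  /* Lockless LIFO of recycled buffers, playing the role of cpumask_free_lhead. */
  static _Atomic(struct free_node *) free_head;

  /* "Free" path: park the buffer on the list instead of calling free(). */
  static void mask_put(void *buf)
  {
          struct free_node *n = buf;
          struct free_node *old = atomic_load(&free_head);

          do {
                  n->next = old;
          } while (!atomic_compare_exchange_weak(&free_head, &old, n));
  }

  /*
   * Allocation path: reuse a parked buffer if one exists, otherwise fall
   * back to malloc().  Like llist_del_first(), this simple pop assumes
   * removers do not race with each other (here everything is
   * single-threaded).
   */
  static void *mask_get(void)
  {
          struct free_node *n = atomic_load(&free_head);

          while (n && !atomic_compare_exchange_weak(&free_head, &n, n->next))
                  ;
          return n ? (void *)n : malloc(MASK_BYTES);
  }

  int main(void)
  {
          void *a = mask_get();   /* list empty: falls back to malloc() */

          mask_put(a);            /* parked, not freed */
          printf("second mask_get() recycled the buffer: %s\n",
                 mask_get() == a ? "yes" : "no");
          return 0;
  }

As with the kernel's llist, concurrent "put" callers are fine, while the
"get" side of this sketch assumes removers are serialized.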

Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Reported-by: kernel test robot <oliver.sang@...el.com>
Closes: https://lore.kernel.org/oe-lkp/202310302207.a25f1a30-oliver.sang@intel.com
Signed-off-by: Waiman Long <longman@...hat.com>
---
 kernel/sched/core.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 802551e0009b..f536d11a284e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2789,6 +2789,11 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 		set_next_task(rq, p);
 }
 
+/*
+ * A lockless list of free cpumasks to be used for user cpumasks.
+ */
+static LLIST_HEAD(cpumask_free_lhead);
+
 /*
  * Used for kthread_bind() and select_fallback_rq(), in both cases the user
  * affinity (if any) should be destroyed too.
@@ -2800,29 +2805,29 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 		.user_mask = NULL,
 		.flags     = SCA_USER,	/* clear the user requested mask */
 	};
-	union cpumask_rcuhead {
-		cpumask_t cpumask;
-		struct rcu_head rcu;
-	};
 
 	__do_set_cpus_allowed(p, &ac);
 
 	/*
-	 * Because this is called with p->pi_lock held, it is not possible
-	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-	 * kfree_rcu().
+	 * We can't call any kfree*() API here as p->pi_lock and/or rq lock
+	 * may be held. So we save it in a llist to be reused in the next
+	 * sched_setaffinity() call.
 	 */
-	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
+	if (ac.user_mask)
+		llist_add((struct llist_node *)ac.user_mask, &cpumask_free_lhead);
 }
 
 static cpumask_t *alloc_user_cpus_ptr(int node)
 {
-	/*
-	 * See do_set_cpus_allowed() above for the rcu_head usage.
-	 */
-	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
+	struct cpumask *pmask = NULL;
+
+	if (!llist_empty(&cpumask_free_lhead))
+		pmask = (struct cpumask *)llist_del_first(&cpumask_free_lhead);
+
+	if (!pmask)
+		pmask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
 
-	return kmalloc_node(size, GFP_KERNEL, node);
+	return pmask;
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-- 
2.39.3
