Message-Id: <20230120150246.20797-2-wander@redhat.com>
Date: Fri, 20 Jan 2023 12:02:39 -0300
From: Wander Lairson Costa <wander@...hat.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Wander Lairson Costa <wander@...hat.com>,
Stafford Horne <shorne@...il.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Andy Lutomirski <luto@...nel.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Fenghua Yu <fenghua.yu@...el.com>,
Andrei Vagin <avagin@...il.com>,
linux-kernel@...r.kernel.org (open list)
Cc: Paul McKenney <paulmck@...nel.org>
Subject: [PATCH v2 1/4] sched/task: Add the put_task_struct_atomic_safe function
With PREEMPT_RT, it is unsafe to call put_task_struct() in atomic
contexts because it indirectly acquires sleeping locks.

Introduce put_task_struct_atomic_safe(), which defers
__put_task_struct() through call_rcu() when the kernel is compiled with
PREEMPT_RT.

A more natural approach would be to use a workqueue, but because we
cannot allocate dynamic memory from atomic context under PREEMPT_RT,
the work_struct instance would have to be embedded in task_struct and
initialized whenever a new task_struct is allocated, making the code
more complex.
Signed-off-by: Wander Lairson Costa <wander@...hat.com>
Cc: Paul McKenney <paulmck@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
include/linux/sched/task.h | 21 +++++++++++++++++++++
kernel/fork.c | 8 ++++++++
2 files changed, 29 insertions(+)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 357e0068497c..80b4c5812563 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -127,6 +127,27 @@ static inline void put_task_struct_many(struct task_struct *t, int nr)
void put_task_struct_rcu_user(struct task_struct *task);

+extern void __delayed_put_task_struct(struct rcu_head *rhp);
+
+static inline void put_task_struct_atomic_safe(struct task_struct *task)
+{
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ /*
+ * Decrement the refcount explicitly to avoid calling
+ * call_rcu() unnecessarily when the count does not reach zero.
+ */
+ if (refcount_dec_and_test(&task->usage))
+ /*
+ * Under PREEMPT_RT, we cannot call put_task_struct()
+ * in atomic context because it will indirectly
+ * acquire sleeping locks.
+ */
+ call_rcu(&task->rcu, __delayed_put_task_struct);
+ } else {
+ put_task_struct(task);
+ }
+}
+
/* Free all architecture-specific resources held by a thread. */
void release_thread(struct task_struct *dead_task);
diff --git a/kernel/fork.c b/kernel/fork.c
index 9f7fe3541897..3d7a4e9311b3 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -859,6 +859,14 @@ void __put_task_struct(struct task_struct *tsk)
}
EXPORT_SYMBOL_GPL(__put_task_struct);

+void __delayed_put_task_struct(struct rcu_head *rhp)
+{
+ struct task_struct *task = container_of(rhp, struct task_struct, rcu);
+
+ __put_task_struct(task);
+}
+EXPORT_SYMBOL_GPL(__delayed_put_task_struct);
+
void __init __weak arch_task_cache_init(void) { }

/*
--
2.39.0