Message-ID: <aC96cDLJmqt1Mjlx@slm.duckdns.org>
Date: Thu, 22 May 2025 09:26:40 -1000
From: Tejun Heo <tj@...nel.org>
To: David Vernet <void@...ifault.com>, Andrea Righi <arighi@...dia.com>,
Changwoo Min <changwoo@...lia.com>, linux-kernel@...r.kernel.org,
sched-ext@...a.com
Subject: Re: [PATCH sched_ext/for-6.16] sched_ext: Call ops.update_idle()
after updating builtin idle bits

Applied to sched_ext/for-6.16 with the commit message update suggested
by Andrea.

------ 8< ------
From 273cc949655c70001778eb0b9e7db993df845912 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@...nel.org>
Date: Wed, 21 May 2025 12:23:06 -1000
Subject: [PATCH] sched_ext: Call ops.update_idle() after updating builtin idle
bits

BPF schedulers that use both the builtin CPU idle mechanism and
ops.update_idle() may want to use the latter to create interlocking
between ops.enqueue() and CPU idle transitions, so that either
ops.enqueue() sees the idle bit or ops.update_idle() sees the task
queued somewhere. This can prevent race conditions where CPUs go idle
while tasks are waiting in DSQs (a sketch of the pattern follows the
diffstat below).

For such interlocking to work, ops.update_idle() must be called after
the builtin CPU masks are updated. Relocate the invocation. Currently,
there are no ordering requirements on transitions from idle, and this
relocation isn't expected to make a meaningful difference in that
direction.

This also makes the ops.update_idle() behavior semantically consistent:
any action performed in this callback should be able to override the
builtin idle state, not the other way around (see the second sketch,
after the patch).

Signed-off-by: Tejun Heo <tj@...nel.org>
Reviewed-and-tested-by: Andrea Righi <arighi@...dia.com>
Acked-by: Changwoo Min <changwoo@...lia.com>
---
kernel/sched/ext_idle.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)
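
As an illustration of the interlocking described in the commit message,
here is a minimal sketch of the two callbacks from the scheduler side.
This is not part of the patch: the callback names and SHARED_DSQ are
made up, in the style of the tools/sched_ext examples, and the
scheduler is assumed to set SCX_OPS_KEEP_BUILTIN_IDLE so that builtin
idle tracking stays enabled alongside ops.update_idle().

	void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
	{
		s32 cpu;

		/* Queue the task first ... */
		scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);

		/*
		 * ... then look for an idle CPU. A CPU that went idle before
		 * the insert above has its idle bit visible here and gets
		 * kicked. A CPU that goes idle afterwards will observe the
		 * queued task from its ops.update_idle() instead.
		 */
		cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
		if (cpu >= 0)
			scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
	}

	void BPF_STRUCT_OPS(sketch_update_idle, s32 cpu, bool idle)
	{
		/*
		 * With this patch, the builtin idle bit is already set when
		 * this callback runs, so a task that was queued too late for
		 * enqueue() to see the idle bit is caught here.
		 */
		if (idle && scx_bpf_dsq_nr_queued(SHARED_DSQ))
			scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
	}

Before this change, both checks could miss each other: enqueue() could
run before the idle bit was set while update_idle() ran before the task
was queued, leaving the CPU asleep on a non-empty DSQ.
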
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index ae30de383913..66da03cc0b33 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -738,16 +738,6 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
 
 	lockdep_assert_rq_held(rq);
 
-	/*
-	 * Trigger ops.update_idle() only when transitioning from a task to
-	 * the idle thread and vice versa.
-	 *
-	 * Idle transitions are indicated by do_notify being set to true,
-	 * managed by put_prev_task_idle()/set_next_task_idle().
-	 */
-	if (SCX_HAS_OP(sch, update_idle) && do_notify && !scx_rq_bypassing(rq))
-		SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);
-
 	/*
 	 * Update the idle masks:
 	 * - for real idle transitions (do_notify == true)
@@ -765,6 +755,21 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
 	if (static_branch_likely(&scx_builtin_idle_enabled))
 		if (do_notify || is_idle_task(rq->curr))
 			update_builtin_idle(cpu, idle);
+
+	/*
+	 * Trigger ops.update_idle() only when transitioning from a task to
+	 * the idle thread and vice versa.
+	 *
+	 * Idle transitions are indicated by do_notify being set to true,
+	 * managed by put_prev_task_idle()/set_next_task_idle().
+	 *
+	 * This must come after builtin idle update so that BPF schedulers can
+	 * create interlocking between ops.update_idle() and ops.enqueue() -
+	 * either enqueue() sees the idle bit or update_idle() sees the task
+	 * that enqueue() queued.
+	 */
+	if (SCX_HAS_OP(sch, update_idle) && do_notify && !scx_rq_bypassing(rq))
+		SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);
 }
 
 static void reset_idle_masks(struct sched_ext_ops *ops)
--
2.49.0
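
On the semantic-consistency point, a second hypothetical fragment (also
not from the patch; have_reserved_work() is a placeholder): because
ops.update_idle() now runs after update_builtin_idle(), an override of
the builtin idle state performed in the callback sticks instead of
being overwritten.

	void BPF_STRUCT_OPS(reserve_update_idle, s32 cpu, bool idle)
	{
		/*
		 * The builtin idle bit for @cpu is already set by the time
		 * this runs, so clearing it here takes effect, for example
		 * to keep the CPU from being picked while reserved work is
		 * pending. Previously the bit was written after this
		 * callback returned, clobbering any such override.
		 */
		if (idle && have_reserved_work(cpu))
			scx_bpf_test_and_clear_cpu_idle(cpu);
	}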