Message-ID: <ee5db302-07f6-4e09-8de4-0c1358e0a297@igalia.com>
Date: Thu, 22 May 2025 17:29:33 +0900
From: Changwoo Min <changwoo@...lia.com>
To: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
 Andrea Righi <arighi@...dia.com>, linux-kernel@...r.kernel.org,
 sched-ext@...a.com
Subject: Re: [PATCH sched_ext/for-6.16] sched_ext: Call ops.update_idle()
 after updating builtin idle bits

Thank you, Tejun, for the change!
The change makes sense semantically.
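
For anyone following along, here is a minimal sketch of the interlocking
pattern this ordering enables. It is illustrative only: the ops names and
SHARED_DSQ are made up, it assumes the usual scx BPF skeleton
(scx/common.bpf.h), and DSQ creation in ops.init() is omitted for brevity.

#include <scx/common.bpf.h>

char _license[] SEC("license") = "GPL";

#define SHARED_DSQ 0	/* hypothetical shared DSQ, created in ops.init() */

void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
{
	s32 cpu;

	/* Queue the task first ... */
	scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);

	/*
	 * ... then look for an idle CPU. With ops.update_idle() now
	 * called after the builtin idle bits are updated, either this
	 * lookup sees the CPU's idle bit, or that CPU's update_idle()
	 * callback sees the task queued above. No task can be stranded
	 * in the DSQ while a CPU goes idle.
	 */
	cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
	if (cpu >= 0)
		scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
}

void BPF_STRUCT_OPS(sketch_update_idle, s32 cpu, bool idle)
{
	/* Entering idle: re-check the DSQ for a concurrently queued task. */
	if (idle && scx_bpf_dsq_nr_queued(SHARED_DSQ))
		scx_bpf_kick_cpu(cpu, 0);
}

SCX_OPS_DEFINE(sketch_ops,
	       .enqueue		= (void *)sketch_enqueue,
	       .update_idle	= (void *)sketch_update_idle,
	       /* keep builtin idle tracking alongside ops.update_idle() */
	       .flags		= SCX_OPS_KEEP_BUILTIN_IDLE,
	       .name		= "sketch");

Note that a scheduler implementing ops.update_idle() needs
SCX_OPS_KEEP_BUILTIN_IDLE for the builtin idle tracking to stay enabled,
which is the "both" case the commit message describes.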

Acked-by: Changwoo Min <changwoo@...lia.com>

On 5/22/25 07:23, Tejun Heo wrote:
> BPF schedulers that use both the builtin CPU idle mechanism and
> ops.update_idle() may want to use the latter to create interlocking between
> ops.enqueue() and CPU idle transitions so that either ops.enqueue() sees the
> idle bit or ops.update_idle() sees the task queued somewhere. This can
> prevent race conditions where CPUs go idle while tasks are waiting in DSQs.
> 
> For such interlocking to work, ops.update_idle() must be called after the
> builtin CPU masks are updated. Relocate the invocation. Currently, there are
> no ordering requirements on transitions from idle, and this relocation isn't
> expected to make a meaningful difference in that direction.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> ---
>   kernel/sched/ext_idle.c |   25 +++++++++++++++----------
>   1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> index ae30de383913..66da03cc0b33 100644
> --- a/kernel/sched/ext_idle.c
> +++ b/kernel/sched/ext_idle.c
> @@ -738,16 +738,6 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
>   
>   	lockdep_assert_rq_held(rq);
>   
> -	/*
> -	 * Trigger ops.update_idle() only when transitioning from a task to
> -	 * the idle thread and vice versa.
> -	 *
> -	 * Idle transitions are indicated by do_notify being set to true,
> -	 * managed by put_prev_task_idle()/set_next_task_idle().
> -	 */
> -	if (SCX_HAS_OP(sch, update_idle) && do_notify && !scx_rq_bypassing(rq))
> -		SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);
> -
>   	/*
>   	 * Update the idle masks:
>   	 * - for real idle transitions (do_notify == true)
> @@ -765,6 +755,21 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
>   	if (static_branch_likely(&scx_builtin_idle_enabled))
>   		if (do_notify || is_idle_task(rq->curr))
>   			update_builtin_idle(cpu, idle);
> +
> +	/*
> +	 * Trigger ops.update_idle() only when transitioning from a task to
> +	 * the idle thread and vice versa.
> +	 *
> +	 * Idle transitions are indicated by do_notify being set to true,
> +	 * managed by put_prev_task_idle()/set_next_task_idle().
> +	 *
> +	 * This must come after builtin idle update so that BPF schedulers can
> +	 * create interlocking between ops.update_idle() and ops.enqueue() -
> +	 * either enqueue() sees the idle bit or update_idle() sees the task
> +	 * that enqueue() queued.
> +	 */
> +	if (SCX_HAS_OP(sch, update_idle) && do_notify && !scx_rq_bypassing(rq))
> +		SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);
>   }
>   
>   static void reset_idle_masks(struct sched_ext_ops *ops)
> 
> 

