Message-ID: <0c49ca67-a105-48d9-a848-39996e0cf467@igalia.com>
Date: Mon, 22 Sep 2025 17:09:04 +0900
From: Changwoo Min <changwoo@...lia.com>
To: Andrea Righi <arighi@...dia.com>, Tejun Heo <tj@...nel.org>,
 David Vernet <void@...ifault.com>
Cc: sched-ext@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched_ext: idle: Handle migration-disabled tasks in BPF
 code

This is a nice catch. Looks good to me.

Acked-by: Changwoo Min <changwoo@...lia.com>

On 9/20/25 22:26, Andrea Righi wrote:
> When scx_bpf_select_cpu_dfl()/and() kfuncs are invoked outside of
> ops.select_cpu() we can't rely on @p->migration_disabled to determine if
> migration is disabled for the task @p.
> 
> In fact, migration is always disabled for the current task while running
> BPF code: __bpf_prog_enter() disables migration and __bpf_prog_exit()
> re-enables it.
> 
> To handle this, when @p->migration_disabled == 1, check whether @p is
> the current task. If so, migration was not disabled before entering the
> callback, otherwise migration was disabled.
> 
> This ensures correct idle CPU selection in all cases. The behavior of
> ops.select_cpu() remains unchanged, because this callback is never
> invoked for the current task and migration-disabled tasks are always
> excluded.
> 
> Example: without this change scx_bpf_select_cpu_and() called from
> ops.enqueue() always returns -EBUSY; with this change applied, it
> correctly returns idle CPUs.
> 
> Fixes: 06efc9fe0b8de ("sched_ext: idle: Handle migration-disabled tasks in idle selection")
> Cc: stable@...r.kernel.org # v6.16+
> Signed-off-by: Andrea Righi <arighi@...dia.com>
> ---
>   kernel/sched/ext_idle.c | 28 +++++++++++++++++++++++++++-
>   1 file changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> index 942fd1e2ed44c..e8ca71cbd0d47 100644
> --- a/kernel/sched/ext_idle.c
> +++ b/kernel/sched/ext_idle.c
> @@ -880,6 +880,32 @@ static bool check_builtin_idle_enabled(void)
>   	return false;
>   }
>   
> +/*
> + * Determine whether @p is a migration-disabled task in the context of BPF
> + * code.
> + *
> + * We can't simply check whether @p->migration_disabled is set in a
> + * sched_ext callback, because migration is always disabled for the current
> + * task while running BPF code.
> + *
> + * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
> + * disable and re-enable migration. For this reason, the current task
> + * inside a sched_ext callback is always a migration-disabled task.
> + *
> + * Therefore, when @p->migration_disabled == 1, check whether @p is the
> + * current task or not: if it is, then migration was not disabled before
> + * entering the callback, otherwise migration was disabled.
> + *
> + * Returns true if @p is migration-disabled, false otherwise.
> + */
> +static bool is_bpf_migration_disabled(const struct task_struct *p)
> +{
> +	if (p->migration_disabled == 1)
> +		return p != current;
> +	else
> +		return p->migration_disabled;
> +}
> +
>   static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
>   				 const struct cpumask *allowed, u64 flags)
>   {
> @@ -922,7 +948,7 @@ static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_f
>   	 * selection optimizations and simply check whether the previously
>   	 * used CPU is idle and within the allowed cpumask.
>   	 */
> -	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
> +	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
>   		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
>   		    scx_idle_test_and_clear_cpu(prev_cpu))
>   			cpu = prev_cpu;
