Message-ID: <41631831-6486-4286-a399-23130d1f653a@igalia.com>
Date: Fri, 7 Mar 2025 12:14:12 +0900
From: Changwoo Min <changwoo@...lia.com>
To: Andrea Righi <arighi@...dia.com>, Tejun Heo <tj@...nel.org>,
 David Vernet <void@...ifault.com>
Cc: bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCHSET sched_ext/for-6.15] sched_ext: Enhance built-in idle
 selection with preferred CPUs

Hi Andrea,

Thank you for submitting the patch set.

On 25. 3. 7. 03:18, Andrea Righi wrote:
> Many scx schedulers define their own concept of scheduling domains to
> represent topology characteristics, such as heterogeneous architectures
> (e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
> specific properties (e.g., setting the soft-affinity of certain tasks to a
> subset of CPUs).
> 
> Currently, there is no mechanism to share these domains with the built-in
> idle CPU selection policy. As a result, schedulers often implement their
> own idle CPU selection policies, which are typically similar to one
> another, leading to a lot of code duplication.
> 
> To address this, extend the built-in idle CPU selection policy by
> introducing the concept of preferred CPUs.
> 
> With this concept, BPF schedulers can apply the built-in idle CPU selection
> policy to a subset of preferred CPUs, allowing them to implement their own
> scheduling domains while still using the topology optimizations
> optimizations of the built-in policy, preventing code duplication across

Typo here. There are two "optimizations".

> different schedulers.
> 
> To implement this, introduce a new helper kfunc scx_bpf_select_cpu_pref()
> that allows specifying a cpumask of preferred CPUs:
> 
> s32 scx_bpf_select_cpu_pref(struct task_struct *p,
> 			    const struct cpumask *preferred_cpus,
> 			    s32 prev_cpu, u64 wake_flags, u64 flags);
> 
> Moreover, introduce the new idle flag %SCX_PICK_IDLE_IN_PREF that can be
> used to enforce selection strictly within the preferred domain.
> 
> Example usage
> =============
> 
> s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
> 		   s32 prev_cpu, u64 wake_flags)
> {
> 	const struct cpumask *dom = task_domain(p) ?: p->cpus_ptr;
> 	s32 cpu;
> 
> 	/*
> 	 * Pick an idle CPU in the task's domain. If no CPU is found,
> 	 * extend the search outside the domain.
> 	 */
> 	cpu = scx_bpf_select_cpu_pref(p, dom, prev_cpu, wake_flags, 0);
> 	if (cpu >= 0) {
> 		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
> 		return cpu;
> 	}
> 
> 	return prev_cpu;
> }
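
As a side note for other readers: combining the example above with the
strict %SCX_PICK_IDLE_IN_PREF flag would look roughly like this. This is
an untested sketch based on the semantics described in the cover letter;
task_domain() is assumed to be the scheduler's own helper, as above:

	s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
			   s32 prev_cpu, u64 wake_flags)
	{
		const struct cpumask *dom = task_domain(p) ?: p->cpus_ptr;
		s32 cpu;

		/*
		 * Strictly pick an idle CPU within the task's domain;
		 * with %SCX_PICK_IDLE_IN_PREF the search is never
		 * extended to CPUs outside the preferred cpumask.
		 */
		cpu = scx_bpf_select_cpu_pref(p, dom, prev_cpu, wake_flags,
					      SCX_PICK_IDLE_IN_PREF);
		if (cpu >= 0) {
			scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
			return cpu;
		}

		/* No idle CPU in the domain: stay on the previous CPU. */
		return prev_cpu;
	}
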
> 
> Results
> =======
> 
> Load distribution on a 4 sockets / 4 cores per socket system, simulated
> using virtme-ng, running a modified version of scx_bpfland that uses the
> new helper scx_bpf_select_cpu_pref() and 0xff00 as preferred domain:
> 
>   $ vng --cpu 16,sockets=4,cores=4,threads=1
> 
> Starting 12 CPU hogs to fill the preferred domain:
> 
>   $ stress-ng -c 12
>   ...
>      0[|||||||||||||||||||||||100.0%]   8[||||||||||||||||||||||||100.0%]
>      1[|                        1.3%]   9[||||||||||||||||||||||||100.0%]
>      2[|||||||||||||||||||||||100.0%]  10[||||||||||||||||||||||||100.0%]
>      3[|||||||||||||||||||||||100.0%]  11[||||||||||||||||||||||||100.0%]
>      4[|||||||||||||||||||||||100.0%]  12[||||||||||||||||||||||||100.0%]
>      5[||                       2.6%]  13[||||||||||||||||||||||||100.0%]
>      6[|                        0.6%]  14[||||||||||||||||||||||||100.0%]
>      7[                         0.0%]  15[||||||||||||||||||||||||100.0%]
> 
> Passing %SCX_PICK_IDLE_IN_PREF to scx_bpf_select_cpu_pref() to enforce
> strict selection on the preferred CPUs (with the same workload):
> 
>      0[                         0.0%]   8[||||||||||||||||||||||||100.0%]
>      1[                         0.0%]   9[||||||||||||||||||||||||100.0%]
>      2[                         0.0%]  10[||||||||||||||||||||||||100.0%]
>      3[                         0.0%]  11[||||||||||||||||||||||||100.0%]
>      4[                         0.0%]  12[||||||||||||||||||||||||100.0%]
>      5[                         0.0%]  13[||||||||||||||||||||||||100.0%]
>      6[                         0.0%]  14[||||||||||||||||||||||||100.0%]
>      7[                         0.0%]  15[||||||||||||||||||||||||100.0%]
> 
> Andrea Righi (4):
>        sched_ext: idle: Honor idle flags in the built-in idle selection policy
>        sched_ext: idle: Introduce the concept of preferred CPUs
>        sched_ext: idle: Introduce scx_bpf_select_cpu_pref()
>        selftests/sched_ext: Add test for scx_bpf_select_cpu_pref()
> 
>   kernel/sched/ext.c                                |   4 +-
>   kernel/sched/ext_idle.c                           | 235 ++++++++++++++++++----
>   kernel/sched/ext_idle.h                           |   3 +-
>   tools/sched_ext/include/scx/common.bpf.h          |   2 +
>   tools/sched_ext/include/scx/compat.h              |   1 +
>   tools/testing/selftests/sched_ext/Makefile        |   1 +
>   tools/testing/selftests/sched_ext/pref_cpus.bpf.c |  95 +++++++++
>   tools/testing/selftests/sched_ext/pref_cpus.c     |  58 ++++++
>   8 files changed, 354 insertions(+), 45 deletions(-)
>   create mode 100644 tools/testing/selftests/sched_ext/pref_cpus.bpf.c
>   create mode 100644 tools/testing/selftests/sched_ext/pref_cpus.c
> 

