Message-ID: <Z0-cf7gUzV8jIWIX@slm.duckdns.org>
Date: Tue, 3 Dec 2024 14:04:15 -1000
From: Tejun Heo <tj@...nel.org>
To: Andrea Righi <arighi@...dia.com>
Cc: David Vernet <void@...ifault.com>, Yury Norov <yury.norov@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] sched_ext: Introduce per-NUMA idle cpumasks

Hello,

On Tue, Dec 03, 2024 at 04:36:11PM +0100, Andrea Righi wrote:
...
> Probably a better way to solve this issue is to introduce new kfuncs to
> explicitly select a specific per-NUMA cpumask and modify the scx
> schedulers to transition to this new API, for example:
> 
>   const struct cpumask *scx_bpf_get_idle_numa_cpumask(int node)
>   const struct cpumask *scx_bpf_get_idle_numa_smtmask(int node)

Yeah, I don't think we want to break backward compat here. Can we introduce
a flag to switch between the node-aware and flattened logic, and trigger an
ops error if the wrong flavor is used? Then we can deprecate and drop the
old behavior after a few releases. Also, I think it'd be better named
scx_bpf_get_idle_cpumask_node().
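
For illustration, the flag-gated kfunc might look roughly like this. This
is just a sketch: SCX_OPS_BUILTIN_IDLE_PER_NODE is a made-up flag name and
the error messages are illustrative, not a final API:

	const struct cpumask *scx_bpf_get_idle_cpumask_node(int node)
	{
		/* reject callers that didn't opt into per-node tracking */
		if (!(scx_ops.flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)) {
			scx_ops_error("per-node idle cpumasks are not enabled");
			return cpu_none_mask;
		}
		if (node < 0 || node >= nr_node_ids) {
			scx_ops_error("invalid node %d", node);
			return cpu_none_mask;
		}
		return idle_masks[node]->cpu;
	}

with the existing flat scx_bpf_get_idle_cpumask() erroring out the other
way around when the flag is set.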

> +static struct cpumask *get_idle_cpumask(int cpu)
> +{
> +	int node = cpu_to_node(cpu);
> +
> +	return idle_masks[node]->cpu;
> +}
> +
> +static struct cpumask *get_idle_smtmask(int cpu)
> +{
> +	int node = cpu_to_node(cpu);
> +
> +	return idle_masks[node]->smt;
> +}

Hmm... why are they keyed by cpu? Wouldn't it make more sense to key them by
node?
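
IOW, something like this (sketch only), so that the cpu_to_node()
translation happens once in the callers that only have a CPU:

	static struct cpumask *get_idle_cpumask(int node)
	{
		return idle_masks[node]->cpu;
	}

	static struct cpumask *get_idle_smtmask(int node)
	{
		return idle_masks[node]->smt;
	}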

> +static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> +{
> +	int start = cpu_to_node(smp_processor_id());
> +	int node, cpu;
> +
> +	for_each_node_state_wrap(node, N_ONLINE, start) {
> +		/*
> +		 * scx_pick_idle_cpu_from_node() can be expensive and redundant
> +		 * if none of the CPUs in the NUMA node can be used (according
> +		 * to cpus_allowed).
> +		 *
> +		 * Therefore, check if the NUMA node is usable in advance to
> +		 * save some CPU cycles.
> +		 */
> +		if (!cpumask_intersects(cpumask_of_node(node), cpus_allowed))
> +			continue;
> +		cpu = scx_pick_idle_cpu_from_node(node, cpus_allowed, flags);
> +		if (cpu >= 0)
> +			return cpu;

This is fine for now, but it'd be ideal if the iteration were in inter-node
distance order, so that the search radiates from each CPU's local node out
to the furthest ones.
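
Maybe something along these lines? A rough sketch using node_distance();
it's O(nodes^2) per pick, so the real thing probably wants a precomputed
per-node order:

	static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
	{
		int start = cpu_to_node(smp_processor_id());
		nodemask_t visited = NODE_MASK_NONE;

		for (;;) {
			int node, best = NUMA_NO_NODE, best_dist = INT_MAX;
			s32 cpu;

			/* pick the nearest online node we haven't tried yet */
			for_each_node_state(node, N_ONLINE) {
				int dist = node_distance(start, node);

				if (!node_isset(node, visited) && dist < best_dist) {
					best = node;
					best_dist = dist;
				}
			}
			if (best == NUMA_NO_NODE)
				return -EBUSY;
			node_set(best, visited);

			/* same early bail as in the patch */
			if (!cpumask_intersects(cpumask_of_node(best), cpus_allowed))
				continue;
			cpu = scx_pick_idle_cpu_from_node(best, cpus_allowed, flags);
			if (cpu >= 0)
				return cpu;
		}
	}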

Thanks.

-- 
tejun
