Message-ID: <Z64z6jIXz-MCSlv1@thinkpad>
Date: Thu, 13 Feb 2025 13:03:22 -0500
From: Yury Norov <yury.norov@...il.com>
To: Andrea Righi <arighi@...dia.com>
Cc: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
	Changwoo Min <changwoo@...lia.com>, Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>,
	Joel Fernandes <joel@...lfernandes.org>, Ian May <ianm@...dia.com>,
	bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks

On Wed, Feb 12, 2025 at 05:48:13PM +0100, Andrea Righi wrote:
  
> @@ -90,6 +131,78 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
>  		goto retry;
>  }
>  
> +static s32 pick_idle_cpu_from_other_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)

'from other nodes' sounds a bit vague - other nodes relative to what? A
name that mentions the set actually searched (e.g. the remaining online
nodes) would be clearer.

> +{
> +	static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
> +	nodemask_t *unvisited;
> +	s32 cpu = -EBUSY;
> +
> +	preempt_disable();
> +	unvisited = this_cpu_ptr(&per_cpu_unvisited);
> +
> +	/*
> +	 * Restrict the search to the online nodes, excluding the current
> +	 * one.
> +	 */
> +	nodes_clear(*unvisited);
> +	nodes_or(*unvisited, *unvisited, node_states[N_ONLINE]);

nodes_clear() + nodes_or() == nodes_copy()

Yeah, we're missing that one. The attached patch adds nodes_copy(). Can you
consider taking it into your series?
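
With that helper, the two calls above collapse into one (untested):

	nodes_copy(*unvisited, node_states[N_ONLINE]);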

> +	node_clear(node, *unvisited);
> +
> +	/*
> +	 * Traverse all nodes in order of increasing distance, starting
> +	 * from @node.
> +	 *
> +	 * This loop is O(N^2), with N being the number of NUMA nodes,
> +	 * which might be quite expensive in large NUMA systems. However,
> +	 * this complexity comes into play only when a scheduler enables
> +	 * SCX_OPS_BUILTIN_IDLE_PER_NODE and requests an idle CPU
> +	 * without specifying a target NUMA node, so it shouldn't be a
> +	 * bottleneck in most cases.
> +	 *
> +	 * As a future optimization we may want to cache the list of nodes
> +	 * in a per-node array, instead of actually traversing them every
> +	 * time.
> +	 */
> +	for_each_node_numadist(node, *unvisited) {
> +		cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
> +		if (cpu >= 0)
> +			break;
> +	}
> +	preempt_enable();
> +
> +	return cpu;
> +}
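
Regarding the future optimization mentioned in the comment: the
distance-ordered node list could be precomputed once, e.g. at init
time. A rough sketch (untested, hypothetical names; a real version
would likely allocate the table dynamically rather than burn
MAX_NUMNODES^2 ints of static storage, and would have to deal with
node hotplug):

	/* node_order[n][i]: i-th closest online node to @n */
	static int node_order[MAX_NUMNODES][MAX_NUMNODES];

	static void build_node_order(void)
	{
		nodemask_t unvisited;
		int n, node, i;

		for_each_node(n) {
			nodes_copy(unvisited, node_states[N_ONLINE]);
			node_clear(n, unvisited);
			i = 0;
			node = n;
			for_each_node_numadist(node, unvisited)
				node_order[n][i++] = node;
		}
	}

Then pick_idle_cpu_from_other_nodes() could simply walk node_order[node]
instead of rebuilding and consuming a nodemask on every invocation.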
> +
> +/*
> + * Find an idle CPU in the system, starting from @node.
> + */
> +s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
> +{
> +	s32 cpu;
> +
> +	/*
> +	 * Always search in the starting node first (this is an
> +	 * optimization that can save some cycles even when the search is
> +	 * not limited to a single node).
> +	 */
> +	cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
> +	if (cpu >= 0)
> +		return cpu;
> +
> +	/*
> +	 * Stop the search if we are using only a single global cpumask
> +	 * (NUMA_NO_NODE) or if the search is restricted to the starting
> +	 * node only.
> +	 */
> +	if (node == NUMA_NO_NODE || flags & SCX_PICK_IDLE_IN_NODE)
> +		return -EBUSY;
> +
> +	/*
> +	 * Extend the search to the other nodes.
> +	 */
> +	return pick_idle_cpu_from_other_nodes(cpus_allowed, node, flags);
> +}
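
As a usage note: an illustrative (untested) call site that restricts the
search to @node only, with @p standing in for some task:

	cpu = scx_pick_idle_cpu(p->cpus_ptr, node, SCX_PICK_IDLE_IN_NODE);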

From d69294cba9bffc05924dc3351a88601937c24213 Mon Sep 17 00:00:00 2001
From: Yury Norov <yury.norov@...il.com>
Date: Thu, 13 Feb 2025 11:21:08 -0500
Subject: [PATCH] nodemask: add nodes_copy()

The nodemask API lacks a plain nodes_copy(), which is required by this series.

Signed-off-by: Yury Norov [NVIDIA] <yury.norov@...il.com>
---
 include/linux/nodemask.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 9fd7a0ce9c1a..41cf43c4e70f 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -191,6 +191,13 @@ static __always_inline void __nodes_andnot(nodemask_t *dstp, const nodemask_t *s
 	bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
 }
 
+#define nodes_copy(dst, src) __nodes_copy(&(dst), &(src), MAX_NUMNODES)
+static __always_inline void __nodes_copy(nodemask_t *dstp,
+					const nodemask_t *srcp, unsigned int nbits)
+{
+	bitmap_copy(dstp->bits, srcp->bits, nbits);
+}
+
 #define nodes_complement(dst, src) \
 			__nodes_complement(&(dst), &(src), MAX_NUMNODES)
 static __always_inline void __nodes_complement(nodemask_t *dstp,
-- 
2.43.0

