Message-ID: <Z2owJmy22Tk-bl4A@yury-ThinkPad>
Date: Mon, 23 Dec 2024 19:53:21 -0800
From: Yury Norov <yury.norov@...il.com>
To: Andrea Righi <arighi@...dia.com>
Cc: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 08/10] sched_ext: idle: introduce SCX_PICK_IDLE_NODE
On Mon, Dec 23, 2024 at 06:48:48PM -0800, Yury Norov wrote:
> On Fri, Dec 20, 2024 at 04:11:40PM +0100, Andrea Righi wrote:
> > Introduce a flag to restrict the selection of an idle CPU to a specific
> > NUMA node.
> >
> > Signed-off-by: Andrea Righi <arighi@...dia.com>
> > ---
> > kernel/sched/ext.c | 1 +
> > kernel/sched/ext_idle.c | 11 +++++++++--
> > 2 files changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> > index 143938e935f1..da5c15bd3c56 100644
> > --- a/kernel/sched/ext.c
> > +++ b/kernel/sched/ext.c
> > @@ -773,6 +773,7 @@ enum scx_deq_flags {
> >
> > enum scx_pick_idle_cpu_flags {
> >  	SCX_PICK_IDLE_CORE	= 1LLU << 0,	/* pick a CPU whose SMT siblings are also idle */
> > +	SCX_PICK_IDLE_NODE	= 1LLU << 1,	/* pick a CPU in the same target NUMA node */
>
> SCX_FORCE_NODE or SCX_FIX_NODE?
>
> > };
> >
> > enum scx_kick_flags {
> > diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> > index 444f2a15f1d4..013deaa08f12 100644
> > --- a/kernel/sched/ext_idle.c
> > +++ b/kernel/sched/ext_idle.c
> > @@ -199,6 +199,12 @@ static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 f
This function begins with:

	static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
	{
		nodemask_t hop_nodes = NODE_MASK_NONE;
		s32 cpu = -EBUSY;

		if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
			return pick_idle_cpu_from_node(cpus_allowed, NUMA_FLAT_NODE, flags);
		...
So if I disable scx_builtin_idle_per_node and then call:

	scx_pick_idle_cpu(some_cpus, numa_node_id(), SCX_PICK_IDLE_NODE)

I may get a CPU from any non-local node, right? I think we need to honor the
user's request:
	if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
		return pick_idle_cpu_from_node(cpus_allowed,
				flags & SCX_PICK_IDLE_NODE ? node : NUMA_FLAT_NODE, flags);
That way the code will be coherent: if you enable idle cpumasks, you can
follow the whole NUMA hierarchy. If you disable them, you at least honor the
user's request to return a CPU from the given node when they are explicit
about their intention.
You can be even nicer:
	if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node)) {
		cpu = pick_idle_cpu_from_node(cpus_allowed, node, flags);
		if (cpu < 0 && !(flags & SCX_PICK_IDLE_NODE))
			cpu = pick_idle_cpu_from_node(cpus_allowed, NUMA_FLAT_NODE, flags);
		return cpu;
	}
> >  		cpu = pick_idle_cpu_from_node(cpus_allowed, n, flags);
> >  		if (cpu >= 0)
> >  			break;
> > +		/*
> > +		 * Check if the search is restricted to the same core or
> > +		 * the same node.
> > +		 */
> > +		if (flags & SCX_PICK_IDLE_NODE)
> > +			break;
>
> Yeah, if you give the flag a better name, you won't have to comment
> the code.
>
> >  	}
> >
> >  	return cpu;
> > @@ -495,7 +501,8 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
> >  	 * Search for any fully idle core in the same LLC domain.
> >  	 */
> >  	if (llc_cpus) {
> > -		cpu = pick_idle_cpu_from_node(llc_cpus, node, SCX_PICK_IDLE_CORE);
> > +		cpu = scx_pick_idle_cpu(llc_cpus, node,
> > +					SCX_PICK_IDLE_CORE | SCX_PICK_IDLE_NODE);
>
> You change it from scx_pick_idle_cpu() to pick_idle_cpu_from_node()
> in patch 7 just to revert it back in patch 8...
>
> You can use scx_pick_idle_cpu() in patch 7 already because
> scx_builtin_idle_per_node is always disabled, and you always
> follow the NUMA_FLAT_NODE path. Here you will just add the
> SCX_PICK_IDLE_NODE flag.
>
> That's the point of separating functionality and control patches. In
> patch 7 you may need to mention explicitly that your new per-node idle
> masks are unconditionally disabled and will only be enabled in the last
> patch of the series, and that the following patches will detail the
> implementation.
>
> >  		if (cpu >= 0)
> >  			goto cpu_found;
> >  	}
> > @@ -533,7 +540,7 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
> >  	 * Search for any idle CPU in the same LLC domain.
> >  	 */
> >  	if (llc_cpus) {
> > -		cpu = pick_idle_cpu_from_node(llc_cpus, node, 0);
> > +		cpu = scx_pick_idle_cpu(llc_cpus, node, SCX_PICK_IDLE_NODE);
> >  		if (cpu >= 0)
> >  			goto cpu_found;
> >  	}
> > --
> > 2.47.1