Message-ID: <Z1nvDUlGrErZVEf9@gpd3>
Date: Wed, 11 Dec 2024 20:59:09 +0100
From: Andrea Righi <arighi@...dia.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] sched_ext: Introduce SCX_OPS_NODE_BUILTIN_IDLE
On Wed, Dec 11, 2024 at 10:21:49AM -0800, Yury Norov wrote:
...
> > +	/*
> > +	 * Check if we need to enable per-node cpumasks.
> > +	 */
> > +	if (ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)
> > +		static_branch_enable_cpuslocked(&scx_builtin_idle_per_node);
> > +	else
> > +		static_branch_disable_cpuslocked(&scx_builtin_idle_per_node);
> > }
>
> The patch that introduces the flag should come first in the series,
> but should leave scx_builtin_idle_per_node unconditionally disabled.
Ack, that's a good idea.
>
> The following patches should add all the machinery you need. The machinery
> should be conditional on scx_builtin_idle_per_node, i.e. disabled for
> a while.
>
> Doing that, you'll be able to introduce your functionality as a whole:
>
> static struct cpumask *get_idle_cpumask_node(int node)
> {
> 	if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
> 		return idle_masks[0]->cpu;
>
> 	return idle_masks[node]->cpu;
> }
>
> Much better than patching just introduced code, right?
Agreed.
>
> The very last patch should only be a chunk that enables scx_builtin_idle_per_node
> based on SCX_OPS_BUILTIN_IDLE_PER_NODE.
>
> This way, when your feature gets merged, from a git-bisect perspective
> it will be enabled atomically by the very last patch, while those
> interested in the internals will have a nice, coherent history.
Makes sense, I'll refactor this in the next version, thanks!
-Andrea