Message-ID: <71121d12-0cb2-4ffe-92e5-caf25bf4596e@redhat.com>
Date: Wed, 12 Nov 2025 16:47:30 -0500
From: Waiman Long <llong@...hat.com>
To: Chen Ridong <chenridong@...weicloud.com>, tj@...nel.org,
hannes@...xchg.org, mkoutny@...e.com
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
lujialin4@...wei.com, chenridong@...wei.com
Subject: Re: [PATCH RFC v2 10/22] cpuset: introduce local_partition_enable()
On 10/25/25 2:48 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@...wei.com>
>
> The partition_enable() function introduced in the previous patch can be
> reused to enable local partitions.
>
> The local_partition_enable() function is introduced, which factors out the
> local partition enablement logic from update_parent_effective_cpumask().
> After passing local partition validation checks, it delegates to
> partition_enable() to complete the partition setup.
>
> This refactoring creates a clear separation between local and remote
> partition operations while maintaining code reuse through the shared
> partition_enable() infrastructure.
>
> Signed-off-by: Chen Ridong <chenridong@...wei.com>
> ---
> kernel/cgroup/cpuset.c | 94 ++++++++++++++++++++++++++----------------
> 1 file changed, 59 insertions(+), 35 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 5b57c5370641..b308d9f80eef 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1822,6 +1822,61 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
> remote_partition_disable(cs, tmp);
> }
>
> +/**
> + * local_partition_enable - Enable local partition for a cpuset
> + * @cs: Target cpuset to become a local partition root
> + * @new_prs: New partition root state to apply
> + * @tmp: Temporary masks for CPU calculations
> + *
> + * This function enables local partition root capability for a cpuset by
> + * validating prerequisites, computing exclusive CPUs, and updating the
> + * partition hierarchy.
> + *
> + * Return: 0 on success, error code on failure
> + */
> +static int local_partition_enable(struct cpuset *cs,
> + int new_prs, struct tmpmasks *tmp)
> +{
> + struct cpuset *parent = parent_cs(cs);
> + enum prs_errcode part_error;
> + bool cpumask_updated = false;
> +
> + lockdep_assert_held(&cpuset_mutex);
> + WARN_ON_ONCE(is_remote_partition(cs)); /* For local partition only */
> +
> + /*
> + * The parent must be a partition root.
> + * The new cpumask, if present, or the current cpus_allowed must
> + * not be empty.
> + */
> + if (!is_partition_valid(parent)) {
> + return is_partition_invalid(parent)
> + ? PERR_INVPARENT : PERR_NOTPART;
> + }
> +
> + /*
> + * Need to call compute_excpus() in case
> + * exclusive_cpus not set. Sibling conflict should only happen
> + * if exclusive_cpus isn't set.
> + */
> + if (compute_excpus(cs, tmp->new_cpus))
> + WARN_ON_ONCE(!cpumask_empty(cs->exclusive_cpus));
> +
> + part_error = validate_partition(cs, new_prs, tmp->new_cpus);
> + if (part_error)
> + return part_error;
> +
> + cpumask_updated = cpumask_andnot(tmp->addmask, tmp->new_cpus,
> + parent->effective_cpus);
What is the purpose of this cpumask_andnot() operation? Is it just to
set the cpumask_updated boolean? At this point, cpumask_updated should
always be true. If not, we have to add a validation check and return an
error.
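Something like the following rough sketch is what I have in mind
(PERR_NOCPUS is only a placeholder here, which error code to return is
a separate question):

	/*
	 * Sketch only: fail instead of silently skipping the parent
	 * and sibling updates when the andnot result is empty.
	 */
	if (!cpumask_andnot(tmp->addmask, tmp->new_cpus,
			    parent->effective_cpus))
		return PERR_NOCPUS;	/* placeholder error code */

	partition_enable(cs, parent, new_prs, tmp->new_cpus);
	cpuset_update_tasks_cpumask(parent, tmp->addmask);
	update_sibling_cpumasks(parent, cs, tmp);
	return 0;

That would also let you drop the cpumask_updated boolean entirely.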
Cheers,
Longman
> + partition_enable(cs, parent, new_prs, tmp->new_cpus);
> +
> + if (cpumask_updated) {
> + cpuset_update_tasks_cpumask(parent, tmp->addmask);
> + update_sibling_cpumasks(parent, cs, tmp);
> + }
> + return 0;
> +}
> +
> /**
> * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset
> * @cs: The cpuset that requests change in partition root state
> @@ -1912,34 +1967,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
>
> nocpu = tasks_nocpu_error(parent, cs, xcpus);
>
> - if ((cmd == partcmd_enable) || (cmd == partcmd_enablei)) {
> - /*
> - * Need to call compute_excpus() in case
> - * exclusive_cpus not set. Sibling conflict should only happen
> - * if exclusive_cpus isn't set.
> - */
> - xcpus = tmp->delmask;
> - if (compute_excpus(cs, xcpus))
> - WARN_ON_ONCE(!cpumask_empty(cs->exclusive_cpus));
> - new_prs = (cmd == partcmd_enable) ? PRS_ROOT : PRS_ISOLATED;
> -
> - part_error = validate_partition(cs, new_prs, xcpus);
> - if (part_error)
> - return part_error;
> - /*
> - * This function will only be called when all the preliminary
> - * checks have passed. At this point, the following condition
> - * should hold.
> - *
> - * (cs->effective_xcpus & cpu_active_mask) ⊆ parent->effective_cpus
> - *
> - * Warn if it is not the case.
> - */
> - cpumask_and(tmp->new_cpus, xcpus, cpu_active_mask);
> - WARN_ON_ONCE(!cpumask_subset(tmp->new_cpus, parent->effective_cpus));
> -
> - deleting = true;
> - } else if (cmd == partcmd_disable) {
> + if (cmd == partcmd_disable) {
> /*
> * May need to add cpus back to parent's effective_cpus
> * (and maybe removed from subpartitions_cpus/isolated_cpus)
> @@ -3062,14 +3090,10 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> * If parent is valid partition, enable local partiion.
> * Otherwise, enable a remote partition.
> */
> - if (is_partition_valid(parent)) {
> - enum partition_cmd cmd = (new_prs == PRS_ROOT)
> - ? partcmd_enable : partcmd_enablei;
> -
> - err = update_parent_effective_cpumask(cs, cmd, NULL, &tmpmask);
> - } else {
> + if (is_partition_valid(parent))
> + err = local_partition_enable(cs, new_prs, &tmpmask);
> + else
> err = remote_partition_enable(cs, new_prs, &tmpmask);
> - }
> } else if (old_prs && new_prs) {
> /*
> * A change in load balance state only, no change in cpumasks.