Message-ID: <eaedf7d3-31dd-448b-9b00-60542e54260e@redhat.com>
Date: Tue, 25 Nov 2025 21:33:54 -0500
From: Waiman Long <llong@...hat.com>
To: Chen Ridong <chenridong@...weicloud.com>, Waiman Long <llong@...hat.com>,
tj@...nel.org, hannes@...xchg.org, mkoutny@...e.com
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
daniel.m.jordan@...cle.com, lujialin4@...wei.com, chenridong@...wei.com
Subject: Re: [PATCH -next] cpuset: Remove unnecessary checks in
rebuild_sched_domains_locked
On 11/25/25 8:01 PM, Chen Ridong wrote:
>
> On 2025/11/26 2:16, Waiman Long wrote:
>>> active CPUs, preventing partition_sched_domains from being invoked with
>>> offline CPUs.
>>>
>>> Signed-off-by: Chen Ridong <chenridong@...wei.com>
>>> ---
>>> kernel/cgroup/cpuset.c | 29 ++++++-----------------------
>>> 1 file changed, 6 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index daf813386260..1ac58e3f26b4 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -1084,11 +1084,10 @@ void dl_rebuild_rd_accounting(void)
>>> */
>>> void rebuild_sched_domains_locked(void)
>>> {
>>> - struct cgroup_subsys_state *pos_css;
>>> struct sched_domain_attr *attr;
>>> cpumask_var_t *doms;
>>> - struct cpuset *cs;
>>> int ndoms;
>>> + int i;
>>> lockdep_assert_cpus_held();
>>> lockdep_assert_held(&cpuset_mutex);
>> In fact, the following code and the comments above in rebuild_sched_domains_locked() are also no
>> longer relevant. So you may remove them as well.
>>
>> if (!top_cpuset.nr_subparts_cpus &&
>> !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
>> return;
>>
> Thank you for reminding me.
>
> I initially retained this code because I believed it was still required for cgroup v1, as I recalled
> that synchronous operation is exclusive to cgroup v2.
>
> However, upon re-examining the code, I confirm it can be safely removed. For cgroup v1,
> rebuild_sched_domains_locked is called synchronously, and only the migration task (handled by
> cpuset_migrate_tasks_workfn) operates asynchronously. Consequently, cpuset_hotplug_workfn is
> guaranteed to complete before the hotplug workflow finishes.
Yes, v1 still has a task migration part that is done asynchronously
because of a lock ordering issue. Even if this code has to be kept for
v1, you should still update the comment to reflect that. Please keep
the comment up to date so that others can better understand what the
code is doing.
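
For example, if it turns out the check does need to stay for v1,
something like the following might do (untested sketch on my part, and
assuming the cpuset_v2() helper can be used at this point):

	/*
	 * On cgroup v1, part of the hotplug handling is still done
	 * asynchronously, so effective_cpus may not have caught up
	 * with cpu_active_mask yet. Bail out to avoid calling
	 * partition_sched_domains() with offline CPUs.
	 */
	if (!cpuset_v2() && !top_cpuset.nr_subparts_cpus &&
	    !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
		return;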
Thanks,
Longman