Message-ID: <aUqyo58-CDReEm10@localhost.localdomain>
Date: Tue, 23 Dec 2025 16:17:55 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Chen Ridong <chenridong@...weicloud.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Michal Koutný <mkoutny@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Danilo Krummrich <dakr@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Gabriele Monaco <gmonaco@...hat.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Ingo Molnar <mingo@...hat.com>, Jakub Kicinski <kuba@...nel.org>,
Jens Axboe <axboe@...nel.dk>, Johannes Weiner <hannes@...xchg.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Marco Crivellari <marco.crivellari@...e.com>,
Michal Hocko <mhocko@...e.com>, Muchun Song <muchun.song@...ux.dev>,
Paolo Abeni <pabeni@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Phil Auld <pauld@...hat.com>,
"Rafael J . Wysocki" <rafael@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Simon Horman <horms@...nel.org>, Tejun Heo <tj@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, Waiman Long <longman@...hat.com>,
Will Deacon <will@...nel.org>, cgroups@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-block@...r.kernel.org,
linux-mm@...ck.org, linux-pci@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 17/31] cpuset: Propagate cpuset isolation update to
workqueue through housekeeping
On Thu, Nov 06, 2025 at 08:55:42AM +0800, Chen Ridong wrote:
>
>
> On 2025/11/6 5:03, Frederic Weisbecker wrote:
> > Until now, cpuset would propagate isolated partition changes to
> > workqueues so that unbound workers get properly reaffined.
> >
> > Since housekeeping now centralizes, synchronizes and propagates isolation
> > cpumask changes, perform the work from that subsystem for consolidation
> > and consistency purposes.
> >
> > For simplification purposes, the target function is adapted to take the
> > new housekeeping mask instead of the isolated mask.
> >
> > Suggested-by: Tejun Heo <tj@...nel.org>
> > Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> > ---
> > include/linux/workqueue.h | 2 +-
> > init/Kconfig | 1 +
> > kernel/cgroup/cpuset.c | 14 ++++++--------
> > kernel/sched/isolation.c | 4 +++-
> > kernel/workqueue.c | 17 ++++++++++-------
> > 5 files changed, 21 insertions(+), 17 deletions(-)
> >
> > diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> > index dabc351cc127..a4749f56398f 100644
> > --- a/include/linux/workqueue.h
> > +++ b/include/linux/workqueue.h
> > @@ -588,7 +588,7 @@ struct workqueue_attrs *alloc_workqueue_attrs_noprof(void);
> > void free_workqueue_attrs(struct workqueue_attrs *attrs);
> > int apply_workqueue_attrs(struct workqueue_struct *wq,
> > const struct workqueue_attrs *attrs);
> > -extern int workqueue_unbound_exclude_cpumask(cpumask_var_t cpumask);
> > +extern int workqueue_unbound_housekeeping_update(const struct cpumask *hk);
> >
> > extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
> > struct work_struct *work);
> > diff --git a/init/Kconfig b/init/Kconfig
> > index cab3ad28ca49..a1b3a3b66bfc 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -1247,6 +1247,7 @@ config CPUSETS
> > bool "Cpuset controller"
> > depends on SMP
> > select UNION_FIND
> > + select CPU_ISOLATION
> > help
> > This option will let you create and manage CPUSETs which
> > allow dynamically partitioning a system into sets of CPUs and
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index b04a4242f2fa..ea102e4695a5 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -1392,7 +1392,7 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
> > return isolcpus_updated;
> > }
> >
> > -static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
> > +static void update_housekeeping_cpumask(bool isolcpus_updated)
> > {
> > int ret;
> >
> > @@ -1401,8 +1401,6 @@ static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
> > if (!isolcpus_updated)
> > return;
> >
> > - ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
> > - WARN_ON_ONCE(ret < 0);
> > ret = housekeeping_update(isolated_cpus, HK_TYPE_DOMAIN);
> > WARN_ON_ONCE(ret < 0);
> > }
> > @@ -1558,7 +1556,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
> > list_add(&cs->remote_sibling, &remote_children);
> > cpumask_copy(cs->effective_xcpus, tmp->new_cpus);
> > spin_unlock_irq(&callback_lock);
> > - update_unbound_workqueue_cpumask(isolcpus_updated);
> > + update_housekeeping_cpumask(isolcpus_updated);
> > cpuset_force_rebuild();
> > cs->prs_err = 0;
> >
> > @@ -1599,7 +1597,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
> > compute_excpus(cs, cs->effective_xcpus);
> > reset_partition_data(cs);
> > spin_unlock_irq(&callback_lock);
> > - update_unbound_workqueue_cpumask(isolcpus_updated);
> > + update_housekeeping_cpumask(isolcpus_updated);
> > cpuset_force_rebuild();
> >
> > /*
> > @@ -1668,7 +1666,7 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *xcpus,
> > if (xcpus)
> > cpumask_copy(cs->exclusive_cpus, xcpus);
> > spin_unlock_irq(&callback_lock);
> > - update_unbound_workqueue_cpumask(isolcpus_updated);
> > + update_housekeeping_cpumask(isolcpus_updated);
> > if (adding || deleting)
> > cpuset_force_rebuild();
> >
> > @@ -2027,7 +2025,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
> > WARN_ON_ONCE(parent->nr_subparts < 0);
> > }
> > spin_unlock_irq(&callback_lock);
> > - update_unbound_workqueue_cpumask(isolcpus_updated);
> > + update_housekeeping_cpumask(isolcpus_updated);
> >
> > if ((old_prs != new_prs) && (cmd == partcmd_update))
> > update_partition_exclusive_flag(cs, new_prs);
> > @@ -3047,7 +3045,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> > else if (isolcpus_updated)
> > isolated_cpus_update(old_prs, new_prs, cs->effective_xcpus);
> > spin_unlock_irq(&callback_lock);
> > - update_unbound_workqueue_cpumask(isolcpus_updated);
> > + update_housekeeping_cpumask(isolcpus_updated);
> >
>
> The patch [1] has been applied to cgroup/for-next, so you may have to adapt it.
>
> [1]:
> https://lore.kernel.org/cgroups/20251105043848.382703-6-longman@redhat.com/T/#u
Right, I just waited for -rc1 to pop up before doing that.
v5 will follow shortly.
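
To make the intended flow explicit for the thread: after this patch, cpuset
only calls housekeeping_update() and the workqueue reaffinity is handled on
the housekeeping side. Below is a minimal sketch of the shape of that
forwarding, assuming the helper recomputes the HK_TYPE_DOMAIN mask as
possible-minus-isolated. This is illustrative only, not the actual hunk in
kernel/sched/isolation.c:

	/*
	 * Sketch, not the real implementation: housekeeping_update() derives
	 * the new housekeeping mask from the isolated set and forwards it to
	 * unbound workqueues through the renamed helper.
	 */
	int housekeeping_update(struct cpumask *isolated, enum hk_type type)
	{
		cpumask_var_t hk;
		int err;

		if (WARN_ON_ONCE(type != HK_TYPE_DOMAIN))
			return -EINVAL;

		if (!alloc_cpumask_var(&hk, GFP_KERNEL))
			return -ENOMEM;

		/* Housekeeping CPUs = possible CPUs minus the isolated ones */
		cpumask_andnot(hk, cpu_possible_mask, isolated);

		/* Propagate the new HK_TYPE_DOMAIN mask to unbound workqueues */
		err = workqueue_unbound_housekeeping_update(hk);

		free_cpumask_var(hk);
		return err;
	}
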
Thanks.
>
> --
> Best regards,
> Ridong
>
--
Frederic Weisbecker
SUSE Labs