Message-ID: <Y0kfgypRPdJYrvM3@hirez.programming.kicks-ass.net>
Date: Fri, 14 Oct 2022 10:36:19 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Leonardo Bras <leobras@...hat.com>
Cc: Steffen Klassert <steffen.klassert@...unet.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Phil Auld <pauld@...hat.com>,
Antoine Tenart <atenart@...nel.org>,
Christophe JAILLET <christophe.jaillet@...adoo.fr>,
Wang Yufen <wangyufen@...wei.com>, mtosatti@...hat.com,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org, netdev@...r.kernel.org,
fweisbec@...il.com
Subject: Re: [PATCH v2 3/4] sched/isolation: Add HK_TYPE_WQ to isolcpus=domain
+ Frederic, who actually does most of this code
On Thu, Oct 13, 2022 at 03:40:28PM -0300, Leonardo Bras wrote:
> Housekeeping code keeps multiple cpumasks in order to keep track of which
> CPUs can perform a given housekeeping category.
>
> Every time the HK_TYPE_WQ cpumask is checked before queueing work on a
> CPU's workqueue, HK_TYPE_DOMAIN is also checked, so it can be assumed
> that domain isolation also ends up isolating workqueues.
>
> Delegating HK_TYPE_DOMAIN's current workqueue isolation to HK_TYPE_WQ
> makes it simpler to check whether a CPU can run a task in a workqueue,
> since the code only needs to go through a single HK_TYPE_* cpumask.
>
> Make isolcpus=domain aggregate both HK_TYPE_DOMAIN and HK_TYPE_WQ, and
> remove a lot of cpumask_and calls.
>
> Also, replace an unnecessary '|=' in housekeeping_isolcpus_setup(),
> since 'flags == 0' is guaranteed at that point.
>
> Signed-off-by: Leonardo Bras <leobras@...hat.com>
I've long maintained that having all these separate masks is daft;
Frederic, do we really need that?
> ---
> drivers/pci/pci-driver.c | 13 +------------
> kernel/sched/isolation.c | 4 ++--
> kernel/workqueue.c | 1 -
> net/core/net-sysfs.c | 1 -
> 4 files changed, 3 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index 107d77f3c8467..550bef2504b8d 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -371,19 +371,8 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
> pci_physfn_is_probed(dev)) {
> cpu = nr_cpu_ids;
> } else {
> - cpumask_var_t wq_domain_mask;
> -
> - if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL)) {
> - error = -ENOMEM;
> - goto out;
> - }
> - cpumask_and(wq_domain_mask,
> - housekeeping_cpumask(HK_TYPE_WQ),
> - housekeeping_cpumask(HK_TYPE_DOMAIN));
> -
> cpu = cpumask_any_and(cpumask_of_node(node),
> - wq_domain_mask);
> - free_cpumask_var(wq_domain_mask);
> + housekeeping_cpumask(HK_TYPE_WQ));
> }
>
> if (cpu < nr_cpu_ids)
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 373d42c707bc5..ced4b78564810 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -204,7 +204,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
>
> if (!strncmp(str, "domain,", 7)) {
> str += 7;
> - flags |= HK_FLAG_DOMAIN;
> + flags |= HK_FLAG_DOMAIN | HK_FLAG_WQ;
> continue;
> }
>
> @@ -234,7 +234,7 @@ static int __init housekeeping_isolcpus_setup(char *str)
>
> /* Default behaviour for isolcpus without flags */
> if (!flags)
> - flags |= HK_FLAG_DOMAIN;
> + flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
>
> return housekeeping_setup(str, flags);
> }
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 7cd5f5e7e0a1b..b557daa571f17 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -6004,7 +6004,6 @@ void __init workqueue_init_early(void)
>
> BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL));
> cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(HK_TYPE_WQ));
> - cpumask_and(wq_unbound_cpumask, wq_unbound_cpumask, housekeeping_cpumask(HK_TYPE_DOMAIN));
>
> pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
>
> diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
> index 8409d41405dfe..7b6fb62a118ab 100644
> --- a/net/core/net-sysfs.c
> +++ b/net/core/net-sysfs.c
> @@ -852,7 +852,6 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
> }
>
> if (!cpumask_empty(mask)) {
> - cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
> cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ));
> if (cpumask_empty(mask)) {
> free_cpumask_var(mask);
> --
> 2.38.0
>