Message-ID: <ZoIPzQNEsUWOWp3f@fedora>
Date: Mon, 1 Jul 2024 10:09:17 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Daniel Wagner <dwagner@...e.de>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Frederic Weisbecker <fweisbecker@...e.com>,
Mel Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>,
Sridhar Balaraman <sbalaraman@...allelwireless.com>,
"brookxu.cn" <brookxu.cn@...il.com>, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org,
ming.lei@...hat.com
Subject: Re: [PATCH v2 3/3] lib/group_cpus.c: honor housekeeping config when
grouping CPUs
On Thu, Jun 27, 2024 at 04:10:53PM +0200, Daniel Wagner wrote:
> group_cpus_evenly() distributes all present CPUs into groups. This
> ignores the isolcpus configuration and assigns isolated CPUs to
> groups as well.
>
> Make group_cpus_evenly() aware of the isolcpus configuration and use
> the housekeeping CPU mask as the basis for distributing the available
> CPUs into groups.
>
> Fixes: 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed interrupts")
> Signed-off-by: Daniel Wagner <dwagner@...e.de>
> ---
> lib/group_cpus.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 73 insertions(+), 2 deletions(-)
>
> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> index ee272c4cefcc..19fb7186f9d4 100644
> --- a/lib/group_cpus.c
> +++ b/lib/group_cpus.c
> @@ -8,6 +8,7 @@
> #include <linux/cpu.h>
> #include <linux/sort.h>
> #include <linux/group_cpus.h>
> +#include <linux/sched/isolation.h>
>
> #ifdef CONFIG_SMP
>
> @@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> }
>
> /**
> - * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> + * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> * @numgrps: number of groups
> *
> * Return: cpumask array if successful, NULL otherwise. And each element
> @@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
> * We guarantee in the resulted grouping that all CPUs are covered, and
> * no same CPU is assigned to multiple groups
> */
> -struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
> {
> 	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
> 	cpumask_var_t *node_to_cpumask;
> @@ -423,6 +424,76 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> 	}
> 	return masks;
> }
> +
> +/**
> + * group_mask_cpus_evenly - Group CPUs in @cpu_mask evenly per NUMA/CPU locality
> + * @numgrps: number of groups
> + * @cpu_mask: CPUs to consider for the grouping
> + *
> + * Return: cpumask array if successful, NULL otherwise. And each element
> + * includes CPUs assigned to this group.
> + *
> + * Try to put close CPUs from the viewpoint of CPU and NUMA locality into
> + * the same group. Distribute the CPUs in @cpu_mask evenly across the groups.
> + */
> +static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
> +					      const struct cpumask *cpu_mask)
> +{
> +	cpumask_var_t *node_to_cpumask;
> +	cpumask_var_t nmsk;
> +	int ret = -ENOMEM;
> +	struct cpumask *masks = NULL;
> +
> +	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> +		return NULL;
> +
> +	node_to_cpumask = alloc_node_to_cpumask();
> +	if (!node_to_cpumask)
> +		goto fail_nmsk;
> +
> +	masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
> +	if (!masks)
> +		goto fail_node_to_cpumask;
> +
> +	build_node_to_cpumask(node_to_cpumask);
> +
> +	ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
> +				  masks);
> +
> +fail_node_to_cpumask:
> +	free_node_to_cpumask(node_to_cpumask);
> +
> +fail_nmsk:
> +	free_cpumask_var(nmsk);
> +	if (ret < 0) {
> +		kfree(masks);
> +		return NULL;
> +	}
> +	return masks;
> +}
> +
> +/**
> + * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> + * @numgrps: number of groups
> + *
> + * Return: cpumask array if successful, NULL otherwise.
> + *
> + * group_possible_cpus_evenly() is used to distribute the groups across
> + * all possible CPUs in the absence of the isolcpus command line
> + * argument. group_mask_cpus_evenly() is used when the isolcpus command
> + * line argument is used with the managed_irq option. In that case only
> + * the housekeeping CPUs are considered.
> + */
> +struct cpumask *group_cpus_evenly(unsigned int numgrps)
> +{
> +	const struct cpumask *hk_mask;
> +
> +	hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
> +	if (!cpumask_empty(hk_mask))
> +		return group_mask_cpus_evenly(numgrps, hk_mask);
> +
> +	return group_possible_cpus_evenly(numgrps);
> +}

With this patch, some isolated CPUs may no longer be covered by the
blk-mq queue mapping.
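
For context, blk-mq consumes the result roughly like the following (a
simplified sketch modeled on blk_mq_map_queues() in
block/blk-mq-cpumap.c, not part of this patch). Any CPU that appears in
none of the returned masks is left without a queue mapping, which is
the coverage gap mentioned above:

	struct cpumask *masks;
	unsigned int queue, cpu;

	masks = group_cpus_evenly(qmap->nr_queues);
	if (!masks) {
		/* fallback: map every possible CPU to the first queue */
		for_each_possible_cpu(cpu)
			qmap->mq_map[cpu] = qmap->queue_offset;
		return;
	}

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		/* CPUs excluded from masks[] (e.g. isolated ones) are never mapped */
		for_each_cpu(cpu, &masks[queue])
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}
	kfree(masks);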

Meanwhile, people may still submit IO workloads from isolated CPUs,
for example via 'taskset -c', and blk-mq may not work well in that
situation; for instance, an IO hang may be triggered during CPU
hotplug.
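
Here is a hypothetical userspace sketch of that usage pattern,
equivalent to running a reader under 'taskset -c 2' (the CPU number and
the device path are assumptions, not from this thread):

	/*
	 * Hypothetical reproducer: pin the task to an isolated CPU,
	 * then submit direct IO from it.
	 */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		cpu_set_t set;
		void *buf;
		int fd;

		/* assume CPU 2 is isolated via isolcpus=managed_irq,2-3 */
		CPU_ZERO(&set);
		CPU_SET(2, &set);
		if (sched_setaffinity(0, sizeof(set), &set)) {
			perror("sched_setaffinity");
			return 1;
		}

		/* O_DIRECT requires an aligned buffer */
		if (posix_memalign(&buf, 4096, 4096))
			return 1;

		fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* this IO is submitted from the isolated CPU */
		if (read(fd, buf, 4096) < 0)
			perror("read");

		close(fd);
		free(buf);
		return 0;
	}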

I did see this kind of usage in some Red Hat OpenShift workloads.

If the blk-mq problem can be solved, I am fine with this kind of
change.

Thanks,
Ming