Message-ID: <7ea4e410-72cb-db3f-f141-a2f3c70b801d@huawei.com>
Date: Fri, 27 Mar 2020 17:52:28 +0000
From: John Garry <john.garry@...wei.com>
To: Marc Zyngier <maz@...nel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
CC: chenxiang <chenxiang66@...ilicon.com>,
Zhou Wang <wangzhou1@...ilicon.com>,
Ming Lei <ming.lei@...hat.com>,
Jason Cooper <jason@...edaemon.net>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>,
"luojiaxing@...wei.com" <luojiaxing@...wei.com>
Subject: Re: [PATCH v3 2/2] irqchip/gic-v3-its: Balance initial LPI affinity
across CPUs
> +
> +/*
> + * As suggested by Thomas Gleixner in:
> + * https://lore.kernel.org/r/87h80q2aoc.fsf@nanos.tec.linutronix.de
> + */
> +static int its_select_cpu(struct irq_data *d,
> +                          const struct cpumask *aff_mask)
> +{
> +        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
> +        cpumask_var_t tmpmask;
> +        int cpu, node;
> +
> +        if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
> +                return -ENOMEM;
> +
> +        node = its_dev->its->numa_node;
> +
> +        if (!irqd_affinity_is_managed(d)) {
> +                /* First try the NUMA node */
> +                if (node != NUMA_NO_NODE) {
> +                        /*
> +                         * Try the intersection of the affinity mask and the
> +                         * node mask (and the online mask, just to be safe).
> +                         */
> +                        cpumask_and(tmpmask, cpumask_of_node(node), aff_mask);
> +                        cpumask_and(tmpmask, tmpmask, cpu_online_mask);
> +
> +                        /* If that doesn't work, try the nodemask itself */
> +                        if (cpumask_empty(tmpmask))
> +                                cpumask_and(tmpmask, cpumask_of_node(node), cpu_online_mask);
> +
> +                        cpu = cpumask_pick_least_loaded(d, tmpmask);
> +                        if (cpu < nr_cpu_ids)
> +                                goto out;
> +
> +                        /* If we can't cross sockets, give up */
> +                        if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144))
> +                                goto out;
> +
> +                        /* If the above failed, expand the search */
> +                }
> +
> +                /* Try the intersection of the affinity and online masks */
> +                cpumask_and(tmpmask, aff_mask, cpu_online_mask);
> +
> +                /* If that doesn't fly, the online mask is the last resort */
> +                if (cpumask_empty(tmpmask))
> +                        cpumask_copy(tmpmask, cpu_online_mask);
> +
> +                cpu = cpumask_pick_least_loaded(d, tmpmask);
> +        } else {
Hi Marc,
> +                cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
> +
Please consider this flow:
- in its_irq_domain_activate()->its_select_cpu(), for a managed
interrupt we select the target CPU from the interrupt's affinity mask
- then in the subsequent its_set_affinity() call for the same interrupt,
we may needlessly reselect the target CPU. This is because in the
its_set_affinity()->its_select_cpu() call we account for that interrupt
in the load of its current target CPU, so we may find a less loaded CPU
in the mask and switch to it.
For example, from mask 0-5 we select cpu0 initially. Then on the 2nd
call, we find cpu0 has a greater load (1) than cpu1 (0), and switch to cpu1.
Cheers,
John
> +                /* If we cannot cross sockets, limit the search to that node */
> +                if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&
> +                    node != NUMA_NO_NODE)
> +                        cpumask_and(tmpmask, tmpmask, cpumask_of_node(node));
> +
> +                cpu = cpumask_pick_least_loaded(d, tmpmask);
> +        }
> +out:
> +        free_cpumask_var(tmpmask);
> +
> +        pr_debug("IRQ%d -> %*pbl CPU%d\n", d->irq, cpumask_pr_args(aff_mask), cpu);
> +        return cpu;
> +}