Message-ID: <20230811111414.GC6993@willie-the-truck>
Date: Fri, 11 Aug 2023 12:14:14 +0100
From: Will Deacon <will@...nel.org>
To: Yicong Yang <yangyicong@...wei.com>
Cc: jonathan.cameron@...wei.com, mark.rutland@....com,
hejunhao3@...wei.com, prime.zeng@...ilicon.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linuxarm@...wei.com, yangyicong@...ilicon.com
Subject: Re: [PATCH] drivers/perf: hisi: Schedule perf session according to
locality
On Tue, Aug 08, 2023 at 08:51:47PM +0800, Yicong Yang wrote:
> From: Yicong Yang <yangyicong@...ilicon.com>
>
> The PCIe PMUs are located on different NUMA nodes, but currently we don't
> take this into account and are likely to stack all the sessions on the same CPU:
>
> [root@...alhost tmp]# cat /sys/devices/hisi_pcie*/cpumask
> 0
> 0
> 0
> 0
> 0
> 0
>
> This can be optimized a bit by using a CPU local to the PMU.
>
> Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>
> ---
> drivers/perf/hisilicon/hisi_pcie_pmu.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
> index e10fc7cb9493..60ecf469782b 100644
> --- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
> +++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
> @@ -665,7 +665,7 @@ static int hisi_pcie_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
> struct hisi_pcie_pmu *pcie_pmu = hlist_entry_safe(node, struct hisi_pcie_pmu, node);
>
> if (pcie_pmu->on_cpu == -1) {
> - pcie_pmu->on_cpu = cpu;
> + pcie_pmu->on_cpu = cpumask_local_spread(0, dev_to_node(&pcie_pmu->pdev->dev));
> WARN_ON(irq_set_affinity(pcie_pmu->irq, cpumask_of(cpu)));
Hmm, this is a bit weird now, because the interrupt is affine to a different
CPU from the one you've chosen. Are you sure that's ok? When the offline
notifier picks a new target, it moves the irq as well.
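For illustration, keeping the interrupt affine to the CPU actually chosen
would look something like the below (untested sketch, reusing the names
from the quoted hunk):

	if (pcie_pmu->on_cpu == -1) {
		/* Pick a CPU close to the PMU's NUMA node ... */
		pcie_pmu->on_cpu = cpumask_local_spread(0, dev_to_node(&pcie_pmu->pdev->dev));
		/* ... and move the irq to that same CPU, not the hotplug cpu */
		WARN_ON(irq_set_affinity(pcie_pmu->irq, cpumask_of(pcie_pmu->on_cpu)));
	}

That would keep the online path consistent with the offline notifier, which
already moves the irq together with on_cpu when it picks a new target.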
Will