Message-ID: <8b3ea6aa-2751-9612-4b91-82640e8dde0f@huawei.com>
Date: Tue, 15 Aug 2023 19:02:45 +0800
From: Yicong Yang <yangyicong@...wei.com>
To: Will Deacon <will@...nel.org>
CC: <jonathan.cameron@...wei.com>, <mark.rutland@....com>,
<hejunhao3@...wei.com>, <prime.zeng@...ilicon.com>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <yangyicong@...ilicon.com>
Subject: Re: [PATCH] drivers/perf: hisi: Schedule perf session according to
locality
On 2023/8/11 19:14, Will Deacon wrote:
> On Tue, Aug 08, 2023 at 08:51:47PM +0800, Yicong Yang wrote:
>> From: Yicong Yang <yangyicong@...ilicon.com>
>>
>> The PCIe PMUs are located on different NUMA nodes, but currently we don't
>> take this into account and are likely to stack all the sessions on the same CPU:
>>
>> [root@...alhost tmp]# cat /sys/devices/hisi_pcie*/cpumask
>> 0
>> 0
>> 0
>> 0
>> 0
>> 0
>>
>> This can be optimized a bit by using a CPU local to the PMU's NUMA node.
>>
>> Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>
>> ---
>> drivers/perf/hisilicon/hisi_pcie_pmu.c | 15 ++++++++++++---
>> 1 file changed, 12 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
>> index e10fc7cb9493..60ecf469782b 100644
>> --- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
>> +++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
>> @@ -665,7 +665,7 @@ static int hisi_pcie_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
>> struct hisi_pcie_pmu *pcie_pmu = hlist_entry_safe(node, struct hisi_pcie_pmu, node);
>>
>> if (pcie_pmu->on_cpu == -1) {
>> - pcie_pmu->on_cpu = cpu;
>> + pcie_pmu->on_cpu = cpumask_local_spread(0, dev_to_node(&pcie_pmu->pdev->dev));
>> WARN_ON(irq_set_affinity(pcie_pmu->irq, cpumask_of(cpu)));
>
> Hmm, this is a bit weird now, because the interrupt is affine to a different
> CPU from the one you've chosen. Are you sure that's ok? When the offline
> notifier picks a new target, it moves the irq as well.
>
Thanks for pointing this out. This is indeed a problem; I'll fix it in the next version, something like the sketch below.
Thanks.
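
An untested sketch of what I have in mind (assuming the intent is to keep the IRQ affine to the CPU stored in pcie_pmu->on_cpu rather than the hotplug callback's cpu argument):

	if (pcie_pmu->on_cpu == -1) {
		/* Prefer a CPU on the PMU's NUMA node for the perf session */
		pcie_pmu->on_cpu = cpumask_local_spread(0, dev_to_node(&pcie_pmu->pdev->dev));
		/* ... and move the IRQ to that same CPU, not the incoming one */
		WARN_ON(irq_set_affinity(pcie_pmu->irq, cpumask_of(pcie_pmu->on_cpu)));
	}

That way the IRQ and the chosen CPU stay consistent, matching what the offline notifier already does when it picks a new target.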