Message-ID: <20151027164720.GH20208@arm.com>
Date: Tue, 27 Oct 2015 16:47:20 +0000
From: Will Deacon <will.deacon@....com>
To: roy.qing.li@...il.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ARM: perf: ensure the cpu is available to scheduler when
set irq affinity

On Wed, Oct 21, 2015 at 03:56:02PM +0800, roy.qing.li@...il.com wrote:
> From: Li RongQing <roy.qing.li@...il.com>
>
> When there are 4 CPUs but only one is available to the scheduler, the
> warnings below are generated when running the following command:
> # perf record -g -e cpu-clock -- find / -name "*.ko"
> CPU PMU: unable to set irq affinity (irq=28, cpu=1)
> CPU PMU: unable to set irq affinity (irq=29, cpu=2)
> CPU PMU: unable to set irq affinity (irq=30, cpu=3)
>
> So ensure the CPU is available to the scheduler when setting the IRQ
> affinity.
>
> Signed-off-by: Li RongQing <roy.qing.li@...il.com>
> ---
> drivers/perf/arm_pmu.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index 2365a32..9401aa8 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -619,6 +619,9 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)
> if (cpu_pmu->irq_affinity)
> cpu = cpu_pmu->irq_affinity[i];
>
> + if (!cpu_online(cpu))
> + continue;
> +
> if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs))
> continue;
> irq = platform_get_irq(pmu_device, i);
> @@ -665,6 +668,9 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
> if (cpu_pmu->irq_affinity)
> cpu = cpu_pmu->irq_affinity[i];
>
> + if (!cpu_online(cpu))
> + continue;
> +

Isn't this all racy against concurrent hotplug events? We're probably
better off requesting the IRQs at PMU probe time: the only reason they
were requested late like this was so that the IRQ lines could be shared
with other subsystems such as oprofile, and we no longer have to worry
about that.
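
Even with the cpu_online() check, the CPU can go offline between the
check and the irq_set_affinity() call. If we did keep this approach,
the loop would at least need to run with hotplug excluded. Completely
untested sketch, just to illustrate the shape of it:

	get_online_cpus();
	for (i = 0; i < irqs; ++i) {
		int cpu = i;

		if (cpu_pmu->irq_affinity)
			cpu = cpu_pmu->irq_affinity[i];

		/* CPU can't go away while we hold the hotplug lock */
		if (!cpu_online(cpu))
			continue;

		/* request_irq()/irq_set_affinity() as in the current code */
	}
	put_online_cpus();

But requesting the IRQs once at probe time sidesteps the race entirely,
so I'd prefer that.
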
Will