Message-ID: <81d35736-27b6-f489-fac4-55a5669673dc@arm.com>
Date: Tue, 23 Apr 2019 11:45:14 +0100
From: Robin Murphy <robin.murphy@....com>
To: Suzuki K Poulose <suzuki.poulose@....com>, will.deacon@....com
Cc: mark.rutland@....com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
bigeasy@...utronix.de, peterz@...radead.org,
clabbe.montjoie@...il.com, Meng.Li@...driver.com
Subject: Re: [PATCH v3 2/2] perf/arm-ccn: Clean up CPU hotplug handling
On 16/04/2019 17:29, Suzuki K Poulose wrote:
> On 04/16/2019 04:24 PM, Robin Murphy wrote:
>> Like arm-cci, arm-ccn has the same issue of disabling preemption around
>> operations which can take mutexes. Again, remove the definite bug by
>> simply not trying to fight the theoretical races. And since we are
>> touching the hotplug handling code, take the opportunity to streamline
>> it, as there's really no need to store a full-sized cpumask to keep
>> track of a single CPU ID.
>>
>> Signed-off-by: Robin Murphy <robin.murphy@....com>
>> ---
>> drivers/perf/arm-ccn.c | 25 +++++++++++++------------
>> 1 file changed, 13 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/perf/arm-ccn.c b/drivers/perf/arm-ccn.c
>> index 2ae76026e947..0bb52d9bdcf7 100644
>> --- a/drivers/perf/arm-ccn.c
>> +++ b/drivers/perf/arm-ccn.c
>> @@ -167,7 +167,7 @@ struct arm_ccn_dt {
>> struct hrtimer hrtimer;
>> - cpumask_t cpu;
>> + unsigned int cpu;
>> struct hlist_node node;
>> struct pmu pmu;
>> @@ -559,7 +559,7 @@ static ssize_t arm_ccn_pmu_cpumask_show(struct device *dev,
>> {
>> struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
>> - return cpumap_print_to_pagebuf(true, buf, &ccn->dt.cpu);
>> + return cpumap_print_to_pagebuf(true, buf, cpumask_of(ccn->dt.cpu));
>> }
>> static struct device_attribute arm_ccn_pmu_cpumask_attr =
>> @@ -759,7 +759,7 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
>> * mitigate this, we enforce CPU assignment to one, selected
>> * processor (the one described in the "cpumask" attribute).
>> */
>> - event->cpu = cpumask_first(&ccn->dt.cpu);
>> + event->cpu = ccn->dt.cpu;
>> node_xp = CCN_CONFIG_NODE(event->attr.config);
>> type = CCN_CONFIG_TYPE(event->attr.config);
>> @@ -1215,15 +1215,15 @@ static int arm_ccn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
>> struct arm_ccn *ccn = container_of(dt, struct arm_ccn, dt);
>> unsigned int target;
>> - if (!cpumask_test_and_clear_cpu(cpu, &dt->cpu))
>> + if (cpu != dt->cpu)
>> return 0;
>> target = cpumask_any_but(cpu_online_mask, cpu);
>> if (target >= nr_cpu_ids)
>> return 0;
>> perf_pmu_migrate_context(&dt->pmu, cpu, target);
>> - cpumask_set_cpu(target, &dt->cpu);
>> + dt->cpu = target;
>> if (ccn->irq)
>> - WARN_ON(irq_set_affinity_hint(ccn->irq, &dt->cpu) != 0);
>> + WARN_ON(irq_set_affinity_hint(ccn->irq, cpumask_of(dt->cpu)));
>> return 0;
>> }
>> @@ -1299,29 +1299,30 @@ static int arm_ccn_pmu_init(struct arm_ccn *ccn)
>> }
>> /* Pick one CPU which we will use to collect data from CCN... */
>> - cpumask_set_cpu(get_cpu(), &ccn->dt.cpu);
>> + ccn->dt.cpu = raw_smp_processor_id();
>> /* Also make sure that the overflow interrupt is handled by this CPU */
>> if (ccn->irq) {
>> - err = irq_set_affinity_hint(ccn->irq, &ccn->dt.cpu);
>> + err = irq_set_affinity_hint(ccn->irq, cpumask_of(ccn->dt.cpu));
>> if (err) {
>> dev_err(ccn->dev, "Failed to set interrupt affinity!\n");
>> goto error_set_affinity;
>> }
>> }
>> + cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
>> + &ccn->dt.node);
>> +
>> err = perf_pmu_register(&ccn->dt.pmu, name, -1);
>> if (err)
>> goto error_pmu_register;
>> - cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
>> - &ccn->dt.node);
>> - put_cpu();
>> return 0;
>> error_pmu_register:
>> + cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
>> + &ccn->dt.node);
>> error_set_affinity:
>> - put_cpu();
>
> Super minor nit: We don't need the error_set_affinity label anymore, as
> we don't do anything here. Otherwise:
Right, there were already somewhat-redundant labels to begin with, but
since they remain consistently named for the failure conditions which
lead to them (as opposed to the cleanup actions they perform), I figured
I would leave them be for this change.
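
As a minimal sketch of what I mean (with hypothetical function and
label names), each label is named for the step that failed, and the
unwind for everything *before* that step sits underneath it:

static int foo_probe(struct foo *f)
{
	int err;

	err = foo_alloc_buffers(f);
	if (err)
		goto error_alloc_buffers;

	err = foo_request_irq(f);
	if (err)
		goto error_request_irq;

	return 0;

error_request_irq:
	/* undo only the steps taken before request_irq failed */
	foo_free_buffers(f);
error_alloc_buffers:
	return err;
}

so a label can legitimately end up with nothing of its own to do if
there was nothing earlier to unwind, which is how error_set_affinity
ends up looking after this patch.
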
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@....com>
Thanks!
Robin.