Message-ID: <20120328092446.GY22197@one.firstfloor.org>
Date: Wed, 28 Mar 2012 11:24:46 +0200
From: Andi Kleen <andi@...stfloor.org>
To: "Yan, Zheng" <zheng.z.yan@...el.com>
Cc: a.p.zijlstra@...llo.nl, mingo@...e.hu, andi@...stfloor.org,
eranian@...gle.com, linux-kernel@...r.kernel.org,
ming.m.lin@...el.com
Subject: Re: [PATCH 2/5] perf: generic intel uncore support
Overall the driver looks rather good. Thanks.
On Wed, Mar 28, 2012 at 02:43:15PM +0800, Yan, Zheng wrote:
> +static void uncore_perf_event_update(struct intel_uncore_box *box,
> + struct perf_event *event)
> +{
> + raw_spin_lock(&box->lock);
I think a raw lock would only be needed if the uncore code were called
from the scheduler context switch path, which it should not be.
So you can use a normal spinlock here instead of the raw variant.
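Something like this should be all that's needed (untested sketch; the
counter read is just a stand-in for whatever the patch really does in
the body):

	/* in struct intel_uncore_box */
	spinlock_t lock;

	static void uncore_perf_event_update(struct intel_uncore_box *box,
					     struct perf_event *event)
	{
		u64 prev, count;

		spin_lock(&box->lock);
		prev = local64_read(&event->hw.prev_count);
		count = uncore_read_counter(box, event);	/* placeholder */
		local64_set(&event->hw.prev_count, count);
		local64_add(count - prev, &event->count);
		spin_unlock(&box->lock);
	}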
> +static void uncore_pmu_start_hrtimer(struct intel_uncore_box *box)
> +{
> + __hrtimer_start_range_ns(&box->hrtimer,
> + ns_to_ktime(UNCORE_PMU_HRTIMER_INTERVAL), 0,
> + HRTIMER_MODE_REL_PINNED, 0);
> +}
You can probably allow some timer slack here to be more power friendly.
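The third argument is the allowed slack; something like this, where the
1% value is just a guess at what is acceptable for counter accuracy:

	__hrtimer_start_range_ns(&box->hrtimer,
		ns_to_ktime(UNCORE_PMU_HRTIMER_INTERVAL),
		UNCORE_PMU_HRTIMER_INTERVAL / 100,	/* slack in ns */
		HRTIMER_MODE_REL_PINNED, 0);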
> +static struct intel_uncore_box *
> +uncore_pmu_find_box(struct intel_uncore_pmu *pmu, int phyid)
> +{
> + struct intel_uncore_box *box;
> +
> + rcu_read_lock();
I'm not sure RCU is really needed here; are any of these lookup paths
actually time critical? But OK, it shouldn't hurt either.
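Without RCU it would just be the usual list walk under a lock, roughly
like this (box_lock, box_list and phy_id are made-up names for whatever
the patch really calls them):

	static struct intel_uncore_box *
	uncore_pmu_find_box(struct intel_uncore_pmu *pmu, int phyid)
	{
		struct intel_uncore_box *box, *found = NULL;

		spin_lock(&pmu->box_lock);
		list_for_each_entry(box, &pmu->box_list, list) {
			if (box->phy_id == phyid) {
				found = box;
				break;
			}
		}
		spin_unlock(&pmu->box_lock);
		return found;
	}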
> +static int __init uncore_cpu_init(void)
> +{
> + int ret, cpu;
> +
> + switch (boot_cpu_data.x86_model) {
> + default:
> + return 0;
> + }
This needs at least one model case, doesn't it? As written the switch
always hits the default and returns, so everything below is dead code.
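Presumably it wants at least one real entry before the default, e.g.
(model 42 is client Sandy Bridge; snb_msr_uncores is just a guess at
the array name used elsewhere in the series):

	switch (boot_cpu_data.x86_model) {
	case 42: /* Sandy Bridge */
		msr_uncores = snb_msr_uncores;
		break;
	default:
		return 0;
	}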
> +
> + ret = uncore_types_init(msr_uncores);
> + if (ret)
> + return ret;
> +
> + get_online_cpus();
> + for_each_online_cpu(cpu)
> + uncore_cpu_prepare(cpu);
> +
> + preempt_disable();
> + smp_call_function(uncore_cpu_setup, NULL, 1);
> + uncore_cpu_setup(NULL);
> + preempt_enable();
That open-coded sequence is just on_each_cpu().
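I.e. the whole preempt_disable()/smp_call_function()/local call/
preempt_enable() dance collapses to:

	/* runs uncore_cpu_setup() on every online CPU, including this one */
	on_each_cpu(uncore_cpu_setup, NULL, 1);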
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.