Message-ID: <20160119191734.GB6357@twins.programming.kicks-ass.net>
Date: Tue, 19 Jan 2016 20:17:34 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Juri Lelli <juri.lelli@....com>
Cc: Michael Turquette <mturquette@...libre.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
rjw@...ysocki.net, steve.muckle@...aro.org,
vincent.guittot@...aro.org, morten.rasmussen@....com,
dietmar.eggemann@....com
Subject: Re: [RFC PATCH 18/19] cpufreq: remove transition_lock
On Tue, Jan 19, 2016 at 04:01:55PM +0000, Juri Lelli wrote:
> Right, read path is fast, but write path still requires some sort of
> locking (malloc, copy and update). So, I'm wondering if this still pays
> off for a structure that gets written a lot.
No, not at all. The pointer RCU protects (the driver) only changes when a
driver is registered or unregistered; the state that is written on every
update (last_util) is plain per-cpu data, so the hot path never takes a
lock:
struct cpufreq_driver __rcu *driver;

void sched_util_change(unsigned int util)
{
	struct cpufreq_driver *d;
	struct my_per_cpu_data *foo;

	rcu_read_lock();
	d = rcu_dereference(driver);
	if (d) {
		foo = this_cpu_ptr(d->data);
		/* Only poke the driver when utilisation moved enough. */
		if (abs(util - foo->last_util) > 10) {
			foo->last_util = util;
			foo->set_util(util);
		}
	}
	rcu_read_unlock();
}
struct cpufreq_driver *cpufreq_flip_driver(struct cpufreq_driver *new_driver)
{
	struct cpufreq_driver *old_driver;

	mutex_lock(&cpufreq_driver_lock);
	old_driver = rcu_dereference_protected(driver,
			lockdep_is_held(&cpufreq_driver_lock));
	rcu_assign_pointer(driver, new_driver);
	if (old_driver)
		/* Make sure all readers are done with old_driver. */
		synchronize_rcu();
	mutex_unlock(&cpufreq_driver_lock);

	return old_driver;
}
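
So the only cost on the write side is the mutex and the synchronize_rcu(),
and you pay that once per driver swap, not per update. Tearing down a
driver could then free its per-cpu data safely, something like this
(example_unregister_driver is a hypothetical helper, not existing cpufreq
API):

void example_unregister_driver(struct cpufreq_driver *drv)
{
	struct cpufreq_driver *old = cpufreq_flip_driver(NULL);

	if (WARN_ON(old != drv))
		return;

	/* synchronize_rcu() already ran; no readers can see old. */
	free_percpu(old->data);
}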