Message-ID: <1520262.pnveEYDEnp@vostro.rjw.lan>
Date: Wed, 16 Mar 2016 22:38:14 +0100
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linux PM list <linux-pm@...r.kernel.org>,
Juri Lelli <juri.lelli@....com>,
Steve Muckle <steve.muckle@...aro.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Michael Turquette <mturquette@...libre.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH v4 7/7] cpufreq: schedutil: New governor based on scheduler utilization data
On Wednesday, March 16, 2016 06:52:11 PM Peter Zijlstra wrote:
> On Wed, Mar 16, 2016 at 03:59:18PM +0100, Rafael J. Wysocki wrote:
> > +static void sugov_work(struct work_struct *work)
> > +{
> > +	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > +
> > +	mutex_lock(&sg_policy->work_lock);
> > +	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> > +				CPUFREQ_RELATION_L);
> > +	mutex_unlock(&sg_policy->work_lock);
> > +
>
> Be aware that the below store can creep up and become visible before the
> unlock. AFAICT that doesn't really matter, but still.
It doesn't matter. :-)
Had it mattered, I would have used memory barriers.
> > +	sg_policy->work_in_progress = false;
> > +}
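Just for illustration, had it mattered, one way to keep that store from
becoming visible before the unlock would have been a full barrier between
the two (hypothetical, not part of the patch):

	mutex_unlock(&sg_policy->work_lock);

	/*
	 * Hypothetical: smp_mb() would prevent the store below from being
	 * reordered before the unlock and observed inside the critical
	 * section.  Not needed here, since no reader depends on that order.
	 */
	smp_mb();
	sg_policy->work_in_progress = false;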
> > +
> > +static void sugov_irq_work(struct irq_work *irq_work)
> > +{
> > +	struct sugov_policy *sg_policy;
> > +
> > +	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
> > +	schedule_work(&sg_policy->work);
> > +}
>
> If you care what CPU the work runs on, you should use schedule_work_on();
> regular schedule_work() can end up on any random CPU (although typically
> it does not).
I know, but I don't care too much.
"ondemand" and "conservative" use schedule_work() for the same thing, so
drivers need to cope with that if they need things to run on a particular
CPU.
That said, I guess things would be a bit more efficient if the work was
scheduled on the same CPU that had queued up the irq_work. It also wouldn't
be too difficult to implement, so I'll make that change.
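Roughly like this (an untested sketch; irq_work handlers run on the CPU
that raised the irq_work, so smp_processor_id() identifies that CPU):

static void sugov_irq_work(struct irq_work *irq_work)
{
	struct sugov_policy *sg_policy;

	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
	/*
	 * Queue the work on the CPU the irq_work ran on, rather than
	 * letting the workqueue pick an arbitrary one.
	 */
	schedule_work_on(smp_processor_id(), &sg_policy->work);
}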