Message-ID: <1627825.eiM24BDMdD@vostro.rjw.lan>
Date: Thu, 24 Nov 2016 02:19:03 +0100
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Peter Zijlstra <peterz@...radead.org>,
Viresh Kumar <viresh.kumar@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>, linaro-kernel@...ts.linaro.org,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <Juri.Lelli@....com>,
Robin Randhawa <robin.randhawa@....com>,
Steve Muckle <smuckle.linux@...il.com>
Subject: Re: [PATCH V2 3/4] cpufreq: schedutil: move slow path from workqueue to SCHED_FIFO task
On Wednesday, November 16, 2016 04:26:05 PM Peter Zijlstra wrote:
> On Tue, Nov 15, 2016 at 01:53:22PM +0530, Viresh Kumar wrote:
> > @@ -308,7 +313,21 @@ static void sugov_irq_work(struct irq_work *irq_work)
> > struct sugov_policy *sg_policy;
> >
> > sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
> > +
> > +	/*
> > +	 * For RT and deadline tasks, the schedutil governor shoots the
> > +	 * frequency to maximum. Special care must be taken to ensure
> > +	 * that this kthread doesn't do the same.
> > +	 *
> > +	 * This is (mostly) guaranteed by the work_in_progress flag. The
> > +	 * flag is cleared only at the end of sugov_work(), and until
> > +	 * then schedutil rejects all other frequency scaling requests.
> > +	 *
> > +	 * There is a very rare case, though, where the RT thread yields
> > +	 * right after the work_in_progress flag is cleared. The effects
> > +	 * of that are neglected for now.
> > +	 */
> > + kthread_queue_work(&sg_policy->worker, &sg_policy->work);
> > }
>
>
> Right, so that's a wee bit icky, but it's also entirely pre-existing
> code.
>
> Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Whole series applied.
Thanks,
Rafael