Message-ID: <20200612134721.GA142550@google.com>
Date: Fri, 12 Jun 2020 14:47:21 +0100
From: Quentin Perret <qperret@...gle.com>
To: Qais Yousef <qais.yousef@....com>
Cc: Doug Anderson <dianders@...omium.org>,
Benson Leung <bleung@...omium.org>,
Enric Balletbo i Serra <enric.balletbo@...labora.com>,
hsinyi@...omium.org, Joel Fernandes <joelaf@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Nicolas Boichat <drinkcat@...omium.org>,
Gwendal Grignou <gwendal@...omium.org>,
ctheegal@...eaurora.org, Guenter Roeck <groeck@...omium.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cros_ec_spi: Even though we're RT priority, don't bump
cpu freq
On Friday 12 Jun 2020 at 13:34:48 (+0100), Qais Yousef wrote:
> > On Thursday 11 Jun 2020 at 10:48:40 (-0700), Doug Anderson wrote:
> > > I'm not totally a fan, but I'm definitely not an expert in this area
> > > (I've also only read the patch description and not the patch or the
> > > whole thread). I really don't want yet another value that I need to
> > > tune from board to board. Even worse, this tuning value isn't
> > > board-specific but a combination of board and software specific. By
> > > this, I'd imagine a scenario where you're using a real-time task to
> > > get audio decoding done within a certain latency. I guess you'd tune
> > > this value to make sure that you can get all your audio decoding done
> > > in time but also not burn extra power. Now, imagine that the OS
> > > upgrades and the audio task suddenly has to decode more complex
> > > streams. You've got to go across all of your boards and re-tune every
> > > one? ...or, nobody thinks about it and older boards start getting
> > > stuttery audio? Perhaps the opposite happens and someone comes up
> > > with a newer lower-cpu-intensive codec and you could save power.
> > > Sounds like a bit of a nightmare.
>
> Generally I would expect this global tunable to be part of a vendor's SoC BSP.
>
> People tend to think of the flagship SoCs, which are powerful, but if you
> consider the low- and medium-end devices, there's a massive spectrum out
> there that this range is trying to cover.
>
> I would expect older boards' init scripts to be separate from newer boards'
> init scripts. The OS by default boosts all RT tasks unless a platform-specific
> script overrides that. So I can't see how an OS upgrade would affect older boards.
I think Doug meant that the device-specific values need re-tuning in
case of major OS updates, which is indeed a pain. But I'm not sure we
have a better solution than that.
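For reference, such a platform override would just be a small init-script
fragment. A sketch, assuming the sysctl name from the proposed patch
(sched_util_clamp_min_rt_default), where 1024 means "boost RT tasks to max"
(the default) and 0 disables the boost entirely:

```sh
# Hypothetical platform init fragment: override the OS default of
# boosting all RT tasks to max frequency. Values between 0 and 1024
# pick an intermediate default utilization clamp for RT tasks.
# Guard on writability so the script is a no-op on kernels without
# the knob (or when not running as root).
if [ -w /proc/sys/kernel/sched_util_clamp_min_rt_default ]; then
    echo 0 > /proc/sys/kernel/sched_util_clamp_min_rt_default
fi
```

This is a config fragment, not a complete script; the sysctl path is an
assumption based on the patch under discussion.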
> This knob still allows you to disable the max boosting and use the per-task
> uclamp interface to boost only the tasks you care about. AFAIK this is
> already done in a hacky way on Android devices via special vendor provisions.
>
> > >
> > > I'd rather have a boolean value: boost all RT threads to max vs. don't
> > > boost all RT threads to max. Someone that just wanted RT stuff to run
>
> If that's what your use case requires, you can certainly treat it like
> a boolean if you want.
+1
> > > as fast as possible without any hassle on their system and didn't care
> > > about power efficiency could turn this on. Anyone who really cared
> > > about power could turn this off and then could find a more targeted
> > > way to boost things, hopefully in a way that doesn't require tuning.
> > > One option would be to still boost the CPU to max but only for certain
> > > tasks known to be really latency sensitive. Another might be to
>
> The per-task uclamp interface allows you to do that. But SoC vendors/system
> integrators need to decide that. I'm saying this with Android in mind
> specifically. Linux-based laptops that are tuned in a similar way are rare.
> But hopefully this will change at some point :)
>
> > > somehow measure whether or not the task is making its deadlines and
> > > boost the CPU frequency up if deadlines are not being met. I'm sure
> > > there are fancier ways.
>
> You need to use SCHED_DEADLINE then :)
Well, not quite :-)
The frequency selection for DL is purely based on the userspace-provided
parameters, from which we derive the bandwidth request. But we don't do
things like 'raise the frequency if the actual runtime gets close to the
WCET', or anything of the sort. All of that would have to be implemented
in userspace ATM.
Cheers,
Quentin