Date:   Thu, 10 Nov 2022 13:25:17 +0000
From:   Qais Yousef <qyousef@...alina.io>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Kajetan Puchalski <kajetan.puchalski@....com>,
        Jian-Min Liu <jian-min.liu@...iatek.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Vincent Donnefort <vdonnefort@...gle.com>,
        Quentin Perret <qperret@...gle.com>,
        Patrick Bellasi <patrick.bellasi@...bug.net>,
        Abhijeet Dharmapurikar <adharmap@...cinc.com>,
        Qais Yousef <qais.yousef@....com>,
        linux-kernel@...r.kernel.org,
        Jonathan JMChen <jonathan.jmchen@...iatek.com>
Subject: Re: [RFC PATCH 0/1] sched/pelt: Change PELT halflife at runtime

On 11/09/22 16:49, Peter Zijlstra wrote:
> On Tue, Nov 08, 2022 at 07:48:43PM +0000, Qais Yousef wrote:
> > On 11/07/22 14:41, Peter Zijlstra wrote:
> > > On Thu, Sep 29, 2022 at 03:41:47PM +0100, Kajetan Puchalski wrote:
> > > 
> > > > Based on all the tests we've seen, jankbench or otherwise, the
> > > > improvement can mainly be attributed to the faster ramp up of frequency
> > > > caused by the shorter PELT window while using schedutil.
> > > 
> > > Would something terrible like the below help some?
> > > 
> > > If not, I suppose it could be modified to take the current state as
> > > history. But basically it runs a faster pelt sum along side the regular
> > > signal just for ramping up the frequency.
> > 
> > A bit of a tangent, but this reminded me of this old patch:
> > 
> > 	https://lore.kernel.org/lkml/1623855954-6970-1-git-send-email-yt.chang@mediatek.com/
> > 
> > I think we have a bit too many moving cogs that might be creating undesired
> > compound effect.
> > 
> > Should we consider removing margins in favour of improving util ramp up/down?
> > (whether via util_est or pelt hf).
> 
> Yeah, possibly.
> 
> So one thing that was key to that hack I proposed is that it is
> per-task. This means we can either set or detect the task activation
> period and use that to select an appropriate PELT multiplier.

Note that a big difference compared to changing the PELT halflife is that
util_est only biases towards ramping up faster; not being able to come down
as quickly could impact power, since our residency at higher frequencies
will increase. Only testing can show how big a problem this is in practice.
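
Roughly, the asymmetry looks like this (a simplified standalone sketch of
the behaviour, not the actual util_est_update() code in fair.c):

/*
 * Simplified sketch of the util_est bias: a rising utilization is followed
 * immediately, but a falling one only decays through a slow-moving EWMA,
 * so what we request from schedutil drops much more slowly than it rises.
 */
#define UTIL_EST_WEIGHT_SHIFT	2	/* illustrative ~1/4 weight */

struct util_est_sketch {
	unsigned int enqueued;	/* last observed task utilization */
	unsigned int ewma;	/* slow-moving history */
};

static void util_est_update_sketch(struct util_est_sketch *ue, unsigned int util)
{
	ue->enqueued = util;

	/* Ramp up fast: jump straight to a higher observed utilization. */
	if (util >= ue->ewma) {
		ue->ewma = util;
		return;
	}

	/* Ramp down slowly: move only ~1/4 of the way towards the new value. */
	ue->ewma -= (ue->ewma - util) >> UTIL_EST_WEIGHT_SHIFT;
}

static unsigned int util_est_sketch_value(const struct util_est_sketch *ue)
{
	/* What frequency selection would see: never below the history. */
	return ue->ewma > ue->enqueued ? ue->ewma : ue->enqueued;
}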

> 
> But please explain; once tasks are in a steady state (60HZ, 90HZ or god
> forbid higher), the utilization should be the same between the various
> PELT window sizes, provided the activation period isn't *much* larger
> than the window.

It is only a steady state for a short period of time, before something else
happens that changes the nature of the workload.

For example, the player could be standing still in an empty room when an
explosion suddenly goes off, causing a lot of activity to appear on the
screen.

We can have a steady state at 20%, but an action on the screen could suddenly
change the demand to 100%.
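
To put rough numbers on it (a back-of-envelope toy, assuming the task simply
runs flat out from the moment the burst starts and ignoring util_est): with
a decay factor y such that y^halflife = 0.5, continuous running gives
roughly util(t) = 1024 - (1024 - util(0)) * y^t, so ramping from ~20% to
~80% takes about two halflives -- 64ms at 32ms vs 16ms at 8ms:

#include <math.h>
#include <stdio.h>

/* Time (ms) for a continuously running task to ramp from 'from' to 'to'. */
static double ramp_time_ms(double halflife_ms, double from, double to)
{
	double y = pow(0.5, 1.0 / halflife_ms);	/* per-ms decay factor */

	/* Solve 1024 - (1024 - from) * y^t = to for t. */
	return log((1024.0 - to) / (1024.0 - from)) / log(y);
}

int main(void)
{
	/* ~20% -> ~80% of the 1024 scale */
	printf("32ms halflife: %.0f ms\n", ramp_time_ms(32.0, 205.0, 819.0));
	printf(" 8ms halflife: %.0f ms\n", ramp_time_ms(8.0, 205.0, 819.0));
	return 0;
}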

By the way, you can find a lot of videos on YouTube about how to tweak CPU
frequencies and the governor to improve gaming performance:

	https://www.youtube.com/results?search_query=android+gaming+cpu+boost

And this ancient video from Google about the impact of frequency scaling on
games:

	https://www.youtube.com/watch?v=AZ97b2nT-Vo

It is truly ancient, and the advice given then (over 8 years ago) is not a
reflection of the current state of affairs.

The problem is not new; I guess expectations just keep going higher for what
one can do on a phone, in spite of all the past improvements :-)

> 
> Are these things running a ton of single shot tasks or something daft
> like that?

I'm not sure how all game engines behave, but the few I've seen don't tend
to do that.

I've seen apps like Instagram using single-shot tasks sometime in the
(distant) past to retrieve images. Generally I'm not sure how the Java-based
APIs behave. There is a JobScheduler API that allows apps to schedule
background and foreground work; that could end up reusing a pool of tasks or
creating new ones, I'm not sure. Game engines tend to be written against the
NDK, but simpler games might not be.


Cheers

--
Qais Yousef
