Message-ID: <CAJZ5v0gY+WjB2q=wnRYxpwFmLzOcLMKewrCgKdpC0oNPFgoDww@mail.gmail.com>
Date: Tue, 19 Jan 2021 16:12:20 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM <linux-pm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
x86 Maintainers <x86@...nel.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Giovanni Gherdovich <ggherdovich@...e.com>,
Giovanni Gherdovich <ggherdovich@...e.cz>
Subject: Re: [PATCH] x86: PM: Register syscore_ops for scale invariance
On Tue, Jan 12, 2021 at 4:10 PM Rafael J. Wysocki <rafael@...nel.org> wrote:
>
> On Tue, Jan 12, 2021 at 4:02 PM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Fri, Jan 08, 2021 at 07:05:59PM +0100, Rafael J. Wysocki wrote:
> > > From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> > >
> > > On x86, scale invariance tends to be disabled during resume from
> > > suspend-to-RAM, because the MPERF and APERF MSR values are then not
> > > as expected, due to updates taking place after the platform
> > > firmware has been invoked to complete the suspend transition.
> > >
> > > That, of course, is not desirable, especially if the schedutil
> > > scaling governor is in use, because the lack of scale invariance
> > > causes it to be less reliable.
> > >
> > > To counter that effect, modify init_freq_invariance() to register
> > > a syscore_ops object for scale invariance with the ->resume callback
> > > pointing to init_counter_refs(), which will run on the CPU starting
> > > the resume transition (the other CPUs will be taken care of by the
> > > "online" operations taking place later).
> > >
> > > Fixes: e2b0d619b400 ("x86, sched: check for counters overflow in frequency invariant accounting")
> > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> >
> > Thanks! I'll take it through the sched/urgent tree?
>
> That works, thanks!
Any news on this front? It's been a few days ...
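
For reference, the mechanism the patch describes can be sketched roughly as below. This is a hedged illustration, not the actual patch: struct syscore_ops and register_syscore_ops() are real kernel interfaces from <linux/syscore_ops.h>, and init_counter_refs() is the existing helper named in the changelog, but the wrapper names here are hypothetical.

```c
/* Illustrative sketch (not the actual patch): register a syscore_ops
 * object whose ->resume callback re-initializes the APERF/MPERF
 * reference counters, so scale invariance is not disabled after
 * resume from suspend-to-RAM.
 */
#include <linux/syscore_ops.h>

static void freq_invariance_resume(void)
{
	/* Runs on the CPU starting the resume transition; the other
	 * CPUs are handled by the CPU "online" operations later.
	 */
	init_counter_refs();
}

static struct syscore_ops freq_invariance_syscore_ops = {
	.resume = freq_invariance_resume,
};

/* Hypothetical helper called from init_freq_invariance(): */
static void register_freq_invariance_syscore_ops(void)
{
	register_syscore_ops(&freq_invariance_syscore_ops);
}
```

Syscore ->resume callbacks run with interrupts disabled on one CPU early in the resume path, which is why only the resuming CPU is covered here and the rest rely on the later hotplug "online" callbacks.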