Message-ID: <aTXlvIhWd6RkWhyY@tassilo>
Date: Sun, 7 Dec 2025 12:38:20 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, ggherdovich@...e.cz,
rafael.j.wysocki@...el.com
Subject: Re: [PATCH] x86/aperfmperf: Don't disable scheduler APERF/MPERF on
bad samples
On Fri, Dec 05, 2025 at 05:10:52PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 04, 2025 at 10:09:14AM -0800, Andi Kleen wrote:
> > The APERF and MPERF MSRs get read together and the ratio
> > between the two is used to scale the scheduler capacity with frequency.
> >
> > Since commit e2b0d619b400, whenever there is an over/underflow in
> > the APERF/MPERF computation, the sampling gets completely
> > disabled, under the assumption that there is a problem with
> > the hardware.
> >
> > However this can happen without any malfunction when there is
> > a long enough interruption between the two MSR reads, for
> > example due to an unlucky NMI or SMI or another system event
> > causing delays. We saw it when a delay resulted in
> > acnt_delta << mcnt_delta (about 4k for acnt_delta and
> > 2M for mcnt_delta).
> >
> > In this case the ratio computation underflows, which is detected,
> > but then APERF/MPERF usage gets incorrectly disabled forever.
> >
> > Remove the code that completely disables APERF/MPERF on
> > a bad sample. Instead, when an over/underflow happens,
> > return the fallback full capacity.
>
> So what systems are actually showing this bad behaviour and what are we
> doing to cure the problem rather than fight the symptom?
We saw it with an artificial stress test on an internal Intel system,
but as I (and Andrew) explained, it is unavoidable and general:
delays between the two reads can always happen on any system, for many
reasons, such as NMIs, SMIs, virtualization, or other random system events.
> Also, a system where this is systematically buggered would really be
> better off disabling it, no?
The particular failure case here, if it were common (lots of very long
execution delays), would make the system fairly unusable anyway.
The scheduler doing a slightly worse job would be the least of your
troubles in such a case.
For other failures, I'm not aware of a system (perhaps short of a
hypervisor that doesn't save/restore the MSRs when switching underlying
CPUs) that actually has broken APERF/MPERF. So breaking good systems
just for a hypothetical bad case doesn't seem like a good trade-off.
The main difference from the old strategy is that if a really bad case
produces bad samples that don't under/overflow, they would still be
used, while the old code would stop on any bad sample. But any attempt
to handle this without impacting good cases would need either extra
complexity or magic threshold numbers, so it seems better not to even
try.
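In sketch form, the new error handling amounts to something like this
(illustrative, not the literal diff; the surrounding overflow checks
stay as they are):

	if (bad_sample) {	/* over/underflow was detected */
		/*
		 * Old behaviour: assume broken hardware and permanently
		 * disable frequency-invariant accounting, e.g. via
		 * schedule_work(&disable_freq_invariance_work).
		 */

		/*
		 * New behaviour: treat it as a one-off bad sample and
		 * report full capacity for this tick only.  Anything
		 * smarter would need magic thresholds to decide which
		 * plausible-looking samples to distrust.
		 */
		freq_scale = SCHED_CAPACITY_SCALE;
	}
	this_cpu_write(arch_freq_scale, freq_scale);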
-Andi