Message-ID: <20220419205738.GZ2731@worktop.programming.kicks-ass.net>
Date: Tue, 19 Apr 2022 22:57:38 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Wen Yang <wenyang@...ux.alibaba.com>
Cc: Stephane Eranian <eranian@...gle.com>,
Wen Yang <simon.wy@...baba-inc.com>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Mark Rutland <mark.rutland@....com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RESEND PATCH 2/2] perf/x86: improve the event scheduling to
avoid unnecessary pmu_stop/start
On Tue, Apr 19, 2022 at 10:16:12PM +0800, Wen Yang wrote:
> We finally found that TFA (TSX Force Abort) may affect PMC3's behavior,
> refer to the following patch:
>
> 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")
>
> When the MSR gets set; the microcode will no longer use PMC3 but will
> Force Abort every TSX transaction (upon executing COMMIT).
>
> When TSX Force Abort (TFA) is allowed (default); the MSR gets set when
> PMC3 gets scheduled and cleared when, after scheduling, PMC3 is
> unused.
>
> When TFA is not allowed; clear PMC3 from all constraints such that it
> will not get used.
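>
> The constraint change can be illustrated in isolation. The sketch below
> is a hypothetical, self-contained model (not the kernel code itself): an
> event constraint is treated as a bitmask of usable counters, and when TFA
> is not allowed, PMC3 is masked out of every constraint so the scheduler
> never selects it, mirroring the idea in 400816f60c54:
>
> ```c
> #include <assert.h>
> #include <stdint.h>
> #include <stdio.h>
>
> /* Hypothetical model: a constraint is a bitmask of allowed
>  * general-purpose counters (bit n set => PMCn usable). */
> #define PMC3_BIT (1ULL << 3)
>
> /* If TFA is not allowed, remove PMC3 from the constraint so the
>  * event scheduler can never place an event on it. */
> static uint64_t filter_constraint(uint64_t idxmsk, int allow_tfa)
> {
> 	if (!allow_tfa)
> 		idxmsk &= ~PMC3_BIT;
> 	return idxmsk;
> }
>
> int main(void)
> {
> 	uint64_t cntrs = 0xfULL;	/* PMC0..PMC3 allowed */
>
> 	/* TFA allowed: constraint unchanged, PMC3 may be scheduled. */
> 	assert(filter_constraint(cntrs, 1) == 0xfULL);
>
> 	/* TFA disallowed: PMC3 masked out, only PMC0..PMC2 remain. */
> 	assert(filter_constraint(cntrs, 0) == 0x7ULL);
>
> 	printf("ok\n");
> 	return 0;
> }
> ```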
>
>
> >
> > However, this patch attempts to avoid the switching of the pmu counters
> > in various perf_events, so the special behavior of a single pmu counter
> > will not be propagated to other events.
> >
>
> Since PMC3 may have special behaviors, the continuous switching of PMU
> counters may not only affect performance, but may also lead to abnormal
> data; please consider this patch again.
I'm not following. How do you get abnormal data?
Are you using RDPMC from userspace? If so, are you following the
prescribed logic using the self-monitoring interface?