Message-ID: <Z8meVrqd-F7tf44j@krava>
Date: Thu, 6 Mar 2025 14:08:38 +0100
From: Jiri Olsa <olsajiri@...il.com>
To: lirongqing <lirongqing@...du.com>
Cc: olsajiri@...il.com, peterz@...radead.org, mingo@...hat.com,
acme@...nel.org, namhyung@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, irogers@...gle.com,
adrian.hunter@...el.com, kan.liang@...ux.intel.com,
tglx@...utronix.de, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH][next] perf/x86/intel/bts: check if bts_ctx is allocated
when calling bts functions
On Thu, Mar 06, 2025 at 01:11:02PM +0800, lirongqing wrote:
> From: Li RongQing <lirongqing@...du.com>
>
> bts_ctx may not be allocated, for example when the CPU has
> X86_FEATURE_PTI, but intel_bts_disable/enable_local() and
> intel_bts_interrupt() are called unconditionally from
> intel_pmu_handle_irq() and explode on accessing bts_ctx.
>
> So check whether bts_ctx is allocated before calling the BTS functions.
>
> Fixes: 3acfcefa795c ("perf/x86/intel/bts: Allocate bts_ctx only if necessary")
> Reported-by: Jiri Olsa <olsajiri@...il.com>
Tested-by: Jiri Olsa <jolsa@...nel.org>
thanks,
jirka
> Suggested-by: Adrian Hunter <adrian.hunter@...el.com>
> Suggested-by: Dave Hansen <dave.hansen@...el.com>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
> ---
> arch/x86/events/intel/bts.c | 25 ++++++++++++++++++++-----
> 1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
> index 8e09319..e8b3e7b 100644
> --- a/arch/x86/events/intel/bts.c
> +++ b/arch/x86/events/intel/bts.c
> @@ -338,9 +338,14 @@ static void bts_event_stop(struct perf_event *event, int flags)
>
> void intel_bts_enable_local(void)
> {
> - struct bts_ctx *bts = this_cpu_ptr(bts_ctx);
> - int state = READ_ONCE(bts->state);
> + struct bts_ctx *bts;
> + int state;
>
> + if (!bts_ctx)
> + return;
> +
> + bts = this_cpu_ptr(bts_ctx);
> + state = READ_ONCE(bts->state);
> /*
> * Here we transition from INACTIVE to ACTIVE;
> * if we instead are STOPPED from the interrupt handler,
> @@ -358,7 +363,12 @@ void intel_bts_enable_local(void)
>
> void intel_bts_disable_local(void)
> {
> - struct bts_ctx *bts = this_cpu_ptr(bts_ctx);
> + struct bts_ctx *bts;
> +
> + if (!bts_ctx)
> + return;
> +
> + bts = this_cpu_ptr(bts_ctx);
>
> /*
> * Here we transition from ACTIVE to INACTIVE;
> @@ -450,12 +460,17 @@ bts_buffer_reset(struct bts_buffer *buf, struct perf_output_handle *handle)
> int intel_bts_interrupt(void)
> {
> struct debug_store *ds = this_cpu_ptr(&cpu_hw_events)->ds;
> - struct bts_ctx *bts = this_cpu_ptr(bts_ctx);
> - struct perf_event *event = bts->handle.event;
> + struct bts_ctx *bts;
> + struct perf_event *event;
> struct bts_buffer *buf;
> s64 old_head;
> int err = -ENOSPC, handled = 0;
>
> + if (!bts_ctx)
> + return 0;
> +
> + bts = this_cpu_ptr(bts_ctx);
> + event = bts->handle.event;
> /*
> * The only surefire way of knowing if this NMI is ours is by checking
> * the write ptr against the PMI threshold.
> --
> 2.9.4
>
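For readers outside the thread, the fix boils down to an early-return guard: every entry point that used to dereference the per-CPU bts_ctx unconditionally now bails out when the context was never allocated. A minimal standalone sketch of that pattern (all names below are illustrative stand-ins, not the kernel's actual per-CPU API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's per-CPU BTS context. */
struct bts_ctx { int state; };

/* Stays NULL when BTS is unusable (e.g. X86_FEATURE_PTI is set),
 * mirroring the conditional allocation the Fixes: commit introduced. */
static struct bts_ctx *bts_ctx;

/* Mirrors the patched intel_bts_interrupt(): check the pointer before
 * touching it, and report "not handled" (0) when BTS is not in use. */
static int bts_interrupt_sketch(void)
{
	struct bts_ctx *bts;

	if (!bts_ctx)
		return 0;	/* nothing allocated: cannot be our interrupt */

	bts = bts_ctx;		/* kernel code uses this_cpu_ptr(bts_ctx) here */
	return bts->state;
}
```

The same guard is repeated in each caller rather than in a shared helper because the functions are invoked from the NMI path, where keeping the check inline and branch-cheap matters.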