Message-ID: <683367fc-4295-41f5-b10d-3c120f54ca0f@linux.intel.com>
Date: Fri, 22 Aug 2025 13:26:22 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo
 <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
 Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
 Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Kan Liang <kan.liang@...ux.intel.com>, Andi Kleen <ak@...ux.intel.com>,
 Eranian Stephane <eranian@...gle.com>, linux-kernel@...r.kernel.org,
 linux-perf-users@...r.kernel.org, Dapeng Mi <dapeng1.mi@...el.com>,
 kernel test robot <oliver.sang@...el.com>
Subject: Re: [Patch v3 3/7] perf/x86: Check if cpuc->events[*] pointer exists
 before accessing it


On 8/21/2025 9:35 PM, Peter Zijlstra wrote:
> On Wed, Aug 20, 2025 at 10:30:28AM +0800, Dapeng Mi wrote:
>> When intel_pmu_drain_pebs_icl() is called to drain PEBS records,
>> perf_event_overflow() may be called to process the last PEBS record.
>>
>> perf_event_overflow() can trigger the interrupt throttle and stop all
>> events of the group, as the call-chain below shows.
>>
>> perf_event_overflow()
>>   -> __perf_event_overflow()
>>     ->__perf_event_account_interrupt()
>>       -> perf_event_throttle_group()
>>         -> perf_event_throttle()
>>           -> event->pmu->stop()
>>             -> x86_pmu_stop()
>>
>> The side effect of stopping the events is that all corresponding event
>> pointers in cpuc->events[] array are cleared to NULL.
>>
>> Assume there are two PEBS events (event a and event b) in a group. When
>> intel_pmu_drain_pebs_icl() calls perf_event_overflow() to process the
>> last PEBS record of PEBS event a, the interrupt throttle is triggered
>> and the pointers of both event a and event b are cleared to NULL. Then
>> intel_pmu_drain_pebs_icl() tries to process the last PEBS record of
>> event b and hits a NULL pointer access.
>>
>> Since the remaining PEBS records have already been processed when the
>> event was stopped, check and skip processing the last PEBS record if
>> cpuc->events[*] is NULL.
>>
>> Reported-by: kernel test robot <oliver.sang@...el.com>
>> Closes: https://lore.kernel.org/oe-lkp/202507042103.a15d2923-lkp@intel.com
>> Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
>> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
>> Tested-by: kernel test robot <oliver.sang@...el.com>
>> ---
>>  arch/x86/events/intel/ds.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index c0b7ac1c7594..dcf29c099ad2 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -2663,6 +2663,16 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
>>  			continue;
>>  
>>  		event = cpuc->events[bit];
>> +		/*
>> +		 * perf_event_overflow(), called by __intel_pmu_pebs_last_event()
>> +		 * below, could trigger the interrupt throttle and clear all event
>> +		 * pointers of the group in cpuc->events[] to NULL. So re-check
>> +		 * whether cpuc->events[*] is NULL; if so, the event has been
>> +		 * throttled (stopped) and its last PEBS records have already been
>> +		 * processed while stopping the event, so don't process it again.
>> +		 */
>> +		if (!event)
>> +			continue;
>>  
>>  		__intel_pmu_pebs_last_event(event, iregs, regs, data, last[bit],
>>  					    counts[bit], setup_pebs_adaptive_sample_data);
>
> So if this is due to __intel_pmu_pebs_last_event() calling into
> perf_event_overflow(); then isn't intel_pmu_drain_pebs_nhm() similarly
> affected?
>
> And worse, the _nhm() version would lose all events for that counter,
> not just the last.

hmm, Yes. After double-checking, I suppose I made a mistake in my earlier
answer to Andi. There is indeed data loss, since "ds->pebs_index" is reset
at the head of the _nhm()/_icl() drain_pebs helpers instead of at their
end.  :(
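To make the data-loss point concrete, the _icl() drain helper is roughly
shaped like this (a from-memory paraphrase of arch/x86/events/intel/ds.c,
not the exact code):

static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, ...)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	struct debug_store *ds = cpuc->ds;
	void *base = (void *)(unsigned long)ds->pebs_buffer_base;
	void *top  = (void *)(unsigned long)ds->pebs_index;

	/* The DS buffer is declared empty up front ... */
	ds->pebs_index = ds->pebs_buffer_base;

	/*
	 * ... and only then are the records in [base, top) walked, which
	 * eventually calls __intel_pmu_pebs_last_event() ->
	 * perf_event_overflow(). If the throttle stops the group in the
	 * middle of the walk, the not-yet-processed records are already
	 * unreachable.
	 */
}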

> I'm really thinking this isn't the right thing to do.
>
>
> How about we audit the entirety of arch/x86/events/ for cpuc->events[]
> usage and see if we can get away with changing x86_pmu_stop() to simply
> not clearing that field.

Checking the current code, I suppose it's fine not to clear cpuc->events[]
in x86_pmu_stop(), since we already have another variable,
"cpuc->active_mask", which indicates whether the corresponding
cpuc->events[*] entry is active. But in the current code, cpuc->active_mask
is not always checked.

So if we choose not to clear cpuc->events[] in x86_pmu_stop(), then we must
check cpuc->active_mask before actually accessing the event referenced by
cpuc->events[]. Maybe we can add an inline helper to check this:

static inline bool x86_pmu_cntr_event_active(int idx)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	return cpuc->events[idx] && test_bit(idx, cpuc->active_mask);
}
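A call site in the drain helpers would then look roughly like this (just an
illustration of how the helper would be used, not a tested patch):

	for_each_set_bit(bit, (unsigned long *)&mask, size) {
		if (!x86_pmu_cntr_event_active(bit))
			continue;

		event = cpuc->events[bit];
		...
	}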

>
> Or perhaps move the setting and clearing into x86_pmu_{add,del}() rather
> than x86_pmu_{start,stop}(). After all, the latter don't affect the
> counter placement, they just stop/start the event.

IIUC, we can't move the setting of cpuc->events[] into x86_pmu_add(), since
the counter index is not finalized yet when x86_pmu_add() is called. The
counter index can still change each time a new event is added.
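AFAICT the counter index is only committed later, roughly along this path
(from my reading of arch/x86/events/core.c, details from memory):

perf_pmu_enable()
  -> x86_pmu_enable()
    -> x86_assign_hw_event(event, cpuc, i)   /* hwc->idx = cpuc->assign[i] */

so cpuc->events[hwc->idx] can't be populated reliably before that point.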

