Open Source and information security mailing list archives
 
Message-ID: <86df1293-c20b-4292-abde-852861dcedf1@linux.intel.com>
Date:   Thu, 19 Oct 2023 09:56:02 -0400
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, acme@...nel.org, linux-kernel@...r.kernel.org,
        mark.rutland@....com, alexander.shishkin@...ux.intel.com,
        jolsa@...nel.org, namhyung@...nel.org, irogers@...gle.com,
        adrian.hunter@...el.com, ak@...ux.intel.com, eranian@...gle.com,
        alexey.v.bayduraev@...ux.intel.com, tinghao.zhang@...el.com
Subject: Re: [PATCH V4 4/7] perf/x86/intel: Support LBR event logging



On 2023-10-19 5:23 a.m., Peter Zijlstra wrote:
> On Wed, Oct 04, 2023 at 11:40:41AM -0700, kan.liang@...ux.intel.com wrote:
> 
>> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
>> index c3b0d15a9841..1e80a551a4c2 100644
>> --- a/arch/x86/events/intel/lbr.c
>> +++ b/arch/x86/events/intel/lbr.c
>> @@ -676,6 +676,21 @@ void intel_pmu_lbr_del(struct perf_event *event)
>>  	WARN_ON_ONCE(cpuc->lbr_users < 0);
>>  	WARN_ON_ONCE(cpuc->lbr_pebs_users < 0);
>>  	perf_sched_cb_dec(event->pmu);
>> +
>> +	/*
>> +	 * The logged occurrences information is only valid for the
>> +	 * current LBR group. If another LBR group is scheduled in
>> +	 * later, the information from the stale LBRs will be wrongly
>> +	 * interpreted. Reset the LBRs here.
>> +	 * For the context switch, the LBR will be unconditionally
>> +	 * flushed when a new task is scheduled in. If both the new task
>> +	 * and the old task are monitored by a LBR event group. The
>> +	 * reset here is redundant. But the extra reset doesn't impact
>> +	 * the functionality. It's hard to distinguish the above case.
>> +	 * Keep the unconditionally reset for a LBR event group for now.
>> +	 */
> 
> I found this really hard to read, also should this not rely on
> !cpuc->lbr_users ?
>

It's possible that the last LBR user is not in the branch_counters
group, e.g., a branch_counters group plus several normal LBR events.
In that case, is_branch_counters_group(event) returns false for the
last LBR user, and the LBR will not be reset.

> As is, you'll reset the lbr for every event in the group.
> 
>> +	if (is_branch_counters_group(event))
>> +		intel_pmu_lbr_reset();
>>  }

Right, I forgot to change it after I modified the flag. :(

Here I think we should only clear the LBRs once per branch_counters
group, e.g., when the leader event is deleted.

+	if (is_branch_counters_group(event) && event == event->group_leader)
+		intel_pmu_lbr_reset();

The only problem is that the leader event may not be an LBR event. But I
guess it should be OK to require in hw_config() that the leader event of
a branch_counters group must be an LBR event.

Thanks,
Kan
