Message-ID: <20231019092341.GE36211@noisy.programming.kicks-ass.net>
Date:   Thu, 19 Oct 2023 11:23:41 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     kan.liang@...ux.intel.com
Cc:     mingo@...hat.com, acme@...nel.org, linux-kernel@...r.kernel.org,
        mark.rutland@....com, alexander.shishkin@...ux.intel.com,
        jolsa@...nel.org, namhyung@...nel.org, irogers@...gle.com,
        adrian.hunter@...el.com, ak@...ux.intel.com, eranian@...gle.com,
        alexey.v.bayduraev@...ux.intel.com, tinghao.zhang@...el.com
Subject: Re: [PATCH V4 4/7] perf/x86/intel: Support LBR event logging

On Wed, Oct 04, 2023 at 11:40:41AM -0700, kan.liang@...ux.intel.com wrote:

> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
> index c3b0d15a9841..1e80a551a4c2 100644
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -676,6 +676,21 @@ void intel_pmu_lbr_del(struct perf_event *event)
>  	WARN_ON_ONCE(cpuc->lbr_users < 0);
>  	WARN_ON_ONCE(cpuc->lbr_pebs_users < 0);
>  	perf_sched_cb_dec(event->pmu);
> +
> +	/*
> +	 * The logged occurrences information is only valid for the
> +	 * current LBR group. If another LBR group is scheduled in
> +	 * later, the information from the stale LBRs will be wrongly
> +	 * interpreted. Reset the LBRs here.
> +	 * For the context switch, the LBR will be unconditionally
> +	 * flushed when a new task is scheduled in. If both the new task
> +	 * and the old task are monitored by a LBR event group. The
> +	 * reset here is redundant. But the extra reset doesn't impact
> +	 * the functionality. It's hard to distinguish the above case.
> +	 * Keep the unconditionally reset for a LBR event group for now.
> +	 */

I found this really hard to read; also, should this not rely on
!cpuc->lbr_users?

As is, you'll reset the LBRs for every event in the group.

> +	if (is_branch_counters_group(event))
> +		intel_pmu_lbr_reset();
>  }
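
Something along these lines is what I would expect (an untested
sketch; the !cpuc->lbr_users test is my reading of the point above,
relying on cpuc->lbr_users already having been decremented earlier
in intel_pmu_lbr_del()):

	/*
	 * Only reset the LBRs once the last LBR user on this CPU is
	 * removed, instead of once per event in the group.
	 */
	if (is_branch_counters_group(event) && !cpuc->lbr_users)
		intel_pmu_lbr_reset();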
