Message-ID: <Zr8pGEMmzmexNGL8@J2N7QTR9R3>
Date: Fri, 16 Aug 2024 11:25:28 +0100
From: Mark Rutland <mark.rutland@....com>
To: Robin Murphy <robin.murphy@....com>
Cc: will@...nel.org, linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org, ilkka@...amperecomputing.com
Subject: Re: [PATCH 5/8] perf/arm-cmn: Make cycle counts less surprising

On Fri, Aug 09, 2024 at 08:15:44PM +0100, Robin Murphy wrote:
> By default, CMN has automatic clock-gating with the implication that a
> DTC's cycle counter may not increment while the domain is sufficiently
> idle.

The same is true of the cycles event on the CPU side, so the current
behaviour has some precedent.

> Given that we may have up to 4 DTCs to choose from when scheduling
> a cycles event, this may potentially lead to surprising results if
> trying to measure metrics based on activity in a different DTC domain
> from where cycles end up being counted. Make the reasonable assumption
> that if the user wants to count cycles, they almost certainly want to
> count all of the cycles, and disable clock gating while a DTC's cycle
> counter is in use.

As above, the default does match the CPU side behaviour, and a user
might be trying to determine how much clock gating occurs over some
period, so it's not necessarily right to always disable clock gating.
That might need to be an explicit option on the cycles event.
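
Something along these lines is what I had in mind (completely untested,
and the attribute/field names below are made up for illustration, they'd
also need wiring into the format attribute group):

	/*
	 * Sketch: expose clock-gating control as an explicit config bit on
	 * the cycles event, so the default keeps today's (gated) behaviour
	 * and users opt in to counting every cycle.
	 */
	PMU_FORMAT_ATTR(cg_disable, "config1:0");

	static void arm_cmn_start_dtc_cycles(struct arm_cmn_dtc *dtc,
					     struct perf_event *event)
	{
		u32 ctl = CMN_DT_DTC_CTL_DT_EN;

		/* Only disable clock gating when explicitly requested */
		if (event->attr.config1 & 0x1)
			ctl |= CMN_DT_DTC_CTL_CG_DISABLE;

		writel_relaxed(ctl, dtc->base + CMN_DT_DTC_CTL);
		writeq_relaxed(CMN_CC_INIT, dtc->base + CMN_DT_PMCCNTR);
		dtc->cc_active = true;
	}

Then something like perf stat -e arm_cmn/dtc_cycles,cg_disable=1/ could
ask for the ungated count while the plain cycles event keeps today's
behaviour (event and field names assumed).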

Do we always have the ability to disable clock gating, or can that be
locked down by system integration or FW?
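
If there's any doubt, a probe-time write/read-back check would at least
tell us whether the bit is writable on a given system (untested sketch,
function name made up):

	static bool arm_cmn_dtc_can_disable_cg(struct arm_cmn_dtc *dtc)
	{
		u32 ctl = readl_relaxed(dtc->base + CMN_DT_DTC_CTL);
		bool writable;

		writel_relaxed(ctl | CMN_DT_DTC_CTL_CG_DISABLE,
			       dtc->base + CMN_DT_DTC_CTL);
		writable = readl_relaxed(dtc->base + CMN_DT_DTC_CTL) &
			   CMN_DT_DTC_CTL_CG_DISABLE;

		/* Restore the original value so we don't change behaviour */
		writel_relaxed(ctl, dtc->base + CMN_DT_DTC_CTL);
		return writable;
	}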

Mark.

> 
> Signed-off-by: Robin Murphy <robin.murphy@....com>
> ---
>  drivers/perf/arm-cmn.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
> index 8f7a1a6f8ab7..4d128db2040c 100644
> --- a/drivers/perf/arm-cmn.c
> +++ b/drivers/perf/arm-cmn.c
> @@ -115,6 +115,7 @@
>  /* The DTC node is where the magic happens */
>  #define CMN_DT_DTC_CTL			0x0a00
>  #define CMN_DT_DTC_CTL_DT_EN		BIT(0)
> +#define CMN_DT_DTC_CTL_CG_DISABLE	BIT(10)
>  
>  /* DTC counters are paired in 64-bit registers on a 16-byte stride. Yuck */
>  #define _CMN_DT_CNT_REG(n)		((((n) / 2) * 4 + (n) % 2) * 4)
> @@ -1544,9 +1545,12 @@ static void arm_cmn_event_start(struct perf_event *event, int flags)
>  	int i;
>  
>  	if (type == CMN_TYPE_DTC) {
> -		i = hw->dtc_idx[0];
> -		writeq_relaxed(CMN_CC_INIT, cmn->dtc[i].base + CMN_DT_PMCCNTR);
> -		cmn->dtc[i].cc_active = true;
> +		struct arm_cmn_dtc *dtc = cmn->dtc + hw->dtc_idx[0];
> +
> +		writel_relaxed(CMN_DT_DTC_CTL_DT_EN | CMN_DT_DTC_CTL_CG_DISABLE,
> +			       dtc->base + CMN_DT_DTC_CTL);
> +		writeq_relaxed(CMN_CC_INIT, dtc->base + CMN_DT_PMCCNTR);
> +		dtc->cc_active = true;
>  	} else if (type == CMN_TYPE_WP) {
>  		u64 val = CMN_EVENT_WP_VAL(event);
>  		u64 mask = CMN_EVENT_WP_MASK(event);
> @@ -1575,8 +1579,10 @@ static void arm_cmn_event_stop(struct perf_event *event, int flags)
>  	int i;
>  
>  	if (type == CMN_TYPE_DTC) {
> -		i = hw->dtc_idx[0];
> -		cmn->dtc[i].cc_active = false;
> +		struct arm_cmn_dtc *dtc = cmn->dtc + hw->dtc_idx[0];
> +
> +		dtc->cc_active = false;
> +		writel_relaxed(CMN_DT_DTC_CTL_DT_EN, dtc->base + CMN_DT_DTC_CTL);
>  	} else if (type == CMN_TYPE_WP) {
>  		for_each_hw_dn(hw, dn, i) {
>  			void __iomem *base = dn->pmu_base + CMN_DTM_OFFSET(hw->dtm_offset);
> -- 
> 2.39.2.101.g768bb238c484.dirty
> 
