Date:   Fri, 17 Nov 2017 02:18:46 +0000
From:   "Liang, Kan" <kan.liang@...el.com>
To:     "tglx@...utronix.de" <tglx@...utronix.de>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     "acme@...nel.org" <acme@...nel.org>,
        "eranian@...gle.com" <eranian@...gle.com>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>
Subject: RE: [PATCH V4 1/8] perf/x86/intel/uncore: customized event_read for
 client IMC uncore

Hi Thomas,

Any comments on this patch series?

Thanks,
Kan

> 
> From: Kan Liang <Kan.liang@...el.com>
> 
> There are two free-running counters for the client IMC uncore. The custom
> event_init() function hardcodes their indices to 'UNCORE_PMC_IDX_FIXED'
> and 'UNCORE_PMC_IDX_FIXED + 1'. To support the 'UNCORE_PMC_IDX_FIXED + 1'
> case, the generic uncore_perf_event_update() is obscurely hacked.
> This code-quality issue will cause problems when a new counter index is
> introduced into the generic code, for example a free-running counter
> index.
> 
> Introduce a customized event_read() function for the client IMC uncore.
> The customized function is an exact copy of the previous generic
> uncore_pmu_event_read().
> The 'UNCORE_PMC_IDX_FIXED + 1' case is thus isolated to the client IMC
> uncore only.
> 
> Signed-off-by: Kan Liang <Kan.liang@...el.com>
> ---
> 
> Change since V3:
>  - Use the customized read function to replace uncore_perf_event_update.
>  - Move generic code change to patch 3/8.
> 
>  arch/x86/events/intel/uncore_snb.c | 33 +++++++++++++++++++++++++++++++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
> index db1127c..b6d0d72 100644
> --- a/arch/x86/events/intel/uncore_snb.c
> +++ b/arch/x86/events/intel/uncore_snb.c
> @@ -449,6 +449,35 @@ static void snb_uncore_imc_event_start(struct perf_event *event, int flags)
>  		uncore_pmu_start_hrtimer(box);
>  }
> 
> +static void snb_uncore_imc_event_read(struct perf_event *event)
> +{
> +	struct intel_uncore_box *box = uncore_event_to_box(event);
> +	u64 prev_count, new_count, delta;
> +	int shift;
> +
> +	/*
> +	 * There are two free running counters in IMC.
> +	 * The index for the second one is hardcoded to
> +	 * UNCORE_PMC_IDX_FIXED + 1.
> +	 */
> +	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> +		shift = 64 - uncore_fixed_ctr_bits(box);
> +	else
> +		shift = 64 - uncore_perf_ctr_bits(box);
> +
> +	/* the hrtimer might modify the previous event value */
> +again:
> +	prev_count = local64_read(&event->hw.prev_count);
> +	new_count = uncore_read_counter(box, event);
> +	if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
> +		goto again;
> +
> +	delta = (new_count << shift) - (prev_count << shift);
> +	delta >>= shift;
> +
> +	local64_add(delta, &event->count);
> +}
> +
>  static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
>  {
>  	struct intel_uncore_box *box = uncore_event_to_box(event);
> @@ -471,7 +500,7 @@ static void snb_uncore_imc_event_stop(struct perf_event *event, int flags)
>  		 * Drain the remaining delta count out of a event
>  		 * that we are disabling:
>  		 */
> -		uncore_perf_event_update(box, event);
> +		snb_uncore_imc_event_read(event);
>  		hwc->state |= PERF_HES_UPTODATE;
>  	}
>  }
> @@ -533,7 +562,7 @@ static struct pmu snb_uncore_imc_pmu = {
>  	.del		= snb_uncore_imc_event_del,
>  	.start		= snb_uncore_imc_event_start,
>  	.stop		= snb_uncore_imc_event_stop,
> -	.read		= uncore_pmu_event_read,
> +	.read		= snb_uncore_imc_event_read,
>  };
> 
>  static struct intel_uncore_ops snb_uncore_imc_ops = {
> --
> 2.7.4
