Date:   Fri, 20 Oct 2017 16:15:29 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Kan Liang <kan.liang@...el.com>
cc:     Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
        LKML <linux-kernel@...r.kernel.org>, acme@...nel.org,
        eranian@...gle.com, ak@...ux.intel.com
Subject: Re: [PATCH V2 3/4] perf/x86/intel/uncore: add infrastructure for
 freerunning counters

On Thu, 19 Oct 2017, kan.liang@...el.com wrote:
> From: Kan Liang <Kan.liang@...el.com>
> 
> There are a number of freerunning counters introduced for uncore.
> For example, Skylake Server has IIO freerunning counters to collect
> Input/Output x BW/Utilization.
> 
> The freerunning counter is similar as fixed counter, except it cannot
> be written by SW. It needs to be specially handled in generic code and
> not added in box->events list.

You are describing what you are doing, not why.

> Introduce a new idx to indicate the freerunning counter. Only one idx is

Please write out index and do not use arbitrary acronyms in changelogs.

> enough for all freerunning counters. Because event and freerunning
> counter are always 1:1 mapped. The freerunning counter is always

  mapped, the free running ....  

> available. It doesn't need extra idx to indicate the assigned counter.
> 
> The event code for freerunning event is shared with fixed event, which

Can you please use consistent terminology: freerunning counter, freerunning
event? Are you talking about two different things here?

> is 0xff. The umask of freerunning event starts from 0x10. The umask less
> than 0x10 is reserved for fixed event.
> 
> The Freerunning counters could have different MSR location and offset.

s/Freerunning/free running/

Could have? Either they have or they don't.

> Accordingly, they are divided into different types. Each type is limited
> to only have at most 16 events.
> So the umask of the first free running events type starts from 0x10. The
> umask of the second starts from 0x20. The rest can be done in the same
> manner.

Which rest and which manner?

Please spend more time on writing changelogs. They are part of the patch
and essential information for a reviewer.
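
For illustration (my reading of the changelog, not part of the patch): the
event code is always 0xff, the umask selects the free running counter, and
with at most 16 counters per type each type owns one 0x10 wide umask window
starting at 0x10, while 0x00 - 0x0f stays reserved for the fixed events.
A sketch, with a made-up helper name:

	/* Sketch only: compose the umask for a free running type/counter */
	static unsigned int freerunning_umask(unsigned int type,
					      unsigned int counter)
	{
		/* 16 counters per type, counter must be < 16 */
		return 0x10 + (type << 4) + counter;
	}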

> @@ -218,7 +218,9 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
>  	u64 prev_count, new_count, delta;
>  	int shift;
>  
> -	if (uncore_pmc_fixed(event->hw.idx))
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		shift = 64 - uncore_freerunning_bits(box, event);
> +	else if (uncore_pmc_fixed(event->hw.idx))
>  		shift = 64 - uncore_fixed_ctr_bits(box);
>  	else
>  		shift = 64 - uncore_perf_ctr_bits(box);
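
The shift feeds the usual width-limited delta computation done further down
in uncore_perf_event_update(); a minimal sketch of that pattern (illustration
only, not the actual kernel code):

	/*
	 * For an N bit wide counter, shift = 64 - N.  Shifting both samples
	 * up and the difference back down discards the unused high bits, so
	 * a hardware counter wrap still yields the correct delta.
	 */
	static u64 counter_delta(u64 prev, u64 now, unsigned int bits)
	{
		unsigned int shift = 64 - bits;

		return ((now << shift) - (prev << shift)) >> shift;
	}
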
> @@ -362,6 +364,10 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
>  		if (n >= max_count)
>  			return -EINVAL;
>  
> +		/* freerunning event is not tracked by box->events list */

First word in a sentence starts with an uppercase letter.

> +		if (uncore_pmc_freerunning(event->hw.idx))
> +			continue;
> +
>  		box->event_list[n] = event;
>  		n++;
>  	}
> @@ -454,10 +460,21 @@ static void uncore_pmu_event_start(struct perf_event *event, int flags)
>  	struct intel_uncore_box *box = uncore_event_to_box(event);
>  	int idx = event->hw.idx;
>  
> -	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> +	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
>  		return;
>  
> -	if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
> +	/*
> +	 * Freerunning counters cannot be written by SW.
> +	 * Does not need to enable it or add the event to box->events list.
> +	 * Use current value as start point.

So what you want to say is:

   	/*
	 * Free running counters are read only and always active. Use the
	 * current counter value as start point.
	 */

Right?

> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx)) {
> +		local64_set(&event->hw.prev_count,
> +			    uncore_read_counter(box, event));
> +		return;
> +	}
> +
> +	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
>  		return;
>  
>  	event->hw.state = 0;
> @@ -479,6 +496,15 @@ static void uncore_pmu_event_stop(struct perf_event *event, int flags)
>  	struct intel_uncore_box *box = uncore_event_to_box(event);
>  	struct hw_perf_event *hwc = &event->hw;
>  
> +	/*
> +	 * Does not need to disable freerunning counters.

Does not need? You _cannot_ disable them.

> +	 * Read current value as end.
> +	 */
> +	if (uncore_pmc_freerunning(hwc->idx)) {
> +		uncore_perf_event_update(box, event);
> +		return;


> +	/*
> +	 * Freerunning counters cannot be written by SW.

See above.

> +	 * No need to force event->hw.idx = -1 to reassign the counter.
> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		return;
> +
>  	for (i = 0; i < box->n_events; i++) {
>  		if (event == box->event_list[i]) {
>  			uncore_put_event_constraint(box, event);
> @@ -603,6 +647,13 @@ static int uncore_validate_group(struct intel_uncore_pmu *pmu,
>  	struct intel_uncore_box *fake_box;
>  	int ret = -EINVAL, n;
>  
> +	/*
> +	 * Event and freerunning counters are 1:1 mapped,
> +	 * which is always available.

Nice informative one!

> +	 */
> +	if (uncore_pmc_freerunning(event->hw.idx))
> +		return 0;
> +
> +/*
> + * Freerunning counter is similar as fixed counter, except it cannot be
> + * written by SW.
> + *
> + * Freerunning MSR events have the same event code 0xff as fixed events.
> + * The Freerunning events umask starts from 0x10.
> + * The umask which is less than 0x10 is reserved for fixed events.
> + *
> + * The Freerunning events are divided into different types according to
> + * MSR location, bit width or definition. Each type is limited to only have
> + * at most 16 events.
> + * So the umask of first type starts from 0x10, the second starts from 0x20,
> + * the rest can be done in the same manner.
> + */
> +#define UNCORE_FREERUNNING_MSR_START		0x10

New lines are there for a reason. Squeezing that #define between the
comment and the inline function, which does not even use that define, does
not make the code more readable.

> +static inline unsigned int uncore_freerunning_msr_idx(u64 config)
> +{
> +	return ((config >> 8) & 0xf);
> +}
> +
> +static inline unsigned int uncore_freerunning_msr_type(u64 config)
> +{
> +	return ((((config >> 8) - UNCORE_FREERUNNING_MSR_START) >> 4) & 0xf);
> +}
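
A quick check of the decode against the layout described in the comment
(standalone illustration only; the two helpers are retyped from the quoted
patch, everything else is made up for the example):

	#include <assert.h>
	#include <stdint.h>

	#define UNCORE_FREERUNNING_MSR_START	0x10

	/* umask lives in config bits 8-15, as for the other uncore events */
	static inline unsigned int uncore_freerunning_msr_idx(uint64_t config)
	{
		return ((config >> 8) & 0xf);
	}

	static inline unsigned int uncore_freerunning_msr_type(uint64_t config)
	{
		return ((((config >> 8) - UNCORE_FREERUNNING_MSR_START) >> 4) & 0xf);
	}

	int main(void)
	{
		/* event 0xff, umask 0x10: first type, first counter */
		assert(uncore_freerunning_msr_type(0x10ff) == 0);
		assert(uncore_freerunning_msr_idx(0x10ff) == 0);
		/* event 0xff, umask 0x23: second type, fourth counter */
		assert(uncore_freerunning_msr_type(0x23ff) == 1);
		assert(uncore_freerunning_msr_idx(0x23ff) == 3);
		return 0;
	}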

Other than that this looks sane.

Thanks,

	tglx
