Message-Id: <1311000892.2970.23.camel@localhost>
Date:	Mon, 18 Jul 2011 22:54:52 +0800
From:	Lin Ming <ming.m.lin@...el.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
	Stephane Eranian <eranian@...gle.com>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/6] perf, x86: Add Intel Nehalem/Westmere uncore pmu

On Mon, 2011-07-18 at 22:20 +0800, Peter Zijlstra wrote:
> On Fri, 2011-07-15 at 14:34 +0000, Lin Ming wrote:
> > Add Intel Nehalem/Westmere uncore pmu support.
> > And also the generic data structure to support uncore pmu.
> > 
> > Uncore pmu interrupt does not work, so hrtimer is used to pull counters.
> 
> s/pull/poll/

Will change.

> 
> > diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
> > new file mode 100644
> > index 0000000..79a501e
> > --- /dev/null
> > +++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
> > @@ -0,0 +1,450 @@
> > +#include "perf_event_intel_uncore.h"
> > +
> > +static DEFINE_PER_CPU(struct cpu_uncore_events, cpu_uncore_events);
> > +static DEFINE_RAW_SPINLOCK(intel_uncore_lock);
> > +
> > +static bool uncore_pmu_initialized;
> > +static struct intel_uncore_pmu intel_uncore_pmu __read_mostly;
> > +
> > +/*
> > + * Uncore pmu interrupt does not work.
> > + * Use hrtimer to pull the counter every 10 seconds.
> > + */
> > +#define UNCORE_PMU_HRTIMER_INTERVAL (10000000000ULL)
> 
>  10 * NSEC_PER_SEC

ok.
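
Something like this, then (assuming NSEC_PER_SEC from <linux/time.h> is
already visible in this file):

	/*
	 * Uncore pmu interrupt does not work.
	 * Use hrtimer to poll the counters every 10 seconds.
	 */
	#define UNCORE_PMU_HRTIMER_INTERVAL	(10ULL * NSEC_PER_SEC)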

> 
> > +static int uncore_pmu_event_init(struct perf_event *event)
> > +{
> > +	struct hw_perf_event *hwc = &event->hw;
> > +
> > +	if (!uncore_pmu_initialized)
> > +		return -ENOENT;
> > +
> > +	if (event->attr.type != uncore_pmu.type)
> > +		return -ENOENT;
> > +
> > +	/*
> > +	 * Uncore PMU does measure at all privilege level all the time.
> > +	 * So it doesn't make sense to specify any exclude bits.
> > +	 */
> > +	if (event->attr.exclude_user || event->attr.exclude_kernel
> > +	    || event->attr.exclude_hv || event->attr.exclude_idle)
> > +		return -ENOENT;
> 
> -EINVAL, the PMU exists and is the right one, we just don't support
> this.

ok.
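
I.e. keep the check as it is, only return -EINVAL instead:

	if (event->attr.exclude_user || event->attr.exclude_kernel
	    || event->attr.exclude_hv || event->attr.exclude_idle)
		return -EINVAL;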

> 
> > +	/* Sampling not supported yet */
> > +	if (hwc->sample_period)
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> 
> > +static int uncore_pmu_add(struct perf_event *event, int flags)
> > +{
> > +	struct cpu_uncore_events *cpuc = &__get_cpu_var(cpu_uncore_events);
> > +	struct intel_uncore *uncore = cpuc->intel_uncore;
> > +	int ret = 1;
> > +	int i;
> > +
> > +	raw_spin_lock(&uncore->lock);
> > +
> > +	if (event->attr.config == UNCORE_FIXED_EVENT) {
> > +		i = X86_PMC_IDX_FIXED;
> > +		goto fixed_event;
> 
> Can the GP counters also count that event? If so, what happens if I
> start 2 of them?

For Nehalem, the manual says "The fixed-function uncore counter
increments at the rate of the U-clock when enabled."

There is no equivalent event in the Nehalem uncore events list, so the
GP counters cannot count what the fixed counter counts.

For SandyBridge, the manual does not clearly state what the fixed event
counts, but I expect it to be similar to Nehalem.

> 
> > +	}
> > +
> > +	for (i = 0; i < X86_PMC_IDX_FIXED; i++) {
> > +fixed_event:
> > +		if (!uncore->events[i]) {
> > +			uncore->events[i] = event;
> > +			uncore->n_events++;
> > +			event->hw.idx = i;
> > +			__set_bit(i, uncore->active_mask);
> > +
> > +			intel_uncore_pmu.hw_config(event);
> > +
> > +			if (flags & PERF_EF_START)
> > +				uncore_pmu_start(event, flags);
> > +			ret = 0;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (uncore->n_events == 1) {
> > +		uncore_pmu_start_hrtimer(uncore);
> > +		intel_uncore_pmu.enable_all();
> > +	}
> > +
> > +	raw_spin_unlock(&uncore->lock);
> > +
> > +	return ret;
> > +}
> 
> uncore is fully symmetric and doesn't have any constraints other than
> the fixed counter?

SandyBridge uncore events 0x0180 and 0x0183 can only use counter 0.
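
If we want to handle that before a proper event constraints scheme, a
rough sketch could be a helper like the one below, and then having
uncore_pmu_add() skip i != 0 for such events (the config masking here is
only my guess at how the event code is encoded):

	/*
	 * SNB uncore: events 0x0180 and 0x0183 are only valid on
	 * counter 0, so uncore_pmu_add() must not place them elsewhere.
	 */
	static bool snb_uncore_counter0_only(u64 config)
	{
		u64 ev = config & 0xffff;	/* event select + umask (guess) */

		return ev == 0x0180 || ev == 0x0183;
	}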

> 
> I guess we can start with this, there is still the issue of mapping the
> events to a single active cpu in the node, but I guess we can do that a
> little later.

Do we really need this mapping, given that the uncore pmu interrupt is
disabled and we only poll from the hrtimer?

Thanks for the comments.

