Date:	Tue, 30 Mar 2010 23:28:42 +0200
From:	stephane eranian <eranian@...glemail.com>
To:	Corey Ashford <cjashfor@...ux.vnet.ibm.com>
Cc:	Lin Ming <ming.m.lin@...el.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	LKML <linux-kernel@...r.kernel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Paul Mackerras <paulus@...ba.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Xiao Guangrong <xiaoguangrong@...fujitsu.com>,
	Dan Terpstra <terpstra@...s.utk.edu>,
	Philip Mucci <mucci@...s.utk.edu>,
	Maynard Johnson <mpjohn@...ibm.com>,
	Carl Love <cel@...ibm.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Masami Hiramatsu <mhiramat@...hat.com>
Subject: Re: [RFC] perf_events: support for uncore a.k.a. nest units

On Tue, Mar 30, 2010 at 6:49 PM, Corey Ashford
<cjashfor@...ux.vnet.ibm.com> wrote:
> On 03/30/2010 12:42 AM, Lin Ming wrote:
>>
>> Hi, Corey
>>
>> How is this going now? Are you still working on this?
>> I'd like to help add support for uncore: testing, writing code, or
>> anything else.
>>
>> Thanks,
>> Lin Ming
>
> I haven't been actively working on adding infrastructure for nest PMUs yet.
> At the moment, because of time limitations, we are working on supporting
> nest events for IBM's Wire-Speed processor using the current
> infrastructure.  Using the existing infrastructure is definitely not ideal,
> but for this processor it's workable.
>
> There are still a lot of issues to solve for adding this infrastructure:
>
> 1) Does perf_events need a new context type (in addition to per-task and
> per-cpu)?  This is partly because we don't want to mix the rotation of
> CPU events with nest events.  Each PMU really ought to have its own event
> list.
>
I concur that you don't want to mix events from different PMUs in the same
rotation list.  We have seen side effects from not separating them with
AMD64 Northbridge events when there are multiple concurrent sessions.  But
that is a special case where the "uncore" (nest) PMU is actually controlled
via the core PMU.  I think that is okay for now, but you certainly don't
want it for the Nehalem uncore, for instance.
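To make this concrete, a rough sketch of per-PMU event lists might look
like the following (all names are made up; this is not existing code):
each PMU instance owns its own event and rotation lists, so rotating one
PMU never touches events scheduled on another.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/perf_event.h>

/* Hypothetical per-PMU context; field and type names are illustrative. */
struct pmu_context {
	struct pmu		*pmu;		/* the PMU this context belongs to */
	struct list_head	event_list;	/* all events on this PMU */
	struct list_head	rotation_list;	/* round-robin when overcommitted */
	raw_spinlock_t		lock;
};

/* Rotate only within this PMU's own list; events on the core PMU
 * (or any other PMU) are never mixed in. */
static void pmu_rotate_context(struct pmu_context *ctx)
{
	raw_spin_lock(&ctx->lock);
	if (!list_empty(&ctx->rotation_list))
		list_rotate_left(&ctx->rotation_list);
	raw_spin_unlock(&ctx->lock);
}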

> 2) How do we deal with accessing PMUs which require slow access methods
> (e.g. an internal serial bus)?  The accesses may need to be placed on worker
> threads so that they don't affect the performance of context switches and
> system ticks.
>
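For what it's worth, deferring the actual register access to a workqueue
might look something like the sketch below.  slow_pmu_read() is a
hypothetical stand-in for whatever bus transaction the hardware needs;
none of this is existing code:

#include <linux/workqueue.h>
#include <linux/perf_event.h>

/* Hypothetical slow read: a serial-bus transaction that may sleep. */
extern u64 slow_pmu_read(struct perf_event *event);

struct nest_read_work {
	struct work_struct	work;
	struct perf_event	*event;
	u64			count;		/* result of the slow read */
};

static void nest_read_fn(struct work_struct *work)
{
	struct nest_read_work *rw =
		container_of(work, struct nest_read_work, work);

	/* Sleeping is fine here in process context, unlike at context
	 * switch or tick time. */
	rw->count = slow_pmu_read(rw->event);
}

/* Queue the read and pick up the result later, instead of blocking
 * the hot paths on the bus transaction. */
static void nest_schedule_read(struct nest_read_work *rw)
{
	INIT_WORK(&rw->work, nest_read_fn);
	schedule_work(&rw->work);
}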
> 3) How exactly do we represent the PMUs in the pseudo-fs (/sys or /proc)?
> And how exactly does the user specify the PMU to perf_events?  Peter
> Zijlstra and Stephane Eranian both recommended opening the PMU with open()
> and then passing the resulting fd in through the perf_event_attr struct.
>
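In user-space terms, the open()/fd idea would look roughly like the
following.  Note that the pmu_fd attribute field is purely a proposed
extension and the sysfs path is made up; neither exists today:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_nest_event(uint64_t event_code, int cpu)
{
	/* Hypothetical PMU node in the pseudo-fs. */
	int pmu_fd = open("/sys/devices/nest_pmu0", O_RDONLY);

	struct perf_event_attr attr = {
		.size	= sizeof(attr),
		.type	= PERF_TYPE_RAW,
		.config	= event_code,
		/* .pmu_fd = pmu_fd,	proposed field, not in the ABI */
	};
	(void)pmu_fd;	/* would be passed via the proposed attr field */

	/* cpu would name the housekeeping CPU discussed in point 4. */
	return syscall(__NR_perf_event_open, &attr, -1 /* pid */, cpu,
		       -1 /* group_fd */, 0 /* flags */);
}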
> 4) How do we choose a CPU to do the housekeeping work for a particular nest
> PMU?  Peter thought that user space should still specify it via the
> perf_event_open() cpu parameter, but there's also an argument to be made for
> the kernel choosing the best CPU to handle the job, or at least making it
> optional for the user to choose the CPU.
>
One of the housekeeping tasks, for instance, is handling uncore PMU
interrupts.  That is not a trivial task given that events are managed
independently and that you could be monitoring per-thread or system-wide.
Some uncore PMUs may only be able to interrupt one core; the Intel Nehalem
uncore can interrupt many at once.
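As for letting the kernel pick the CPU when the user does not, the
fallback could be as simple as the sketch below, assuming the nest PMU
knows which node it sits on (again, purely illustrative):

#include <linux/cpumask.h>
#include <linux/topology.h>

/* Honor an explicit cpu from user space; otherwise pick a CPU close
 * to the nest PMU's node. */
static int nest_pmu_housekeeping_cpu(int requested_cpu, int pmu_node)
{
	if (requested_cpu >= 0)
		return requested_cpu;
	return cpumask_first(cpumask_of_node(pmu_node));
}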
