Date: Mon, 10 May 2010 18:58:04 +0800
From: Lin Ming <ming.m.lin@...el.com>
To: Paul Mundt <lethal@...ux-sh.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
	Frederic Weisbecker <fweisbec@...il.com>,
	"eranian@...il.com" <eranian@...il.com>,
	"Gary.Mohr@...l.com" <Gary.Mohr@...l.com>,
	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Paul Mackerras <paulus@...ba.org>,
	"David S. Miller" <davem@...emloft.net>,
	Russell King <rmk+kernel@....linux.org.uk>,
	lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH 3/9] perf: export registered pmus via sysfs

On Mon, 2010-05-10 at 18:35 +0800, Paul Mundt wrote:
> On Mon, May 10, 2010 at 06:26:35PM +0800, Lin Ming wrote:
> > On Mon, 2010-05-10 at 18:18 +0800, Peter Zijlstra wrote:
> > > On Mon, 2010-05-10 at 18:11 +0800, Lin Ming wrote:
> > > > On Mon, 2010-05-10 at 17:40 +0800, Peter Zijlstra wrote:
> > > > > On Mon, 2010-05-10 at 17:27 +0800, Lin Ming wrote:
> > > > > > Export pmus via sysfs /sys/devices/system/cpu/pmus/0...N
> > > > > > The file name is the pmu id, ie, /sys/devices/system/cpu/pmus/N
> > > > > > represents pmu id N.
> > > > > > So perf tool can use it to initialize perf_event_attr.
> > > > >
> > > > > Why create a whole new directory, why not:
> > > > >
> > > > > /sys/devices/system/cpu/cpuN/pmu_id ?
> > > >
> > > > Do you mean /sys/devices/system/cpu/cpuN/pmu_id contains all the ids?
> > > >
> > > > For example, if each cpu has 4 pmus, the file pmu_id shows something
> > > > like,
> > > >
> > > > #cat /sys/devices/system/cpu/cpu0/pmu_id
> > > > 0 1 2 3
> > >
> > > No, I'm assuming there is only 1 PMU per CPU. Corey is the expert on
> > > crazy hardware though, but I think the sanest way is to extend the CPU
> > > topology if there's more structure to it.
> >
> > But our goal is to support multiple pmus, so don't we need to assume
> > there is more than 1 PMU per CPU?
>
> The multiple PMU case still suggests 1 per CPU in most (all?) cases. If
> you're thinking of PMUs in the northbridge case this would sit under its
> own topology given that most CPUs will have a shared view of it. Do you

Take the Nehalem core and uncore pmus as an example: the core pmu sits
under /sys/devices/system/cpu/cpuN/pmu_id with id 0, but the uncore pmu
is shared by the cpus within a package, so where should it sit?

> have some cases with performance counters in per-CPU memory controllers
> or something similar?

Not sure about this now. I'll check.

> > How about
> > /sys/devices/system/cpu/cpuN/pmu_0
> > /sys/devices/system/cpu/cpuN/pmu_1
> > /sys/devices/system/cpu/cpuN/pmu_2
> > /sys/devices/system/cpu/cpuN/pmu_3
> > ....?
>
> If you're following driver model naming conventions, then these should
> all be pmu.0, pmu.1, etc, etc.

Thanks,
Lin Ming