Message-ID: <1289993447.2109.717.camel@laptop>
Date:	Wed, 17 Nov 2010 12:30:47 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Kyle Moffett <kyle@...fetthome.net>
Cc:	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	Stephane Eranian <eranian@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Lin Ming <ming.m.lin@...el.com>,
	"robert.richter" <robert.richter@....com>,
	fweisbec <fweisbec@...il.com>, paulus <paulus@...ba.org>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Kay Sievers <kay.sievers@...y.org>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC][PATCH] perf: sysfs type id

On Wed, 2010-11-17 at 02:02 -0500, Kyle Moffett wrote:
> 
> Not quite.  I'm still a relative newbie to bits and pieces of the
> device model, but I'll explain what I believe the best representation
> would be:
> 
> Assuming you have counters on graphics cards, you would already have
> the PCI device directories for the GPUs themselves.  For example:
> /sys/devices/[...]/0000:01:00.0/
> /sys/devices/[...]/0000:02:00.0/
> 
> Those already obviously have various DRM-related device directories
> under them, but I'll assume the PMU is tied directly to the PCI device
> (although it could be put elsewhere if appropriate).
> 
> So then I believe you would create your "PMU" devices with names
> "pmu0", "pmu1", etc, and set their "parent" to point to the PCI
> device, and set their "bus" to point to the "pmu" bus.
> 
> What would happen is you would get subdirectories for your "pmu" devices
> /sys/devices/[...]/0000:01:00.0/pmu0/
> /sys/devices/[...]/0000:02:00.0/pmu1/
> 
> Each of those devices would have a "driver" symlink inside pointing to
> something like:
> /sys/subsystem/pmu/drivers/radeonpmu
> 
> There would also be symlinks:
> /sys/subsystem/pmu/devices/pmu0 => ../../../../devices/[...]/0000:01:00.0/pmu0
> /sys/subsystem/pmu/devices/pmu1 => ../../../../devices/[...]/0000:02:00.0/pmu1
> 
> So that lets you find your various PMU devices.
> 
> Then you'd have another "bus", perhaps, for "pmuevents", where the
> "pmuevent" device nodes get useful names.  Please note that including
> the PMU name in the event name is necessary as you cannot have two
> devices on the same "bus" with the exact same name.
> /sys/devices/[...]/0000:01:00.0/pmu0/pmu0:gpu_idle/
> /sys/devices/[...]/0000:01:00.0/pmu0/pmu0:gpu_throttle/
> /sys/devices/[...]/0000:02:00.0/pmu1/pmu1:gpu_idle/
> /sys/devices/[...]/0000:02:00.0/pmu1/pmu1:gpu_throttle/
> 
> Each event directory would contain other directories full of various
> registered attributes of the event.
> 
> And again the directory full of symlinks (this is what requires the
> "different names" thing as mentioned above):
> /sys/subsystem/pmuevent/devices/pmu0:gpu_idle => [......]
> /sys/subsystem/pmuevent/devices/pmu0:gpu_throttle => [......]
> /sys/subsystem/pmuevent/devices/pmu1:gpu_idle => [......]
> /sys/subsystem/pmuevent/devices/pmu1:gpu_throttle => [......]
> 
> So if you wanted to enumerate all of the "gpu_idle" events on the
> system, you could just do:
> ls /sys/subsystem/pmuevent/devices/*:gpu_idle
> 
> And then by following the symlinks into /sys/devices and traversing
> the path upwards you can examine all of the other properties the same
> way that udev does. 
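
For illustration, an untested sketch of the registration Kyle describes
above, assuming a "pmu" bus_type and one device per hardware PMU parented
to its PCI device; the pmu_register_for_pci() helper and the "pmu" name
are made up for this example, not existing interfaces:

#include <linux/device.h>
#include <linux/pci.h>
#include <linux/slab.h>

/* /sys/bus/pmu (or /sys/subsystem/pmu in the layout proposed above) */
static struct bus_type pmu_bus = {
	.name = "pmu",
};

struct pmu_dev {
	struct device dev;
	int id;
};

static void pmu_dev_release(struct device *dev)
{
	kfree(container_of(dev, struct pmu_dev, dev));
}

/*
 * Hypothetical helper: hang a "pmuN" device off a GPU's PCI device.
 * bus_register(&pmu_bus) must have been called once beforehand, e.g.
 * at module init.
 */
static struct pmu_dev *pmu_register_for_pci(struct pci_dev *pdev, int id)
{
	struct pmu_dev *pmu = kzalloc(sizeof(*pmu), GFP_KERNEL);

	if (!pmu)
		return NULL;

	pmu->id = id;
	pmu->dev.parent = &pdev->dev;	/* .../0000:01:00.0/pmu0 */
	pmu->dev.bus = &pmu_bus;	/* symlinked from the pmu subsystem */
	pmu->dev.release = pmu_dev_release;
	dev_set_name(&pmu->dev, "pmu%d", id);

	if (device_register(&pmu->dev)) {
		put_device(&pmu->dev);
		return NULL;
	}
	return pmu;
}

The per-event "pmuevent" devices would be registered the same way, with
->parent pointing at the pmuN device and names like "pmu0:gpu_idle".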

I've been talking to Kay for a bit and have settled on a single bus,
"event_source": I register devices on that bus, and each device gets an
attribute_group "events" containing its events.

Implementing that is painful enough; when I get it working I'll post it
as an RFC, and if someone with more sysfs skill than me wants to help
out, that would be welcome. But implementing something like Kyle
describes is way beyond me; sysfs is crazy stuff.
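
For reference, a rough and untested sketch of the shape that could take:
one "event_source" bus_type plus a per-PMU attribute_group named
"events". The "radeon_pmu"/"gpu_idle" names and the "event=0x01"
encoding below are invented for illustration only:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/stat.h>

/* /sys/bus/event_source/devices/<pmu> */
static struct bus_type event_source_bus = {
	.name = "event_source",
};

static ssize_t gpu_idle_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	/* Whatever user space needs to program the event, e.g. a raw config. */
	return sprintf(buf, "event=0x01\n");
}
static DEVICE_ATTR(gpu_idle, S_IRUGO, gpu_idle_show, NULL);

static struct attribute *radeon_pmu_event_attrs[] = {
	&dev_attr_gpu_idle.attr,
	NULL,
};

/* Shows up as an "events" subdirectory under the PMU's device. */
static const struct attribute_group radeon_pmu_events_group = {
	.name	= "events",
	.attrs	= radeon_pmu_event_attrs,
};

static const struct attribute_group *radeon_pmu_groups[] = {
	&radeon_pmu_events_group,
	NULL,
};

The PMU's struct device would then set ->bus = &event_source_bus and
->groups = radeon_pmu_groups before device_register(), after a single
bus_register(&event_source_bus) at init.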