Date:	Tue, 11 May 2010 16:15:11 +0200
From:	Borislav Petkov <bp@...64.org>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Lin Ming <ming.m.lin@...el.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	"eranian@...il.com" <eranian@...il.com>,
	"Gary.Mohr@...l.com" <Gary.Mohr@...l.com>,
	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Paul Mackerras <paulus@...ba.org>,
	"David S. Miller" <davem@...emloft.net>,
	Russell King <rmk+kernel@....linux.org.uk>,
	Paul Mundt <lethal@...ux-sh.org>,
	lkml <linux-kernel@...r.kernel.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [RFC][PATCH 3/9] perf: export registered pmus via sysfs

From: Ingo Molnar <mingo@...e.hu>
Date: Mon, May 10, 2010 at 01:43:11PM +0200

Hi all,

> Yeah, we really want a mechanism like this in place instead of continuing with 
> the somewhat ad-hoc extensions to the event enumeration space.
> 
> One detail: i think we want one more level. Instead of:
> 
>  /sys/devices/system/node/nodeN/node_events
>                                 node_events/event_source_id
>                                 node_events/local_misses
>                                            /local_hits
>                                            /remote_misses
>                                            /remote_hits
>                                            /...
> 
> We want the individual events to be a directory, containing the event_id:
> 
>  /sys/devices/system/node/nodeN/node_events
>                                 node_events/event_source_id
>                                 node_events/local_misses/event_id
>                                            /local_hits/event_id
>                                            /remote_misses/event_id
>                                            /remote_hits/event_id
>                                            /...
> 
> The reason is that we want to keep our options open to add more attributes to 
> individual events. (In fact extended attributes already exist for certain 
> event classes - such as the 'format' info for tracepoints.)

Ok, what you guys have so far sounds good. Here's some more stuff we
should be considering when using the tracepoints (and their
representation in sysfs or wherever) for error reporting.

All the error reporting is done using MCEs, so the MCE should be a raw
per-CPU event somewhere under
/sys/devices/system/cpu/cpuN/events/raw_cpu_events/ or whatever works for
you.
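
Just to make that concrete, here is a rough, untested userspace sketch of
how a tool could consume such a layout, following the directory-per-event
scheme above (an event_source_id file next to per-event directories each
containing an event_id). The paths, the file names and the hex value
format are all assumptions at this point, of course:

/*
 * Rough sketch only -- assumes a hypothetical sysfs layout with
 * .../events/raw_cpu_events/event_source_id and
 * .../events/raw_cpu_events/mce/event_id files; neither exists yet.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Read a single hex value (e.g. "0x1a") from a sysfs attribute. */
static int read_hex_u64(const char *path, uint64_t *val)
{
        FILE *f = fopen(path, "r");
        int ret;

        if (!f)
                return -1;
        ret = (fscanf(f, "%" SCNx64, val) == 1) ? 0 : -1;
        fclose(f);
        return ret;
}

int main(void)
{
        const char *base = "/sys/devices/system/cpu/cpu0/events/raw_cpu_events";
        char path[256];
        uint64_t source_id, event_id;
        struct perf_event_attr attr;
        int fd;

        snprintf(path, sizeof(path), "%s/event_source_id", base);
        if (read_hex_u64(path, &source_id))
                return 1;

        snprintf(path, sizeof(path), "%s/mce/event_id", base);
        if (read_hex_u64(path, &event_id))
                return 1;

        memset(&attr, 0, sizeof(attr));
        attr.size     = sizeof(attr);
        attr.type     = (uint32_t)source_id;    /* which event source/PMU */
        attr.config   = event_id;               /* raw event encoding from sysfs */
        attr.disabled = 1;                      /* enable later via ioctl */

        /* CPU 0, all tasks, no group, no flags */
        fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }
        close(fd);
        return 0;
}

The nice thing would be that userspace never needs hardcoded raw event
encodings; it just reads whatever the kernel exported.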

Another point I have is that MCEs don't need PMUs, so we should consider
having the ability to decouple events from PMUs.

What you basically want is a tracepoint which is "persistent," as Ingo
suggested earlier: one that buffers MCEs occurring at any time into a
ring buffer until a userspace daemon or similar sucks the data out for
processing (critical errors are handled differently, of course). And this
should work on any x86 hardware supporting MCA, even without hardware
perf monitoring features.
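
The daemon side would then basically be the usual perf mmap dance. An
untested sketch below, assuming fd is the (hypothetical) persistent MCE
event opened as in the previous sketch, with attr.sample_type =
PERF_SAMPLE_RAW set and the event enabled via
ioctl(fd, PERF_EVENT_IOC_ENABLE, 0):

/*
 * Untested sketch of the drain loop, using the existing perf mmap ABI.
 * The "persistent" MCE event itself is hypothetical; only the ring
 * buffer handling below reflects the current ABI.
 */
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/perf_event.h>

#define DATA_PAGES      8       /* must be a power of two */

static void drain(struct perf_event_mmap_page *mp, size_t page_size)
{
        char *data = (char *)mp + page_size;
        size_t data_size = DATA_PAGES * page_size;
        uint64_t head, tail;
        size_t i;

        head = mp->data_head;
        __sync_synchronize();   /* read data_head before the records */
        tail = mp->data_tail;

        while (tail < head) {
                struct perf_event_header hdr;

                /* copy the header byte-wise so buffer wrap-around is handled */
                for (i = 0; i < sizeof(hdr); i++)
                        ((char *)&hdr)[i] = data[(tail + i) % data_size];

                if (hdr.type == PERF_RECORD_SAMPLE)
                        printf("MCE record, %u bytes\n", (unsigned int)hdr.size);
                /* a real daemon would copy out and decode the payload here */

                tail += hdr.size;
        }

        __sync_synchronize();   /* finish reading before releasing the space */
        mp->data_tail = tail;   /* tell the kernel we consumed it */
}

int consume_mce_events(int fd)
{
        size_t page_size = sysconf(_SC_PAGESIZE);
        struct perf_event_mmap_page *mp;
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        /* one metadata page plus 2^n data pages */
        mp = mmap(NULL, (DATA_PAGES + 1) * page_size,
                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mp == MAP_FAILED)
                return -1;

        for (;;) {
                if (poll(&pfd, 1, -1) < 0)
                        break;
                drain(mp, page_size);
        }

        munmap(mp, (DATA_PAGES + 1) * page_size);
        return 0;
}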

Also, we might think about exposing some of the MCE fields in sysfs for
hardware error injection, similar to how EDAC injects DRAM ECC errors.
This should be straightforward with a single attribute like

/sys/devices/system/cpu/cpuN/events/raw_cpu_events/mce/inject_ecc

or similar.
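
Triggering an injection from userspace would then be a trivial write to
that attribute; the attribute name and the accepted value format are
completely made up here, obviously:

/*
 * Hypothetical: inject_ecc does not exist yet; the attribute name and
 * the accepted value format are pure assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *attr =
                "/sys/devices/system/cpu/cpu0/events/raw_cpu_events/mce/inject_ecc";
        const char *val = "1\n";        /* e.g. "inject one correctable error" */
        int fd = open(attr, O_WRONLY);

        if (fd < 0) {
                perror(attr);
                return 1;
        }
        if (write(fd, val, strlen(val)) < 0)
                perror("write");
        close(fd);
        return 0;
}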

This is mostly what I can come up with now...


-- 
Regards/Gruss,
Boris.

--
Advanced Micro Devices, Inc.
Operating Systems Research Center
--
