Message-ID: <509DB632.7070305@linaro.org>
Date:	Fri, 09 Nov 2012 18:04:34 -0800
From:	John Stultz <john.stultz@...aro.org>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Stephane Eranian <eranian@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"mingo@...e.hu" <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
	Anton Blanchard <anton@...ba.org>,
	Will Deacon <will.deacon@....com>,
	"ak@...ux.intel.com" <ak@...ux.intel.com>,
	Pekka Enberg <penberg@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Robert Richter <robert.richter@....com>,
	tglx <tglx@...utronix.de>
Subject: Re: [RFC] perf: need to expose sched_clock to correlate user samples
 with kernel samples

On 10/16/2012 10:23 AM, Peter Zijlstra wrote:
> On Tue, 2012-10-16 at 12:13 +0200, Stephane Eranian wrote:
>> Hi,
>>
>> There are many situations where we want to correlate events happening at
>> the user level with samples recorded in the perf_event kernel sampling buffer.
>> For instance, we might want to correlate the call to a function or creation of
>> a file with samples. Similarly, when we want to monitor a JVM with jitted code,
>> we need to be able to correlate jitted code mappings with perf event samples
>> for symbolization.
>>
>> Perf_events allows timestamping of samples with PERF_SAMPLE_TIME.
>> That causes each PERF_RECORD_SAMPLE to include a timestamp
>> generated by calling the local_clock() -> sched_clock_cpu() function.
>>
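
For illustration, a minimal sketch of the mechanism described above: request
PERF_SAMPLE_TIME through perf_event_open(2) so each PERF_RECORD_SAMPLE in the
mmap'ed ring buffer carries that local_clock()-based timestamp. The event type
and sample period below are arbitrary choices, not anything specified in the
thread.

/* Build: cc -o sample_time sample_time.c */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* No glibc wrapper exists for perf_event_open(); call it directly. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.type          = PERF_TYPE_HARDWARE;
	attr.config        = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	/* PERF_SAMPLE_TIME adds the kernel timestamp to every sample. */
	attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_TIME;
	attr.disabled      = 1;

	/* Monitor the calling thread on any CPU. */
	int fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/*
	 * A real tool would now mmap() the fd, enable the event with
	 * ioctl(fd, PERF_EVENT_IOC_ENABLE, 0), and parse the
	 * PERF_RECORD_SAMPLE records, each carrying a u64 time field.
	 */
	close(fd);
	return 0;
}
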
>> To make correlating user and kernel samples easy, we would need
>> access to that sched_clock() functionality. However, none of the existing
>> clock calls permit this at the moment. They all return timestamps that do
>> not use the same source and/or offset as sched_clock().
>>
>> I believe a similar issue exists with the ftrace subsystem.
>>
>> The problem needs to be addressed in a portable manner. Solutions
>> based on reading the TSC at the user level to reconstruct sched_clock()
>> don't seem appropriate to me.
>>
>> One possibility to address this limitation would be to extend clock_gettime()
>> with a new clock id, e.g., CLOCK_PERF.
>>
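
A sketch of what that proposed extension might look like from userland.
CLOCK_PERF is hypothetical and does not exist in any kernel; the clock id
value below is a placeholder, not an ABI constant, so the call fails with
EINVAL on current kernels.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#ifndef CLOCK_PERF
#define CLOCK_PERF 16	/* placeholder id for the proposed clock */
#endif

int main(void)
{
	struct timespec ts;

	/*
	 * Stamp a user-level event with the same time base the kernel
	 * uses for PERF_SAMPLE_TIME -- if such a clock were exported.
	 */
	if (clock_gettime(CLOCK_PERF, &ts) != 0) {
		perror("clock_gettime(CLOCK_PERF)");
		return 1;
	}

	uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	printf("user event at %llu ns (perf time domain)\n",
	       (unsigned long long)ns);
	return 0;
}
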
>> However, I understand that sched_clock_cpu() provides ordering guarantees only
>> when invoked on the same CPU repeatedly, i.e., it's not globally synchronized.
>> But we already have to deal with this problem when merging samples obtained
>> from different CPU sampling buffers in per-thread mode. So this is not
>> necessarily a showstopper.
>>
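
To illustrate the kind of merge a tool already performs today when it flattens
per-CPU ring buffers into one stream: a toy two-way merge by timestamp. The
sample layout and timestamps are made up; per-CPU ordering is all that
sched_clock_cpu() guarantees, so cross-CPU ties are resolved arbitrarily.

#include <stdint.h>
#include <stdio.h>

struct sample { uint64_t time; int cpu; };

/* Toy per-CPU streams, each already sorted by its own timestamps. */
static struct sample cpu0[] = { {100, 0}, {250, 0}, {400, 0} };
static struct sample cpu1[] = { {120, 1}, {230, 1}, {390, 1} };

int main(void)
{
	size_t i = 0, j = 0;
	size_t n0 = sizeof(cpu0) / sizeof(cpu0[0]);
	size_t n1 = sizeof(cpu1) / sizeof(cpu1[0]);

	/* Standard merge by timestamp across the per-CPU buffers. */
	while (i < n0 || j < n1) {
		struct sample s;

		if (j >= n1 || (i < n0 && cpu0[i].time <= cpu1[j].time))
			s = cpu0[i++];
		else
			s = cpu1[j++];
		printf("t=%llu cpu=%d\n",
		       (unsigned long long)s.time, s.cpu);
	}
	return 0;
}
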
>> An alternative could be to use uprobes, but that's less practical to set up.
>>
>> Anyone with better ideas?
> You forgot to CC the time people ;-)
>
> I've no problem with adding CLOCK_PERF (or another/better name).
Hrm. I'm not excited about exporting that sort of internal kernel 
detail to userland.

The behavior of and expectations around sched_clock() have changed over the 
years, so I'm not sure it's wise to export it, since we'd have to 
preserve its behavior from then on.

Also I worry that it will be abused in the same way that direct TSC 
access is, where its seemingly better performance compared with the more 
careful/correct CLOCK_MONOTONIC would tempt developers to write fragile 
userland code that breaks when moved from one machine to the next.

I'd probably rather have perf output timestamps to userland using a sane 
clock (CLOCK_MONOTONIC) than try to introduce a new time domain to 
userland. But I could probably be convinced I'm wrong.
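
A sketch of the userland side of that alternative, assuming perf were taught
to emit CLOCK_MONOTONIC timestamps in its samples (it is not shown or changed
here): user code stamps its own events with the same well-defined clock, and
correlation is then just a merge by time. The event format is made up.

/* Build: cc -o jit_map jit_map.c  (older glibc may need -lrt) */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t monotonic_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
	/* e.g. log a JIT code-mapping event with a MONOTONIC timestamp */
	printf("jit-map libfoo.so %llu\n",
	       (unsigned long long)monotonic_ns());
	return 0;
}
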

thanks
-john

