Date:	Fri, 18 Feb 2011 16:41:19 -0200
From:	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	David Ahern <daahern@...co.com>, Ingo Molnar <mingo@...e.hu>,
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
	peterz@...radead.org, paulus@...ba.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 3/3] perf events: add timehist option to record and report

On Fri, Feb 18, 2011 at 06:59:30PM +0100, Frederic Weisbecker wrote:
>   sched:sched_wait_task                      [Tracepoint event]
>   sched:sched_wakeup                         [Tracepoint event]
>   sched:sched_wakeup_new                     [Tracepoint event]
>   sched:sched_switch                         [Tracepoint event]
>   sched:sched_migrate_task                   [Tracepoint event]
>   sched:sched_process_free                   [Tracepoint event]
>   sched:sched_process_exit                   [Tracepoint event]
> 
> 
> You have the sched:sched_switch event and many others.
> 
> Just try:
> 
> perf record -a -e sched:*
> perf script
> 
>             perf-4128  [000] 19242.870025: sched_stat_runtime: comm=perf pid=4128 runtime=7430405 [ns] vruntime=3530192223488 
>             perf-4128  [000] 19242.870042: sched_stat_runtime: comm=perf pid=4128 runtime=23142 [ns] vruntime=3530192246630 [n
>             perf-4128  [000] 19242.870045: sched_stat_sleep: comm=kondemand/0 pid=59 delay=9979163 [ns]
>             perf-4128  [000] 19242.870048: sched_wakeup: comm=kondemand/0 pid=59 prio=120 success=1 target_cpu=000
>             perf-4128  [000] 19242.870063: sched_stat_runtime: comm=perf pid=4128 runtime=21581 [ns] vruntime=3530192268211 [n
>             perf-4128  [000] 19242.870066: sched_stat_wait: comm=kondemand/0 pid=59 delay=21581 [ns]
>             perf-4128  [000] 19242.870069: sched_switch: prev_comm=perf prev_pid=4128 prev_prio=120 prev_state=R ==> next_comm
>      kondemand/0-59    [000] 19242.870091: sched_stat_runtime: comm=kondemand/0 pid=59 runtime=27362 [ns] vruntime=35301862739
>      kondemand/0-59    [000] 19242.870094: sched_stat_wait: comm=perf pid=4128 delay=27362 [ns]
>      kondemand/0-59    [000] 19242.870095: sched_switch: prev_comm=kondemand/0 prev_pid=59 prev_prio=120 prev_state=S ==> next
> 
> And you can run your own script on these events:
> 
> $ sudo ./perf script -g python
> generated Python script: perf-script.py
> 
> Edit perf-script.py and then run it:
> 
> $ perf script -s ./perf-script.py
> 
> That also works for perl.
> 
> The timestamps will be the cpu time and not the walltime, but at least that seems
> to be partly what you seek?
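The generated handlers are plain Python functions named after perf's subsystem__event convention, plus trace_begin/trace_end hooks. A minimal hand-rolled sketch of such a script (the argument list here is abbreviated relative to what perf actually generates, and the counting logic is just an illustration):

```python
# Sketch of a perf trace script, modeled on the skeleton that
# 'perf script -g python' emits.  Handler names follow perf's
# subsystem__event convention; real generated handlers receive
# many more positional arguments than shown here.

switch_counts = {}

def trace_begin():
    # Called once before any events are delivered.
    pass

def sched__sched_switch(event_name, context, cpu, secs, nsecs,
                        comm, pid, prev_comm, next_comm):
    # Count how often each task gets switched out.
    switch_counts[prev_comm] = switch_counts.get(prev_comm, 0) + 1

def trace_end():
    # Called once after the last event; print a small summary.
    for c, n in sorted(switch_counts.items()):
        print("%-16s %d" % (c, n))
```

Running `perf script -s ./perf-script.py` on a sched:* recording would then invoke these handlers once per event.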

The whole issue for him, AFAIK, is to correlate perf events with app
events.

Think about tcpdump plus networking tracepoints, or 'perf probe' dynamic
events in the network stack: he wants to merge those logs and correlate
the tcpdump packet exchange with the tracepoint events in the network
stack, etc.
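Once both sides emit timestamps on the same clock, the merge itself is trivial; a sketch (the event tuples are invented for illustration, and the hard part, a shared clock, is assumed):

```python
import heapq

# Sketch: interleave two timestamped event streams (say, tcpdump
# packets and kernel tracepoint records) into one chronological log.
# Both streams must already be sorted, and both must be stamped with
# the *same* clock -- which is exactly the open problem in this thread.

def merge_logs(pcap_events, trace_events):
    # Each event is (timestamp_ns, source, description); tuples sort
    # by timestamp first, so heapq.merge yields them in time order.
    return list(heapq.merge(pcap_events, trace_events))

pcap = [(1000, "pcap", "SYN out"), (5000, "pcap", "SYN/ACK in")]
trace = [(900, "trace", "net_dev_xmit"), (4800, "trace", "netif_receive_skb")]
```

If the two sides use different clocks, no amount of merging recovers the ordering, which is why the clock question below matters.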

I.e., it doesn't matter whether it is ftrace or not: a common clock
shared between apps and the kernel tracing/whatever infrastructure is
what David is after, right?

He can either change userspace to use the clock the kernel uses in the
perf/ftrace/whatever infrastructure, or make the kernel use the clock
userspace uses.

The issue here is who will bend, u or k ;-)

- Arnaldo
