Message-ID: <20150304010709.GQ5187@kernel.org>
Date: Tue, 3 Mar 2015 22:07:09 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: David Ahern <dsahern@...il.com>, Jiri Olsa <jolsa@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Frédéric Weisbecker <fweisbec@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: heads up/RFC: 'perf trace' using ordered_events

On Wed, Mar 04, 2015 at 10:01:23AM +0900, Namhyung Kim wrote:
> On Tue, Mar 03, 2015 at 01:49:40PM -0300, Arnaldo Carvalho de Melo wrote:
> > Hi,
> >
> > Just a preview, but this is something David had mentioned at some
> > point: a major problem with 'perf trace' was that it wasn't using the
> > 'perf_session' event reordering mechanism. So I've been working on
> > making it use that mechanism, refactoring the ordered_events code so
> > that it can be used by tools that don't deal with perf.data files.
> At a conceptual level, I think event processing should be done with a
> session. Even if perf trace does not do anything with a data file, it
> can have a (live) session, like perf top does. This way we can avoid
> internal API changes.

perf top is another one I want to move away from perf_session, as it
doesn't use any perf.data file but still needs to have its events
ordered. But then I really need to look at what you've done in your
patchset; probably I am missing something (or a lot).
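
To make the refactoring goal concrete, the shape I'm after for a
tool-agnostic ordered_events is roughly the following: the tool hands
in timestamps and a deliver callback, and never sees perf_session or
perf.data. Just a standalone sketch, with illustrative names rather
than what will necessarily land in the tree:

/*
 * Sketch only: a timestamp-ordered event queue that knows nothing
 * about perf_session or perf.data.
 */
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

struct oevent {
        uint64_t         timestamp;
        const char      *data;          /* would be a copied perf event */
        struct oevent   *next;
};

struct oqueue {
        struct oevent   *head;
        /* the tool, not perf_session, decides what "deliver" means */
        void            (*deliver)(struct oevent *oe);
};

/*
 * Insert keeping the list sorted by timestamp; the real code caches
 * the last insertion point, as events arrive almost in order.
 */
static void oqueue__insert(struct oqueue *q, uint64_t ts, const char *data)
{
        struct oevent *oe = malloc(sizeof(*oe)), **p = &q->head;

        oe->timestamp = ts;
        oe->data      = data;
        while (*p && (*p)->timestamp <= ts)
                p = &(*p)->next;
        oe->next = *p;
        *p = oe;
}

/* Deliver, in order, every queued event with timestamp <= limit. */
static void oqueue__flush(struct oqueue *q, uint64_t limit)
{
        while (q->head && q->head->timestamp <= limit) {
                struct oevent *oe = q->head;

                q->head = oe->next;
                q->deliver(oe);
                free(oe);
        }
}

static void print_event(struct oevent *oe)
{
        printf("%" PRIu64 ": %s\n", oe->timestamp, oe->data);
}

int main(void)
{
        struct oqueue q = { .head = NULL, .deliver = print_event };

        oqueue__insert(&q, 3, "third");
        oqueue__insert(&q, 1, "first");
        oqueue__insert(&q, 2, "second");
        oqueue__flush(&q, UINT64_MAX);  /* first, second, third */
        return 0;
}

perf trace would then pick the flush limit from how far the mmap rings
have advanced, similar in spirit to the PERF_RECORD_FINISHED_ROUND
flushes done when processing perf.data files.
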
>
> Also, in my multi-thread work, I don't use the ordered_events API
> much, since all samples are already ordered for each cpu or thread.
> So I'd like to decouple it as much as possible.
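
Indeed, when every per-cpu (or per-thread) stream is already sorted by
timestamp, global order is just a k-way merge of the streams, with no
big reordering queue in the middle. An illustrative sketch, with plain
arrays standing in for per-cpu buffers:

/*
 * Sketch: when each per-cpu stream is already sorted, a k-way merge
 * yields globally ordered delivery. Arrays stand in for ring buffers,
 * UINT64_MAX marks end of stream.
 */
#include <stdio.h>
#include <inttypes.h>

#define NR_STREAMS 3

int main(void)
{
        uint64_t stream[NR_STREAMS][4] = {
                { 1, 5, 9, UINT64_MAX },
                { 2, 3, 8, UINT64_MAX },
                { 4, 6, 7, UINT64_MAX },
        };
        int pos[NR_STREAMS] = { 0 };

        for (;;) {
                int cpu, min_cpu = -1;
                uint64_t min_ts = UINT64_MAX;

                /* pick the stream whose next event is the oldest */
                for (cpu = 0; cpu < NR_STREAMS; cpu++) {
                        if (stream[cpu][pos[cpu]] < min_ts) {
                                min_ts  = stream[cpu][pos[cpu]];
                                min_cpu = cpu;
                        }
                }
                if (min_cpu < 0)
                        break;          /* all streams drained */
                printf("deliver ts=%" PRIu64 " from cpu%d\n", min_ts, min_cpu);
                pos[min_cpu]++;
        }
        return 0;
}
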
> > I still have to investigate why there are so many sched_wakeup events
> > at the beginning, but they are probably really being generated by the
> > kernel, so 'perf trace' has to make this output more compact, perhaps
> > by noticing that a number of similar events appeared in the stream
> > and, instead of writing N lines, printing some prefix ([N*] perhaps)
> > followed by the line.
> >
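I.e. something like this at the output stage: compare each formatted
line with the previous one and flush only when the run ends. A sketch
with canned strings standing in for real formatted events:

/* Sketch of the [N*] idea: collapse runs of identical output lines. */
#include <stdio.h>
#include <string.h>

static void emit(const char *line, int count)
{
        if (count > 1)
                printf("[%d*] %s\n", count, line);
        else
                printf("%s\n", line);
}

int main(void)
{
        const char *lines[] = {        /* stand-ins for formatted events */
                "sched:sched_wakeup:comm=trace pid=4677 target_cpu=001",
                "sched:sched_wakeup:comm=trace pid=4677 target_cpu=001",
                "sched:sched_wakeup:comm=trace pid=4677 target_cpu=001",
                "sched:sched_process_exec:filename=/bin/usleep pid=4677",
        };
        int i, n = sizeof(lines) / sizeof(lines[0]), count = 1;

        /* flush only when the current line differs from the previous */
        for (i = 1; i <= n; i++) {
                if (i < n && !strcmp(lines[i], lines[i - 1])) {
                        count++;
                        continue;
                }
                emit(lines[i - 1], count);
                count = 1;
        }
        return 0;
}
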
> > Also, it is keeping pointers into the ring buffer; we'll probably
> > need to copy or otherwise store the perf_sample in the
> > 'ordered_event' instance.
> >
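Probably the simplest fix: memcpy the event out of the mmap'ed ring at
queue time, so the kernel is free to overwrite that slot. Roughly, with
struct names and layout made up for illustration, not the real
ordered_event:

/*
 * Sketch: copy the event at queue time instead of keeping a pointer
 * into the mmap'ed ring buffer, which the kernel will overwrite.
 * Struct names and layout are made up for illustration.
 */
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

struct fake_event {             /* stand-in for a perf event record */
        uint32_t size;          /* total record size, header included */
        char     payload[];
};

struct ordered_event {
        uint64_t           timestamp;
        struct fake_event *event;      /* owned copy, not a rb pointer */
};

static struct ordered_event *ordered_event__new(uint64_t ts,
                                                struct fake_event *rb_ev)
{
        struct ordered_event *oe = malloc(sizeof(*oe));

        if (!oe)
                return NULL;
        oe->timestamp = ts;
        /* after this memcpy the ring buffer slot can be reused */
        oe->event = malloc(rb_ev->size);
        if (!oe->event) {
                free(oe);
                return NULL;
        }
        memcpy(oe->event, rb_ev, rb_ev->size);
        return oe;
}

static void ordered_event__delete(struct ordered_event *oe)
{
        free(oe->event);
        free(oe);
}

int main(void)
{
        struct fake_event *ev = calloc(1, sizeof(*ev) + 8);
        struct ordered_event *oe;

        if (!ev)
                return 1;
        ev->size = sizeof(*ev) + 8;
        oe = ordered_event__new(42, ev);
        if (oe)
                ordered_event__delete(oe);
        free(ev);
        return 0;
}
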
> > But as this touches things that Namhyung is touching as well, I
> > thought about sharing this stuff; it is in my tree, branch
> > tmp.perf/trace_ordered_events.
>
> Thanks for letting me know.
> Namhyung
>
>
> >
> > Lunchtime, bbl!
> >
> > - Arnaldo
> >
> > [root@...andy ~]# trace --ev sched:* usleep 1
> > 0.000 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.001 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.002 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.002 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.003 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.003 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.003 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 0.004 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=001)
> > 1.367 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.368 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.369 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.369 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.369 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.370 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.370 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.371 ( ): sched:sched_wakeup:comm=trace pid=4677 prio=120 success=1 target_cpu=003)
> > 1.615 ( ): sched:sched_process_exec:filename=/bin/usleep pid=4677 old_pid=4677)
> > 1.621 ( ): sched:sched_wakeup:comm=rcuop/3 pid=16 prio=120 success=1 target_cpu=005)
> > 1.640 ( 0.001 ms): usleep/4677 brk( ) = 0x64e000
> > 1.662 ( 0.005 ms): usleep/4677 mmap(len: 4096, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS, fd: -1 ) = 0x7f3d1d7ef000
> > 1.675 ( 0.005 ms): usleep/4677 access(filename: 0x7f3d1d5edbe0, mode: R ) = -1 ENOENT No such file or directory
> > 1.687 ( 0.004 ms): usleep/4677 open(filename: 0x7f3d1d5ec4d8, flags: CLOEXEC ) = 3
> > 1.689 ( 0.001 ms): usleep/4677 fstat(fd: 3, statbuf: 0x7fff28219d10 ) = 0
> > 1.695 ( 0.005 ms): usleep/4677 mmap(len: 68635, prot: READ, flags: PRIVATE, fd: 3 ) = 0x7f3d1d7de000
> > 1.696 ( 0.001 ms): usleep/4677 close(fd: 3 ) = 0
> > 1.714 ( 0.007 ms): usleep/4677 open(filename: 0x7f3d1d7e867d, flags: CLOEXEC ) = 3
> > 1.716 ( 0.002 ms): usleep/4677 read(fd: 3, buf: 0x7fff28219eb0, count: 832 ) = 832
> > 1.719 ( 0.001 ms): usleep/4677 fstat(fd: 3, statbuf: 0x7fff28219d60 ) = 0
> > 1.727 ( 0.006 ms): usleep/4677 mmap(len: 2135088, prot: EXEC|READ, flags: PRIVATE|DENYWRITE, fd: 3 ) = 0x7f3d1d3c6000
> > 1.734 ( 0.007 ms): usleep/4677 mprotect(start: 0x7f3d1d3cf000, len: 2093056 ) = 0
> > 1.741 ( 0.007 ms): usleep/4677 mmap(addr: 0x7f3d1d5ce000, len: 8192, prot: READ|WRITE, flags: PRIVATE|DENYWRITE|FIXED, fd: 3, off: 32768) = 0x7f3d1d5ce000
> > 1.748 ( 0.001 ms): usleep/4677 close(fd: 3 ) = 0
> > 1.763 ( 0.006 ms): usleep/4677 open(filename: 0x7f3d1d7ed5fa, flags: CLOEXEC ) = 3
> > 1.765 ( 0.002 ms): usleep/4677 read(fd: 3, buf: 0x7fff28219e80, count: 832 ) = 832
> > 1.767 ( 0.001 ms): usleep/4677 fstat(fd: 3, statbuf: 0x7fff28219d30 ) = 0
> > 1.775 ( 0.006 ms): usleep/4677 mmap(len: 3932736, prot: EXEC|READ, flags: PRIVATE|DENYWRITE, fd: 3 ) = 0x7f3d1d005000
> > 1.782 ( 0.007 ms): usleep/4677 mprotect(start: 0x7f3d1d1bb000, len: 2097152 ) = 0
> > 1.790 ( 0.008 ms): usleep/4677 mmap(addr: 0x7f3d1d3bb000, len: 24576, prot: READ|WRITE, flags: PRIVATE|DENYWRITE|FIXED, fd: 3, off: 1794048) ...
> > 1.790 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=426124 [ns] vruntime=569867840059 [ns])
> > 1.793 ( ): sched:sched_wakeup:comm=rcuc/3 pid=56 prio=97 success=1 target_cpu=003)
> > 1.794 ( ): sched:sched_wakeup:comm=ksoftirqd/3 pid=58 prio=98 success=1 target_cpu=003)
> > 1.797 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=2371 [ns] vruntime=569867842430 [ns])
> > 1.798 ( ): sched:sched_switch:prev_comm=usleep prev_pid=4677 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/3 next_pid=56 next_prio=97)
> > 1.832 ( 0.049 ms): usleep/4677 ... [continued]: mmap()) = 0x7f3d1d3bb000
> > 1.844 ( 0.007 ms): usleep/4677 mmap(addr: 0x7f3d1d3c1000, len: 16960, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS|FIXED, fd: -1) = 0x7f3d1d3c1000
> > 1.849 ( 0.001 ms): usleep/4677 close(fd: 3 ) = 0
> > 1.859 ( 0.005 ms): usleep/4677 mmap(len: 4096, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS, fd: -1 ) = 0x7f3d1d7dd000
> > 1.873 ( 0.005 ms): usleep/4677 mmap(len: 8192, prot: READ|WRITE, flags: PRIVATE|ANONYMOUS, fd: -1 ) = 0x7f3d1d7db000
> > 1.879 ( 0.001 ms): usleep/4677 arch_prctl(option: 4098, arg2: 139900464510784, arg3: 139900464513104, arg4: 34, arg5: 4294967295) = 0
> > 1.969 ( 0.009 ms): usleep/4677 mprotect(start: 0x7f3d1d3bb000, len: 16384, prot: READ ) = 0
> > 1.982 ( 0.007 ms): usleep/4677 mprotect(start: 0x7f3d1d5ce000, len: 4096, prot: READ ) = 0
> > 1.991 ( 0.006 ms): usleep/4677 mprotect(start: 0x601000, len: 4096, prot: READ ) = 0
> > 2.001 ( 0.007 ms): usleep/4677 mprotect(start: 0x7f3d1d7f0000, len: 4096, prot: READ ) = 0
> > 2.009 ( 0.007 ms): usleep/4677 munmap(addr: 0x7f3d1d7de000, len: 68635 ) = 0
> > 2.049 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=220367 [ns] vruntime=569868062797 [ns])
> > 2.050 ( ): sched:sched_switch:prev_comm=usleep prev_pid=4677 prev_prio=120 prev_state=R ==> next_comm=irq/51-enp0s25 next_pid=1164 next_prio=49)
> > 2.097 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=29987 [ns] vruntime=569868092784 [ns])
> > 2.099 ( ): sched:sched_switch:prev_comm=usleep prev_pid=4677 prev_prio=120 prev_state=R+ ==> next_comm=irq/51-enp0s25 next_pid=1164 next_prio=49)
> > 2.121 ( 0.001 ms): usleep/4677 brk( ) = 0x64e000
> > 2.126 ( 0.005 ms): usleep/4677 brk(brk: 0x66f000 ) = 0x66f000
> > 2.129 ( 0.001 ms): usleep/4677 brk( ) = 0x66f000
> > 2.148 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=34634 [ns] vruntime=569868127418 [ns])
> > 2.149 ( ): sched:sched_switch:prev_comm=usleep prev_pid=4677 prev_prio=120 prev_state=R+ ==> next_comm=irq/51-enp0s25 next_pid=1164 next_prio=49)
> > 2.173 ( 0.002 ms): usleep/4677 nanosleep(rqtp: 0x7fff2821a4e0 ) ...
> > 2.173 ( ): sched:sched_stat_runtime:comm=usleep pid=4677 runtime=11440 [ns] vruntime=569868138858 [ns])
> > 2.174 ( ): sched:sched_switch:prev_comm=usleep prev_pid=4677 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120)
> > 2.227 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.228 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.228 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.229 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.230 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.230 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.230 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.231 ( ): sched:sched_wakeup:comm=usleep pid=4677 prio=120 success=1 target_cpu=003)
> > 2.237 ( 0.067 ms): usleep/4677 ... [continued]: nanosleep()) = 0
> > 2.243 ( 0.000 ms): usleep/4677 exit_group(
> > 2.289 ( ): sched:sched_process_exit:comm=usleep pid=4677 prio=120)
> > [root@...andy ~]#