Message-ID: <20140724141958.GB7831@kernel.org>
Date: Thu, 24 Jul 2014 11:19:58 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: David Ahern <dsahern@...il.com>
Cc: Adrian Hunter <adrian.hunter@...el.com>,
Andi Kleen <andi@...stfloor.org>, Jiri Olsa <jolsa@...nel.org>,
linux-kernel@...r.kernel.org,
Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Jean Pihet <jean.pihet@...aro.org>,
Namhyung Kim <namhyung@...nel.org>,
Paul Mackerras <paulus@...ba.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCHv3 00/19] perf tools: Factor ordered samples queue

On Mon, Jul 21, 2014 at 12:36:54PM -0600, David Ahern wrote:
> On 7/21/14, 12:23 PM, Adrian Hunter wrote:
> >On 21/07/2014 7:31 p.m., Andi Kleen wrote:
> >>Jiri Olsa <jolsa@...hat.com> writes:
> >>>
> >>>[jolsa@...-x3650m4-01 perf]$ sudo ./perf report --stdio
> >>>Timestamp below last timeslice flush
> >>>0x2276f58 [0x68]: failed to process type: 9
> >>
> >>FWIW we're seeing this frequently too.
> >
> >Jiri's example didn't work for me. Do you have one?
>
> $ perf sched record -m 8192 -- perf bench sched all
> # Running sched/messaging benchmark...
> # 20 sender and receiver processes per group
> # 10 groups == 400 processes run
>
> Total time: 0.087 [sec]
>
> # Running sched/pipe benchmark...
> # Executed 1000000 pipe operations between two processes
>
> Total time: 12.043 [sec]
>
> 12.043779 usecs/op
> 83030 ops/sec
>
> [ perf record: Woken up 1 times to write data ]
> [ perf record: Captured and wrote 289.549 MB perf.data (~12650569 samples) ]
> 0x54b4828 [0]: failed to process type: 0
>
> No overload condition, no dropped chunks message - yet can't process events.

Applying just patches 1 and 17 from this series, I managed to:
[root@zoo ~]# cat /proc/loadavg
52.13 13.06 5.04 132/977 29522
[root@zoo ~]# perf sched record -m 4096 -- perf bench sched all
# Running sched/messaging benchmark...
# 20 sender and receiver processes per group
# 10 groups == 400 processes run
Total time: 0.692 [sec]
# Running sched/pipe benchmark...
# Executed 1000000 pipe operations between two processes
Total time: 13.007 [sec]
13.007582 usecs/op
76878 ops/sec
[ perf record: Woken up 105 times to write data ]
[ perf record: Captured and wrote 909.513 MB perf.data (~39737216 samples) ]
[root@zoo ~]# cat /proc/loadavg
80.30 22.82 8.49 132/968 31755
[root@zoo ~]# perf script | wc -l
8308492
[root@zoo ~]# perf script | head -10
perf 29897 [01] 1.29678: sched:sched_stat_runtime: comm=perf pid=29897 runtime=916836 [ns] vruntime=1322281938410 [ns]
perf 29897 [01] 1.29683: sched:sched_wakeup: comm=perf pid=29919 prio=120 success=1 target_cpu=001
perf 29897 [01] 1.29688: sched:sched_switch: prev_comm=perf prev_pid=29897 prev_prio=120 prev_state=R ==> next_comm=perf next_pid=29919 next_prio=120
perf 29919 [01] 1.29748: sched:sched_stat_runtime: comm=perf pid=29919 runtime=70430 [ns] vruntime=1322273008840 [ns]
cc1 27110 [03] 1.29748: sched:sched_stat_runtime: comm=cc1 pid=27110 runtime=992012 [ns] vruntime=702438664528 [ns]
cc1 29278 [00] 1.29748: sched:sched_stat_runtime: comm=cc1 pid=29278 runtime=979081 [ns] vruntime=583946650623 [ns]
cc1 27781 [02] 1.29749: sched:sched_stat_runtime: comm=cc1 pid=27781 runtime=989627 [ns] vruntime=714050072391 [ns]
cc1 29278 [00] 1.29756: sched:sched_switch: prev_comm=cc1 prev_pid=29278 prev_prio=120 prev_state=R ==> next_comm=cc1 next_pid=29128 next_prio=120
cc1 27110 [03] 1.29757: sched:sched_switch: prev_comm=cc1 prev_pid=27110 prev_prio=120 prev_state=R ==> next_comm=cc1 next_pid=29586 next_prio=120
perf 29919 [01] 1.29914: sched:sched_wakeup: comm=migration/1 pid=12 prio=0 success=1 target_cpu=001
[root@zoo ~]#

This was while doing a make -j128 allmodconfig on a 4-way machine.
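
If you want to replay this, the setup was roughly the two shells below, run side by side; the ~/git/linux path is just illustrative, everything else is exactly what was run above:

  # shell 1: background load from a parallel kernel build
  cd ~/git/linux && make -j128 allmodconfig

  # shell 2: record, then check that all recorded events can be processed
  perf sched record -m 4096 -- perf bench sched all
  perf script | wc -l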

David, can I have your Acked-by for just those two while the rest is debated?

Adrian, can I have yours as well?

- Arnaldo