Message-ID: <20140724151009.GD7831@kernel.org>
Date:	Thu, 24 Jul 2014 12:10:09 -0300
From:	Arnaldo Carvalho de Melo <acme@...nel.org>
To:	Jiri Olsa <jolsa@...hat.com>
Cc:	David Ahern <dsahern@...il.com>,
	Adrian Hunter <adrian.hunter@...el.com>,
	Andi Kleen <andi@...stfloor.org>, Jiri Olsa <jolsa@...nel.org>,
	linux-kernel@...r.kernel.org,
	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	Jean Pihet <jean.pihet@...aro.org>,
	Namhyung Kim <namhyung@...nel.org>,
	Paul Mackerras <paulus@...ba.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCHv3 00/19] perf tools: Factor ordered samples queue

On Thu, Jul 24, 2014 at 04:56:11PM +0200, Jiri Olsa wrote:
> On Thu, Jul 24, 2014 at 11:19:58AM -0300, Arnaldo Carvalho de Melo wrote:
> > On Mon, Jul 21, 2014 at 12:36:54PM -0600, David Ahern wrote:
> > > On 7/21/14, 12:23 PM, Adrian Hunter wrote:
> > > >On 21/07/2014 7:31 p.m., Andi Kleen wrote:
> > > >>Jiri Olsa <jolsa@...hat.com> writes:
> > > >>>
> > > >>>[jolsa@...-x3650m4-01 perf]$ sudo ./perf report --stdio
> > > >>>Timestamp below last timeslice flush
> > > >>>0x2276f58 [0x68]: failed to process type: 9
> > > >>
> > > >>FWIW we're seeing this frequently too.
> > > >
> > > >Jiri's example didn't work for me.  Do you have one?
> > > 
> > > $ perf sched record  -m 8192 -- perf bench sched all
> > > # Running sched/messaging benchmark...
> > > # 20 sender and receiver processes per group
> > > # 10 groups == 400 processes run
> > > 
> > >      Total time: 0.087 [sec]
> > > 
> > > # Running sched/pipe benchmark...
> > > # Executed 1000000 pipe operations between two processes
> > > 
> > >      Total time: 12.043 [sec]
> > > 
> > >       12.043779 usecs/op
> > >           83030 ops/sec
> > > 
> > > [ perf record: Woken up 1 times to write data ]
> > > [ perf record: Captured and wrote 289.549 MB perf.data (~12650569 samples) ]
> > > 0x54b4828 [0]: failed to process type: 0
> > > 
> > > No overload condition, no dropped chunks message - yet can't process events.
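(For context: both failures quoted above, "Timestamp below last
timeslice flush" and "failed to process type", come out of perf's
ordered-samples queue. What follows is a minimal standalone sketch of
the invariant that queue enforces, not the perf sources themselves:
once events have been flushed up to a timestamp, an event older than
that flush point can no longer be sorted into place and is rejected.)

/*
 * Minimal standalone sketch (names are illustrative): events are
 * buffered and periodically flushed in timestamp order; anything
 * arriving with a timestamp older than the last flush is rejected.
 */
#include <stdio.h>
#include <stdint.h>

struct ordered_queue {
	uint64_t last_flush;	/* everything <= this was already delivered */
};

/* Queue one event by timestamp; mirrors the check behind the warning. */
static int queue_event(struct ordered_queue *q, uint64_t timestamp)
{
	if (timestamp < q->last_flush) {
		fprintf(stderr, "Timestamp below last timeslice flush\n");
		return -1;	/* caller then reports "failed to process ..." */
	}
	/* ... insert into a list kept sorted by timestamp ... */
	return 0;
}

/* Flushing delivers queued events in order and advances last_flush. */
static void flush_up_to(struct ordered_queue *q, uint64_t timestamp)
{
	/* ... deliver all queued events with ts <= timestamp ... */
	q->last_flush = timestamp;
}

int main(void)
{
	struct ordered_queue q = { .last_flush = 0 };

	queue_event(&q, 100);
	flush_up_to(&q, 150);			/* a round boundary was processed */
	return queue_event(&q, 120) ? 1 : 0;	/* too old, rejected */
}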
> > 
> > Applying just patches 1 and 17 from this series I managed to:
> 
> ok with me..
> 
> I plan to spin a new version with better debug messages and without (as Adrian requested):
>   perf tools: Limit the ordered events queue by default to 100MB
> 
> Anyway, I think the rest, without the one above and:
>   perf tools: Add debug prints for ordered events queue
> 
> should be good to go..
> 
> Namhyung pointed out performance implications for patch 17:
>   perf tools: Always force PERF_RECORD_FINISHED_ROUND event

I haven't seen that comment; I just saw David, IIUC, having problems
using the whole patchkit. David?

My tests were more to counter what he said, i.e. that:

<quote David Ahern>
No overload condition, no dropped chunks message - yet can't process
events
</>

I see no overload condition and no dropped chunks message; I was
pounding the machine with a big load and could still process the
events, so all seems well.

And from the discussion I understood that patch 17 is the most
important one, as agreed by you and Adrian, right?

I.e. the other ones are refactorings to give it better naming and to
allow it to be used elsewhere (builtin-trace.c, whee), but they still
need some more polishing/testing, is that right?

- Arnaldo
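
(For context on patch 17: PERF_RECORD_FINISHED_ROUND is a header-only
event that 'perf record' appends to perf.data after draining all of the
mmap ring buffers in one pass; on the report side it marks the point up
to which the ordered queue may safely be sorted and flushed. Always
forcing one means even short sessions get a flush point. Below is a
minimal sketch of the idea, paraphrasing rather than quoting the
sources, with a stand-in header type and a hypothetical writer:)

#include <stdio.h>
#include <stdint.h>

struct event_header {		/* stand-in for struct perf_event_header */
	uint32_t type;
	uint16_t misc;
	uint16_t size;
};

#define REC_FINISHED_ROUND 68	/* PERF_RECORD_FINISHED_ROUND's value */

/* Hypothetical writer; in perf this role is played by record__write(). */
static int write_event(FILE *out, const void *buf, size_t size)
{
	return fwrite(buf, size, 1, out) == 1 ? 0 : -1;
}

/* After each full pass over the ring buffers, append the round marker. */
static int finish_round(FILE *out)
{
	struct event_header round = {
		.type = REC_FINISHED_ROUND,
		.size = sizeof(round),
	};
	return write_event(out, &round, sizeof(round));
}

int main(void)
{
	return finish_round(stdout);
}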
 
> which you seem to test.. but I don't see the comparison down there.. ?
> 
> thanks,
> jirka
> 
> > 
> > [root@zoo ~]# cat /proc/loadavg 
> > 52.13 13.06 5.04 132/977 29522
> > [root@zoo ~]# perf sched record  -m 4096 -- perf bench sched all
> > # Running sched/messaging benchmark...
> > # 20 sender and receiver processes per group
> > # 10 groups == 400 processes run
> > 
> >      Total time: 0.692 [sec]
> > 
> > # Running sched/pipe benchmark...
> > # Executed 1000000 pipe operations between two processes
> > 
> >      Total time: 13.007 [sec]
> > 
> >       13.007582 usecs/op
> >           76878 ops/sec
> > 
> > [ perf record: Woken up 105 times to write data ]
> > [ perf record: Captured and wrote 909.513 MB perf.data (~39737216 samples) ]
> > [root@zoo ~]# cat /proc/loadavg 
> > 80.30 22.82 8.49 132/968 31755
> > [root@zoo ~]# perf script | wc -l
> > 8308492
> > [root@zoo ~]# perf script | head -10
> >  perf 29897 [01] 1.29678: sched:sched_stat_runtime: comm=perf pid=29897 runtime=916836 [ns] vruntime=1322281938410 [ns]
> >  perf 29897 [01] 1.29683: sched:sched_wakeup: comm=perf pid=29919 prio=120 success=1 target_cpu=001
> >  perf 29897 [01] 1.29688: sched:sched_switch: prev_comm=perf prev_pid=29897 prev_prio=120 prev_state=R ==> next_comm=perf next_pid=29919 next_prio=120
> >  perf 29919 [01] 1.29748: sched:sched_stat_runtime: comm=perf pid=29919 runtime=70430 [ns] vruntime=1322273008840 [ns]
> >   cc1 27110 [03] 1.29748: sched:sched_stat_runtime: comm=cc1 pid=27110 runtime=992012 [ns] vruntime=702438664528 [ns]
> >   cc1 29278 [00] 1.29748: sched:sched_stat_runtime: comm=cc1 pid=29278 runtime=979081 [ns] vruntime=583946650623 [ns]
> >   cc1 27781 [02] 1.29749: sched:sched_stat_runtime: comm=cc1 pid=27781 runtime=989627 [ns] vruntime=714050072391 [ns]
> >   cc1 29278 [00] 1.29756: sched:sched_switch: prev_comm=cc1 prev_pid=29278 prev_prio=120 prev_state=R ==> next_comm=cc1 next_pid=29128 next_prio=120
> >   cc1 27110 [03] 1.29757: sched:sched_switch: prev_comm=cc1 prev_pid=27110 prev_prio=120 prev_state=R ==> next_comm=cc1 next_pid=29586 next_prio=120
> >  perf 29919 [01] 1.29914: sched:sched_wakeup: comm=migration/1 pid=12 prio=0 success=1 target_cpu=001
> > [root@zoo ~]#
> > 
> > While doing a make -j128 allmodconfig on a 4-way machine.
> > 
> > David, can I have your Acked-by for just those two while the rest is debated?
> > 
> > Adrian, can I have yours as well?
> > 
> > - Arnaldo
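
(And for the "Limit the ordered events queue by default to 100MB" patch
quoted near the top: the idea is to account the bytes held in the queue
and force a flush once a budget is exceeded, instead of letting the
queue grow without bound on a loaded machine. A rough sketch, with
illustrative names and the 100MB figure taken from the patch title:)

#include <stdbool.h>
#include <stddef.h>

#define QUEUE_MAX_BYTES (100ULL * 1024 * 1024)	/* default budget */

struct event_queue {
	unsigned long long queued_bytes;	/* memory currently buffered */
};

/* True if buffering 'size' more bytes would exceed the budget, in
 * which case the caller should flush the queue before queueing more. */
static bool queue_needs_flush(const struct event_queue *q, size_t size)
{
	return q->queued_bytes + size > QUEUE_MAX_BYTES;
}

int main(void)
{
	struct event_queue q = { .queued_bytes = 0 };

	while (!queue_needs_flush(&q, 4096))
		q.queued_bytes += 4096;	/* buffer until the budget is hit */
	/* ... flush here, then continue queueing ... */
	return 0;
}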