Date:	Tue, 14 Oct 2014 11:09:58 +0200
From:	Jiri Olsa <>
To:	Arnaldo Carvalho de Melo <>
Cc:	Ingo Molnar <>,
	Waiman Long <>,
	Adrian Hunter <>,
	Don Zickus <>,
	Douglas Hatch <>,
	Ingo Molnar <>, Jiri Olsa <>,
	Namhyung Kim <>,
	Paul Mackerras <>,
	Peter Zijlstra <>,
	Scott J Norton <>,
	Arnaldo Carvalho de Melo <>
Subject: Re: [PATCH 6/8] perf symbols: Improve DSO long names lookup speed
 with rbtree

On Wed, Oct 01, 2014 at 04:50:41PM -0300, Arnaldo Carvalho de Melo wrote:
> From: Waiman Long <>
> With a workload that spawns and destroys many threads and processes, it
> was found that perf-mem could take a long time to post-process the perf
> data after the target workload had completed its operation.
> The performance bottleneck was found to be the lookup and insertion of
> the new DSO structures (thousands of them in this case).

this change makes some tests segfault (backtrace below), but only if I
compile without DEBUG; when I revert this commit, I can no longer reproduce it..


(gdb) set follow-fork-mode child
(gdb) r test 31
Starting program: /home/jolsa/ test 31
warning: section  not found in /usr/lib/debug/lib/modules/3.16.3-200.fc20.x86_64/vdso/
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
31: Test output sorting of hist entries                    :[New process 15477]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7b9d7c0 (LWP 15477)]
__strcmp_ssse3 () at ../sysdeps/x86_64/strcmp.S:210
210             movlpd  (%rsi), %xmm2
(gdb) bt
#0  __strcmp_ssse3 () at ../sysdeps/x86_64/strcmp.S:210
#1  0x0000000000477967 in dso__findlink_by_longname (name=<optimized out>, dso=0x0, root=0x7fffffffdbf0)
    at util/dso.c:674
#2  dso__find_by_longname (name=0x7fffffffcae8 "perf", root=0x7fffffffdbf0) at util/dso.c:712
#3  dsos__find (cmp_short=false, name=0x7fffffffcae8 "perf", dsos=0x7fffffffdbe0) at util/dso.c:935
#4  __dsos__findnew (dsos=dsos@...ry=0x7fffffffdbe0, name=name@...ry=0x7fffffffcae8 "perf") at util/dso.c:940
#5  0x00000000004915d9 in map__new (machine=machine@...ry=0x7fffffffdb90, start=4194304, len=1048576, pgoff=0, 
    pid=<optimized out>, d_maj=d_maj@...ry=0, d_min=d_min@...ry=0, ino=ino@...ry=0, ino_gen=ino_gen@...ry=0, 
    prot=prot@...ry=0, flags=flags@...ry=0, filename=filename@...ry=0x7fffffffcae8 "perf", type=MAP__FUNCTION, 
    thread=thread@...ry=0x90d1f0) at util/map.c:180
#6  0x00000000004900f4 in machine__process_mmap_event (machine=machine@...ry=0x7fffffffdb90, 
    event=event@...ry=0x7fffffffcac0, sample=sample@...ry=0x0) at util/machine.c:1182
#7  0x00000000004d12bb in setup_fake_machine (machines=machines@...ry=0x7fffffffdb90)
    at tests/hists_common.c:116
#8  0x00000000004d4478 in test__hists_output () at tests/hists_output.c:600
#9  0x0000000000448fe4 in run_test (test=0x8166a0 <tests+480>) at tests/builtin-test.c:210
#10 __cmd_test (skiplist=0x0, argv=0x7fffffffe2d0, argc=1) at tests/builtin-test.c:255
#11 cmd_test (argc=1, argv=0x7fffffffe2d0, prefix=<optimized out>) at tests/builtin-test.c:320
#12 0x000000000041c8f5 in run_builtin (p=p@...ry=0x814fc0 <commands+480>, argc=argc@...ry=2, 
    argv=argv@...ry=0x7fffffffe2d0) at perf.c:331
#13 0x000000000041c110 in handle_internal_command (argv=0x7fffffffe2d0, argc=2) at perf.c:390
#14 run_argv (argv=0x7fffffffe050, argcp=0x7fffffffe05c) at perf.c:434
#15 main (argc=2, argv=0x7fffffffe2d0) at perf.c:549
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to
More majordomo info at
Please read the FAQ at
