Message-ID: <3621613.kMnvz8Tm3d@milian-kdab2>
Date: Thu, 15 Jun 2017 10:46:16 +0200
From: Milian Wolff <milian.wolff@...b.com>
To: Ravi Bangoria <ravi.bangoria@...ux.vnet.ibm.com>
Cc: Mark Wielaard <mark@...mp.org>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-kernel@...r.kernel.org, acme@...nel.org,
"Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2] perf: libdw support for powerpc [ping]

On Tuesday, June 13, 2017 5:55:09 PM CEST Ravi Bangoria wrote:
> Hi Mark,
>
> On Tuesday 13 June 2017 05:14 PM, Mark Wielaard wrote:
> > I see the same on very short runs. But on a slightly longer run, even
> > just ls -lahR, which does some more work, I do see user backtraces.
> > They are still missing for some of the early samples, though. It is as
> > if there is a stack/memory address mismatch when the probe is "too
> > early" in ld.so.
> >
> > Could you do a test run on a program that does more work, to see
> > whether you never get any user stack traces, or whether they are only
> > missing for some specific probes?
>
> Thanks for checking. I tried a proper workload this time, but I still
> don't see any userspace callchain getting unwound.
>
> $ ./perf record --call-graph=dwarf -- zip -q -r temp.zip .
> [ perf record: Woken up 2891 times to write data ]
> [ perf record: Captured and wrote 723.290 MB perf.data (87934 samples) ]
>
>
> With libdw:
>
> $ LD_LIBRARY_PATH=/home/ravi/elfutils-git/usr/local/lib:\
> /home/ravi/elfutils-git/usr/local/lib/elfutils/:$LD_LIBRARY_PATH \
> ./perf script
>
> zip 16699 6857.354633: 37371 cycles:u:
>             ecedc xmon_core (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             8c4fc __hash_page_64K (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             83450 hash_preload (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             7cc34 update_mmu_cache (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>            330064 alloc_set_pte (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>            330efc do_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>            334580 __handle_mm_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>            335040 handle_mm_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             7bf94 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             7bec4 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             7be78 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>             1a4f8 handle_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>
> zip 16699 6857.354663: 300677 cycles:u:
>
> zip 16699 6857.354895: 584131 cycles:u:
>
> zip 16699 6857.355312: 589687 cycles:u:
>
> zip 16699 6857.355606: 560142 cycles:u:
Just a quick question: have you applied my recent patch?

commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
Author: Milian Wolff <milian.wolff@...b.com>
Date:   Thu Jun 1 23:00:21 2017 +0200

    perf report: Include partial stacks unwound with libdw

    So far the whole stack was thrown away when any error occurred
    before the maximum stack depth was unwound. This is actually a
    very common scenario, though, and the stacks unwound up to that
    point are still interesting. Keeping them removes a large chunk
    of the differences seen when comparing perf script output for
    libunwind- and libdw-based unwinding.

If not, that could explain the issue you are seeing.
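In case it is useful, here is a minimal sketch of the idea behind that
change, in the spirit of tools/perf/util/unwind-libdw.c. It is not the
literal perf diff: unwind_thread and frames_collected are illustrative
names, while dwfl_getthread_frames and DWARF_CB_ABORT are real
libdwfl/libdw API.

#include <sys/types.h>
#include <elfutils/libdw.h>
#include <elfutils/libdwfl.h>

/*
 * Illustrative helper, not the actual perf code: unwind one thread and
 * keep whatever frames were gathered, even if the walk stopped early.
 */
static int unwind_thread(Dwfl *dwfl, pid_t tid,
                         int (*frame_cb)(Dwfl_Frame *state, void *arg),
                         void *arg, int frames_collected)
{
        int err = dwfl_getthread_frames(dwfl, tid, frame_cb, arg);

        /*
         * Previously, any non-zero return threw the whole sample away.
         * Now an error after at least one successfully unwound frame is
         * reported as a partial, but still usable, stack. (frame_cb
         * returning DWARF_CB_ABORT once the maximum stack depth is
         * reached also surfaces as an error here.)
         */
        if (err && frames_collected > 0)
                err = 0;

        return err;
}

The point is simply that an early unwind error no longer discards the
frames already gathered, which is why samples that used to come out
empty can now show partial user stacks.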
Cheers
--
Milian Wolff | milian.wolff@...b.com | Software Engineer
KDAB (Deutschland) GmbH&Co KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts