Message-ID: <ab047883-69f6-1175-153f-5ad9462c6389@fb.com>
Date:   Wed, 22 May 2019 14:49:07 +0000
From:   Alexei Starovoitov <ast@...com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Kairui Song <kasong@...hat.com>
CC:     Song Liu <songliubraving@...com>,
        lkml <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>
Subject: Re: Getting empty callchain from perf_callchain_kernel()

On 5/22/19 7:02 AM, Peter Zijlstra wrote:
> 
>> If the unwinder could trace back through the bpf func call then there
>> will be no such problem.
> 
> Why couldn't it trace back through the bpf stuff? And how can we fix
> that?

Most of the time there is no 'tracing through bpf stuff'.
The bpf infrastructure preserves the 'pt_regs *' that was collected
before any bpf code starts executing.
The reason is that bpf can be executed either via the interpreter
or directly when JITed.
In both cases the collected stack traces should be the same;
otherwise it confuses users, and they cannot compensate for such
a difference.

The only exception is raw_tracepoint, since it's the most
minimalistic way of calling bpf: the kernel side doesn't do
anything before calling into bpf.
Only in that case does the bpf side have to call
perf_fetch_caller_regs() itself.
See bpf_get_stackid_raw_tp().
But this test case actually works!
It's covered by prog_tests/stacktrace_map_raw_tp.c and
it passes.
The one that is broken is prog_tests/stacktrace_map.c.
There we attach bpf to a standard tracepoint, where the kernel
is supposed to collect pt_regs before calling into bpf.
That's what bpf_get_stackid_tp() does: it passes pt_regs (collected
before any bpf ran) into bpf_get_stackid(), which calls
get_perf_callchain().
Same thing with kprobes and uprobes.

