Message-ID: <CAEf4BzbUh-Q=7a0cyxWm+=DA9hhovpLRcBGsq2ocXoCWpC2SUA@mail.gmail.com>
Date: Mon, 29 Jun 2020 12:25:29 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Song Liu <songliubraving@...com>
Cc: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>,
john fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>
Subject: Re: [PATCH v4 bpf-next 0/4] bpf: introduce bpf_get_task_stack()

On Mon, Jun 29, 2020 at 11:54 AM Song Liu <songliubraving@...com> wrote:
>
> This set introduces a new helper, bpf_get_task_stack(). The primary use case
> is to dump the equivalent of /proc/*/stack for all tasks to a seq_file via
> bpf_iter__task.
>
> A few different approaches have been explored and compared:
>
> 1. A simple wrapper around stack_trace_save_tsk(), as v1 [1].
>
> This approach introduces new syntax, which is different from the existing
> helper bpf_get_stack(), so it is not ideal.
>
> 2. Extend get_perf_callchain() to support "task" as argument.
>
> This approach reuses most of bpf_get_stack(). However, extending
> get_perf_callchain() requires non-trivial changes to architecture-specific
> code, which is error prone.
>
> 3. The current approach (since v2): leverage most of the existing
> bpf_get_stack() logic and use stack_trace_save_tsk() to handle the
> architecture-specific parts.
>
> [1] https://lore.kernel.org/netdev/20200623070802.2310018-1-songliubraving@fb.com/
>
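To make the use case concrete, here is a rough sketch of the kind of
iterator program this enables. It is illustrative only (not the selftest
from patch 4/4) and assumes the helper signature described in patch 2/4,
the %pB support from patch 3/4, and that the selftests' bpf_iter.h
provides struct bpf_iter__task and BPF_SEQ_PRINTF; MAX_STACK_DEPTH and
dump_task_stack are made-up names.

/* Illustrative sketch only -- not the actual selftest from patch 4/4. */
#include "bpf_iter.h"            /* assumed: bpf_iter__task, BPF_SEQ_PRINTF */
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

#define MAX_STACK_DEPTH 64       /* illustrative depth */

/* one u64 per stack frame, filled by bpf_get_task_stack() */
__u64 entries[MAX_STACK_DEPTH];

SEC("iter/task")
int dump_task_stack(struct bpf_iter__task *ctx)
{
    struct seq_file *seq = ctx->meta->seq;
    struct task_struct *task = ctx->task;
    long retlen, i;

    if (!task)
        return 0;

    /* returns the number of bytes written to entries[], or a negative error */
    retlen = bpf_get_task_stack(task, entries, sizeof(entries), 0);
    if (retlen < 0)
        return 0;

    BPF_SEQ_PRINTF(seq, "pid: %8u num_entries: %8lu\n",
                   task->pid, retlen / sizeof(__u64));

    for (i = 0; i < MAX_STACK_DEPTH; i++) {
        if (retlen <= i * sizeof(__u64))
            break;
        /* %pB symbolizes the saved return address (patch 3/4) */
        BPF_SEQ_PRINTF(seq, "[<0>] %pB\n", (void *)entries[i]);
    }

    return 0;
}

Each read of the resulting seq_file then produces one /proc-style stack
dump per task.
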
> Changes v3 => v4:
> 1. Simplify the selftests with bpf_iter.h. (Yonghong)
> 2. Add example output to commit log of 4/4. (Yonghong)
>
> Changes v2 => v3:
> 1. Rebase on top of bpf-next. (Yonghong)
> 2. Sanitize get_callchain_entry(). (Peter)
> 3. Use has_callchain_buf for bpf_get_task_stack. (Andrii)
> 4. Other small clean up. (Yonghong, Andrii).
>
> Changes v1 => v2:
> 1. Reuse most of bpf_get_stack() logic. (Andrii)
> 2. Fix unsigned long vs. u64 mismatch for 32-bit systems. (Yonghong)
> 3. Add %pB support in bpf_trace_printk(). (Daniel)
> 4. Fix the buffer size to be in bytes.
>
> Song Liu (4):
> perf: expose get/put_callchain_entry()
> bpf: introduce helper bpf_get_task_stack()
> bpf: allow %pB in bpf_seq_printf() and bpf_trace_printk()
> selftests/bpf: add bpf_iter test with bpf_get_task_stack()
>
> include/linux/bpf.h | 1 +
> include/linux/perf_event.h | 2 +
> include/uapi/linux/bpf.h | 36 ++++++++-
> kernel/bpf/stackmap.c | 75 ++++++++++++++++++-
> kernel/bpf/verifier.c | 4 +-
> kernel/events/callchain.c | 13 ++--
> kernel/trace/bpf_trace.c | 12 ++-
> scripts/bpf_helpers_doc.py | 2 +
> tools/include/uapi/linux/bpf.h | 36 ++++++++-
> .../selftests/bpf/prog_tests/bpf_iter.c | 17 +++++
> .../selftests/bpf/progs/bpf_iter_task_stack.c | 37 +++++++++
> 11 files changed, 220 insertions(+), 15 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
>
> --
> 2.24.1

Thanks for working on this! This will enable a whole new set of tools
and applications.
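
To make that concrete, the consumer side can stay very small: the sketch
below assumes the standard libbpf bpf_iter flow (bpf_program__attach_iter()
plus bpf_iter_create()) and uses illustrative object/program names, not
anything from this series.

/* Sketch of a user-space reader; file and program names are illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
    struct bpf_program *prog;
    struct bpf_object *obj;
    struct bpf_link *link;
    char buf[4096];
    int iter_fd;
    ssize_t n;

    obj = bpf_object__open_file("task_stack_dump.bpf.o", NULL);
    if (libbpf_get_error(obj) || bpf_object__load(obj))
        return 1;

    prog = bpf_object__find_program_by_name(obj, "dump_task_stack");
    if (!prog)
        return 1;

    /* attach the iter/task program ... */
    link = bpf_program__attach_iter(prog, NULL);
    if (libbpf_get_error(link))
        return 1;

    /* ... and create a seq_file-backed fd that can simply be read() */
    iter_fd = bpf_iter_create(bpf_link__fd(link));
    if (iter_fd < 0)
        return 1;

    while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);

    close(iter_fd);
    bpf_link__destroy(link);
    bpf_object__close(obj);
    return 0;
}

The same iterator can also be pinned in bpffs and read with plain cat.
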
Acked-by: Andrii Nakryiko <andriin@...com>