Message-ID: <20180429155506.opzecjjgmtswu24k@ast-mbp>
Date: Sun, 29 Apr 2018 08:55:08 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Yonghong Song <yhs@...com>
Cc: ast@...com, daniel@...earbox.net, netdev@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH bpf-next v9 00/10] bpf: add bpf_get_stack helper
On Sat, Apr 28, 2018 at 10:28:06PM -0700, Yonghong Song wrote:
> Currently, the stackmap and the bpf_get_stackid helper are
> provided for bpf programs to get stack traces. This approach
> has a limitation, though: if two stack traces have the same
> hash, only one is stored in the stackmap table, regardless of
> whether BPF_F_REUSE_STACKID is specified, so some stack
> traces may be missing from the user's perspective.
>
> This patch set implements a new helper, bpf_get_stack, which
> sends stack traces directly to the bpf program. The bpf
> program can then see all stack traces, and can do in-kernel
> processing or send the stack traces to user space through a
> shared map or bpf_perf_event_output.
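
[Editor's note: for readers unfamiliar with the new helper, below is a minimal
sketch of a tracing program calling it. This snippet is not part of the patch
set: the section name follows the raw_syscalls/sys_enter tracepoint used by
the tests in this series, the buffer size and program name are illustrative,
and it assumes libbpf's bpf_helpers.h and compilation with clang -target bpf.]

```c
/* Illustrative sketch only -- not taken from this patch set. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define STACK_BUF_SIZE (64 * sizeof(__u64))

SEC("tracepoint/raw_syscalls/sys_enter")
int dump_kernel_stack(void *ctx)
{
	__u64 buf[64] = {};
	long len;

	/* Copy up to STACK_BUF_SIZE bytes of the kernel stack trace
	 * into buf; returns the number of bytes written, or a negative
	 * error. Pass BPF_F_USER_STACK in flags for the user stack.
	 */
	len = bpf_get_stack(ctx, buf, STACK_BUF_SIZE, 0);
	if (len < 0)
		return 0;

	/* buf now holds len bytes of instruction addresses that the
	 * program can inspect in-kernel, or ship to user space via
	 * bpf_perf_event_output().
	 */
	return 0;
}

char _license[] SEC("license") = "GPL";
```

Unlike bpf_get_stackid, no hashing or deduplication happens here: every
invocation sees its own complete trace.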
>
> Patches #1 and #2 implement the core kernel support.
> Patch #3 removes two never-hit branches in the verifier.
> Patches #4 and #5 are two verifier improvements that make
> bpf programming easier. Patch #6 syncs the new helper
> to the tools headers. Patch #7 moves the perf_event polling
> code and the ksym lookup code from samples/bpf to
> tools/testing/selftests/bpf. Patch #8 adds a verifier
> test in tools/bpf for the new verifier change.
> Patches #9 and #10 add tests for raw tracepoint and
> tracepoint programs respectively.
>
> Changelogs:
> v8 -> v9:
> . make the function perf_event_mmap (in trace_helpers.c) extern
> to decouple perf_event_mmap and perf_event_poller.
> . add jit-enabled handling for kernel stack verification
> in Patch #9. Since we do not have a good way to
> verify the jited kernel stack, just return true if
> the kernel stack is not empty.
> . in Patch #9, use raw_syscalls/sys_enter instead of
> sched/sched_switch, and remove the call to the cmd
> "task 1 dd if=/dev/zero of=/dev/null", which left
> a dangling process after the program exited.
>
> v7 -> v8:
> . rebase on top of latest bpf-next
> . simplify BPF_ARSH dst_reg->smin_val/smax_value tracking
> . rewrite the description of bpf_get_stack() in uapi bpf.h
> based on new format.
> v6 -> v7:
> . do the perf callchain buffer allocation inside the
> verifier, so that if prog->has_callchain_buf is set,
> it is guaranteed that the buffer has been allocated.
> . change the condition "trace_nr <= skip" to "trace_nr < skip"
> so that a zero-size buffer returns 0 instead of -EFAULT.
> v5 -> v6:
> . after refining the return register's smax_value and umax_value
> for the helpers bpf_get_stack and bpf_probe_read_str, the
> bounds and var_off of the return register are further refined.
> . added missing commit message for tools header sync commit.
> . removed one unnecessary empty line.
> v4 -> v5:
> . relied on dst_reg->var_off to refine umin_val/umax_val
> in the verifier's BPF_ARSH value range tracking,
> as suggested by Edward.
> v3 -> v4:
> . fixed a bug where the meta ptr is set to NULL in check_func_arg.
> . introduced tnum_arshift and added detailed comments on
> the underlying implementation.
> . avoided using a VLA in tools/bpf test_progs.
> v2 -> v3:
> . used meta to track helper memory size argument
> . implemented range checking for ARSH in verifier
> . moved perf event polling and ksym related functions
> from samples/bpf to tools/bpf
> . added test to compare build id's between bpf_get_stackid
> and bpf_get_stack
> v1 -> v2:
> . fixed compilation error when CONFIG_PERF_EVENTS is not enabled
Applied, Thanks Yonghong.