Message-ID: <c2fbefd2-6137-712e-47d4-200ef4d74775@fb.com>
Date:   Sat, 9 May 2020 10:23:53 -0700
From:   Yonghong Song <yhs@...com>
To:     Andrii Nakryiko <andriin@...com>, <bpf@...r.kernel.org>,
        <netdev@...r.kernel.org>, <ast@...com>, <daniel@...earbox.net>
CC:     <andrii.nakryiko@...il.com>, <kernel-team@...com>,
        John Fastabend <john.fastabend@...il.com>
Subject: Re: [PATCH v2 bpf-next 2/3] selftest/bpf: fmod_ret prog and implement
 test_overhead as part of bench



On 5/8/20 4:20 PM, Andrii Nakryiko wrote:
> Add an fmod_ret BPF program to the existing test_overhead selftest. Also
> re-implement the user-space benchmarking part on top of the benchmark runner
> to compare results. Results with ./bench are consistently somewhat lower than
> test_overhead's, but the relative performance of the various BPF program types
> stays consistent (e.g., kretprobe is noticeably slower).
> 
> The run_bench_rename.sh script (in the benchs/ directory) was used to produce
> the following numbers:
> 
>    base      :    3.975 ± 0.065M/s
>    kprobe    :    3.268 ± 0.095M/s
>    kretprobe :    2.496 ± 0.040M/s
>    rawtp     :    3.899 ± 0.078M/s
>    fentry    :    3.836 ± 0.049M/s
>    fexit     :    3.660 ± 0.082M/s
>    fmodret   :    3.776 ± 0.033M/s
> 
> While running test_overhead gives:
> 
>    task_rename base        4457K events per sec
>    task_rename kprobe      3849K events per sec
>    task_rename kretprobe   2729K events per sec
>    task_rename raw_tp      4506K events per sec
>    task_rename fentry      4381K events per sec
>    task_rename fexit       4349K events per sec
>    task_rename fmod_ret    4130K events per sec
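
For context, both sets of numbers count task renames per second. Below is a
rough, hypothetical sketch of the kind of producer loop behind such a workload
(not the patch's benchs/bench_rename.c); it assumes the events are generated
via prctl(PR_SET_NAME), which ends up in __set_task_comm() where the
test_overhead programs attach:

/* Hypothetical sketch, not the patch's benchs/bench_rename.c: rename the
 * current task in a tight loop and count how many renames were issued. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>

static atomic_long hits; /* a reporter thread would sample this once a second */

int main(void)
{
	char comm[] = "test_overhead";

	for (;;) {
		/* each rename passes through __set_task_comm(), where the
		 * kprobe/fentry/fexit/fmod_ret/raw_tp programs observe it */
		if (prctl(PR_SET_NAME, comm, 0, 0, 0) < 0) {
			perror("prctl(PR_SET_NAME)");
			exit(1);
		}
		atomic_fetch_add(&hits, 1);
	}
	return 0;
}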

Do you know where the overhead is, and how we could provide options in
bench to reduce it so we can achieve similar numbers?
For benchmarking, sometimes you really want to see the "true"
potential of a particular implementation.
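
Just to illustrate (hypothetically, this is not the bench code) where such a
gap could hide: if the runner does any per-event bookkeeping, e.g. bumping a
shared atomic hit counter that a reporter thread samples, that cost is part of
every measured event, while a test_overhead-style loop just times a fixed
number of iterations with nothing else in the hot path:

/* Hypothetical comparison, not code from this patch: fixed-iteration timing
 * versus the same loop with per-event counting added. */
#include <stdatomic.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <time.h>

#define N 1000000L

static atomic_long hits;

static double elapsed_sec(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void)
{
	struct timespec t0, t1;
	long i;

	/* test_overhead style: time N renames, nothing else in the hot loop */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < N; i++)
		prctl(PR_SET_NAME, "bench", 0, 0, 0);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("fixed-iteration: %.3f M/s\n", N / elapsed_sec(&t0, &t1) / 1e6);

	/* generic-runner style: same loop plus a shared atomic counter that a
	 * reporter thread would normally sample every second */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < N; i++) {
		prctl(PR_SET_NAME, "bench", 0, 0, 0);
		atomic_fetch_add(&hits, 1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("with counting:   %.3f M/s\n", N / elapsed_sec(&t0, &t1) / 1e6);
	return 0;
}

If something like that (or the extra producer/consumer thread plumbing) is
where the delta comes from, an option to skip per-event accounting for "raw
throughput" runs might bring the two sets of numbers closer together.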

> 
> Acked-by: John Fastabend <john.fastabend@...il.com>
> Signed-off-by: Andrii Nakryiko <andriin@...com>
> ---
>   tools/testing/selftests/bpf/Makefile          |   4 +-
>   tools/testing/selftests/bpf/bench.c           |  14 ++
>   .../selftests/bpf/benchs/bench_rename.c       | 195 ++++++++++++++++++
>   .../selftests/bpf/benchs/run_bench_rename.sh  |   9 +
>   .../selftests/bpf/prog_tests/test_overhead.c  |  14 +-
>   .../selftests/bpf/progs/test_overhead.c       |   6 +
>   6 files changed, 240 insertions(+), 2 deletions(-)
>   create mode 100644 tools/testing/selftests/bpf/benchs/bench_rename.c
>   create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_rename.sh
> 
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index 289fffbf975e..29a02abf81a3 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -409,10 +409,12 @@ $(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ)
>   $(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h
>   	$(call msg,CC,,$@)
>   	$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
> +$(OUTPUT)/bench_rename.o: $(OUTPUT)/test_overhead.skel.h
>   $(OUTPUT)/bench.o: bench.h
>   $(OUTPUT)/bench: LDLIBS += -lm
>   $(OUTPUT)/bench: $(OUTPUT)/bench.o \
> -		 $(OUTPUT)/bench_count.o
> +		 $(OUTPUT)/bench_count.o \
> +		 $(OUTPUT)/bench_rename.o
>   	$(call msg,BINARY,,$@)
>   	$(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
>   
[...]
