Date:   Wed, 15 Mar 2017 18:26:38 -0700
From:   Alexei Starovoitov <ast@...com>
To:     "David S . Miller" <davem@...emloft.net>
CC:     Daniel Borkmann <daniel@...earbox.net>,
        Fengguang Wu <fengguang.wu@...el.com>,
        <netdev@...r.kernel.org>, <kernel-team@...com>
Subject: [PATCH net-next 0/6] bpf: inline bpf_map_lookup_elem()

bpf_map_lookup_elem() is one of the most frequently used helper functions.
Improve JITed program performance by inlining this helper.
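
The array case shows why this pays off: the inlined lookup is just a
bounds check plus pointer arithmetic, so the helper call overhead was
most of the cost. A C-level sketch of what the generated instructions
compute (the struct and function names below are illustrative stand-ins,
not this patch's code; the patch itself emits BPF instructions from the
verifier rather than C):

#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in for the kernel's array map layout. */
struct array_map_sketch {
        uint32_t max_entries;
        uint32_t value_size;
        uint8_t  value[];       /* elements stored contiguously */
};

/* Element stride: value_size rounded up to 8 bytes. */
static inline uint64_t elem_stride(uint32_t value_size)
{
        return (value_size + 7) & ~(uint64_t)7;
}

/* What the inlined lookup computes: a bounds check plus address
 * arithmetic, with no function call and no hashing. */
static void *inlined_array_lookup(struct array_map_sketch *map,
                                  uint32_t index)
{
        if (index >= map->max_entries)
                return NULL;
        return map->value + (uint64_t)index * elem_stride(map->value_size);
}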

bpf_map_type	before  after
hash		58M	74M
array		174M	280M

The values are the number of lookups per second under ideal conditions,
measured by the micro-benchmark added in patch 6.
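
For reference, a measurement of this shape can be driven by something as
small as a kprobe program doing one map lookup per syscall. A hedged
sketch in the samples/bpf style of the time (illustrative only; the map
name, attach point, and sizes are made up, not the code patch 6 adds):

#include <uapi/linux/bpf.h>
#include <uapi/linux/ptrace.h>
#include <linux/version.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") bench_map = {
        .type = BPF_MAP_TYPE_ARRAY,
        .key_size = sizeof(__u32),
        .value_size = sizeof(long),
        .max_entries = 1024,
};

SEC("kprobe/sys_getuid")
int bench_lookup(struct pt_regs *ctx)
{
        __u32 key = 0;
        long *value;

        /* this helper call is what the series teaches the verifier
         * to inline */
        value = bpf_map_lookup_elem(&bench_map, &key);
        if (value)
                __sync_fetch_and_add(value, 1);
        return 0;
}

char _license[] SEC("license") = "GPL";
__u32 _version SEC("version") = LINUX_VERSION_CODE;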

The 'perf report' for the HASH map type:
before:
    54.23%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    14.24%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     8.84%  map_perf_test  [kernel.kallsyms]  [k] htab_map_lookup_elem
     5.93%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
     2.30%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.49%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
    60.03%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    18.07%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     2.91%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.94%  map_perf_test  [kernel.kallsyms]  [k] _einittext
     1.90%  map_perf_test  [kernel.kallsyms]  [k] __audit_syscall_exit
     1.72%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

So after inlining, the cost of htab_map_lookup_elem() and
bpf_map_lookup_elem() is gone: both symbols have disappeared from the
profile.

'per-cpu' and 'lru' map types can be optimized similarly in the future.

Note that sparse will complain that bpf is addictive ;)
kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs
It's not a new warning; it just shows up in new places.

Alexei Starovoitov (6):
  bpf: move fixup_bpf_calls() function
  bpf: refactor fixup_bpf_calls()
  bpf: adjust insn_aux_data when patching insns
  bpf: add helper inlining infra and optimize map_array lookup
  bpf: inline htab_map_lookup_elem()
  samples/bpf: add map_lookup microbenchmark
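
The user-space half of a benchmark like the one in patch 6 only needs to
hammer the traced syscall in a tight loop and divide by elapsed time; a
hedged sketch of that measurement loop (illustrative, not what
map_perf_test_user.c actually does):

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define LOOPS 10000000L

int main(void)
{
        struct timespec t0, t1;
        long i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++)
                syscall(SYS_getuid);   /* each call fires the attached kprobe */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) +
                      (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f lookups/sec\n", LOOPS / secs);
        return 0;
}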

 include/linux/bpf.h              |   1 +
 include/linux/bpf_verifier.h     |   5 +-
 include/linux/filter.h           |  10 +++
 kernel/bpf/arraymap.c            |  29 +++++++++
 kernel/bpf/hashtab.c             |  31 +++++++++-
 kernel/bpf/syscall.c             |  56 -----------------
 kernel/bpf/verifier.c            | 129 ++++++++++++++++++++++++++++++++++++---
 samples/bpf/map_perf_test_kern.c |  33 ++++++++++
 samples/bpf/map_perf_test_user.c |  32 ++++++++++
 9 files changed, 261 insertions(+), 65 deletions(-)

-- 
2.8.0
