Message-Id: <20221125122912.54709-1-sunhao.th@gmail.com>
Date:   Fri, 25 Nov 2022 20:29:09 +0800
From:   Hao Sun <sunhao.th@...il.com>
To:     bpf@...r.kernel.org
Cc:     ast@...nel.org, daniel@...earbox.net, john.fastabend@...il.com,
        andrii@...nel.org, martin.lau@...ux.dev, song@...nel.org,
        yhs@...com, kpsingh@...nel.org, sdf@...gle.com, haoluo@...gle.com,
        jolsa@...nel.org, davem@...emloft.net,
        linux-kernel@...r.kernel.org, Hao Sun <sunhao.th@...il.com>
Subject: [PATCH bpf-next v3 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs

The verifier sometimes makes mistakes[1][2] that can be exploited to
achieve arbitrary read/write. Syzbot continuously tests bpf and can
find memory issues in the bpf syscalls, but it can hardly find
mischecking bugs in the verifier itself. For that we need runtime
checks, similar to KASAN, inside BPF programs. This patch series
implements address sanitizing in jited BPF progs for testing purposes,
so that tools like syzbot can automatically find interesting verifier
bugs: if a generated BPF program bypasses the verifier but contains a
memory issue, executing it triggers the sanitizer.

The idea is to dispatch the read/write addresses of a BPF program to
kernel functions that are instrumented by KASAN, i.e. indirect
checking. Indirect checking is adopted because it is much simpler;
emitting direct checks the way compilers do would make the jit
considerably more complex. The main step, performed during
bpf_misc_fixup(), is: back up all the scratch regs to an extended area
of the BPF prog stack, store the access address in R1, and then insert
a call to the checking function before the load or store insn. In this
mode the stack size of BPF progs is extended by 64 bytes, to back up
R1~R5 so the checking funcs cannot corrupt the regs state. An extra
Kconfig option gates all of this, so normal use cases are not affected
at all.
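
To be concrete, the check funcs are just kernel functions that
dereference the address handed to them in R1, so KASAN's compiler
instrumentation does the actual checking. Below is a minimal sketch of
a load-check helper; the function name, the Kconfig symbol and the
fixed 8-byte width are assumptions for illustration, not the exact
code in the patches:

    #ifdef CONFIG_BPF_PROG_KASAN
    /* Called from the jited prog with the access address in R1.  The
     * dereference below is compiled with KASAN instrumentation, so an
     * out-of-bounds or use-after-free address triggers a KASAN report
     * instead of a silent bad access inside the BPF prog itself. */
    BPF_CALL_1(bpf_asan_load8, u64 *, addr)
    {
            return *addr;
    }
    #endif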

Also, not all ldx/stx/st insns are instrumented. Insns rewritten by
other fixup or conversion passes that use BPF_REG_AX are skipped,
because that would conflict with our instrumentation; insns whose
access address is based on R10 are also skipped because they are
trivial for the verifier to check. A sketch of the skip condition
follows.
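
In the fixup loop, the skip condition amounts to something like the
following (again just a sketch under the assumptions above, not the
patch's exact code):

            /* Skip insns already rewritten with BPF_REG_AX by other
             * passes, and stack accesses via R10, which the verifier
             * checks exactly. */
            if (insn->dst_reg == BPF_REG_AX || insn->src_reg == BPF_REG_AX)
                    continue;
            if (insn->dst_reg == BPF_REG_10 || insn->src_reg == BPF_REG_10)
                    continue;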

Patch 1 sanitizes st/stx insns, Patch 2 sanitizes ldx insns, and
Patch 3 adds selftests covering the instrumentation in each possible
case; all new and existing verifier selftests pass. In addition, a BPF
prog that exploits CVE-2022-23222 to achieve an OOB read is
provided[3], and it is reliably caught by this patch series.

[1] http://bit.do/CVE-2021-3490
[2] http://bit.do/CVE-2022-23222
[3] OOB-read: https://pastebin.com/raw/Ee1Cw492

v1 -> v2:
        remove changes to the JIT completely; back up regs to the extended stack.
v2 -> v3:
        fix missing-prototypes warning reported by the kernel test robot.
        simplify register backup and rewrite the corresponding selftests.

Hao Sun (3):
  bpf: Sanitize STX/ST in jited BPF progs with KASAN
  bpf: Sanitize LDX in jited BPF progs with KASAN
  selftests/bpf: Add tests for LDX/STX/ST sanitize

 kernel/bpf/Kconfig                            |  13 +
 kernel/bpf/verifier.c                         | 173 ++++++++++
 .../selftests/bpf/verifier/sanitize_st_ldx.c  | 317 ++++++++++++++++++
 3 files changed, 503 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c


base-commit: 2b3e8f6f5b939ceeb2e097339bf78ebaaf11dfe9
-- 
2.38.1
