Message-Id: <20220108051121.28632-1-yichun@openresty.com>
Date:   Fri,  7 Jan 2022 21:11:21 -0800
From:   "Yichun Zhang (agentzh)" <yichun@...nresty.com>
To:     yichun@...nresty.com
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH] bpf: core: Fix the call insn's offset s32 -> s16 truncation

When preparing a BPF-to-BPF call for the interpreter,
bpf_patch_call_args() truncates the CALL instruction's 32-bit jump
offset (insn->imm) into the 16-bit insn->off field. Large BPF programs
run by the interpreter often hit this truncation and misbehave, since
execution jumps to the wrong destination instruction.
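A minimal user-space sketch (not part of this patch; 40000 is just a
hypothetical offset larger than S16_MAX) of what the truncation does:

#include <stdio.h>

int main(void)
{
	int imm = 40000;         /* hypothetical s32 call offset, > S16_MAX */
	short off = (short)imm;  /* the old "insn->off = (s16) insn->imm"   */

	/* Prints "imm = 40000, (s16)imm = -25536": the offset wrapped. */
	printf("imm = %d, (s16)imm = %d\n", imm, off);
	return 0;
}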

The BPF JIT compiler does not have this bug.

Fix the interpreter path by keeping the 32-bit call offset in insn->imm
and storing the interpreter index (derived from the stack depth) in the
16-bit insn->off instead, so nothing is truncated.

Fixes: 1ea47e01ad6ea ("bpf: add support for bpf_call to interpreter")
Signed-off-by: Yichun Zhang (agentzh) <yichun@...nresty.com>
---
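Note for reviewers: a small stand-alone illustration (not part of the
patch, using hypothetical stack depths) of the interpreter index that
now lives in insn->off. The stack depth is rounded up to a multiple of
32 and mapped to an entry of interpreters_args[], while the 32-bit call
offset stays in insn->imm:

#include <stdio.h>

/* Same rounding as the kernel's round_up(stack_depth, 32), for this sketch only. */
static unsigned int round_up_32(unsigned int x)
{
	return (x + 31) & ~31u;
}

int main(void)
{
	/* Hypothetical stack depths; off indexes interpreters_args[]. */
	unsigned int depths[] = { 1, 32, 33, 512 };

	for (int i = 0; i < 4; i++) {
		unsigned int off = round_up_32(depths[i]) / 32 - 1;
		printf("stack_depth = %3u -> off = %2u\n", depths[i], off);
	}
	return 0;
}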
 kernel/bpf/core.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 2405e39d800f..dc3c90992f33 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -59,6 +59,9 @@
 #define CTX	regs[BPF_REG_CTX]
 #define IMM	insn->imm
 
+static u64 (*interpreters_args[])(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5,
+				  const struct bpf_insn *insn);
+
 /* No hurry in this branch
  *
  * Exported for the bpf jit load helper.
@@ -1560,10 +1563,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		CONT;
 
 	JMP_CALL_ARGS:
-		BPF_R0 = (__bpf_call_base_args + insn->imm)(BPF_R1, BPF_R2,
-							    BPF_R3, BPF_R4,
-							    BPF_R5,
-							    insn + insn->off + 1);
+		BPF_R0 = (interpreters_args[insn->off])(BPF_R1, BPF_R2,
+							BPF_R3, BPF_R4,
+							BPF_R5,
+							insn + insn->imm + 1);
 		CONT;
 
 	JMP_TAIL_CALL: {
@@ -1810,9 +1813,7 @@ EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
 void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
 {
 	stack_depth = max_t(u32, stack_depth, 1);
-	insn->off = (s16) insn->imm;
-	insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
-		__bpf_call_base_args;
+	insn->off = (round_up(stack_depth, 32) / 32) - 1;
 	insn->code = BPF_JMP | BPF_CALL_ARGS;
 }
 
-- 
2.17.2
