Message-Id: <20210929111855.50254-7-hbathini@linux.ibm.com>
Date: Wed, 29 Sep 2021 16:48:53 +0530
From: Hari Bathini <hbathini@...ux.ibm.com>
To: naveen.n.rao@...ux.ibm.com, christophe.leroy@...roup.eu,
mpe@...erman.id.au, ast@...nel.org, daniel@...earbox.net
Cc: paulus@...ba.org, andrii@...nel.org, kafai@...com,
songliubraving@...com, yhs@...com, john.fastabend@...il.com,
kpsingh@...nel.org, netdev@...r.kernel.org, bpf@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>,
Hari Bathini <hbathini@...ux.ibm.com>
Subject: [PATCH v4 6/8] bpf ppc64: Access only if addr is kernel address

From: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>

On PPC64 with KUAP enabled, any kernel code that wants to access
userspace must be surrounded by a KUAP disable-enable pair. That is
not done for the BPF_PROBE_MEM load instruction, so when a BPF
program tries to access an invalid userspace address, the page-fault
handler treats it as a bad KUAP fault:

  Kernel attempted to read user page (d0000000) - exploit attempt? (uid: 0)
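
For reference, a minimal sketch of how kernel code normally brackets
a userspace access with the KUAP window on powerpc; read_user_word()
is a made-up helper, but user_access_begin(), unsafe_get_user() and
user_access_end() are the generic uaccess primitives that map to the
KUAP disable-enable on PPC64:

  /* Hypothetical helper: read one word from userspace under KUAP. */
  static long read_user_word(const unsigned long __user *uaddr,
                             unsigned long *val)
  {
          if (!user_access_begin(uaddr, sizeof(*uaddr)))
                  return -EFAULT;         /* could not open KUAP window */
          unsafe_get_user(*val, uaddr, Efault); /* raw load, extable fixup */
          user_access_end();              /* close the KUAP window */
          return 0;
  Efault:
          user_access_end();
          return -EFAULT;
  }

The JITed BPF_PROBE_MEM load has no such bracketing, hence the bad
KUAP fault above.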

Since PTR_TO_BTF_ID (which uses BPF_PROBE_MEM mode) can only be a
valid kernel pointer or NULL, and never a userspace address, execute
the BPF_PROBE_MEM load only if the address is a kernel address;
otherwise set dst_reg=0 and move on. This catches NULL as well as
valid and invalid userspace pointers. Only a bad kernel pointer is
handled by the BPF exception table.
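
In plain C, the guarded load emitted for BPF_PROBE_MEM is roughly the
sketch below ('boundary' is a stand-in for the per-platform value
loaded into the temporary register: 0x8000000000000000ul on
BOOK3E_64, PAGE_OFFSET on BOOK3S_64; the comparison is unsigned, as
with the emitted CMPLD/COND_GT):

  u64 addr = src_reg + off;       /* ADDI tmp1, src_reg, off        */

  if (addr > boundary)            /* CMPLD tmp1, tmp2; branch if GT */
          dst_reg = *(u64 *)addr; /* bad kernel ptr -> BPF extable  */
  else
          dst_reg = 0;            /* NULL/userspace: zero, skip load */
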
[Alexei suggested for x86]
Suggested-by: Alexei Starovoitov <ast@...nel.org>
Signed-off-by: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Signed-off-by: Hari Bathini <hbathini@...ux.ibm.com>
---
Changes in v4:
* Used IS_ENABLED() instead of #ifdef.
* Dropped the else case that is not applicable for PPC64.

 arch/powerpc/net/bpf_jit_comp64.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 4170999371ee..e1ea64081ae1 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -727,6 +727,32 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
/* dst = *(u64 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+ /*
+ * As PTR_TO_BTF_ID that uses BPF_PROBE_MEM mode could either be a valid
+ * kernel pointer or NULL but not a userspace address, execute BPF_PROBE_MEM
+ * load only if addr is kernel address (see is_kernel_addr()), otherwise
+ * set dst_reg=0 and move on.
+ */
+ if (BPF_MODE(code) == BPF_PROBE_MEM) {
+ EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], src_reg, off));
+ if (IS_ENABLED(CONFIG_PPC_BOOK3E_64))
+ PPC_LI64(b2p[TMP_REG_2], 0x8000000000000000ul);
+ else /* BOOK3S_64 */
+ PPC_LI64(b2p[TMP_REG_2], PAGE_OFFSET);
+ EMIT(PPC_RAW_CMPLD(b2p[TMP_REG_1], b2p[TMP_REG_2]));
+ PPC_BCC(COND_GT, (ctx->idx + 4) * 4);
+ EMIT(PPC_RAW_LI(dst_reg, 0));
+ /*
+ * Check if 'off' is word aligned because PPC_BPF_LL()
+ * (BPF_DW case) generates two instructions if 'off' is not
+ * word-aligned and one instruction otherwise.
+ */
+ if (BPF_SIZE(code) == BPF_DW && (off & 3))
+ PPC_JMP((ctx->idx + 3) * 4);
+ else
+ PPC_JMP((ctx->idx + 2) * 4);
+ }
+
switch (size) {
case BPF_B:
EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
--
2.31.1