Date: Tue, 19 Apr 2022 17:42:36 -0700
From: joao@...rdrivepizza.com
To: linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Cc: joao@...rdrivepizza.com, peterz@...radead.org, jpoimboe@...hat.com,
	andrew.cooper3@...rix.com, keescook@...omium.org,
	samitolvanen@...gle.com, mark.rutland@....com, hjl.tools@...il.com,
	alyssa.milburn@...ux.intel.com, ndesaulniers@...gle.com,
	gabriel.gomes@...ux.intel.com, rick.p.edgecombe@...el.com
Subject: [RFC PATCH 06/11] x86/bpf: Support FineIBT

From: Joao Moreira <joao@...rdrivepizza.com>

BPF jitted code calls helper functions that live in the core kernel and
carry a FineIBT hash-check sequence in their prologue. Make the BPF JIT
capable of identifying such FineIBT sequences and of adding the offset
needed to bypass them when emitting calls and jumps.

Signed-off-by: Joao Moreira <joao@...rdrivepizza.com>
Tinkered-from-patches-by: Peter Zijlstra <peterz@...radead.org>
---
 arch/x86/net/bpf_jit_comp.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 16b6efacf7c6..e0c82174a075 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -330,13 +330,44 @@ static int emit_patch(u8 **pprog, void *func, void *ip, u8 opcode)
 	return 0;
 }
 
+static inline bool skip_fineibt_sequence(void *func)
+{
+	const void *addr = (void *) func;
+	union text_poke_insn text;
+	u32 insn;
+
+	if ((get_kernel_nofault(insn, addr)) ||
+	    (!is_endbr(insn)))
+		return false;
+
+	if ((get_kernel_nofault(text, addr+4)) ||
+	    (text.opcode != SUB_INSN_OPCODE))
+		return false;
+
+	if ((get_kernel_nofault(text, addr+11)) ||
+	    (text.opcode != JE_INSN_OPCODE))
+		return false;
+
+	if ((get_kernel_nofault(text, addr+13)) ||
+	    (text.opcode != CALL_INSN_OPCODE))
+		return false;
+
+	return true;
+}
+
 static int emit_call(u8 **pprog, void *func, void *ip)
 {
+#ifdef CONFIG_X86_KERNEL_FINEIBT
+	if(skip_fineibt_sequence(func)) func = func + FINEIBT_FIXUP;
+#endif
 	return emit_patch(pprog, func, ip, 0xE8);
 }
 
 static int emit_jump(u8 **pprog, void *func, void *ip)
 {
+#ifdef CONFIG_X86_KERNEL_FINEIBT
+	if(skip_fineibt_sequence(func)) func = func + FINEIBT_FIXUP;
+#endif
 	return emit_patch(pprog, func, ip, 0xE9);
 }
-- 
2.35.1