Message-ID: <202110280923.341FFA15D8@keescook>
Date: Thu, 28 Oct 2021 09:51:12 -0700
From: Kees Cook <keescook@...omium.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: x86@...nel.org, andrew.cooper3@...rix.com,
Josh Poimboeuf <jpoimboe@...hat.com>,
linux-kernel@...r.kernel.org, Mark Rutland <mark.rutland@....com>,
Will Deacon <will@...nel.org>, hjl.tools@...il.com
Subject: Re: [RFC][PATCH] x86: Add straight-line-speculation mitigation
On Thu, Oct 28, 2021 at 01:44:00PM +0200, Peter Zijlstra wrote:
> Hi,
>
> This little patch makes use of an upcoming GCC feature to mitigate
> straight-line-speculation for x86:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102952
>
> It's built tested on x86_64-allyesconfig using GCC-12+patch and GCC-11.
> It's also been boot tested on x86_64-defconfig+kvm_guest.config using
> GCC-12+patch.
>
> Enjoy!
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
I'm all for such mitigations. In x86's case, it's small and easy. I do
note, however, that the arm64 maintainers weren't as impressed:
https://lore.kernel.org/lkml/20210305095256.GA22536@willie-the-truck/
What's the image size impact?
> [...]
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -468,6 +468,18 @@ config RETPOLINE
> branches. Requires a compiler with -mindirect-branch=thunk-extern
> support for full protection. The kernel may run slower.
>
> +config CC_HAS_SLS
> + def_bool $(cc-option,-mharden-sls=all)
> +
> +config SLS
> + bool "Mitigate Straight-Line-Speculation"
> + depends on CC_HAS_SLS
> + default y
> + help
> + Compile the kernel with straight-line-speculation options to guard
> + against straight line speculation. The kernel image might be slightly
> + larger.
nit: differing indents; I'd expect the last two lines to match the first
(tab, space, space).
> +
> config X86_CPU_RESCTRL
> bool "x86 CPU resource control support"
> depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD)
> --- a/arch/x86/Makefile
> +++ b/arch/x86/Makefile
> @@ -179,6 +179,10 @@ ifdef CONFIG_RETPOLINE
> endif
> endif
>
> +ifdef CONFIG_SLS
> + KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE)
>
> ifdef CONFIG_LTO_CLANG
Given the earlier patch for ARM, perhaps the Kconfig and Makefile chunks
should be at the top level instead, making this feature easier to
implement in other architectures?
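For illustration, a generic top-level arrangement could look something like the sketch below (purely hypothetical: the ARCH_HAS_SLS symbol is invented here, and the cc-option test would likely still need to be per-arch since the -mharden-sls flavors differ between compilers/targets):

```
# e.g. in a shared arch/Kconfig (hypothetical sketch, not from the patch):
config ARCH_HAS_SLS
	bool

config CC_HAS_SLS
	def_bool $(cc-option,-mharden-sls=all)

config SLS
	bool "Mitigate Straight-Line-Speculation"
	depends on ARCH_HAS_SLS && CC_HAS_SLS
```

Each architecture would then just select ARCH_HAS_SLS and add the flag in its Makefile.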
> [...]
> --- a/scripts/Makefile.lib
> +++ b/scripts/Makefile.lib
> @@ -241,7 +241,8 @@ objtool_args = \
> $(if $(CONFIG_GCOV_KERNEL)$(CONFIG_LTO_CLANG), --no-unreachable)\
> $(if $(CONFIG_RETPOLINE), --retpoline) \
> $(if $(CONFIG_X86_SMAP), --uaccess) \
> - $(if $(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL), --mcount)
> + $(if $(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL), --mcount) \
> + $(if $(CONFIG_SLS), --sls)
>
> # Useful for describing the dependency of composite objects
> # Usage:
> --- a/tools/objtool/arch/x86/decode.c
> +++ b/tools/objtool/arch/x86/decode.c
> @@ -531,6 +531,11 @@ int arch_decode_instruction(struct objto
> }
> break;
>
> + case 0xcc:
> + /* int3 */
> + *type = INSN_TRAP;
> + break;
> +
> case 0xe3:
> /* jecxz/jrcxz */
> *type = INSN_JUMP_CONDITIONAL;
> @@ -697,10 +702,10 @@ const char *arch_ret_insn(int len)
> {
> static const char ret[5][5] = {
> { BYTE_RET },
> - { BYTE_RET, BYTES_NOP1 },
> - { BYTE_RET, BYTES_NOP2 },
> - { BYTE_RET, BYTES_NOP3 },
> - { BYTE_RET, BYTES_NOP4 },
> + { BYTE_RET, 0xcc },
> + { BYTE_RET, 0xcc, BYTES_NOP1 },
> + { BYTE_RET, 0xcc, BYTES_NOP2 },
> + { BYTE_RET, 0xcc, BYTES_NOP3 },
> };
>
> if (len < 1 || len > 5) {
> --- a/tools/objtool/builtin-check.c
> +++ b/tools/objtool/builtin-check.c
> @@ -20,7 +20,7 @@
> #include <objtool/objtool.h>
>
> bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
> - validate_dup, vmlinux, mcount, noinstr, backup;
> + validate_dup, vmlinux, mcount, noinstr, backup, sls;
>
> static const char * const check_usage[] = {
> "objtool check [<options>] file.o",
> @@ -45,6 +45,7 @@ const struct option check_options[] = {
> OPT_BOOLEAN('l', "vmlinux", &vmlinux, "vmlinux.o validation"),
> OPT_BOOLEAN('M', "mcount", &mcount, "generate __mcount_loc"),
> OPT_BOOLEAN('B', "backup", &backup, "create .orig files before modification"),
> + OPT_BOOLEAN('S', "sls", &sls, "validate straight-line-speculation"),
> OPT_END(),
> };
>
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -3084,6 +3084,12 @@ static int validate_branch(struct objtoo
> switch (insn->type) {
>
> case INSN_RETURN:
> + if (next_insn && next_insn->type == INSN_TRAP) {
> + next_insn->ignore = true;
> + } else if (sls && !insn->retpoline_safe) {
> + WARN_FUNC("missing int3 after ret",
> + insn->sec, insn->offset);
> + }
> return validate_return(func, insn, &state);
>
> case INSN_CALL:
> @@ -3127,6 +3133,14 @@ static int validate_branch(struct objtoo
> break;
>
> case INSN_JUMP_DYNAMIC:
> + if (next_insn && next_insn->type == INSN_TRAP) {
> + next_insn->ignore = true;
> + } else if (sls && !insn->retpoline_safe) {
> + WARN_FUNC("missing int3 after indirect jump",
> + insn->sec, insn->offset);
> + }
> +
> + /* fallthrough */
> case INSN_JUMP_DYNAMIC_CONDITIONAL:
> if (is_sibling_call(insn)) {
> ret = validate_sibling_call(file, insn, &state);
Oh very nice; I was going to ask "how can we make sure no bare 'ret's
sneak back into .S files" and here it is. Excellent.
Random thought, not for this patch, but can objtool validate the int3
linker padding too? (i.e. to double-check the behavior of
7705dc855797 ("x86/vmlinux: Use INT3 instead of NOP for linker fill bytes"))
-Kees
--
Kees Cook