Message-Id: <07bf51833b5e1c52bfd9a4dbda41b80c508dffff.1439521412.git.jpoimboe@redhat.com>
Date: Thu, 13 Aug 2015 22:10:24 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Cc: linux-kernel@...r.kernel.org, Michal Marek <mmarek@...e.cz>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Pedro Alves <palves@...hat.com>,
Namhyung Kim <namhyung@...il.com>,
Bernd Petrovitsch <bernd@...rovitsch.priv.at>,
"Chris J Arges" <chris.j.arges@...onical.com>,
live-patching@...r.kernel.org
Subject: [PATCH v10 03/20] x86/stackvalidate: Compile-time stack validation
This adds a CONFIG_STACK_VALIDATION option which enables a host tool
named stackvalidate which runs at compile time. It analyzes every .o
file and ensures the validity of its stack metadata. It enforces a set
of rules on asm code and C inline assembly code so that stack traces can
be reliable.
Currently it checks frame pointer usage. I also plan to add DWARF CFI
validation for C .o files and CFI generation for asm .o files.
For each function, it recursively follows all possible code paths and
validates the correct frame pointer state at each instruction.
It also follows code paths involving special sections, like
.altinstructions, __jump_table, and __ex_table, which can add
alternative execution paths to a given instruction (or set of
instructions). Similarly, it knows how to follow switch statements, for
which gcc sometimes uses jump tables.
To achieve the validation, stackvalidate enforces the following rules:
1. Each callable function must be annotated as such with the ELF
function type. In asm code, this is typically done using the
ENTRY/ENDPROC macros. If stackvalidate finds a return instruction
outside of a function, it flags an error since that usually indicates
callable code which should be annotated accordingly.
2. Conversely, each section of code which is *not* callable should *not*
be annotated as an ELF function. The ENDPROC macro shouldn't be used
in this case.
3. Each callable function which calls another function must have the
correct frame pointer logic, if required by CONFIG_FRAME_POINTER or
the architecture's back chain rules. This can be done in asm code
with the FRAME_BEGIN/FRAME_END macros.
4. Dynamic jumps and jumps to undefined symbols are only allowed if:
a) the jump is part of a switch statement; or
b) the jump matches sibling call semantics and the frame pointer has
the same value it had on function entry.
5. A callable function may not execute kernel entry/exit instructions.
The only code which needs such instructions is kernel entry code,
which shouldn't be in callable functions anyway.
It currently only supports x86_64. I tried to make the code generic so
that support for other architectures can hopefully be plugged in
relatively easily.
As a first step, CONFIG_STACK_VALIDATION is disabled by default, and all
reported non-compliances result in warnings. Once we get them all
cleaned up, we can change the default to CONFIG_STACK_VALIDATION=y and
change the warnings to errors to keep the stack metadata clean.
On my Lenovo laptop with an i7-4810MQ 4-core/8-thread CPU, building the
kernel with it enabled adds less than 3 seconds of total build time. It
hasn't been optimized for performance yet, so there are probably some
opportunities for better build performance.
Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
---
Documentation/stack-validation.txt | 193 +++++++
MAINTAINERS | 8 +
arch/Kconfig | 6 +
arch/x86/Kconfig | 1 +
arch/x86/Makefile | 6 +-
lib/Kconfig.debug | 11 +
scripts/Makefile | 1 +
scripts/Makefile.build | 37 +-
scripts/stackvalidate/Makefile | 24 +
scripts/stackvalidate/arch-x86.c | 160 ++++++
scripts/stackvalidate/arch.h | 44 ++
scripts/stackvalidate/elf.c | 427 +++++++++++++++
scripts/stackvalidate/elf.h | 92 ++++
scripts/stackvalidate/list.h | 217 ++++++++
scripts/stackvalidate/special.c | 199 +++++++
scripts/stackvalidate/special.h | 42 ++
scripts/stackvalidate/stackvalidate.c | 976 ++++++++++++++++++++++++++++++++++
17 files changed, 2439 insertions(+), 5 deletions(-)
create mode 100644 Documentation/stack-validation.txt
create mode 100644 scripts/stackvalidate/Makefile
create mode 100644 scripts/stackvalidate/arch-x86.c
create mode 100644 scripts/stackvalidate/arch.h
create mode 100644 scripts/stackvalidate/elf.c
create mode 100644 scripts/stackvalidate/elf.h
create mode 100644 scripts/stackvalidate/list.h
create mode 100644 scripts/stackvalidate/special.c
create mode 100644 scripts/stackvalidate/special.h
create mode 100644 scripts/stackvalidate/stackvalidate.c
diff --git a/Documentation/stack-validation.txt b/Documentation/stack-validation.txt
new file mode 100644
index 0000000..c3d3d35
--- /dev/null
+++ b/Documentation/stack-validation.txt
@@ -0,0 +1,193 @@
+Compile-time stack validation
+=============================
+
+
+Overview
+--------
+
+The CONFIG_STACK_VALIDATION option enables a host tool named
+stackvalidate which runs at compile time. It analyzes every .o file and
+ensures the validity of its stack metadata. It enforces a set of rules
+on asm code and C inline assembly code so that stack traces can be
+reliable.
+
+Currently it only checks frame pointer usage, but there are plans to add
+CFI validation for C files and CFI generation for asm files.
+
+For each function, it recursively follows all possible code paths and
+validates the correct frame pointer state at each instruction.
+
+It also follows code paths involving special sections, like
+.altinstructions, __jump_table, and __ex_table, which can add
+alternative execution paths to a given instruction (or set of
+instructions). Similarly, it knows how to follow switch statements, for
+which gcc sometimes uses jump tables.
+
+
+Rules
+-----
+
+To achieve the validation, stackvalidate enforces the following rules:
+
+1. Each callable function must be annotated as such with the ELF
+ function type. In asm code, this is typically done using the
+ ENTRY/ENDPROC macros. If stackvalidate finds a return instruction
+ outside of a function, it flags an error since that usually indicates
+ callable code which should be annotated accordingly.
+
+2. Conversely, each section of code which is *not* callable should *not*
+ be annotated as an ELF function. The ENDPROC macro shouldn't be used
+ in this case.
+
+3. Each callable function which calls another function must have the
+ correct frame pointer logic, if required by CONFIG_FRAME_POINTER or
+   the architecture's back chain rules. This can be done in asm code
+   with the FRAME_BEGIN/FRAME_END macros (see the example below).
+
+4. Dynamic jumps and jumps to undefined symbols are only allowed if:
+
+ a) the jump is part of a switch statement; or
+
+ b) the jump matches sibling call semantics and the frame pointer has
+ the same value it had on function entry.
+
+5. A callable function may not execute kernel entry/exit instructions.
+ The only code which needs such instructions is kernel entry code,
+   which shouldn't be in callable functions anyway.
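+
+For example, here is a rough sketch of a callable asm function which
+follows rules 1 and 3 by using the ENTRY/ENDPROC and FRAME_BEGIN/FRAME_END
+macros described above (example_func and another_func are placeholder
+names):
+
+	ENTRY(example_func)
+		FRAME_BEGIN		/* save and set up the frame pointer */
+		call	another_func
+		FRAME_END		/* restore the frame pointer */
+		ret
+	ENDPROC(example_func)		/* ELF function annotation */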
+
+
+Errors in .S files
+------------------
+
+If you're getting an error in a compiled .S file which you don't
+understand, first make sure that the affected code follows the above
+rules.
+
+Here are some examples of common problems and suggestions for how to fix
+them.
+
+
+1. stackvalidate: asm_file.o: func()+0x128: call without frame pointer save/setup
+
+ The func() function made a function call without first saving and/or
+ updating the frame pointer.
+
+ If func() is indeed a callable function, add proper frame pointer
+ logic using the FRAME_BEGIN and FRAME_END macros. Otherwise, remove its
+ ELF function annotation by changing ENDPROC to END.
+
+ If you're getting this error in a .c file, see the "Errors in .c
+ files" section.
+
+
+2. stackvalidate: asm_file.o: .text+0x53: return instruction outside of a callable function
+
+ A return instruction was detected, but stackvalidate couldn't find a
+ way for a callable function to reach the instruction.
+
+ If the return instruction is inside (or reachable from) a callable
+ function, the function needs to be annotated with the ENTRY/ENDPROC
+ macros.
+
+ If you _really_ need a return instruction outside of a function, and
+ are 100% sure that it won't affect stack traces, you can tell
+ stackvalidate to ignore it. See the "Adding exceptions" section
+ below.
+
+
+3. stackvalidate: asm_file.o: func()+0x9: function has unreachable instruction
+
+ The instruction lives inside of a callable function, but there's no
+ possible control flow path from the beginning of the function to the
+ instruction.
+
+ If the instruction is actually needed, and it's actually in a
+ callable function, ensure that its function is properly annotated
+ with ENTRY/ENDPROC.
+
+ If it's not actually in a callable function (e.g. kernel entry code),
+ change ENDPROC to END.
+
+
+4. stackvalidate: asm_file.o: func(): can't find starting instruction
+ or
+ stackvalidate: asm_file.o: func()+0x11dd: can't decode instruction
+
+ Did you put data in a text section? If so, that can confuse
+ stackvalidate's instruction decoder. Move the data to a more
+ appropriate section like .data or .rodata.
+
+
+5. stackvalidate: asm_file.o: func()+0x6: kernel entry/exit from callable instruction
+
+ This is a kernel entry/exit instruction like sysenter or sysret.
+ Such instructions aren't allowed in a callable function, and are most
+ likely part of the kernel entry code.
+
+ If the instruction isn't actually in a callable function, change
+ ENDPROC to END.
+
+
+6. stackvalidate: asm_file.o: func()+0x26: sibling call from callable instruction with changed frame pointer
+
+ This is a dynamic jump or a jump to an undefined symbol.
+ Stackvalidate assumed it's a sibling call and detected that the frame
+ pointer wasn't first restored to its original state.
+
+ If it's not really a sibling call, you may need to move the
+ destination code to the local file.
+
+ If the instruction is not actually in a callable function (e.g.
+ kernel entry code), change ENDPROC to END.
+
+
+7. stackvalidate: asm_file.o: func()+0x5c: frame pointer state mismatch
+
+ The instruction's frame pointer state is inconsistent, depending on
+ which execution path was taken to reach the instruction.
+
+ Make sure the function pushes and sets up the frame pointer (for
+ x86_64, this means rbp) at the beginning of the function and pops it
+ at the end of the function. Also make sure that no other code in the
+ function touches the frame pointer.
+
+
+Errors in .c files
+------------------
+
+If you're getting a stackvalidate error in a compiled .c file, chances
+are the file uses an asm() statement which has a "call" instruction. An
+asm() statement with a call instruction must declare the use of the
+stack pointer in its output operand. For example, on x86_64:
+
+ register void *__sp asm("rsp");
+ asm volatile("call func" : "+r" (__sp));
+
+Otherwise the stack frame may not get created before the call.
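+
+For reference, here is a sketch of what such a statement might look like
+in context (my_helper is just a placeholder name for whatever function
+the asm statement calls):
+
+	static void example(void)
+	{
+		/* declare that the asm statement uses the stack pointer */
+		register void *__sp asm("rsp");
+
+		asm volatile("call my_helper" : "+r" (__sp));
+	}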
+
+Another possible cause for errors in C code is if the Makefile removes
+-fno-omit-frame-pointer or adds -fomit-frame-pointer to the gcc options.
+
+Also see the above section on .S file errors for more information about
+what the individual error messages mean.
+
+
+
+Adding exceptions
+-----------------
+
+If you _really_ need stackvalidate to ignore something, and are 100%
+sure that it won't affect kernel stack traces, you can tell
+stackvalidate to ignore it:
+
+- To skip validation of an instruction, use the
+  STACKVALIDATE_IGNORE_INSN macro immediately before the instruction
+  (see the example at the end of this section).
+
+- To skip validation of a function, use the STACKVALIDATE_IGNORE_FUNC
+ macro.
+
+- To skip validation of a file, add "STACKVALIDATE_filename.o := n" to
+ the Makefile.
+
+- To skip validation of a directory, add "STACKVALIDATE := n" to the
+ Makefile.
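+
+For example, ignoring a single instruction in an asm file might look
+roughly like this (a sketch only; see the macro definitions added
+elsewhere in this patch series for the exact usage):
+
+	STACKVALIDATE_IGNORE_INSN
+	ret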
diff --git a/MAINTAINERS b/MAINTAINERS
index c1452d1..fb3deb4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9679,6 +9679,14 @@ L: stable@...r.kernel.org
S: Supported
F: Documentation/stable_kernel_rules.txt
+STACK VALIDATION
+M: Josh Poimboeuf <jpoimboe@...hat.com>
+S: Supported
+F: scripts/stackvalidate/
+F: include/linux/stackvalidate.h
+F: arch/x86/include/asm/stackvalidate.h
+F: Documentation/stack-validation.txt
+
STAGING SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
diff --git a/arch/Kconfig b/arch/Kconfig
index 021f8f9..ceb4eda 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -525,6 +525,12 @@ config HAVE_COPY_THREAD_TLS
normal C parameter passing, rather than extracting the syscall
argument from pt_regs.
+config HAVE_STACK_VALIDATION
+ bool
+ help
+ Architecture supports the stackvalidate host tool, which adds
+ compile-time stack metadata validation.
+
#
# ABI hall of shame
#
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f37010f..50877dc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -151,6 +151,7 @@ config X86
select VIRT_TO_BUS
select X86_DEV_DMA_OPS if X86_64
select X86_FEATURE_NAMES if PROC_FS
+ select HAVE_STACK_VALIDATION if X86_64
config INSTRUCTION_DECODER
def_bool y
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 747860c..fb7135b 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -181,9 +181,13 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
KBUILD_CFLAGS += $(mflags-y)
KBUILD_AFLAGS += $(mflags-y)
-archscripts: scripts_basic
+archscripts: scripts_basic $(objtree)/arch/x86/lib/inat-tables.c
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
+# this file is needed early by scripts/stackvalidate
+$(objtree)/arch/x86/lib/inat-tables.c:
+ $(Q)$(MAKE) $(build)=arch/x86/lib $@
+
###
# Syscall table generation
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ab76b99..1c235c6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -332,6 +332,17 @@ config FRAME_POINTER
larger and slower, but it gives very useful debugging information
in case of kernel bugs. (precise oopses/stacktraces/warnings)
+config STACK_VALIDATION
+ bool "Enable compile-time stack metadata validation"
+ depends on HAVE_STACK_VALIDATION
+ default n
+ help
+ Add compile-time checks to validate stack metadata, including frame
+ pointers and back chain pointers. This helps ensure that runtime
+ stack traces are more reliable.
+
+ For more information, see Documentation/stack-validation.txt.
+
config DEBUG_FORCE_WEAK_PER_CPU
bool "Force weak per-cpu definitions"
depends on DEBUG_KERNEL
diff --git a/scripts/Makefile b/scripts/Makefile
index 2016a64..c882a91 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -37,6 +37,7 @@ subdir-y += mod
subdir-$(CONFIG_SECURITY_SELINUX) += selinux
subdir-$(CONFIG_DTC) += dtc
subdir-$(CONFIG_GDB_SCRIPTS) += gdb
+subdir-$(CONFIG_STACK_VALIDATION) += stackvalidate
# Let clean descend into subdirs
subdir- += basic kconfig package
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 01df30a..a1c5376 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -241,10 +241,30 @@ cmd_record_mcount = \
fi;
endif
+ifdef CONFIG_STACK_VALIDATION
+
+__stackvalidate_obj = $(objtree)/scripts/stackvalidate/stackvalidate
+
+ifndef CONFIG_FRAME_POINTER
+nofp = --no-frame-pointer
+endif
+
+# Set STACKVALIDATE_foo.o=n to skip stack validation for a file.
+# Set STACKVALIDATE=n to skip stack validation for a directory.
+stackvalidate_obj = $(if $(patsubst n%,, \
+ $(STACKVALIDATE_$(basetarget).o)$(STACKVALIDATE)y), \
+ $(__stackvalidate_obj))
+cmd_stackvalidate = $(if $(patsubst n%,, \
+ $(STACKVALIDATE_$(basetarget).o)$(STACKVALIDATE)y), \
+ $(__stackvalidate_obj) $(nofp) "$(@)";)
+
+endif # CONFIG_STACK_VALIDATION
+
define rule_cc_o_c
$(call echo-cmd,checksrc) $(cmd_checksrc) \
$(call echo-cmd,cc_o_c) $(cmd_cc_o_c); \
$(cmd_modversions) \
+ $(cmd_stackvalidate) \
$(call echo-cmd,record_mcount) \
$(cmd_record_mcount) \
scripts/basic/fixdep $(depfile) $@ '$(call make-cmd,cc_o_c)' > \
@@ -253,14 +273,23 @@ define rule_cc_o_c
mv -f $(dot-target).tmp $(dot-target).cmd
endef
+define rule_as_o_S
+ $(call echo-cmd,as_o_S) $(cmd_as_o_S); \
+ $(cmd_stackvalidate) \
+ scripts/basic/fixdep $(depfile) $@ '$(call make-cmd,as_o_S)' > \
+ $(dot-target).tmp; \
+ rm -f $(depfile); \
+ mv -f $(dot-target).tmp $(dot-target).cmd
+endef
+
# Built-in and composite module parts
-$(obj)/%.o: $(src)/%.c $(recordmcount_source) FORCE
+$(obj)/%.o: $(src)/%.c $(recordmcount_source) $(stackvalidate_obj) FORCE
$(call cmd,force_checksrc)
$(call if_changed_rule,cc_o_c)
# Single-part modules are special since we need to mark them in $(MODVERDIR)
-$(single-used-m): $(obj)/%.o: $(src)/%.c $(recordmcount_source) FORCE
+$(single-used-m): $(obj)/%.o: $(src)/%.c $(recordmcount_source) $(stackvalidate_obj) FORCE
$(call cmd,force_checksrc)
$(call if_changed_rule,cc_o_c)
@{ echo $(@:.o=.ko); echo $@; } > $(MODVERDIR)/$(@F:.o=.mod)
@@ -290,8 +319,8 @@ $(obj)/%.s: $(src)/%.S FORCE
quiet_cmd_as_o_S = AS $(quiet_modtag) $@
cmd_as_o_S = $(CC) $(a_flags) -c -o $@ $<
-$(obj)/%.o: $(src)/%.S FORCE
- $(call if_changed_dep,as_o_S)
+$(obj)/%.o: $(src)/%.S $(stackvalidate_obj) FORCE
+ $(call if_changed_rule,as_o_S)
targets += $(real-objs-y) $(real-objs-m) $(lib-y)
targets += $(extra-y) $(MAKECMDGOALS) $(always)
diff --git a/scripts/stackvalidate/Makefile b/scripts/stackvalidate/Makefile
new file mode 100644
index 0000000..468c075
--- /dev/null
+++ b/scripts/stackvalidate/Makefile
@@ -0,0 +1,24 @@
+hostprogs-y := stackvalidate
+always := $(hostprogs-y)
+
+stackvalidate-objs := stackvalidate.o elf.o special.o
+
+HOSTCFLAGS += -Werror
+HOSTLOADLIBES_stackvalidate := -lelf
+
+ifdef CONFIG_X86
+
+stackvalidate-objs += arch-x86.o
+
+HOSTCFLAGS_arch-x86.o := -I$(objtree)/arch/x86/lib/ \
+ -I$(srctree)/arch/x86/include/ \
+ -I$(srctree)/arch/x86/lib/
+
+$(obj)/arch-x86.o: $(srctree)/arch/x86/lib/insn.c \
+ $(srctree)/arch/x86/lib/inat.c \
+ $(srctree)/arch/x86/include/asm/inat_types.h \
+ $(srctree)/arch/x86/include/asm/inat.h \
+ $(srctree)/arch/x86/include/asm/insn.h \
+ $(objtree)/arch/x86/lib/inat-tables.c
+
+endif
diff --git a/scripts/stackvalidate/arch-x86.c b/scripts/stackvalidate/arch-x86.c
new file mode 100644
index 0000000..edc43a8
--- /dev/null
+++ b/scripts/stackvalidate/arch-x86.c
@@ -0,0 +1,160 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <stdio.h>
+
+#define unlikely(cond) (cond)
+#include <asm/insn.h>
+#include <inat.c>
+#include <insn.c>
+#include <stdlib.h>
+
+#include "elf.h"
+#include "arch.h"
+
+static int is_x86_64(struct elf *elf)
+{
+ switch (elf->ehdr.e_machine) {
+ case EM_X86_64:
+ return 1;
+ case EM_386:
+ return 0;
+ default:
+ WARN("unexpected ELF machine type %d", elf->ehdr.e_machine);
+ return -1;
+ }
+}
+
+int arch_decode_instruction(struct elf *elf, struct section *sec,
+ unsigned long offset, unsigned int maxlen,
+ unsigned int *len, unsigned char *type,
+ unsigned long *immediate)
+{
+ struct insn insn;
+ int x86_64;
+ unsigned char op1, op2, ext;
+
+ x86_64 = is_x86_64(elf);
+ if (x86_64 == -1)
+ return -1;
+
+ insn_init(&insn, (void *)(sec->data + offset), maxlen, x86_64);
+ insn_get_length(&insn);
+ insn_get_opcode(&insn);
+ insn_get_modrm(&insn);
+ insn_get_immediate(&insn);
+
+ if (!insn.opcode.got || !insn.modrm.got || !insn.immediate.got) {
+ WARN("%s: can't decode instruction",
+ offstr(sec, offset));
+ return -1;
+ }
+
+ *len = insn.length;
+ *type = INSN_OTHER;
+
+ if (insn.vex_prefix.nbytes)
+ return 0;
+
+ op1 = insn.opcode.bytes[0];
+ op2 = insn.opcode.bytes[1];
+
+ switch (op1) {
+ case 0x55:
+ if (!insn.rex_prefix.nbytes)
+ /* push rbp */
+ *type = INSN_FP_SAVE;
+ break;
+
+ case 0x5d:
+ if (!insn.rex_prefix.nbytes)
+ /* pop rbp */
+ *type = INSN_FP_RESTORE;
+ break;
+
+ case 0x70 ... 0x7f:
+ *type = INSN_JUMP_CONDITIONAL;
+ break;
+
+ case 0x89:
+ if (insn.rex_prefix.nbytes == 1 &&
+ insn.rex_prefix.bytes[0] == 0x48 &&
+ insn.modrm.nbytes && insn.modrm.bytes[0] == 0xe5)
+ /* mov rsp, rbp */
+ *type = INSN_FP_SETUP;
+ break;
+
+ case 0x90:
+ *type = INSN_NOP;
+ break;
+
+ case 0x0f:
+ if (op2 >= 0x80 && op2 <= 0x8f)
+ *type = INSN_JUMP_CONDITIONAL;
+ else if (op2 == 0x05 || op2 == 0x07 || op2 == 0x34 ||
+ op2 == 0x35)
+ /* syscall, sysret, sysenter, sysexit */
+ *type = INSN_CONTEXT_SWITCH;
+ else if (op2 == 0x0b || op2 == 0xb9)
+ /* ud2 */
+ *type = INSN_BUG;
+ else if (op2 == 0x0d || op2 == 0x1f)
+ /* nopl/nopw */
+ *type = INSN_NOP;
+ break;
+
+ case 0xc9: /* leave */
+ *type = INSN_FP_RESTORE;
+ break;
+
+ case 0xe3: /* jecxz/jrcxz */
+ *type = INSN_JUMP_CONDITIONAL;
+ break;
+
+ case 0xe9:
+ case 0xeb:
+ *type = INSN_JUMP_UNCONDITIONAL;
+ break;
+
+ case 0xc2:
+ case 0xc3:
+ *type = INSN_RETURN;
+ break;
+
+ case 0xcf: /* iret */
+ case 0xca: /* retf */
+ case 0xcb: /* retf */
+ *type = INSN_CONTEXT_SWITCH;
+ break;
+
+ case 0xe8:
+ *type = INSN_CALL;
+ break;
+
+ case 0xff:
+ ext = X86_MODRM_REG(insn.modrm.bytes[0]);
+ if (ext == 2 || ext == 3)
+ *type = INSN_CALL_DYNAMIC;
+ else if (ext == 4 || ext == 5)
+ *type = INSN_JUMP_DYNAMIC;
+ break;
+ }
+
+ *immediate = insn.immediate.nbytes ? insn.immediate.value : 0;
+
+ return 0;
+}
diff --git a/scripts/stackvalidate/arch.h b/scripts/stackvalidate/arch.h
new file mode 100644
index 0000000..f7350fc
--- /dev/null
+++ b/scripts/stackvalidate/arch.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ARCH_H
+#define _ARCH_H
+
+#include <stdbool.h>
+#include "elf.h"
+
+#define INSN_FP_SAVE 1
+#define INSN_FP_SETUP 2
+#define INSN_FP_RESTORE 3
+#define INSN_JUMP_CONDITIONAL 4
+#define INSN_JUMP_UNCONDITIONAL 5
+#define INSN_JUMP_DYNAMIC 6
+#define INSN_CALL 7
+#define INSN_CALL_DYNAMIC 8
+#define INSN_RETURN 9
+#define INSN_CONTEXT_SWITCH 10
+#define INSN_BUG 11
+#define INSN_NOP 12
+#define INSN_OTHER 13
+#define INSN_LAST INSN_OTHER
+
+int arch_decode_instruction(struct elf *elf, struct section *sec,
+ unsigned long offset, unsigned int maxlen,
+ unsigned int *len, unsigned char *type,
+ unsigned long *displacement);
+
+#endif /* _ARCH_H */
diff --git a/scripts/stackvalidate/elf.c b/scripts/stackvalidate/elf.c
new file mode 100644
index 0000000..29c3f29
--- /dev/null
+++ b/scripts/stackvalidate/elf.c
@@ -0,0 +1,427 @@
+/*
+ * elf.c - ELF access library
+ *
+ * Adapted from kpatch (https://github.com/dynup/kpatch):
+ * Copyright (C) 2013-2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ * Copyright (C) 2014 Seth Jennings <sjenning@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include "elf.h"
+
+char *offstr(struct section *sec, unsigned long offset)
+{
+ struct symbol *func;
+ char *name, *str;
+ unsigned long name_off;
+
+ func = find_containing_func(sec, offset);
+ if (func) {
+ name = func->name;
+ name_off = offset - func->offset;
+ } else {
+ name = sec->name;
+ name_off = offset;
+ }
+
+ str = malloc(strlen(name) + 20);
+
+ if (func)
+ sprintf(str, "%s()+0x%lx", name, name_off);
+ else
+ sprintf(str, "%s+0x%lx", name, name_off);
+
+ return str;
+}
+
+struct section *find_section_by_name(struct elf *elf, const char *name)
+{
+ struct section *sec;
+
+ list_for_each_entry(sec, &elf->sections, list)
+ if (!strcmp(sec->name, name))
+ return sec;
+
+ return NULL;
+}
+
+static struct section *find_section_by_index(struct elf *elf,
+ unsigned int index)
+{
+ struct section *sec;
+
+ list_for_each_entry(sec, &elf->sections, list)
+ if (sec->index == index)
+ return sec;
+
+ return NULL;
+}
+
+static struct symbol *find_symbol_by_index(struct elf *elf, unsigned int index)
+{
+ struct section *sec;
+ struct symbol *sym;
+
+ list_for_each_entry(sec, &elf->sections, list)
+ list_for_each_entry(sym, &sec->symbols, list)
+ if (sym->index == index)
+ return sym;
+
+ return NULL;
+}
+
+struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset)
+{
+ struct symbol *sym;
+
+ list_for_each_entry(sym, &sec->symbols, list)
+ if (sym->type != STT_SECTION &&
+ sym->offset == offset)
+ return sym;
+
+ return NULL;
+}
+
+struct rela *find_rela_by_dest_range(struct section *sec, unsigned long offset,
+ unsigned int len)
+{
+ struct rela *rela;
+
+ if (!sec->rela)
+ return NULL;
+
+ list_for_each_entry(rela, &sec->rela->relas, list)
+ if (rela->offset >= offset && rela->offset < offset + len)
+ return rela;
+
+ return NULL;
+}
+
+struct rela *find_rela_by_dest(struct section *sec, unsigned long offset)
+{
+ return find_rela_by_dest_range(sec, offset, 1);
+}
+
+struct symbol *find_containing_func(struct section *sec, unsigned long offset)
+{
+ struct symbol *func;
+
+ list_for_each_entry(func, &sec->symbols, list)
+ if (func->type == STT_FUNC && offset >= func->offset &&
+ offset < func->offset + func->len)
+ return func;
+
+ return NULL;
+}
+
+static int read_sections(struct elf *elf)
+{
+ Elf_Scn *s = NULL;
+ struct section *sec;
+ size_t shstrndx, sections_nr;
+ int i;
+
+ if (elf_getshdrnum(elf->elf, &sections_nr)) {
+ perror("elf_getshdrnum");
+ return -1;
+ }
+
+ if (elf_getshdrstrndx(elf->elf, &shstrndx)) {
+ perror("elf_getshdrstrndx");
+ return -1;
+ }
+
+ for (i = 0; i < sections_nr; i++) {
+ sec = malloc(sizeof(*sec));
+ if (!sec) {
+ perror("malloc");
+ return -1;
+ }
+ memset(sec, 0, sizeof(*sec));
+
+ INIT_LIST_HEAD(&sec->symbols);
+ INIT_LIST_HEAD(&sec->relas);
+
+ list_add_tail(&sec->list, &elf->sections);
+
+ s = elf_getscn(elf->elf, i);
+ if (!s) {
+ perror("elf_getscn");
+ return -1;
+ }
+
+ sec->index = elf_ndxscn(s);
+
+ if (!gelf_getshdr(s, &sec->sh)) {
+ perror("gelf_getshdr");
+ return -1;
+ }
+
+ sec->name = elf_strptr(elf->elf, shstrndx, sec->sh.sh_name);
+ if (!sec->name) {
+ perror("elf_strptr");
+ return -1;
+ }
+
+ sec->elf_data = elf_getdata(s, NULL);
+ if (!sec->elf_data) {
+ perror("elf_getdata");
+ return -1;
+ }
+
+ if (sec->elf_data->d_off != 0 ||
+ sec->elf_data->d_size != sec->sh.sh_size) {
+ WARN("unexpected data attributes for %s", sec->name);
+ return -1;
+ }
+
+ sec->data = (unsigned long)sec->elf_data->d_buf;
+ sec->len = sec->elf_data->d_size;
+ }
+
+ /* sanity check, one more call to elf_nextscn() should return NULL */
+ if (elf_nextscn(elf->elf, s)) {
+ WARN("section entry mismatch");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int read_symbols(struct elf *elf)
+{
+ struct section *symtab;
+ struct symbol *sym;
+ struct list_head *entry, *tmp;
+ int symbols_nr, i;
+
+ symtab = find_section_by_name(elf, ".symtab");
+ if (!symtab) {
+ WARN("missing symbol table");
+ return -1;
+ }
+
+ symbols_nr = symtab->sh.sh_size / symtab->sh.sh_entsize;
+
+ for (i = 0; i < symbols_nr; i++) {
+ sym = malloc(sizeof(*sym));
+ if (!sym) {
+ perror("malloc");
+ return -1;
+ }
+ memset(sym, 0, sizeof(*sym));
+
+ sym->index = i;
+
+ if (!gelf_getsym(symtab->elf_data, i, &sym->sym)) {
+ perror("gelf_getsym");
+ goto err;
+ }
+
+ sym->name = elf_strptr(elf->elf, symtab->sh.sh_link,
+ sym->sym.st_name);
+ if (!sym->name) {
+ perror("elf_strptr");
+ goto err;
+ }
+
+ sym->type = GELF_ST_TYPE(sym->sym.st_info);
+ sym->bind = GELF_ST_BIND(sym->sym.st_info);
+
+ if (sym->sym.st_shndx > SHN_UNDEF &&
+ sym->sym.st_shndx < SHN_LORESERVE) {
+ sym->sec = find_section_by_index(elf,
+ sym->sym.st_shndx);
+ if (!sym->sec) {
+ WARN("couldn't find section for symbol %s",
+ sym->name);
+ goto err;
+ }
+ if (sym->type == STT_SECTION) {
+ sym->name = sym->sec->name;
+ sym->sec->sym = sym;
+ }
+ } else
+ sym->sec = find_section_by_index(elf, 0);
+
+ sym->offset = sym->sym.st_value;
+ sym->len = sym->sym.st_size;
+
+ /* sorted insert into a per-section list */
+ entry = &sym->sec->symbols;
+ list_for_each_prev(tmp, &sym->sec->symbols) {
+ struct symbol *s;
+
+ s = list_entry(tmp, struct symbol, list);
+
+ if (sym->offset > s->offset) {
+ entry = tmp;
+ break;
+ }
+
+ if (sym->offset == s->offset && sym->len >= s->len) {
+ entry = tmp;
+ break;
+ }
+ }
+ list_add(&sym->list, entry);
+ }
+
+ return 0;
+
+err:
+ free(sym);
+ return -1;
+}
+
+static int read_relas(struct elf *elf)
+{
+ struct section *sec;
+ struct rela *rela;
+ int i;
+ unsigned int symndx;
+
+ list_for_each_entry(sec, &elf->sections, list) {
+ if (sec->sh.sh_type != SHT_RELA)
+ continue;
+
+ sec->base = find_section_by_name(elf, sec->name + 5);
+ if (!sec->base) {
+ WARN("can't find base section for rela section %s",
+ sec->name);
+ return -1;
+ }
+
+ sec->base->rela = sec;
+
+ for (i = 0; i < sec->sh.sh_size / sec->sh.sh_entsize; i++) {
+ rela = malloc(sizeof(*rela));
+ if (!rela) {
+ perror("malloc");
+ return -1;
+ }
+ memset(rela, 0, sizeof(*rela));
+
+ list_add_tail(&rela->list, &sec->relas);
+
+ if (!gelf_getrela(sec->elf_data, i, &rela->rela)) {
+ perror("gelf_getrela");
+ return -1;
+ }
+
+ rela->type = GELF_R_TYPE(rela->rela.r_info);
+ rela->addend = rela->rela.r_addend;
+ rela->offset = rela->rela.r_offset;
+ symndx = GELF_R_SYM(rela->rela.r_info);
+ rela->sym = find_symbol_by_index(elf, symndx);
+ if (!rela->sym) {
+ WARN("can't find rela entry symbol %d for %s",
+ symndx, sec->name);
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+struct elf *elf_open(const char *name)
+{
+ struct elf *elf;
+
+ elf_version(EV_CURRENT);
+
+ elf = malloc(sizeof(*elf));
+ if (!elf) {
+ perror("malloc");
+ return NULL;
+ }
+ memset(elf, 0, sizeof(*elf));
+
+ INIT_LIST_HEAD(&elf->sections);
+
+ elf->name = strdup(name);
+ if (!elf->name) {
+ perror("strdup");
+ goto err;
+ }
+
+ elf->fd = open(name, O_RDONLY);
+ if (elf->fd == -1) {
+ perror("open");
+ goto err;
+ }
+
+ elf->elf = elf_begin(elf->fd, ELF_C_READ_MMAP, NULL);
+ if (!elf->elf) {
+ perror("elf_begin");
+ goto err;
+ }
+
+ if (!gelf_getehdr(elf->elf, &elf->ehdr)) {
+ perror("gelf_getehdr");
+ goto err;
+ }
+
+ if (read_sections(elf))
+ goto err;
+
+ if (read_symbols(elf))
+ goto err;
+
+ if (read_relas(elf))
+ goto err;
+
+ return elf;
+
+err:
+ elf_close(elf);
+ return NULL;
+}
+
+void elf_close(struct elf *elf)
+{
+ struct section *sec, *tmpsec;
+ struct symbol *sym, *tmpsym;
+ struct rela *rela, *tmprela;
+
+ list_for_each_entry_safe(sec, tmpsec, &elf->sections, list) {
+ list_for_each_entry_safe(sym, tmpsym, &sec->symbols, list) {
+ list_del(&sym->list);
+ free(sym);
+ }
+ list_for_each_entry_safe(rela, tmprela, &sec->relas, list) {
+ list_del(&rela->list);
+ free(rela);
+ }
+ list_del(&sec->list);
+ free(sec);
+ }
+ if (elf->name)
+ free(elf->name);
+ if (elf->fd > 0)
+ close(elf->fd);
+ if (elf->elf)
+ elf_end(elf->elf);
+ free(elf);
+}
diff --git a/scripts/stackvalidate/elf.h b/scripts/stackvalidate/elf.h
new file mode 100644
index 0000000..feb1f38
--- /dev/null
+++ b/scripts/stackvalidate/elf.h
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _STACKVALIDATE_ELF_H
+#define _STACKVALIDATE_ELF_H
+
+#include <stdio.h>
+#include <gelf.h>
+#include "list.h"
+
+#define WARN(format, ...) \
+ fprintf(stderr, \
+ "stackvalidate: %s: " format "\n", \
+ elf->name, ##__VA_ARGS__)
+
+#define WARN_FUNC(format, sec, offset, ...) \
+({ \
+ char *_str = offstr(sec, offset); \
+ WARN("%s: " format, _str, ##__VA_ARGS__); \
+ free(_str); \
+})
+
+struct section {
+ struct list_head list;
+ GElf_Shdr sh;
+ struct list_head symbols;
+ struct list_head relas;
+ struct section *base, *rela;
+ struct symbol *sym;
+ Elf_Data *elf_data;
+ char *name;
+ int index;
+ unsigned long data;
+ unsigned int len;
+};
+
+struct symbol {
+ struct list_head list;
+ GElf_Sym sym;
+ struct section *sec;
+ char *name;
+ int index;
+ unsigned char bind, type;
+ unsigned long offset;
+ unsigned int len;
+};
+
+struct rela {
+ struct list_head list;
+ GElf_Rela rela;
+ struct symbol *sym;
+ unsigned int type;
+ int offset;
+ int addend;
+};
+
+struct elf {
+ Elf *elf;
+ GElf_Ehdr ehdr;
+ int fd;
+ char *name;
+ struct list_head sections;
+};
+
+
+struct elf *elf_open(const char *name);
+struct section *find_section_by_name(struct elf *elf, const char *name);
+struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset);
+struct rela *find_rela_by_dest(struct section *sec, unsigned long offset);
+struct rela *find_rela_by_dest_range(struct section *sec, unsigned long offset,
+ unsigned int len);
+struct symbol *find_containing_func(struct section *sec, unsigned long offset);
+char *offstr(struct section *sec, unsigned long offset);
+void elf_close(struct elf *elf);
+
+
+
+#endif /* _STACKVALIDATE_ELF_H */
diff --git a/scripts/stackvalidate/list.h b/scripts/stackvalidate/list.h
new file mode 100644
index 0000000..25716b5
--- /dev/null
+++ b/scripts/stackvalidate/list.h
@@ -0,0 +1,217 @@
+#ifndef _LIST_H
+#define _LIST_H
+
+#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
+
+#define container_of(ptr, type, member) ({ \
+ const typeof(((type *)0)->member) *__mptr = (ptr); \
+ (type *)((char *)__mptr - offsetof(type, member)); })
+
+#define LIST_POISON1 ((void *) 0x00100100)
+#define LIST_POISON2 ((void *) 0x00200200)
+
+struct list_head {
+ struct list_head *next, *prev;
+};
+
+#define LIST_HEAD_INIT(name) { &(name), &(name) }
+
+#define LIST_HEAD(name) \
+ struct list_head name = LIST_HEAD_INIT(name)
+
+static inline void INIT_LIST_HEAD(struct list_head *list)
+{
+ list->next = list;
+ list->prev = list;
+}
+
+static inline void __list_add(struct list_head *new,
+ struct list_head *prev,
+ struct list_head *next)
+{
+ next->prev = new;
+ new->next = next;
+ new->prev = prev;
+ prev->next = new;
+}
+
+static inline void list_add(struct list_head *new, struct list_head *head)
+{
+ __list_add(new, head, head->next);
+}
+
+static inline void list_add_tail(struct list_head *new, struct list_head *head)
+{
+ __list_add(new, head->prev, head);
+}
+
+static inline void __list_del(struct list_head *prev, struct list_head *next)
+{
+ next->prev = prev;
+ prev->next = next;
+}
+
+static inline void __list_del_entry(struct list_head *entry)
+{
+ __list_del(entry->prev, entry->next);
+}
+
+static inline void list_del(struct list_head *entry)
+{
+ __list_del(entry->prev, entry->next);
+ entry->next = LIST_POISON1;
+ entry->prev = LIST_POISON2;
+}
+
+static inline void list_replace(struct list_head *old,
+ struct list_head *new)
+{
+ new->next = old->next;
+ new->next->prev = new;
+ new->prev = old->prev;
+ new->prev->next = new;
+}
+
+static inline void list_replace_init(struct list_head *old,
+ struct list_head *new)
+{
+ list_replace(old, new);
+ INIT_LIST_HEAD(old);
+}
+
+static inline void list_del_init(struct list_head *entry)
+{
+ __list_del_entry(entry);
+ INIT_LIST_HEAD(entry);
+}
+
+static inline void list_move(struct list_head *list, struct list_head *head)
+{
+ __list_del_entry(list);
+ list_add(list, head);
+}
+
+static inline void list_move_tail(struct list_head *list,
+ struct list_head *head)
+{
+ __list_del_entry(list);
+ list_add_tail(list, head);
+}
+
+static inline int list_is_last(const struct list_head *list,
+ const struct list_head *head)
+{
+ return list->next == head;
+}
+
+static inline int list_empty(const struct list_head *head)
+{
+ return head->next == head;
+}
+
+static inline int list_empty_careful(const struct list_head *head)
+{
+ struct list_head *next = head->next;
+
+ return (next == head) && (next == head->prev);
+}
+
+static inline void list_rotate_left(struct list_head *head)
+{
+ struct list_head *first;
+
+ if (!list_empty(head)) {
+ first = head->next;
+ list_move_tail(first, head);
+ }
+}
+
+static inline int list_is_singular(const struct list_head *head)
+{
+ return !list_empty(head) && (head->next == head->prev);
+}
+
+#define list_entry(ptr, type, member) \
+ container_of(ptr, type, member)
+
+#define list_first_entry(ptr, type, member) \
+ list_entry((ptr)->next, type, member)
+
+#define list_last_entry(ptr, type, member) \
+ list_entry((ptr)->prev, type, member)
+
+#define list_first_entry_or_null(ptr, type, member) \
+ (!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL)
+
+#define list_next_entry(pos, member) \
+ list_entry((pos)->member.next, typeof(*(pos)), member)
+
+#define list_prev_entry(pos, member) \
+ list_entry((pos)->member.prev, typeof(*(pos)), member)
+
+#define list_for_each(pos, head) \
+ for (pos = (head)->next; pos != (head); pos = pos->next)
+
+#define list_for_each_prev(pos, head) \
+ for (pos = (head)->prev; pos != (head); pos = pos->prev)
+
+#define list_for_each_safe(pos, n, head) \
+ for (pos = (head)->next, n = pos->next; pos != (head); \
+ pos = n, n = pos->next)
+
+#define list_for_each_prev_safe(pos, n, head) \
+ for (pos = (head)->prev, n = pos->prev; \
+ pos != (head); \
+ pos = n, n = pos->prev)
+
+#define list_for_each_entry(pos, head, member) \
+ for (pos = list_first_entry(head, typeof(*pos), member); \
+ &pos->member != (head); \
+ pos = list_next_entry(pos, member))
+
+#define list_for_each_entry_reverse(pos, head, member) \
+ for (pos = list_last_entry(head, typeof(*pos), member); \
+ &pos->member != (head); \
+ pos = list_prev_entry(pos, member))
+
+#define list_prepare_entry(pos, head, member) \
+ ((pos) ? : list_entry(head, typeof(*pos), member))
+
+#define list_for_each_entry_continue(pos, head, member) \
+ for (pos = list_next_entry(pos, member); \
+ &pos->member != (head); \
+ pos = list_next_entry(pos, member))
+
+#define list_for_each_entry_continue_reverse(pos, head, member) \
+ for (pos = list_prev_entry(pos, member); \
+ &pos->member != (head); \
+ pos = list_prev_entry(pos, member))
+
+#define list_for_each_entry_from(pos, head, member) \
+ for (; &pos->member != (head); \
+ pos = list_next_entry(pos, member))
+
+#define list_for_each_entry_safe(pos, n, head, member) \
+ for (pos = list_first_entry(head, typeof(*pos), member), \
+ n = list_next_entry(pos, member); \
+ &pos->member != (head); \
+ pos = n, n = list_next_entry(n, member))
+
+#define list_for_each_entry_safe_continue(pos, n, head, member) \
+ for (pos = list_next_entry(pos, member), \
+ n = list_next_entry(pos, member); \
+ &pos->member != (head); \
+ pos = n, n = list_next_entry(n, member))
+
+#define list_for_each_entry_safe_from(pos, n, head, member) \
+ for (n = list_next_entry(pos, member); \
+ &pos->member != (head); \
+ pos = n, n = list_next_entry(n, member))
+
+#define list_for_each_entry_safe_reverse(pos, n, head, member) \
+ for (pos = list_last_entry(head, typeof(*pos), member), \
+ n = list_prev_entry(pos, member); \
+ &pos->member != (head); \
+ pos = n, n = list_prev_entry(n, member))
+
+#endif /* _LIST_H */
diff --git a/scripts/stackvalidate/special.c b/scripts/stackvalidate/special.c
new file mode 100644
index 0000000..3882839
--- /dev/null
+++ b/scripts/stackvalidate/special.c
@@ -0,0 +1,199 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*
+ * This file reads all the special sections which have alternate instructions
+ * which can be patched in or redirected to at runtime.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include "special.h"
+
+#define EX_ENTRY_SIZE 8
+#define EX_ORIG_OFFSET 0
+#define EX_NEW_OFFSET 4
+
+#define JUMP_ENTRY_SIZE 24
+#define JUMP_ORIG_OFFSET 0
+#define JUMP_NEW_OFFSET 8
+
+#define ALT_ENTRY_SIZE 13
+#define ALT_ORIG_OFFSET 0
+#define ALT_NEW_OFFSET 4
+#define ALT_FEATURE_OFFSET 8
+#define ALT_ORIG_LEN_OFFSET 10
+#define ALT_NEW_LEN_OFFSET 11
+
+#define X86_FEATURE_POPCNT 4*32+23
+
+struct special_entry {
+ char *sec;
+ bool group, jump_or_nop;
+ unsigned char size, orig, new;
+ unsigned char orig_len, new_len; /* group only */
+ unsigned char feature; /* ALTERNATIVE macro CPU feature */
+};
+
+struct special_entry entries[] = {
+ {
+ .sec = ".altinstructions",
+ .group = true,
+ .size = ALT_ENTRY_SIZE,
+ .orig = ALT_ORIG_OFFSET,
+ .orig_len = ALT_ORIG_LEN_OFFSET,
+ .new = ALT_NEW_OFFSET,
+ .new_len = ALT_NEW_LEN_OFFSET,
+ .feature = ALT_FEATURE_OFFSET,
+ },
+ {
+ .sec = "__jump_table",
+ .jump_or_nop = true,
+ .size = JUMP_ENTRY_SIZE,
+ .orig = JUMP_ORIG_OFFSET,
+ .new = JUMP_NEW_OFFSET,
+ },
+ {
+ .sec = "__ex_table",
+ .size = EX_ENTRY_SIZE,
+ .orig = EX_ORIG_OFFSET,
+ .new = EX_NEW_OFFSET,
+ },
+ {},
+};
+
+static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ struct section *sec, int index,
+ struct special_alt *alt)
+{
+ struct rela *orig_rela, *new_rela;
+ unsigned long offset;
+
+ offset = index * entry->size;
+
+ alt->group = entry->group;
+ alt->jump_or_nop = entry->jump_or_nop;
+
+ if (alt->group) {
+ alt->orig_len = *(unsigned char *)(sec->data + offset +
+ entry->orig_len);
+ alt->new_len = *(unsigned char *)(sec->data + offset +
+ entry->new_len);
+ }
+
+ if (entry->feature) {
+ unsigned short feature;
+
+ feature = *(unsigned short *)(sec->data + offset +
+ entry->feature);
+
+ /*
+ * It has been requested that we don't validate the !POPCNT
+ * feature path which is a "very very small percentage of
+ * machines".
+ */
+ if (feature == X86_FEATURE_POPCNT)
+ alt->skip_orig = true;
+ }
+
+ orig_rela = find_rela_by_dest(sec, offset + entry->orig);
+ if (!orig_rela) {
+ WARN("%s: can't find orig rela",
+ offstr(sec, offset + entry->orig));
+ return -1;
+ }
+ if (orig_rela->sym->type != STT_SECTION) {
+ WARN("%s: don't know how to handle non-section rela symbol %s",
+ offstr(sec, offset + entry->orig),
+ orig_rela->sym->name);
+ return -1;
+ }
+
+ alt->orig_sec = orig_rela->sym->sec;
+ alt->orig_off = orig_rela->addend;
+
+ if (!entry->group || alt->new_len) {
+ new_rela = find_rela_by_dest(sec, offset + entry->new);
+ if (!new_rela) {
+ WARN("%s: can't find new rela",
+ offstr(sec, offset + entry->new));
+ return -1;
+ }
+ if (new_rela->sym->type != STT_SECTION) {
+ WARN("%s: don't know how to handle non-section rela symbol %s",
+ offstr(sec, offset + entry->new),
+ new_rela->sym->name);
+ return -1;
+ }
+
+ alt->new_sec = new_rela->sym->sec;
+ alt->new_off = (unsigned int)new_rela->addend;
+
+ /* _ASM_EXTABLE_EX hack */
+ if (alt->new_off >= 0x7ffffff0)
+ alt->new_off -= 0x7ffffff0;
+ }
+
+ return 0;
+}
+
+/*
+ * Read all the special sections and create a list of special_alt structs which
+ * describe all the alternate instructions which can be patched in or
+ * redirected to at runtime.
+ */
+int special_get_alts(struct elf *elf, struct list_head *alts)
+{
+ struct special_entry *entry;
+ struct section *sec;
+ unsigned int nr_entries;
+ struct special_alt *alt;
+ int index, ret;
+
+ INIT_LIST_HEAD(alts);
+
+ for (entry = entries; entry->sec; entry++) {
+ sec = find_section_by_name(elf, entry->sec);
+ if (!sec)
+ continue;
+
+ if (sec->len % entry->size != 0) {
+ WARN("%s size not a multiple of %d",
+ sec->name, entry->size);
+ return -1;
+ }
+
+ nr_entries = sec->len / entry->size;
+
+ for (index = 0; index < nr_entries; index++) {
+ alt = malloc(sizeof(*alt));
+ if (!alt) {
+ WARN("malloc failed");
+ return -1;
+ }
+ memset(alt, 0, sizeof(*alt));
+
+ ret = get_alt_entry(elf, entry, sec, index, alt);
+ if (ret)
+ return ret;
+
+ list_add_tail(&alt->list, alts);
+ }
+ }
+
+ return 0;
+}
diff --git a/scripts/stackvalidate/special.h b/scripts/stackvalidate/special.h
new file mode 100644
index 0000000..fad1d09
--- /dev/null
+++ b/scripts/stackvalidate/special.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _SPECIAL_H
+#define _SPECIAL_H
+
+#include <stdbool.h>
+#include "elf.h"
+
+struct special_alt {
+ struct list_head list;
+
+ bool group;
+ bool skip_orig;
+ bool jump_or_nop;
+
+ struct section *orig_sec;
+ unsigned long orig_off;
+
+ struct section *new_sec;
+ unsigned long new_off;
+
+ unsigned int orig_len, new_len; /* group only */
+};
+
+int special_get_alts(struct elf *elf, struct list_head *alts);
+
+#endif /* _SPECIAL_H */
diff --git a/scripts/stackvalidate/stackvalidate.c b/scripts/stackvalidate/stackvalidate.c
new file mode 100644
index 0000000..de0d977
--- /dev/null
+++ b/scripts/stackvalidate/stackvalidate.c
@@ -0,0 +1,976 @@
+/*
+ * Copyright (C) 2015 Josh Poimboeuf <jpoimboe@...hat.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*
+ * stackvalidate:
+ *
+ * This tool analyzes every .o file and ensures the validity of its stack trace
+ * metadata. It enforces a set of rules on asm code and C inline assembly code
+ * so that stack traces can be reliable.
+ *
+ * For more information, see Documentation/stack-validation.txt.
+ */
+
+#include <argp.h>
+#include <stdbool.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "elf.h"
+#include "arch.h"
+#include "special.h"
+
+#define STATE_FP_SAVED 0x1
+#define STATE_FP_SETUP 0x2
+
+int warnings;
+
+struct instruction {
+ struct list_head list;
+ struct section *sec;
+ unsigned long offset;
+ unsigned int len, state;
+ unsigned char type;
+ unsigned long immediate;
+ bool alt_group, visited, ignore;
+ struct symbol *call_dest;
+ struct instruction *jump_dest;
+ struct list_head alts;
+};
+
+struct list_head insns;
+
+struct alternative {
+ struct list_head list;
+ struct instruction *insn;
+};
+
+struct args {
+ char *args[1];
+ bool nofp;
+};
+struct args args;
+static const char args_doc[] = "file.o";
+static struct argp_option options[] = {
+ {"no-frame-pointer", 1, 0, 0, "Don't check frame pointers" },
+ {0},
+};
+static error_t parse_opt(int key, char *arg, struct argp_state *state)
+{
+ /* Get the input argument from argp_parse, which we
+ know is a pointer to our args structure. */
+ struct args *args = state->input;
+
+ switch (key) {
+ case 1: /* --no-frame-pointer */
+ args->nofp = true;
+ break;
+
+ case ARGP_KEY_ARG:
+ if (state->arg_num >= 1)
+ /* Too many arguments. */
+ argp_usage(state);
+ args->args[state->arg_num] = arg;
+ break;
+
+ case ARGP_KEY_END:
+ if (state->arg_num < 1)
+ /* Not enough arguments. */
+ argp_usage(state);
+ break;
+
+ default:
+ return ARGP_ERR_UNKNOWN;
+ }
+ return 0;
+}
+static struct argp argp = { options, parse_opt, args_doc, 0 };
+
+/*
+ * Check for the STACKVALIDATE_IGNORE_INSN macro.
+ */
+static bool ignore_insn(struct elf *elf, struct instruction *insn)
+{
+ struct section *macro_sec;
+ struct rela *rela;
+
+ macro_sec = find_section_by_name(elf, "__stackvalidate_ignore_insn");
+ if (!macro_sec || !macro_sec->rela)
+ return false;
+
+ list_for_each_entry(rela, &macro_sec->rela->relas, list)
+ if (rela->sym->type == STT_SECTION &&
+ rela->sym == insn->sec->sym &&
+ rela->addend == insn->offset)
+ return true;
+
+ return false;
+}
+
+/*
+ * Check for the STACKVALIDATE_IGNORE_FUNC macro.
+ */
+static bool ignore_func(struct elf *elf, struct symbol *func)
+{
+ struct section *macro_sec;
+ struct rela *rela;
+
+ macro_sec = find_section_by_name(elf, "__stackvalidate_ignore_func");
+ if (!macro_sec || !macro_sec->rela)
+ return false;
+
+ list_for_each_entry(rela, &macro_sec->rela->relas, list)
+ if (rela->sym->sec == func->sec &&
+ rela->addend == func->offset)
+ return true;
+
+
+ return false;
+}
+
+static struct instruction *find_instruction(struct section *sec,
+ unsigned long offset)
+{
+ struct instruction *insn;
+
+ list_for_each_entry(insn, &insns, list)
+ if (insn->sec == sec && insn->offset == offset)
+ return insn;
+
+ return NULL;
+}
+
+/*
+ * This checks to see if the given function is a "noreturn" function.
+ *
+ * For global functions which are outside the scope of this object file, we
+ * have to keep a manual list of them.
+ *
+ * For local functions, we have to detect them manually by simply looking for
+ * the lack of a return instruction.
+ */
+static bool dead_end_function(struct symbol *func)
+{
+ struct instruction *insn;
+
+ if (func->bind == STB_GLOBAL &&
+ (!strcmp(func->name, "__stack_chk_fail") ||
+ !strcmp(func->name, "panic") ||
+ !strcmp(func->name, "do_exit")))
+ return true;
+
+ if (!func->sec)
+ return false;
+
+ insn = find_instruction(func->sec, func->offset);
+ if (!insn)
+ return false;
+
+ list_for_each_entry_from(insn, &insns, list) {
+ if (insn->sec != func->sec ||
+ insn->offset >= func->offset + func->len)
+ break;
+
+ if (insn->type == INSN_RETURN)
+ return false;
+
+ if (insn->type == INSN_JUMP_UNCONDITIONAL) {
+ struct instruction *dest = insn->jump_dest;
+ struct symbol *dest_func;
+
+ if (!dest)
+ /* sibling call to another file */
+ return false;
+
+ if (dest->sec != func->sec ||
+ dest->offset < func->offset ||
+ dest->offset >= func->offset + func->len) {
+ /* local sibling call */
+ dest_func = find_symbol_by_offset(dest->sec,
+ dest->offset);
+ if (!dest_func)
+ continue;
+
+ return dead_end_function(dest_func);
+ }
+ }
+
+ if (insn->type == INSN_JUMP_DYNAMIC)
+ /* sibling call */
+ return false;
+ }
+
+ return true;
+}
+
+/*
+ * Call the arch-specific instruction decoder for all the instructions and add
+ * them to the global insns list.
+ */
+static int decode_instructions(struct elf *elf)
+{
+ struct section *sec;
+ unsigned long offset;
+ struct instruction *insn;
+ int ret;
+
+ INIT_LIST_HEAD(&insns);
+
+ list_for_each_entry(sec, &elf->sections, list) {
+
+ if (!(sec->sh.sh_flags & SHF_EXECINSTR))
+ continue;
+
+ for (offset = 0; offset < sec->len; offset += insn->len) {
+ insn = malloc(sizeof(*insn));
+ memset(insn, 0, sizeof(*insn));
+
+ INIT_LIST_HEAD(&insn->alts);
+ insn->sec = sec;
+ insn->offset = offset;
+
+ ret = arch_decode_instruction(elf, sec, offset,
+ sec->len - offset,
+ &insn->len, &insn->type,
+ &insn->immediate);
+ if (ret)
+ return ret;
+
+ if (!insn->type || insn->type > INSN_LAST) {
+ WARN_FUNC("invalid instruction type %d",
+ insn->sec, insn->offset, insn->type);
+ return -1;
+ }
+
+ list_add_tail(&insn->list, &insns);
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Warnings shouldn't be reported for ignored instructions. Set insn->ignore
+ * for each ignored instruction and each instruction contained in an ignored
+ * function.
+ */
+static void get_ignores(struct elf *elf)
+{
+ struct instruction *insn;
+ struct section *sec;
+ struct symbol *func;
+
+ list_for_each_entry(insn, &insns, list)
+ insn->ignore = ignore_insn(elf, insn);
+
+ list_for_each_entry(sec, &elf->sections, list) {
+ list_for_each_entry(func, &sec->symbols, list) {
+ if (func->type != STT_FUNC)
+ continue;
+
+ if (!ignore_func(elf, func))
+ continue;
+
+ insn = find_instruction(sec, func->offset);
+ if (!insn)
+ continue;
+
+ list_for_each_entry_from(insn, &insns, list) {
+ if (insn->sec != func->sec ||
+ insn->offset >= func->offset + func->len)
+ break;
+
+ insn->ignore = true;
+ insn->visited = true;
+ }
+ }
+ }
+}
+
+/*
+ * Find the destination instructions for all jumps.
+ */
+static int get_jump_destinations(struct elf *elf)
+{
+ struct instruction *insn;
+ struct rela *rela;
+ struct section *dest_sec;
+ unsigned long dest_off;
+
+ list_for_each_entry(insn, &insns, list) {
+ if (insn->type != INSN_JUMP_CONDITIONAL &&
+ insn->type != INSN_JUMP_UNCONDITIONAL)
+ continue;
+
+ rela = find_rela_by_dest_range(insn->sec, insn->offset,
+ insn->len);
+ if (!rela) {
+ dest_sec = insn->sec;
+ dest_off = insn->offset + insn->len + insn->immediate;
+ } else if (rela->sym->type == STT_SECTION) {
+ dest_sec = rela->sym->sec;
+ dest_off = rela->addend + 4;
+ } else if (rela->sym->sec->index) {
+ dest_sec = rela->sym->sec;
+ dest_off = rela->sym->sym.st_value + rela->addend + 4;
+ } else {
+ /*
+ * This error (jump to another file) will be handled
+ * later in validate_functions().
+ */
+ continue;
+ }
+
+ insn->jump_dest = find_instruction(dest_sec, dest_off);
+ if (!insn->jump_dest && !insn->ignore) {
+
+ /*
+ * This is a special case where an alt instruction
+ * jumps past the end of the section. These are
+ * handled later.
+ */
+ if (!strcmp(insn->sec->name, ".altinstr_replacement"))
+ continue;
+
+ WARN_FUNC("can't find jump dest instruction at %s+0x%lx",
+ insn->sec, insn->offset, dest_sec->name,
+ dest_off);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Find the destination instructions for all calls.
+ */
+static int get_call_destinations(struct elf *elf)
+{
+ struct instruction *insn;
+ unsigned long dest_off;
+ struct rela *rela;
+
+ list_for_each_entry(insn, &insns, list) {
+ if (insn->type != INSN_CALL)
+ continue;
+
+ rela = find_rela_by_dest_range(insn->sec, insn->offset,
+ insn->len);
+ if (!rela) {
+ dest_off = insn->offset + insn->len + insn->immediate;
+ insn->call_dest = find_symbol_by_offset(insn->sec,
+ dest_off);
+ if (!insn->call_dest) {
+ WARN_FUNC("can't find call dest symbol at offset 0x%lx",
+ insn->sec, insn->offset, dest_off);
+ return -1;
+ }
+ } else if (rela->sym->type == STT_SECTION) {
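+			/*
+			 * As in get_jump_destinations(), the PC-relative
+			 * addend needs +4 to point at the call destination
+			 * within the section.
+			 */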
+ insn->call_dest = find_symbol_by_offset(rela->sym->sec,
+ rela->addend+4);
+ if (!insn->call_dest ||
+ insn->call_dest->type != STT_FUNC) {
+ WARN_FUNC("can't find call dest symbol at %s+0x%x",
+ insn->sec, insn->offset,
+ rela->sym->sec->name,
+ rela->addend + 4);
+ return -1;
+ }
+ } else
+ insn->call_dest = rela->sym;
+ }
+
+ return 0;
+}
+
+/*
+ * The .altinstructions section requires some extra special care, over and
+ * above what other special sections require:
+ *
+ * 1. Because alternatives are patched in-place, we need to insert a fake jump
+ * instruction at the end so that validate_branch() skips all the original
+ * replaced instructions when validating the new instruction path.
+ *
+ * 2. An added wrinkle is that the new instruction length might be zero. In
+ * that case the old instructions are replaced with noops. We simulate that
+ * by creating a fake jump as the only new instruction.
+ *
+ * 3. In some cases, the alternative section includes an instruction which
+ * conditionally jumps to the _end_ of the entry. We have to modify these
+ * jumps' destinations to point back to .text rather than the end of the
+ * entry in .altinstr_replacement.
+ *
+ * 4. It has been requested that we don't validate the !POPCNT feature path
+ * which is a "very very small percentage of machines".
+ */
+static int handle_group_alt(struct elf *elf, struct special_alt *special_alt,
+ struct instruction *orig_insn,
+ struct instruction **new_insn)
+{
+ struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump;
+ unsigned long dest_off;
+
+ last_orig_insn = NULL;
+ insn = orig_insn;
+ list_for_each_entry_from(insn, &insns, list) {
+ if (insn->sec != special_alt->orig_sec ||
+ insn->offset >= special_alt->orig_off + special_alt->orig_len)
+ break;
+
+ if (special_alt->skip_orig)
+ insn->ignore = true;
+
+ insn->alt_group = true;
+ last_orig_insn = insn;
+ }
+
+ if (list_is_last(&last_orig_insn->list, &insns) ||
+ list_next_entry(last_orig_insn, list)->sec != special_alt->orig_sec) {
+ WARN("%s: don't know how to handle alternatives at end of section",
+ special_alt->orig_sec->name);
+ return -1;
+ }
+
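+	/*
+	 * Create the fake jump described in point 1 above: an unconditional
+	 * jump from the end of the replacement back to the instruction
+	 * following the original group.
+	 */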
+ fake_jump = malloc(sizeof(*fake_jump));
+ if (!fake_jump) {
+ WARN("malloc failed");
+ return -1;
+ }
+ memset(fake_jump, 0, sizeof(*fake_jump));
+ INIT_LIST_HEAD(&fake_jump->alts);
+ fake_jump->sec = special_alt->new_sec;
+ fake_jump->offset = -1;
+ fake_jump->type = INSN_JUMP_UNCONDITIONAL;
+ fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
+
+ if (!special_alt->new_len) {
+ *new_insn = fake_jump;
+ return 0;
+ }
+
+ last_new_insn = NULL;
+ insn = *new_insn;
+ list_for_each_entry_from(insn, &insns, list) {
+ if (insn->sec != special_alt->new_sec ||
+ insn->offset >= special_alt->new_off + special_alt->new_len)
+ break;
+
+ last_new_insn = insn;
+
+ if (insn->type != INSN_JUMP_CONDITIONAL &&
+ insn->type != INSN_JUMP_UNCONDITIONAL)
+ continue;
+
+ if (!insn->immediate)
+ continue;
+
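+		/*
+		 * Per point 3 above, a jump to the end of the replacement is
+		 * redirected to the fake jump so it lands back in the original
+		 * section.
+		 */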
+ dest_off = insn->offset + insn->len + insn->immediate;
+ if (dest_off == special_alt->new_off + special_alt->new_len)
+ insn->jump_dest = fake_jump;
+
+ if (!insn->jump_dest) {
+ WARN_FUNC("can't find alternative jump destination",
+ insn->sec, insn->offset);
+ return -1;
+ }
+ }
+
+ if (!last_new_insn) {
+ WARN_FUNC("can't find last new alternative instruction",
+ special_alt->new_sec, special_alt->new_off);
+ return -1;
+ }
+
+ list_add(&fake_jump->list, &last_new_insn->list);
+
+ return 0;
+}
+
+/*
+ * A jump label (__jump_table) entry can either convert a nop to a jump or a
+ * jump to a nop.  If the original instruction is a jump, make the alt entry
+ * an effective nop by just skipping the original instruction.
+ */
+static int handle_jump_alt(struct elf *elf, struct special_alt *special_alt,
+ struct instruction *orig_insn,
+ struct instruction **new_insn)
+{
+ if (orig_insn->type == INSN_NOP)
+ return 0;
+
+ if (orig_insn->type != INSN_JUMP_UNCONDITIONAL) {
+ WARN_FUNC("unsupported instruction at jump label",
+ orig_insn->sec, orig_insn->offset);
+ return -1;
+ }
+
+ *new_insn = list_next_entry(orig_insn, list);
+ return 0;
+}
+
+/*
+ * Read all the special sections which have alternate instructions that can be
+ * patched in or redirected to at runtime.  Each instruction with alternates
+ * has them added to its insn->alts list, which is later traversed by
+ * validate_branch().
+ */
+static int get_special_section_alts(struct elf *elf)
+{
+ struct list_head special_alts;
+ struct instruction *orig_insn, *new_insn;
+ struct special_alt *special_alt, *tmp;
+ struct alternative *alt;
+ int ret;
+
+ ret = special_get_alts(elf, &special_alts);
+ if (ret)
+ return ret;
+
+ list_for_each_entry_safe(special_alt, tmp, &special_alts, list) {
+ alt = malloc(sizeof(*alt));
+ if (!alt) {
+ WARN("malloc failed");
+ ret = -1;
+ goto out;
+ }
+
+ orig_insn = find_instruction(special_alt->orig_sec,
+ special_alt->orig_off);
+ if (!orig_insn) {
+ WARN_FUNC("special: can't find orig instruction",
+ special_alt->orig_sec, special_alt->orig_off);
+ ret = -1;
+ goto out;
+ }
+
+ new_insn = NULL;
+ if (!special_alt->group || special_alt->new_len) {
+ new_insn = find_instruction(special_alt->new_sec,
+ special_alt->new_off);
+ if (!new_insn) {
+ WARN_FUNC("special: can't find new instruction",
+ special_alt->new_sec,
+ special_alt->new_off);
+ ret = -1;
+ goto out;
+ }
+ }
+
+ if (special_alt->group) {
+ ret = handle_group_alt(elf, special_alt, orig_insn,
+ &new_insn);
+ if (ret)
+ goto out;
+ } else if (special_alt->jump_or_nop) {
+ ret = handle_jump_alt(elf, special_alt, orig_insn,
+ &new_insn);
+ if (ret)
+ goto out;
+ }
+
+ alt->insn = new_insn;
+ list_add_tail(&alt->list, &orig_insn->alts);
+
+ list_del(&special_alt->list);
+ free(special_alt);
+ }
+
+out:
+ return ret;
+}
+
+/*
+ * For some switch statements, gcc generates a jump table in the .rodata
+ * section which contains a list of addresses within the function to jump to.
+ * This finds these jump tables and adds their destinations to the insn->alts
+ * list of the corresponding indirect jump.
+ */
+static int get_switch_alts(struct elf *elf)
+{
+ struct instruction *insn, *alt_insn;
+ struct rela *rodata_rela, *rela;
+ struct section *rodata;
+ struct symbol *func;
+ struct alternative *alt;
+
+ list_for_each_entry(insn, &insns, list) {
+ if (insn->type != INSN_JUMP_DYNAMIC)
+ continue;
+
+ rodata_rela = find_rela_by_dest_range(insn->sec, insn->offset,
+ insn->len);
+ if (!rodata_rela || strcmp(rodata_rela->sym->name, ".rodata"))
+ continue;
+
+ rodata = find_section_by_name(elf, ".rodata");
+ if (!rodata || !rodata->rela)
+ continue;
+
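+		/*
+		 * The jump's reloc points into .rodata at the jump table; each
+		 * rela within the table points back to a case target in the
+		 * function.
+		 */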
+ rela = find_rela_by_dest(rodata, rodata_rela->addend);
+ if (!rela)
+ continue;
+
+ func = find_containing_func(insn->sec, insn->offset);
+ if (!func) {
+ WARN_FUNC("can't find containing func",
+ insn->sec, insn->offset);
+ return -1;
+ }
+
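+		/*
+		 * Add each table entry which points into the containing
+		 * function as an alternative branch destination.
+		 */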
+ list_for_each_entry_from(rela, &rodata->rela->relas, list) {
+ if (rela->sym->sec != insn->sec ||
+ rela->addend <= func->offset ||
+ rela->addend >= func->offset + func->len)
+ break;
+
+ alt_insn = find_instruction(insn->sec, rela->addend);
+ if (!alt_insn) {
+ WARN("%s: can't find instruction at %s+0x%x",
+ rodata->rela->name, insn->sec->name,
+ rela->addend);
+ return -1;
+ }
+
+ alt = malloc(sizeof(*alt));
+ if (!alt) {
+ WARN("malloc failed");
+ return -1;
+ }
+
+ alt->insn = alt_insn;
+ list_add_tail(&alt->list, &insn->alts);
+ }
+ }
+
+ return 0;
+}
+
+static int decode_sections(struct elf *elf)
+{
+ int ret;
+
+ ret = decode_instructions(elf);
+ if (ret)
+ return ret;
+
+ get_ignores(elf);
+
+ ret = get_jump_destinations(elf);
+ if (ret)
+ return ret;
+
+ ret = get_call_destinations(elf);
+ if (ret)
+ return ret;
+
+ ret = get_special_section_alts(elf);
+ if (ret)
+ return ret;
+
+ ret = get_switch_alts(elf);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+/*
+ * Follow the branch starting at the given instruction, and recursively follow
+ * any other branches (jumps). Meanwhile, track the frame pointer state at
+ * each instruction and validate all the rules described in
+ * Documentation/stack-validation.txt.
+ */
+static int validate_branch(struct elf *elf, struct instruction *first,
+ unsigned char first_state)
+{
+ struct alternative *alt;
+ struct instruction *insn;
+ struct section *sec;
+ unsigned char state;
+ int ret, warnings = 0;
+
+ insn = first;
+ sec = insn->sec;
+ state = first_state;
+
+ if (insn->alt_group && list_empty(&insn->alts)) {
+ WARN_FUNC("don't know how to handle branch to middle of alternative instruction group",
+ sec, insn->offset);
+ warnings++;
+ }
+
+ while (1) {
+ if (insn->visited) {
+ if (insn->state != state) {
+ WARN_FUNC("frame pointer state mismatch",
+ sec, insn->offset);
+ warnings++;
+ }
+
+ return warnings;
+ }
+
+ insn->visited = true;
+ insn->state = state;
+
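+		/* First validate any alternative code paths for this instruction. */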
+ list_for_each_entry(alt, &insn->alts, list) {
+ ret = validate_branch(elf, alt->insn, state);
+ warnings += ret;
+ }
+
+ switch (insn->type) {
+
+ case INSN_FP_SAVE:
+ if (!args.nofp) {
+ if (insn->state & STATE_FP_SAVED) {
+ WARN_FUNC("duplicate frame pointer save",
+ sec, insn->offset);
+ warnings++;
+ }
+ state |= STATE_FP_SAVED;
+ }
+ break;
+
+ case INSN_FP_SETUP:
+ if (!args.nofp) {
+ if (insn->state & STATE_FP_SETUP) {
+ WARN_FUNC("duplicate frame pointer setup",
+ sec, insn->offset);
+ warnings++;
+ }
+ state |= STATE_FP_SETUP;
+ }
+ break;
+
+ case INSN_FP_RESTORE:
+ if (!args.nofp) {
+ if (!insn->state) {
+ WARN_FUNC("frame pointer restore without save/setup",
+ sec, insn->offset);
+ warnings++;
+ }
+ state &= ~STATE_FP_SAVED;
+ state &= ~STATE_FP_SETUP;
+ }
+ break;
+
+ case INSN_RETURN:
+ if (!args.nofp && insn->state) {
+ WARN_FUNC("return without frame pointer restore",
+ sec, insn->offset);
+ warnings++;
+ }
+ return warnings;
+
+ case INSN_CALL:
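+			/*
+			 * __fentry__ calls are inserted by gcc at the very
+			 * start of the function, before the frame pointer is
+			 * set up, so don't warn about them.
+			 */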
+ if (insn->call_dest->type == STT_NOTYPE &&
+ !strcmp(insn->call_dest->name, "__fentry__"))
+ break;
+
+ if (dead_end_function(insn->call_dest))
+ return warnings;
+
+ if (!args.nofp && !insn->state && !insn->ignore) {
+ WARN_FUNC("call without frame pointer save/setup",
+ sec, insn->offset);
+ warnings++;
+ }
+ break;
+
+ case INSN_CALL_DYNAMIC:
+ if (!args.nofp && !insn->state && !insn->ignore) {
+ WARN_FUNC("call without frame pointer save/setup",
+ sec, insn->offset);
+ warnings++;
+ }
+ break;
+
+ case INSN_JUMP_CONDITIONAL:
+ case INSN_JUMP_UNCONDITIONAL:
+ if (insn->jump_dest) {
+ ret = validate_branch(elf, insn->jump_dest,
+ state);
+ warnings += ret;
+ } else if (insn->state) {
+ WARN_FUNC("sibling call from callable instruction with changed frame pointer",
+ sec, insn->offset);
+ warnings++;
+ } /* else it's a sibling call */
+
+ if (insn->type == INSN_JUMP_UNCONDITIONAL)
+ return warnings;
+
+ break;
+
+ case INSN_JUMP_DYNAMIC:
+ if (list_empty(&insn->alts) && insn->state &&
+ !insn->ignore) {
+ WARN_FUNC("sibling call from callable instruction with changed frame pointer",
+ sec, insn->offset);
+ warnings++;
+ }
+
+ return warnings;
+
+ case INSN_CONTEXT_SWITCH:
+ if (!insn->ignore) {
+ WARN_FUNC("kernel entry/exit from callable instruction",
+ sec, insn->offset);
+ warnings++;
+ }
+
+ return warnings;
+
+ case INSN_BUG:
+ return warnings;
+
+ }
+
+ insn = list_next_entry(insn, list);
+
+ if (&insn->list == &insns || insn->sec != sec) {
+ WARN("%s: unexpected end of section", sec->name);
+ warnings++;
+ return warnings;
+ }
+ }
+
+ return warnings;
+}
+
+static int validate_functions(struct elf *elf)
+{
+ struct section *sec;
+ struct symbol *func;
+ struct instruction *insn;
+ int ret, warnings = 0;
+
+ list_for_each_entry(sec, &elf->sections, list) {
+ list_for_each_entry(func, &sec->symbols, list) {
+ if (func->type != STT_FUNC)
+ continue;
+
+ insn = find_instruction(sec, func->offset);
+ if (!insn) {
+ WARN("%s(): can't find starting instruction",
+ func->name);
+ warnings++;
+ continue;
+ }
+
+ ret = validate_branch(elf, insn, 0);
+ warnings += ret;
+ }
+ }
+
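+	/* Second pass: flag any instructions which were never visited (unreachable code). */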
+ list_for_each_entry(sec, &elf->sections, list) {
+ list_for_each_entry(func, &sec->symbols, list) {
+ if (func->type != STT_FUNC)
+ continue;
+
+ insn = find_instruction(sec, func->offset);
+ if (!insn)
+ continue;
+
+ list_for_each_entry_from(insn, &insns, list) {
+ if (insn->sec != func->sec ||
+ insn->offset >= func->offset + func->len)
+ break;
+
+ if (!insn->visited && insn->type != INSN_NOP) {
+ WARN_FUNC("function has unreachable instruction",
+ insn->sec, insn->offset);
+ warnings++;
+ }
+
+ insn->visited = true;
+ }
+ }
+ }
+
+ return warnings;
+}
+
+static int validate_uncallable_instructions(struct elf *elf)
+{
+ struct instruction *insn;
+ int warnings = 0;
+
+ list_for_each_entry(insn, &insns, list) {
+ if (!insn->visited && insn->type == INSN_RETURN &&
+ !insn->ignore) {
+ WARN_FUNC("return instruction outside of a callable function",
+ insn->sec, insn->offset);
+ warnings++;
+ }
+ }
+
+ return warnings;
+}
+
+static void cleanup(struct elf *elf)
+{
+ struct instruction *insn, *tmpinsn;
+ struct alternative *alt, *tmpalt;
+
+ list_for_each_entry_safe(insn, tmpinsn, &insns, list) {
+ list_for_each_entry_safe(alt, tmpalt, &insn->alts, list) {
+ list_del(&alt->list);
+ free(alt);
+ }
+ list_del(&insn->list);
+ free(insn);
+ }
+ elf_close(elf);
+}
+
+int main(int argc, char *argv[])
+{
+ struct elf *elf;
+ int ret = 0, warnings = 0;
+
+ argp_parse(&argp, argc, argv, 0, 0, &args);
+
+ elf = elf_open(args.args[0]);
+ if (!elf) {
+ fprintf(stderr, "error reading elf file %s\n", args.args[0]);
+ return 1;
+ }
+
+ ret = decode_sections(elf);
+ if (ret < 0)
+ goto out;
+ warnings += ret;
+
+ ret = validate_functions(elf);
+ if (ret < 0)
+ goto out;
+ warnings += ret;
+
+ ret = validate_uncallable_instructions(elf);
+ if (ret < 0)
+ goto out;
+ warnings += ret;
+
+out:
+ cleanup(elf);
+
+	/*
+	 * For now, exit successfully even if warnings were reported.  Once all
+	 * existing warnings have been cleaned up, this will return an error
+	 * instead.
+	 */
+ if (ret || warnings)
+ return 0;
+ return 0;
+}
--
2.4.3