Message-ID: <20211110100102.250793167@infradead.org>
Date: Wed, 10 Nov 2021 11:01:02 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: x86@...nel.org
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
jpoimboe@...hat.com, mark.rutland@....com, dvyukov@...gle.com,
seanjc@...gle.com, pbonzini@...hat.com, mbenes@...e.cz
Subject: [PATCH v2 00/23] x86: Remove anonymous out-of-line fixups
Hi,
Direct counterpart to the arm64 series from Mark:
https://lkml.kernel.org/r/20211019160219.5202-1-mark.rutland@arm.com
Since he already put it rather well:
"We recently realised that out-of-line extable fixups cause a number of problems
for backtracing (mattering both for developers and for RELIABLE_STACKTRACE and
LIVEPATCH). Dmitry spotted a confusing backtrace, which we identified was due
to problems with unwinding fixups, as summarized in:
https://lore.kernel.org/linux-arm-kernel/20210927171812.GB9201@C02TD0UTHF1T.local/
The gist is that while backtracing through a fixup, the fixup gets symbolized
as an offset from the nearest prior symbol (which happens to be
`__entry_tramp_text_end`), and the backtrace misses the function that was
being fixed up (because the fixup handling adjusts the PC, then the fixup does
a direct branch back to the original function). We can't reliably map from an
arbitrary PC in the fixup text back to the original function.
The way we create fixups is a bit unfortunate: most fixups are generated from
common templates, and only differ in register to be poked and the address to
branch back to, leading to redundant copies of the same logic that must
pollute hot caches.
Since the fixups are all written in assembly, and duplicated for each fixup
site, we can only perform very simple fixups, and can't handle any complex
triage that we might need for some exceptions (e.g. MTE faults)."
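Concretely, the before/after pattern on x86 looks roughly like the below. This
is a sketch for illustration only, not code lifted from the patches; the
_ASM_EXTABLE_TYPE_REG() / EX_TYPE_EFAULT_REG names are the ones this series
adds, and the helper functions around them are made up:

#include <linux/errno.h>
#include <asm/asm.h>			/* _ASM_EXTABLE*() */
#include <asm/extable_fixup_types.h>	/* EX_TYPE_* */

/* Old style: the fixup lives out-of-line in .fixup and branches back by hand. */
static int load_may_fault_old(unsigned int *dst, const unsigned int *addr)
{
	unsigned int val = 0;
	int err = 0;

	asm volatile("1:	movl %2, %0\n"
		     "2:\n"
		     ".section .fixup,\"ax\"\n"
		     "3:	movl %3, %1\n"		/* poke the error register */
		     "	jmp 2b\n"			/* direct branch back */
		     ".previous\n"
		     _ASM_EXTABLE(1b, 3b)
		     : "=r" (val), "+r" (err)
		     : "m" (*addr), "i" (-EFAULT));

	*dst = val;
	return err;
}

/*
 * New style: no out-of-line code at all.  The extable entry encodes the fixup
 * type and the register to poke; the C exception handler writes -EFAULT into
 * that register and resumes at 2:, so the unwinder never sees a PC outside
 * this function.
 */
static int load_may_fault_new(unsigned int *dst, const unsigned int *addr)
{
	unsigned int val = 0;
	int err = 0;

	asm volatile("1:	movl %2, %0\n"
		     "2:\n"
		     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %1)
		     : "=r" (val), "+r" (err)
		     : "m" (*addr));

	*dst = val;
	return err;
}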
Also available here:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git x86/wip.extable
Changes since v1:
- Dropped using __cold on labels, because of clang. Also, gcc doesn't actually
generate different code with it; the intent was for the code to end up in
.text.cold, but that doesn't happen.
- Fixed the vmread constraints to disallow %0 == %1.
- Added an asm-goto-output variant to vmx's vmread implementation (sketched
after this list).
- Audited Xen and FPU code and converted them from -1 to -EFAULT; as a
consequence, EX_TYPE_NEG_REG no longer exists.
- Fixed the EX_DATA_*_MASK macros to include an explicit 'int' cast,
such that FIELD_GET() will sign-extend the top field; see the FIELD_GET()
example after this list.
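The asm-goto-output vmread looks roughly like the below; again a sketch,
modelled on vmx_ops.h from memory, with the error paths stubbed out and
CONFIG_CC_HAS_ASM_GOTO_OUTPUT assumed:

static __always_inline unsigned long vmcs_readl_sketch(unsigned long field)
{
	unsigned long value;

	asm_volatile_goto("1: vmread %[field], %[output]\n\t"
			  "jna %l[do_fail]\n\t"		/* VMfail: CF or ZF set */
			  _ASM_EXTABLE(1b, %l[do_exception])
			  : [output] "=r" (value)
			  : [field] "r" (field)
			  : "cc"
			  : do_fail, do_exception);

	return value;

do_fail:
	/* VMREAD failed; the real code reports a vmread error here. */
	return 0;

do_exception:
	/* VMREAD faulted; the real code treats this as a spurious fault. */
	return 0;
}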
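And for the FIELD_GET() item, a minimal userspace model of why the explicit
'int' cast matters: FIELD_GET() keeps the (reg & mask) intermediate in the
mask's type, so with a signed mask the right shift is arithmetic and the top
field comes out sign-extended. The field layout below is illustrative, and the
shift is hard-coded where bitfield.h derives it from the mask:

#include <stdio.h>

#define EX_DATA_IMM_MASK_UNSIGNED	0xFFFF0000u		/* logical shift */
#define EX_DATA_IMM_MASK_SIGNED		((int)0xFFFF0000)	/* arithmetic shift */

/* The extraction step of FIELD_GET(): mask, shift, cast back to typeof(mask). */
#define field_get(mask, reg)	((__typeof__(mask))(((reg) & (mask)) >> 16))

int main(void)
{
	int data = (int)0xFFF20000;	/* imm field = 0xFFF2 == -14 == -EFAULT */

	/* unsigned mask: logical shift, the immediate comes out as 65522 */
	printf("unsigned: %u\n", field_get(EX_DATA_IMM_MASK_UNSIGNED, (unsigned int)data));

	/* signed mask: (int & int) stays int, gcc/clang shift arithmetically,
	 * so the immediate comes out as -14 */
	printf("signed:   %d\n", field_get(EX_DATA_IMM_MASK_SIGNED, data));

	return 0;
}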
---
arch/x86/entry/entry_32.S | 28 ++-----
arch/x86/entry/entry_64.S | 13 ++-
arch/x86/entry/vdso/vdso-layout.lds.S | 1 -
arch/x86/include/asm/asm.h | 33 ++++++++
arch/x86/include/asm/extable.h | 6 +-
arch/x86/include/asm/extable_fixup_types.h | 50 ++++++++++--
arch/x86/include/asm/futex.h | 28 ++-----
arch/x86/include/asm/insn-eval.h | 2 +
arch/x86/include/asm/msr.h | 26 ++----
arch/x86/include/asm/segment.h | 9 +--
arch/x86/include/asm/sgx.h | 18 +++++
arch/x86/include/asm/uaccess.h | 39 ++++-----
arch/x86/include/asm/word-at-a-time.h | 66 ++++++++++-----
arch/x86/include/asm/xen/page.h | 14 +---
arch/x86/kernel/cpu/sgx/encls.h | 36 ++-------
arch/x86/kernel/fpu/legacy.h | 6 +-
arch/x86/kernel/fpu/xstate.h | 6 +-
arch/x86/kernel/vmlinux.lds.S | 1 -
arch/x86/kvm/emulate.c | 16 +---
arch/x86/kvm/vmx/vmx_ops.h | 43 +++++++---
arch/x86/lib/checksum_32.S | 19 +----
arch/x86/lib/copy_mc_64.S | 12 +--
arch/x86/lib/copy_user_64.S | 32 +++-----
arch/x86/lib/insn-eval.c | 66 +++++++++------
arch/x86/lib/mmx_32.c | 86 +++++++-------------
arch/x86/lib/usercopy_32.c | 66 ++++++---------
arch/x86/lib/usercopy_64.c | 8 +-
arch/x86/mm/extable.c | 124 ++++++++++++++++++++++-------
arch/x86/net/bpf_jit_comp.c | 2 +-
include/linux/bitfield.h | 19 ++++-
tools/objtool/check.c | 8 +-
31 files changed, 477 insertions(+), 406 deletions(-)