Message-Id: <20220726073750.3219117-16-kaleshsingh@google.com>
Date: Tue, 26 Jul 2022 00:37:48 -0700
From: Kalesh Singh <kaleshsingh@...gle.com>
To: maz@...nel.org, mark.rutland@....com, broonie@...nel.org,
madvenka@...ux.microsoft.com, tabba@...gle.com,
oliver.upton@...ux.dev
Cc: will@...nel.org, qperret@...gle.com, kaleshsingh@...gle.com,
james.morse@....com, alexandru.elisei@....com,
suzuki.poulose@....com, catalin.marinas@....com,
andreyknvl@...il.com, vincenzo.frascino@....com,
mhiramat@...nel.org, ast@...nel.org, wangkefeng.wang@...wei.com,
elver@...gle.com, keirf@...gle.com, yuzenghui@...wei.com,
ardb@...nel.org, oupton@...gle.com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, android-mm@...gle.com,
kernel-team@...roid.com
Subject: [PATCH v6 15/17] KVM: arm64: Save protected-nVHE (pKVM) hyp stacktrace
In protected nVHE mode, the host cannot access privately owned hypervisor
memory. Also, the hypervisor aims to remain simple to reduce the attack
surface and does not provide any printk support.

For the above reasons, the approach taken to provide hypervisor stacktraces
in protected mode is:
1) Unwind and save the hyp stack addresses in EL2 to a shared buffer
with the host (done in this patch).
2) Delegate the dumping and symbolization of the addresses to the
host in EL1 (done in a later patch in the series; a rough sketch
follows below).
On hyp_panic(), the hypervisor prepares the stacktrace before returning to
the host.
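
For illustration only, here is a minimal sketch of the host-side half of
step 2): walking the zero-delimited per-CPU buffer filled at EL2 and
printing the raw hyp addresses. This is not the code added by the later
patch; the helper name pkvm_dump_backtrace(), the choice of kvm_err() for
output and the exact includes are assumptions, and the hyp-to-kernel
address translation needed to symbolize the entries is deliberately left
out:

#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/stacktrace/nvhe.h>

/* Refers to the per-CPU buffer defined by the hypervisor in this patch. */
DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
			 pkvm_stacktrace);

/*
 * Illustrative sketch: walk the zero-delimited entries saved at EL2 and
 * print the raw hyp addresses. Must run on the CPU that panicked, with
 * migration disabled, so this_cpu_ptr_nvhe_sym() picks the right buffer.
 */
static void pkvm_dump_backtrace(void)
{
	unsigned long *stacktrace
		= (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
	int i, size = NVHE_STACKTRACE_SIZE / sizeof(long);

	kvm_err("Protected nVHE HYP call trace:\n");

	/* The hypervisor terminates the saved trace with a 0 entry. */
	for (i = 0; i < size && stacktrace[i]; i++)
		kvm_err(" [<%016lx>]\n", stacktrace[i]);

	kvm_err("---[ end Protected nVHE HYP call trace ]---\n");
}

The trailing zero entry written when each frame is saved (see
pkvm_save_backtrace_entry() in the diff below) is what lets the host find
the end of the trace without a separate length field. On the hypervisor
side, the panic path is expected to seed the unwinder with its own frame
pointer and program counter (e.g. __builtin_frame_address(0) and _THIS_IP_)
when calling kvm_nvhe_prepare_backtrace(); that wiring is done elsewhere in
the series.
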
Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
---
Changes in v6:
- Simplify pkvm_save_backtrace_entry() using array semantics instead
of pointer arithmetic, per Oliver.
Changes in v5:
- Comment/clarify pkvm_save_backtrace_entry(), per Fuad
- kvm_nvhe_unwind_init() doesn't need to be always inline; make it
inline instead to avoid linking issues, per Marc
- Use regular comments instead of doc comments, per Fuad
arch/arm64/kvm/hyp/nvhe/stacktrace.c | 55 +++++++++++++++++++++++++++-
1 file changed, 54 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index e2edda92a108..900324b7a08f 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -35,7 +35,60 @@ static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
}
#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+#include <asm/stacktrace/nvhe.h>
+
DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+
+/*
+ * pkvm_save_backtrace_entry - Saves a protected nVHE HYP stacktrace entry
+ *
+ * @arg : index of the entry in the stacktrace buffer
+ * @where : the program counter corresponding to the stack frame
+ *
+ * Save the return address of a stack frame to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static bool pkvm_save_backtrace_entry(void *arg, unsigned long where)
+{
+ unsigned long *stacktrace = this_cpu_ptr(pkvm_stacktrace);
+ int size = NVHE_STACKTRACE_SIZE / sizeof(long);
+ int *idx = (int *)arg;
+
+ /*
+ * Need 2 free slots: 1 for current entry and 1 for the
+ * delimiter.
+ */
+ if (*idx > size - 2)
+ return false;
+
+ stacktrace[*idx] = where;
+ stacktrace[++*idx] = 0UL;
+
+ return true;
+}
+
+/*
+ * pkvm_save_backtrace - Saves the protected nVHE HYP stacktrace
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the unwound stack addresses to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+ struct unwind_state state;
+ int idx = 0;
+
+ kvm_nvhe_unwind_init(&state, fp, pc);
+
+ unwind(&state, pkvm_save_backtrace_entry, &idx);
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+}
#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
/*
@@ -50,7 +103,7 @@ DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrac
void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
{
if (is_protected_kvm_enabled())
- return;
+ pkvm_save_backtrace(fp, pc);
else
hyp_prepare_backtrace(fp, pc);
}
--
2.37.1.359.gd136c6c3e2-goog