Message-Id: <449b62f6603d57cb80519176406a1adc2936a144.1430387326.git.jslaby@suse.cz>
Date: Thu, 30 Apr 2015 14:11:45 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, Marc Zyngier <marc.zyngier@....com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <rob.herring@...aro.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Shannon Zhao <shannon.zhao@...aro.org>,
Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 16/63] arm/arm64: KVM: detect CPU reset on CPU_PM_EXIT

From: Marc Zyngier <marc.zyngier@....com>

3.12-stable review patch. If anyone has any objections, please let me know.

===============

commit b20c9f29c5c25921c6ad18b50d4b61e6d181c3cc upstream.

Commit 1fcf7ce0c602 (arm: kvm: implement CPU PM notifier) added
support for CPU power-management, using a cpu_notifier to re-init
KVM on a CPU that entered CPU idle.

The code assumed that a CPU entering idle would actually be powered
off, losing its state entirely, and would then need to be
reinitialized. It turns out that this is not always the case, and
some HW performs CPU PM without actually killing the core. In this
case, we try to reinitialize KVM while it is still live. It ends up
badly, as reported by Andre Przywara (using a Calxeda Midway):

[ 3.663897] Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x685760
[ 3.663897] unexpected data abort in Hyp mode at: 0xc067d150
[ 3.663897] unexpected HVC/SVC trap in Hyp mode at: 0xc0901dd0

The trick here is to detect if we've been through a full re-init or
not by looking at HVBAR (VBAR_EL2 on arm64). This involves
implementing the backend for __hyp_get_vectors in the main KVM HYP
code (rather small), and checking the return value against the
default one when the CPU notifier is called on CPU_PM_EXIT.

Reported-by: Andre Przywara <osp@...rep.de>
Tested-by: Andre Przywara <osp@...rep.de>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
Cc: Rob Herring <rob.herring@...aro.org>
Acked-by: Christoffer Dall <christoffer.dall@...aro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@....com>
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@...aro.org>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
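For reference, the CPU_PM_EXIT check described above boils down to the
notifier in arch/arm/kvm/arm.c ending up roughly as follows. This is only a
sketch reconstructed around the arm.c hunk below; the trailing
"return NOTIFY_DONE;" is surrounding context that the hunk does not show.

static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
				    unsigned long cmd,
				    void *v)
{
	/*
	 * Re-init Hyp mode only if the vectors still point at the default
	 * (stub) vectors, i.e. the core really lost its Hyp/EL2 state
	 * across the PM transition.
	 */
	if (cmd == CPU_PM_EXIT &&
	    __hyp_get_vectors() == hyp_default_vectors) {
		cpu_init_hyp_mode(NULL);
		return NOTIFY_OK;
	}

	return NOTIFY_DONE;
}
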
 arch/arm/kvm/arm.c        |  3 ++-
 arch/arm/kvm/interrupts.S | 11 ++++++++++-
 arch/arm64/kvm/hyp.S      | 27 +++++++++++++++++++++++++--
 3 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 64ec98b786ae..8da56e484b50 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -863,7 +863,8 @@ static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
unsigned long cmd,
void *v)
{
- if (cmd == CPU_PM_EXIT) {
+ if (cmd == CPU_PM_EXIT &&
+ __hyp_get_vectors() == hyp_default_vectors) {
cpu_init_hyp_mode(NULL);
return NOTIFY_OK;
}
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index ddc15539bad2..0d68d4073068 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -220,6 +220,10 @@ after_vfp_restore:
* in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
* passed in r0 and r1.
*
+ * A function pointer with a value of 0xffffffff has a special meaning,
+ * and is used to implement __hyp_get_vectors in the same way as in
+ * arch/arm/kernel/hyp_stub.S.
+ *
* The calling convention follows the standard AAPCS:
* r0 - r3: caller save
* r12: caller save
@@ -363,6 +367,11 @@ hyp_hvc:
host_switch_to_hyp:
pop {r0, r1, r2}
+ /* Check for __hyp_get_vectors */
+ cmp r0, #-1
+ mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
+ beq 1f
+
push {lr}
mrs lr, SPSR
push {lr}
@@ -378,7 +387,7 @@ THUMB( orr lr, #1)
pop {lr}
msr SPSR_csxf, lr
pop {lr}
- eret
+1: eret
guest_trap:
load_vcpu @ Load VCPU pointer to r0
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 1ac0bbbdddb2..d5581ffa7006 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -681,6 +681,24 @@ __hyp_panic_str:
.align 2
+/*
+ * u64 kvm_call_hyp(void *hypfn, ...);
+ *
+ * This is not really a variadic function in the classic C-way and care must
+ * be taken when calling this to ensure parameters are passed in registers
+ * only, since the stack will change between the caller and the callee.
+ *
+ * Call the function with the first argument containing a pointer to the
+ * function you wish to call in Hyp mode, and subsequent arguments will be
+ * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
+ * function pointer can be passed). The function being called must be mapped
+ * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
+ * passed in r0 and r1.
+ *
+ * A function pointer with a value of 0 has a special meaning, and is
+ * used to implement __hyp_get_vectors in the same way as in
+ * arch/arm64/kernel/hyp_stub.S.
+ */
ENTRY(kvm_call_hyp)
hvc #0
ret
@@ -724,7 +742,12 @@ el1_sync: // Guest trapped into EL2
pop x2, x3
pop x0, x1
- push lr, xzr
+ /* Check for __hyp_get_vectors */
+ cbnz x0, 1f
+ mrs x0, vbar_el2
+ b 2f
+
+1: push lr, xzr
/*
* Compute the function address in EL2, and shuffle the parameters.
@@ -737,7 +760,7 @@ el1_sync: // Guest trapped into EL2
blr lr
pop lr, xzr
- eret
+2: eret
el1_trap:
/*
--
2.3.5