Message-Id: <20180521210450.041829852@linuxfoundation.org>
Date: Mon, 21 May 2018 23:11:02 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Jan Glauber <jan.glauber@...iumnetworks.com>,
Andre Przywara <andre.przywara@....com>,
Christoffer Dall <christoffer.dall@....com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: [PATCH 4.14 12/95] KVM: arm/arm64: VGIC/ITS save/restore: protect kvm_read_guest() calls

4.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andre Przywara <andre.przywara@....com>

commit 711702b57cc3c50b84bd648de0f1ca0a378805be upstream.

kvm_read_guest() will eventually look up in kvm_memslots(), which requires
either to hold the kvm->slots_lock or to be inside a kvm->srcu critical
section.
In contrast to x86 and s390 we don't take the SRCU lock on every guest
exit, so we have to do it individually for each kvm_read_guest() call.
Use the newly introduced wrapper for that.
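
The wrapper used below is kvm_read_guest_lock(). As a minimal sketch, assuming
the helper simply brackets kvm_read_guest() in a kvm->srcu read-side critical
section (the actual definition in the tree, e.g. include/linux/kvm_host.h, may
differ in detail), it would look like:

	static inline int kvm_read_guest_lock(struct kvm *kvm, gpa_t gpa,
					      void *data, unsigned long len)
	{
		/* Enter the kvm->srcu read-side critical section ... */
		int srcu_idx = srcu_read_lock(&kvm->srcu);
		int ret = kvm_read_guest(kvm, gpa, data, len);

		/* ... so the kvm_memslots() lookup inside kvm_read_guest()
		 * is safe, then drop it again right away. */
		srcu_read_unlock(&kvm->srcu, srcu_idx);

		return ret;
	}

Wrapping each call individually keeps the SRCU read-side section short,
rather than holding it across the whole guest exit path as x86 and s390
effectively do.
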
Cc: Stable <stable@...r.kernel.org> # 4.12+
Reported-by: Jan Glauber <jan.glauber@...iumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@....com>
Acked-by: Christoffer Dall <christoffer.dall@....com>
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
virt/kvm/arm/vgic/vgic-its.c | 4 ++--
virt/kvm/arm/vgic/vgic-v3.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
--- a/virt/kvm/arm/vgic/vgic-its.c
+++ b/virt/kvm/arm/vgic/vgic-its.c
@@ -1830,7 +1830,7 @@ static int scan_its_table(struct vgic_it
 		int next_offset;
 		size_t byte_offset;
 
-		ret = kvm_read_guest(kvm, gpa, entry, esz);
+		ret = kvm_read_guest_lock(kvm, gpa, entry, esz);
 		if (ret)
 			return ret;
 
@@ -2191,7 +2191,7 @@ static int vgic_its_restore_cte(struct v
 	int ret;
 
 	BUG_ON(esz > sizeof(val));
-	ret = kvm_read_guest(kvm, gpa, &val, esz);
+	ret = kvm_read_guest_lock(kvm, gpa, &val, esz);
 	if (ret)
 		return ret;
 	val = le64_to_cpu(val);
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -297,7 +297,7 @@ retry:
 	bit_nr = irq->intid % BITS_PER_BYTE;
 	ptr = pendbase + byte_offset;
 
-	ret = kvm_read_guest(kvm, ptr, &val, 1);
+	ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
 	if (ret)
 		return ret;
 
@@ -350,7 +350,7 @@ int vgic_v3_save_pending_tables(struct k
 		ptr = pendbase + byte_offset;
 
 		if (byte_offset != last_byte_offset) {
-			ret = kvm_read_guest(kvm, ptr, &val, 1);
+			ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
 			if (ret)
 				return ret;
 			last_byte_offset = byte_offset;