Message-Id: <20180917211711.981110657@linuxfoundation.org>
Date: Tue, 18 Sep 2018 00:40:53 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Paul Mackerras <paulus@...abs.org>
Subject: [PATCH 4.18 023/158] KVM: PPC: Book3S HV: Use correct pagesize in kvm_unmap_radix()
4.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Mackerras <paulus@...abs.org>
commit c066fafc595eef5ae3c83ae3a8305956b8c3ef15 upstream.
Since commit e641a317830b ("KVM: PPC: Book3S HV: Unify dirty page map
between HPT and radix", 2017-10-26), kvm_unmap_radix() computes the
number of PAGE_SIZEd pages being unmapped and passes it to
kvmppc_update_dirty_map(), which expects to be passed the page size
in bytes instead. Consequently it will only mark one system page dirty even
when a large page (for example a THP page) is being unmapped. The
consequence of this is that part of the THP page might not get copied
during live migration, resulting in memory corruption for the guest.

This fixes it by computing and passing the page size in kvm_unmap_radix().
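
To make the unit mismatch concrete, here is a minimal user-space sketch
(illustrative only, not kernel code; the 64K PAGE_SHIFT and the 2M
huge-page shift below are example values for a common ppc64 configuration):

	#include <stdio.h>

	#define PAGE_SHIFT 16			/* 64K base pages (example value) */
	#define PAGE_SIZE  (1UL << PAGE_SHIFT)

	int main(void)
	{
		unsigned long shift = 21;	/* unmapping a 2M huge page */

		/* What the buggy code computed: a count of base pages. */
		unsigned long npages = 1UL << (shift - PAGE_SHIFT);	/* 32 */

		/* What kvmppc_update_dirty_map() expects: a size in bytes. */
		unsigned long psize = 1UL << shift;			/* 0x200000 */

		/*
		 * If the callee rounds its byte-size argument up to whole
		 * base pages, then passing npages (32) looks like a 32-byte
		 * region and dirties one base page instead of all 32.
		 */
		printf("npages=%lu psize=%#lx pages dirtied with npages=%lu\n",
		       npages, psize, (npages + PAGE_SIZE - 1) / PAGE_SIZE);
		return 0;
	}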
Cc: stable@...r.kernel.org # v4.15+
Fixes: e641a317830b ("KVM: PPC: Book3S HV: Unify dirty page map between HPT and radix")
Signed-off-by: Paul Mackerras <paulus@...abs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -738,10 +738,10 @@ int kvm_unmap_radix(struct kvm *kvm, str
 					      gpa, shift);
 		kvmppc_radix_tlbie_page(kvm, gpa, shift);
 		if ((old & _PAGE_DIRTY) && memslot->dirty_bitmap) {
-			unsigned long npages = 1;
+			unsigned long psize = PAGE_SIZE;
 			if (shift)
-				npages = 1ul << (shift - PAGE_SHIFT);
-			kvmppc_update_dirty_map(memslot, gfn, npages);
+				psize = 1ul << shift;
+			kvmppc_update_dirty_map(memslot, gfn, psize);
 		}
 	}
 	return 0;
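
For reference, the contract on the callee side (a simplified sketch
paraphrased from the kvmppc_update_dirty_map() added by commit
e641a317830b; not the verbatim kernel source): the third argument is a
size in bytes, converted back into a count of PAGE_SIZEd pages before
the dirty bits are set:

	/* Sketch: psize is a byte size, rounded up to whole base pages. */
	void kvmppc_update_dirty_map(const struct kvm_memory_slot *memslot,
				     unsigned long gfn, unsigned long psize)
	{
		unsigned long npages;

		if (!psize || !memslot->dirty_bitmap)
			return;
		npages = (psize + PAGE_SIZE - 1) / PAGE_SIZE;
		gfn -= memslot->base_gfn;
		set_dirty_bits_atomic(memslot->dirty_bitmap, gfn, npages);
	}

With the fix above, unmapping a 2M THP passes psize = 0x200000, so all of
its underlying base pages are marked dirty and live migration copies the
whole huge page.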