Message-ID: <lsq.1500213406.207858647@decadent.org.uk>
Date:   Sun, 16 Jul 2017 14:56:46 +0100
From:   Ben Hutchings <ben@...adent.org.uk>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:     akpm@...ux-foundation.org, "Mark Rutland" <mark.rutland@....com>,
        "Paolo Bonzini" <pbonzin@...hat.com>,
        "Suzuki K Poulose" <suzuki.poulose@....com>,
        "Christoffer Dall" <cdall@...aro.org>,
        "Marc Zyngier" <marc.zyngier@....com>,
        "Christoffer Dall" <christoffer.dall@...aro.org>
Subject: [PATCH 3.16 121/178] kvm: arm/arm64: Fix locking for
 kvm_free_stage2_pgd

3.16.46-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Suzuki K Poulose <suzuki.poulose@....com>

commit 8b3405e345b5a098101b0c31b264c812bba045d9 upstream.

In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
unmap_stage2_range() on the entire memory range for the guest. This could
cause problems with other callers (e.g., munmap on a memslot) trying to
unmap a range. And since we have to unmap the entire guest memory range
while holding a spinlock, make sure we yield the lock if necessary after
we unmap each PUD range.
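
For reference, cond_resched_lock() is the stock kernel helper for this
pattern: it drops the given spinlock if a reschedule is due (or the lock
is contended), yields the CPU, and re-acquires the lock before returning.
A minimal sketch of the idiom, with a hypothetical walk_one_chunk()
standing in for the real page-table walker:

	/*
	 * Sketch only: walk a large range under a spinlock, yielding
	 * periodically.  walk_one_chunk() is a hypothetical helper.
	 */
	spin_lock(&kvm->mmu_lock);
	do {
		next = kvm_pgd_addr_end(addr, end);
		walk_one_chunk(addr, next);
		/* Drop the lock, reschedule if needed, re-acquire. */
		if (next != end)
			cond_resched_lock(&kvm->mmu_lock);
	} while (addr = next, addr != end);
	spin_unlock(&kvm->mmu_lock);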

Fixes: commit d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
Cc: Paolo Bonzini <pbonzin@...hat.com>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: Christoffer Dall <christoffer.dall@...aro.org>
Cc: Mark Rutland <mark.rutland@....com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
[ Avoid vCPU starvation and lockup detector warnings ]
Signed-off-by: Marc Zyngier <marc.zyngier@....com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
Signed-off-by: Christoffer Dall <cdall@...aro.org>
[bwh: Backported to 3.16:
 - unmap_stage2_range() is a wrapper around unmap_range(), which is also used for
   HYP page table setup.  So unmap_range() should do the cond_resched_lock(), but
   only if kvm != NULL.
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 arch/arm/kvm/mmu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -199,6 +199,12 @@ static void unmap_range(struct kvm *kvm,
 		next = kvm_pgd_addr_end(addr, end);
 		if (!pgd_none(*pgd))
 			unmap_puds(kvm, pgd, addr, next);
+		/*
+		 * If the range is too large, release the kvm->mmu_lock
+		 * to prevent starvation and lockup detector warnings.
+		 */
+		if (kvm && next != end)
+			cond_resched_lock(&kvm->mmu_lock);
 	} while (pgd++, addr = next, addr != end);
 }
 
@@ -553,6 +559,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm
  */
 static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 {
+	assert_spin_locked(&kvm->mmu_lock);
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
@@ -637,7 +644,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm
 	if (kvm->arch.pgd == NULL)
 		return;
 
+	spin_lock(&kvm->mmu_lock);
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
+	spin_unlock(&kvm->mmu_lock);
+
 	free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
 	kvm->arch.pgd = NULL;
 }
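
With the assert_spin_locked() added above, unmap_stage2_range() now has an
explicit locking contract: every caller must hold kvm->mmu_lock across the
call, exactly as kvm_free_stage2_pgd() does after this patch. Sketched for
a hypothetical caller tearing down a single memslot range:

	/* Callers now own the locking around unmap_stage2_range(). */
	spin_lock(&kvm->mmu_lock);
	unmap_stage2_range(kvm, gpa, size);
	spin_unlock(&kvm->mmu_lock);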
