Message-Id: <20170425150815.765836285@linuxfoundation.org>
Date:   Tue, 25 Apr 2017 16:08:51 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Paolo Bonzini <pbonzin@...hat.com>,
        Marc Zyngier <marc.zyngier@....com>,
        Christoffer Dall <christoffer.dall@...aro.org>,
        Mark Rutland <mark.rutland@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Christoffer Dall <cdall@...aro.org>
Subject: [PATCH 4.4 20/28] kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Suzuki K Poulose <suzuki.poulose@....com>

commit 8b3405e345b5a098101b0c31b264c812bba045d9 upstream.

In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
unmap_stage2_range() on the entire memory range for the guest. This could
cause problems with other callers (e.g., munmap on a memslot) trying to
unmap a range. And since we have to unmap the entire guest memory range
while holding a spinlock, make sure we yield the lock, if necessary,
after each PUD range is unmapped.
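
For context (not part of the patch itself), the lock-yielding pattern
being applied is roughly the sketch below; process_chunk() and
CHUNK_SIZE are hypothetical stand-ins for the real stage-2 page-table
walkers:

	/*
	 * Illustrative sketch: process a long range while holding a
	 * spinlock, periodically yielding the lock so other waiters
	 * (and the scheduler) are not starved.
	 */
	static void walk_range_locked(struct kvm *kvm, phys_addr_t addr,
				      phys_addr_t end)
	{
		phys_addr_t next;

		assert_spin_locked(&kvm->mmu_lock);
		do {
			next = min_t(phys_addr_t, addr + CHUNK_SIZE, end);
			process_chunk(kvm, addr, next);
			/*
			 * Drop and re-take kvm->mmu_lock if a reschedule
			 * is due.  Only safe while work remains, as the
			 * caller expects the lock held on return.
			 */
			if (next != end)
				cond_resched_lock(&kvm->mmu_lock);
		} while (addr = next, addr != end);
	}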

Fixes: commit d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
Cc: Paolo Bonzini <pbonzin@...hat.com>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: Christoffer Dall <christoffer.dall@...aro.org>
Cc: Mark Rutland <mark.rutland@....com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
[ Avoid vCPU starvation and lockup detector warnings ]
Signed-off-by: Marc Zyngier <marc.zyngier@....com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
Signed-off-by: Christoffer Dall <cdall@...aro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 arch/arm/kvm/mmu.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -300,6 +300,14 @@ static void unmap_range(struct kvm *kvm,
 		next = kvm_pgd_addr_end(addr, end);
 		if (!pgd_none(*pgd))
 			unmap_puds(kvm, pgd, addr, next);
+		/*
+		 * If we are dealing with a large range in
+		 * stage2 table, release the kvm->mmu_lock
+		 * to prevent starvation and lockup detector
+		 * warnings.
+		 */
+		if (kvm && (next != end))
+			cond_resched_lock(&kvm->mmu_lock);
 	} while (pgd++, addr = next, addr != end);
 }
 
@@ -738,6 +746,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm
  */
 static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 {
+	assert_spin_locked(&kvm->mmu_lock);
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
@@ -824,7 +833,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm
 	if (kvm->arch.pgd == NULL)
 		return;
 
+	spin_lock(&kvm->mmu_lock);
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
+	spin_unlock(&kvm->mmu_lock);
+
 	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
 	if (KVM_PREALLOC_LEVEL > 0)
 		kfree(kvm->arch.pgd);
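
With this change, unmap_stage2_range() requires kvm->mmu_lock to be
held, which the new assert_spin_locked() enforces. A hypothetical
future caller would follow the same pattern kvm_free_stage2_pgd() now
uses:

	spin_lock(&kvm->mmu_lock);
	unmap_stage2_range(kvm, gpa, size);	/* may yield the lock internally */
	spin_unlock(&kvm->mmu_lock);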

