Message-Id: <20100615135530.4565745D@kernel.beaverton.ibm.com>
Date:	Tue, 15 Jun 2010 06:55:30 -0700
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	linux-kernel@...r.kernel.org
Cc:	kvm@...r.kernel.org, Dave Hansen <dave@...ux.vnet.ibm.com>
Subject: [RFC][PATCH 9/9] make kvm mmu shrinker more aggressive


In a previous patch, we removed the 'nr_to_scan' tracking.
It was not actually being used to count the number of
objects scanned, so we stopped using it entirely.  Here, we
start using it again.

The theory here is simple: if we already hold the refcount
and kvm->mmu_lock, then we should do as much work as
possible under the lock.  The downside is that we are less
fair about which KVM instances we reclaim from.  Each call
to mmu_shrink() will tend to "pick on" one instance, which
is then moved to the end of the list and left alone for a
while.
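
To illustrate, the selection pattern looks roughly like
this (a simplified sketch, not the code from this series;
kvm_try_get_kvm() is a hypothetical stand-in for the
refcount-acquisition attempt):

	static struct kvm *mmu_pick_kvm(void)
	{
		struct kvm *kvm;

		spin_lock(&kvm_lock);
		list_for_each_entry(kvm, &vm_list, vm_list) {
			/* the refcount attempt counts as one "scan" */
			if (!kvm_try_get_kvm(kvm))	/* hypothetical */
				continue;
			/*
			 * Rotate the chosen instance to the tail of
			 * the list so that the next shrink call
			 * picks on someone else.
			 */
			list_move_tail(&kvm->vm_list, &vm_list);
			spin_unlock(&kvm_lock);
			return kvm;
		}
		spin_unlock(&kvm_lock);
		return NULL;
	}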

The use of 'nr_to_scan' inside shrink_kvm_mmu() also
ensures that we do not over-reclaim when mmu_shrink() has
already done a significant amount of scanning in this call.

In the end, this patch defines a "scan" as (see the sketch
below):
1. An attempt to acquire a refcount on a 'struct kvm'
2. Freeing a kvm mmu page
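
Roughly, the accounting then works like this (again just a
sketch under the definitions above; pick_next_kvm() is a
hypothetical helper wrapping the refcount attempt, and the
real loop is in the hunks below):

	while (nr_to_scan > 0) {
		kvm = pick_next_kvm();	/* hypothetical */
		nr_to_scan--;		/* scan type 1: the get attempt */
		if (!kvm)
			break;
		/* mirrors the shrink_kvm_mmu() hunk below */
		while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
			kvm_mmu_remove_some_alloc_mmu_pages(kvm);
			nr_to_scan--;	/* scan type 2: a freed page */
		}
		kvm_put_kvm(kvm);
	}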

It would probably be ideal if we could also count some of
the work done inside kvm_mmu_remove_some_alloc_mmu_pages()
as scanning, but I think we have churned enough for the
moment.

Signed-off-by: Dave Hansen <dave@...ux.vnet.ibm.com>
---

 linux-2.6.git-dave/arch/x86/kvm/mmu.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive	2010-06-14 11:30:44.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c	2010-06-14 11:38:04.000000000 -0700
@@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
 
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
-	if (kvm->arch.n_used_mmu_pages > 0)
-		freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+	while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
+		freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+		nr_to_scan--;
+	}
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
@@ -2952,7 +2954,6 @@ static int shrink_kvm_mmu(struct kvm *kv
 static int mmu_shrink(int nr_to_scan, gfp_t gfp_mask)
 {
 	int err;
-	int freed;
 	struct kvm *kvm;
 
 	if (nr_to_scan == 0)
@@ -2989,11 +2990,11 @@ retry:
 	 * operation itself.
 	 */
 	spin_unlock(&kvm_lock);
-	freed = shrink_kvm_mmu(kvm, nr_to_scan);
+	nr_to_scan -= shrink_kvm_mmu(kvm, nr_to_scan);
 
 	kvm_put_kvm(kvm);
 
-	if (!freed && nr_to_scan > 0)
+	if (nr_to_scan > 0)
 		goto retry;
 
 out:
_
