Date:	Thu, 17 Jun 2010 11:37:20 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
CC:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC][PATCH 9/9] make kvm mmu shrinker more aggressive

On 06/16/2010 06:25 PM, Dave Hansen wrote:
>
>>> If mmu_shrink() has already done a significant amount of
>>> scanning, the use of 'nr_to_scan' inside shrink_kvm_mmu()
>>> will also ensure that we do not over-reclaim when we have
>>> already done a lot of work in this call.
>>>
>>> In the end, this patch defines a "scan" as:
>>> 1. An attempt to acquire a refcount on a 'struct kvm'
>>> 2. freeing a kvm mmu page
>>>
>>> This would probably be most ideal if we can expose some
>>> of the work done by kvm_mmu_remove_some_alloc_mmu_pages()
>>> as also counting as scanning, but I think we have churned
>>> enough for the moment.
>>>        
>> It usually removes one page.
>>      
> Does it always just go right now and free it, or is there any real
> scanning that has to go on?
>    

It picks a page from the tail of the LRU and frees it.  There is very 
little attempt to keep the LRU in LRU order, though.

If this isn't going to result in performance losses, we do need a scanner 
that looks at the spte accessed bits.

>>> diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
>>> --- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive	2010-06-14 11:30:44.000000000 -0700
>>> +++ linux-2.6.git-dave/arch/x86/kvm/mmu.c	2010-06-14 11:38:04.000000000 -0700
>>> @@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
>>>
>>>    	idx = srcu_read_lock(&kvm->srcu);
>>>    	spin_lock(&kvm->mmu_lock);
>>> -	if (kvm->arch.n_used_mmu_pages > 0)
>>> -		freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
>>> +	while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
>>> +		freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
>>> +		nr_to_scan--;
>>> +	}
>>>
>>>        
>> What tree are you patching?
>>      
> These applied to Linus's latest as of yesterday.
>    

Please patch against kvm.git master (or next, which is usually a few 
not-yet-regression-tested patches ahead).  This code has changed.

-- 
error compiling committee.c: too many arguments to function
