Message-ID: <5194DFAC.8010707@linux.vnet.ibm.com>
Date:	Thu, 16 May 2013 21:31:24 +0800
From:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
CC:	gleb@...hat.com, avi.kivity@...il.com, mtosatti@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v5 8/8] KVM: MMU: zap pages in batch

On 05/16/2013 08:45 PM, Paolo Bonzini wrote:
> On 16/05/2013 14:17, Xiao Guangrong wrote:
>> Zap at least 10 pages before releasing mmu-lock to reduce the overhead
>> caused by repeatedly dropping and re-acquiring the lock
>>
>> [ It improves kernel building 0.6% ~ 1% ]
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
>> ---
>>  arch/x86/kvm/mmu.c |   11 ++++++++---
>>  1 files changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index e12f431..9c27fda 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4216,10 +4216,12 @@ restart:
>>  	spin_unlock(&kvm->mmu_lock);
>>  }
>>  
>> +#define BATCH_ZAP_PAGES	10
>>  static void zap_invalid_pages(struct kvm *kvm)
>>  {
>>  	struct kvm_mmu_page *sp, *node;
>>  	LIST_HEAD(invalid_list);
>> +	int batch = 0;
>>  
>>  restart:
>>  	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
>> @@ -4256,11 +4258,14 @@ restart:
>>  		 * Need not flush tlb since we only zap the sp with invalid
>>  		 * generation number.
>>  		 */
>> -		if (cond_resched_lock(&kvm->mmu_lock))
>> +		if ((batch >= BATCH_ZAP_PAGES) &&
>> +		      cond_resched_lock(&kvm->mmu_lock)) {
>> +			batch = 0;
>>  			goto restart;
>> +		}
>>  
>> -		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
>> -			goto restart;
>> +		batch += kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
>> +		goto restart;
> 
> Would this look again and again at the same page if
> kvm_mmu_prepare_zap_page returns 0?

We skip invalid pages (sp->role.invalid) before calling
kvm_mmu_prepare_zap_page, so kvm_mmu_prepare_zap_page can never
see the same page twice. ;)
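
For reference, a rough sketch of the loop shape after this patch, with the
invalid-page skip made explicit. This is illustrative only: the placement of
the sp->role.invalid check and the final kvm_mmu_commit_zap_page() call are
assumptions based on this thread, not lines copied from the tree; the rest
follows the quoted hunk.

#define BATCH_ZAP_PAGES	10

static void zap_invalid_pages(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);
	int batch = 0;

restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		/*
		 * Pages already zapped on an earlier pass are marked
		 * sp->role.invalid, so the unconditional restart below
		 * never makes kvm_mmu_prepare_zap_page() see the same
		 * page twice.  (Exact condition assumed from this thread.)
		 */
		if (sp->role.invalid)
			continue;

		/*
		 * Only drop mmu_lock and restart the walk once at least
		 * BATCH_ZAP_PAGES pages have been zapped; this is what
		 * reduces the lock release/re-acquire overhead.
		 */
		if (batch >= BATCH_ZAP_PAGES &&
		    cond_resched_lock(&kvm->mmu_lock)) {
			batch = 0;
			goto restart;
		}

		batch += kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
		goto restart;
	}

	/* Assumed: flush the collected pages once the walk completes. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
}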



