Date:	Tue, 07 Apr 2015 17:41:57 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Wanpeng Li <wanpeng.li@...ux.intel.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org
CC:	Xiao Guangrong <guangrong.xiao@...ux.intel.com>
Subject: Re: [PATCH v3] kvm: mmu: lazy collapse small sptes into large sptes



On 03/04/2015 09:40, Wanpeng Li wrote:
> There are two scenarios that call for collapsing small sptes back into
> large sptes.
> - Dirty logging tracks sptes at 4k granularity, so large sptes are split.
>   If live migration succeeds, the large sptes are reallocated in the
>   destination machine and the guest in the source machine is destroyed.
>   However, if live migration fails for some reason, the guest in the
>   source machine keeps running with small sptes, which leads to bad
>   performance.
> - Our customers write tools that track the dirty speed of guests via the
>   EPT D bit/PML in order to pick the most suitable guest to live migrate;
>   after the tracking, the sptes are still small.
> 
> This patch introduces lazy collapsing of small sptes into large sptes:
> the memory region is scanned in the ioctl context when dirty logging is
> stopped, the sptes which can be collapsed into large pages are dropped
> during the scan, and later #PFs reallocate them as large sptes.
> 
> Reviewed-by: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@...ux.intel.com>
> ---
> v2 -> v3:
>  * update comments 
>  * fix infinite for loop
> v1 -> v2:
>  * use 'bool' instead of 'int'
>  * add more comments
>  * fix failure to get the next spte after dropping the current spte
> 
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/mmu.c              | 73 +++++++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c              | 19 +++++++++++
>  3 files changed, 94 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 30b28dc..91b5bdb 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -854,6 +854,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
>  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
>  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  				      struct kvm_memory_slot *memslot);
> +void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> +					struct kvm_memory_slot *memslot);
>  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
>  				   struct kvm_memory_slot *memslot);
>  void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index cee7592..ba002a0 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4465,6 +4465,79 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  		kvm_flush_remote_tlbs(kvm);
>  }
>  
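> +/*
> + * Walk one 4k rmap chain and drop sptes whose backing page is part of a
> + * transparent huge page; the return value tells the caller whether a
> + * TLB flush is needed.
> + */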
> +static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> +		unsigned long *rmapp)
> +{
> +	u64 *sptep;
> +	struct rmap_iterator iter;
> +	int need_tlb_flush = 0;
> +	pfn_t pfn;
> +	struct kvm_mmu_page *sp;
> +
> +	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
> +		BUG_ON(!(*sptep & PT_PRESENT_MASK));
> +
> +		sp = page_header(__pa(sptep));
> +		pfn = spte_to_pfn(*sptep);
> +
> +		/*
> +		 * Only EPT is supported for now; we would still need an
> +		 * efficient way for this code to find out which mapping
> +		 * level the guest itself is using.
> +		 */
> +		if (sp->role.direct &&
> +			!kvm_is_reserved_pfn(pfn) &&
> +			PageTransCompound(pfn_to_page(pfn))) {
> +			drop_spte(kvm, sptep);
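> +			/*
> +			 * drop_spte() unlinked the entry the iterator was
> +			 * on, so restart the walk from the head of this
> +			 * rmap chain.
> +			 */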
> +			sptep = rmap_get_first(*rmapp, &iter);
> +			need_tlb_flush = 1;
> +		} else
> +			sptep = rmap_get_next(&iter);
> +	}
> +
> +	return need_tlb_flush;
> +}
> +
> +void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> +			struct kvm_memory_slot *memslot)
> +{
> +	bool flush = false;
> +	unsigned long *rmapp;
> +	unsigned long last_index, index;
> +	gfn_t gfn_start, gfn_end;
> +
> +	spin_lock(&kvm->mmu_lock);
> +
> +	gfn_start = memslot->base_gfn;
> +	gfn_end = memslot->base_gfn + memslot->npages - 1;
> +
> +	if (gfn_start >= gfn_end)
> +		goto out;
> +
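> +	/* arch.rmap[0] holds the 4k-level (PT_PAGE_TABLE_LEVEL) rmap chains. */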
> +	rmapp = memslot->arch.rmap[0];
> +	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
> +					PT_PAGE_TABLE_LEVEL);
> +
> +	for (index = 0; index <= last_index; ++index, ++rmapp) {
> +		if (*rmapp)
> +			flush |= kvm_mmu_zap_collapsible_spte(kvm, rmapp);
> +
> +		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
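> +			/*
> +			 * Flush the dropped sptes out of every TLB before
> +			 * we may yield mmu_lock.
> +			 */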
> +			if (flush) {
> +				kvm_flush_remote_tlbs(kvm);
> +				flush = false;
> +			}
> +			cond_resched_lock(&kvm->mmu_lock);
> +		}
> +	}
> +
> +	if (flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +out:
> +	spin_unlock(&kvm->mmu_lock);
> +}
> +
>  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
>  				   struct kvm_memory_slot *memslot)
>  {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 50861dd..a6cd10b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7647,6 +7647,25 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	new = id_to_memslot(kvm->memslots, mem->slot);
>  
>  	/*
> +	 * Dirty logging tracks sptes at 4k granularity, so large sptes are
> +	 * split. If live migration succeeds, the large sptes are reallocated
> +	 * in the destination machine and the guest in the source machine is
> +	 * destroyed. However, if live migration fails for some reason, the
> +	 * guest in the source machine keeps running with small sptes, which
> +	 * leads to bad performance.
> +	 *
> +	 * Lazily collapsing small sptes into large sptes handles this: the
> +	 * memory region is scanned in the ioctl context when dirty logging
> +	 * is stopped, sptes which can be collapsed into large pages are
> +	 * dropped during the scan, and later #PFs reallocate them as large
> +	 * sptes.
> +	 */
> +	if ((change != KVM_MR_DELETE) &&
> +		(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +		!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
> +		kvm_mmu_zap_collapsible_sptes(kvm, new);
> +
> +	/*
>  	 * Set up write protection and/or dirty logging for the new slot.
>  	 *
>  	 * For KVM_MR_DELETE and KVM_MR_MOVE, the shadow pages of old slot have
> 


Applied, with just some editing of the comments and the commit message.
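
For anyone who wants to exercise the new path: it runs when userspace
clears KVM_MEM_LOG_DIRTY_PAGES on a slot that previously had it set.
A minimal sketch of that ioctl call (the fd, slot number and addresses
are made-up placeholders for whatever the VMM already uses):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Stop dirty logging on one slot; vm_fd/slot/gpa/size/hva stand in for
 * the values the VMM registered the slot with in the first place. */
static int stop_dirty_logging(int vm_fd, __u32 slot,
			      __u64 gpa, __u64 size, __u64 hva)
{
	struct kvm_userspace_memory_region region = {
		.slot = slot,
		.flags = 0,			/* was KVM_MEM_LOG_DIRTY_PAGES */
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = hva,
	};

	/*
	 * A KVM_MR_FLAGS_ONLY update: the old slot logged dirty pages,
	 * the new one does not, so kvm_arch_commit_memory_region()
	 * calls kvm_mmu_zap_collapsible_sptes() on this slot.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}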

Thanks to you and Xiao.

Paolo