Message-ID: <20130521084551.GX4725@redhat.com>
Date:	Tue, 21 May 2013 11:45:51 +0300
From:	Gleb Natapov <gleb@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	Marcelo Tosatti <mtosatti@...hat.com>, avi.kivity@...il.com,
	pbonzini@...hat.com, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org
Subject: Re: [PATCH v6 3/7] KVM: MMU: fast invalidate all pages

On Tue, May 21, 2013 at 11:36:57AM +0800, Xiao Guangrong wrote:
> > So it's better to just 
> > 
> > if (need_resched()) {
> > 	kvm_mmu_complete_zap_page(&list);
> 
> Do you mean kvm_mmu_commit_zap_page()?
> 
Also, we need to check whether someone is waiting on mmu_lock before
entering here.

> > 	cond_resched_lock(&kvm->mmu_lock);
> > }
> > 
> 
> Isn't that what Gleb said?
> 
It is.
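
To make the combined suggestion concrete, here is a minimal sketch
(not the applied patch) of the check inside the zapping loop: commit
the pages zapped so far, then yield mmu_lock when a reschedule is due
or when someone else is contending for the lock. The restart label
and invalid_list come from the patch context:

	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
		/*
		 * Commit first: flush the TLB and free the zapped
		 * pages before dropping the lock, since lockless
		 * walkers may still be using them.
		 */
		kvm_mmu_commit_zap_page(kvm, &invalid_list);
		cond_resched_lock(&kvm->mmu_lock);
		goto restart;
	}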

> > If you want to collapse TLB flushes, please do it in a later patch.
> 
> Sounds good to me.
> 
> > 
> >>>> +		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
> >>>> +			goto restart;
> >>>> +	}
> >>>> +
> >>>> +	/*
> >>>> +	 * We should flush the TLB before freeing the page tables, since
> >>>> +	 * lockless walkers may still be using the pages.
> >>>> +	 */
> >>>> +	kvm_mmu_commit_zap_page(kvm, &invalid_list);
> >>>> +}
> >>>> +
> >>>> +/*
> >>>> + * Fast invalidate all shadow pages.
> >>>> + *
> >>>> + * @zap_obsolete_pages indicates whether all the obsolete pages should
> >>>> + * be zapped. This is required when a memslot is being deleted or the
> >>>> + * VM is being destroyed: in these cases, we must ensure that the KVM
> >>>> + * MMU does not use any resource of the slot being deleted (or, on VM
> >>>> + * destruction, of any slot) after this function returns.
> >>>> + *
> >>>> + * @zap_obsolete_pages == false means the caller just wants to flush
> >>>> + * all shadow page tables.
> >>>> + */
> >>>> +void kvm_mmu_invalidate_all_pages(struct kvm *kvm, bool zap_obsolete_pages)
> >>>> +{
> >>>> +	spin_lock(&kvm->mmu_lock);
> >>>> +	kvm->arch.mmu_valid_gen++;
> >>>> +
> >>>> +	/*
> >>>> +	 * Notify all vcpus to reload their shadow page tables
> >>>> +	 * and flush the TLB. Then all vcpus will switch to the
> >>>> +	 * new shadow page tables with the new mmu_valid_gen.
> >>>
> >>> Only if you zap the roots, which we agreed would be a second step, after
> >>> being understood its necessary.
> >>>
> I've lost you here. The patch implements what was agreed upon.
> > 
> > "
> > + /*
> > +  * Notify all vcpus to reload their shadow page tables
> > +  * and flush the TLB. Then all vcpus will switch to the
> > +  * new shadow page tables with the new mmu_valid_gen.
> > "
> > 
> > What was suggested was... go to the phrase which starts with "The only
> > purpose of the generation number should be to".
> > 
> > The comment quoted here does not match that description.
> 
> So, is this what you want?
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2c512e8..2fd4c04 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4275,10 +4275,19 @@ restart:
>   */
>  void kvm_mmu_invalidate_all_pages(struct kvm *kvm, bool zap_obsolete_pages)
>  {
> +       bool zap_root = false;
> +       struct kvm_mmu_page *sp;
> +
>         spin_lock(&kvm->mmu_lock);
>         trace_kvm_mmu_invalidate_all_pages(kvm, zap_obsolete_pages);
>         kvm->arch.mmu_valid_gen++;
> 
> +       list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link)
> +               if (sp->root_count && !sp->role.invalid) {
> +                       zap_root = true;
> +                       break;
> +               }
> +
That's the part I do not understand about what Marcelo suggests: why would
zap_root ever be false after this loop?
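
(For context: the effect of the mmu_valid_gen++ above is that every
existing shadow page becomes obsolete. A sketch of the check, with
the helper name as used elsewhere in this series:

	static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
	{
		/* Pages created before the bump carry an older generation. */
		return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
	}

With that, zap_root above could only stay false if no valid root page
exists at all, e.g. before any vcpu has loaded its mmu.)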

>         /*
>          * Notify all vcpus to reload their shadow page tables
>          * and flush the TLB. Then all vcpus will switch to the
> @@ -4288,7 +4297,8 @@ void kvm_mmu_invalidate_all_pages(struct kvm *kvm, bool zap_obsolete_pages)
>          * mmu-lock; otherwise, a vcpu could purge shadow pages
>          * but miss the TLB flush.
>          */
> -       kvm_reload_remote_mmus(kvm);
> +       if (zap_root)
> +               kvm_reload_remote_mmus(kvm);
> 
>         if (zap_obsolete_pages)
>                 kvm_zap_obsolete_pages(kvm);
> 
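
For completeness, the two intended uses of the flag, as the comment
earlier in the patch describes them (a sketch; the real call sites
belong to other patches in the series):

	/* Memslot deletion or VM destruction: obsolete pages must go. */
	kvm_mmu_invalidate_all_pages(kvm, true);

	/* The caller only wants to flush all shadow page tables. */
	kvm_mmu_invalidate_all_pages(kvm, false);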

--
			Gleb.