Date:	Fri, 3 May 2013 12:53:02 -0300
From:	Marcelo Tosatti <mtosatti@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	gleb@...hat.com, avi.kivity@...il.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	takuya.yoshikawa@...il.com
Subject: Re: [PATCH v4 4/6] KVM: MMU: fast invalid all shadow pages

On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
> On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
> 
> >> +
> >> +/*
> >> + * Fast invalidate all shadow pages that belong to @slot.
> >> + *
> >> + * @slot != NULL means the invalidation is caused by the memslot specified
> >> + * by @slot being deleted; in this case, we should ensure that the rmap
> >> + * and lpage-info of the @slot cannot be used after calling the function.
> >> + *
> >> + * @slot == NULL means the invalidation is due to other reasons; we need
> >> + * not care about rmap and lpage-info since they are still valid after
> >> + * calling the function.
> >> + */
> >> +void kvm_mmu_invalid_memslot_pages(struct kvm *kvm,
> >> +				   struct kvm_memory_slot *slot)
> >> +{
> >> +	spin_lock(&kvm->mmu_lock);
> >> +	kvm->arch.mmu_valid_gen++;
> >> +
> >> +	/*
> >> +	 * All shadow pages are invalid; reset the large page info,
> >> +	 * then we can safely destroy the memslot. It is also good
> >> +	 * for large page usage.
> >> +	 */
> >> +	kvm_clear_all_lpage_info(kvm);
> > 
> > Xiao,
> > 
> > I understood it was agreed that simple mmu_lock lockbreak while
> > avoiding zapping of newly instantiated pages upon a
> > 
> > 	if (spin_needbreak(&kvm->mmu_lock))
> > 		cond_resched_lock(&kvm->mmu_lock);
> > 
> > cycle was enough as a first step? And then later introduce root zapping
> > along with measurements.
> > 
> > https://lkml.org/lkml/2013/4/22/544
> 
> Yes, it is.
> 
> See the changelog in 0/0:
> 
> " we use lock-break technique to zap all sptes linked on the
> invalid rmap, it is not very effective but good for the first step."
> 
> Thanks!

Sure, but what is up with the kvm_clear_all_lpage_info(kvm) call and
zapping the root? Only the lock-break technique along with the generation
number was what was agreed.

That is, having:

> >> +  /*
> >> +   * All shadow pages are invalid; reset the large page info,
> >> +   * then we can safely destroy the memslot. It is also good
> >> +   * for large page usage.
> >> +   */
> >> +  kvm_clear_all_lpage_info(kvm);

was an optimization step that should be done only after it is shown to be
an advantage?

It is more work, but it leads to a better understanding of the issues in 
practice.

If you have reasons to do it now, then please include it in the final
patches, as an optimization on top of the first patches (where the
lock-break technique plus generation numbers are introduced).

