Date:	Tue, 7 May 2013 11:58:48 +0300
From:	Gleb Natapov <gleb@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	Marcelo Tosatti <mtosatti@...hat.com>, avi.kivity@...il.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	takuya.yoshikawa@...il.com
Subject: Re: [PATCH v4 4/6] KVM: MMU: fast invalid all shadow pages

On Tue, May 07, 2013 at 01:45:52AM +0800, Xiao Guangrong wrote:
> On 05/07/2013 01:24 AM, Gleb Natapov wrote:
> > On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
> >> On 05/06/2013 08:36 PM, Gleb Natapov wrote:
> >>
> >>>>> Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lock break via
> >>>>> spin_needbreak. Use generation numbers so that, in case kvm_mmu_zap_all
> >>>>> releases mmu_lock and reacquires it again, only shadow pages
> >>>>> from the generation with which kvm_mmu_zap_all started are zapped (this
> >>>>> guarantees forward progress and eventual termination).
> >>>>>
> >>>>> kvm_mmu_zap_generation()
> >>>>> 	spin_lock(mmu_lock)
> >>>>> 	int generation = kvm->arch.mmu_generation;
> >>>>>
> >>>>> 	for_each_shadow_page(sp) {
> >>>>> 		if (sp->generation == generation)
> >>>>> 			zap_page(sp)
> >>>>> 		if (spin_needbreak(mmu_lock)) {
> >>>>> 			kvm->arch.mmu_generation++;
> >>>>> 			cond_resched_lock(mmu_lock);
> >>>>> 		}
> >>>>> 	}
> >>>>>
> >>>>> kvm_mmu_zap_all()
> >>>>> 	spin_lock(mmu_lock)
> >>>>> 	for_each_shadow_page(sp) {
> >>>>> 		zap_page(sp)
> >>>>> 		if (spin_needbreak(mmu_lock)) {
> >>>>> 			cond_resched_lock(mmu_lock);
> >>>>> 		}
> >>>>> 	}
> >>>>>
> >>>>> Use kvm_mmu_zap_generation for kvm_arch_flush_shadow_memslot.
> >>>>> Use kvm_mmu_zap_all for kvm_mmu_notifier_release and kvm_destroy_vm.
> >>>>>
> >>>>> This addresses the main problem: excessively long hold times 
> >>>>> of kvm_mmu_zap_all with very large guests.
> >>>>>
> >>>>> Do you see any problem with this logic? This is what I was thinking
> >>>>> we agreed on.
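
(For concreteness, a minimal user-space sketch of the generation idea above.
Every structure here is an invented stand-in for the KVM ones, and
need_break() plays the role of spin_needbreak() plus cond_resched_lock();
only the control flow mirrors the pseudocode.)

/*
 * Minimal user-space model of generation-based zapping with lock break.
 * All names and structures are invented stand-ins, not kernel code.
 */
#include <stdio.h>
#include <stdlib.h>

struct shadow_page {
	int generation;
	struct shadow_page *next;
};

struct mock_kvm {
	int mmu_generation;
	struct shadow_page *pages;	/* stand-in for the active list */
};

static int need_break(int iter)
{
	return (iter % 64) == 0;	/* pretend the lock is contended */
}

/* Zap only pages belonging to the generation we started with. */
static void mock_zap_generation(struct mock_kvm *kvm)
{
	int generation = kvm->mmu_generation;
	struct shadow_page **pp = &kvm->pages;
	int iter = 0;

	while (*pp) {
		struct shadow_page *sp = *pp;

		if (sp->generation == generation) {
			*pp = sp->next;		/* zap_page(sp) */
			free(sp);
		} else {
			pp = &sp->next;
		}
		if (need_break(++iter)) {
			/*
			 * Lock break: bump the generation first, so pages
			 * created while the lock is dropped belong to a
			 * newer generation and are skipped, which is what
			 * guarantees forward progress and termination.
			 */
			kvm->mmu_generation++;
			/* cond_resched_lock(mmu_lock) would go here */
		}
	}
}

int main(void)
{
	struct mock_kvm kvm = { .mmu_generation = 1, .pages = NULL };

	for (int i = 0; i < 200; i++) {
		struct shadow_page *sp = malloc(sizeof(*sp));
		sp->generation = kvm.mmu_generation;
		sp->next = kvm.pages;
		kvm.pages = sp;
	}
	mock_zap_generation(&kvm);
	printf("all zapped: %s, final generation: %d\n",
	       kvm.pages == NULL ? "yes" : "no", kvm.mmu_generation);
	return 0;
}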
> >>>>
> >>>> No. I understand it and it can work.
> >>>>
> >>>> Actually, it is similar to Gleb's idea of "zapping stale shadow pages
> >>>> (using the lock break technique)"; after some discussion, we thought "only zap
> >>>> shadow pages that are reachable from the slot's rmap" is better, which is what
> >>>> this patchset does.
> >>>> (https://lkml.org/lkml/2013/4/23/73)
> >>>>
> >>> But this is not what the patch is doing. Close, but not the same :)
> >>
> >> Okay. :)
> >>
> >>> Instead of zapping shadow pages reachable from the slot's rmap, the patch
> >>> does kvm_unmap_rmapp(), which drops all sptes without zapping shadow pages.
> >>> That is why you need special code to re-init lpage_info. What I proposed
> >>> was to call zap_page() on all shadow pages reachable from rmap. This
> >>> will take care of lpage_info counters. Does this make sense?
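
(To make the contrast concrete, a toy model of the two paths with invented
structures; a plain integer stands in for the lpage_info counters. The point
is only that the zap path does the accounting while dropping sptes does not.)

/* Toy contrast of the two approaches; names and structures invented. */
#include <stdio.h>

struct sp {
	int nr_sptes;
	int accounted;		/* currently counted in lpage_info */
};

/* The patch's path (cf. kvm_unmap_rmapp): drop sptes only.  The sp stays
 * allocated and still accounted, which is why the patch needs special
 * code to re-init lpage_info. */
static void drop_sptes(struct sp *sp, int *lpage_count)
{
	sp->nr_sptes = 0;
	(void)lpage_count;	/* untouched */
}

/* The proposed path: zap the sp reached via rmap.  The normal zap path
 * unaccounts it, so the lpage_info counters take care of themselves. */
static void zap_page(struct sp *sp, int *lpage_count)
{
	sp->nr_sptes = 0;
	if (sp->accounted) {
		(*lpage_count)--;
		sp->accounted = 0;
	}
}

int main(void)
{
	struct sp a = { 8, 1 }, b = { 8, 1 };
	int lpage_count = 2;

	drop_sptes(&a, &lpage_count);
	zap_page(&b, &lpage_count);
	printf("lpage_count after drop+zap: %d\n", lpage_count);	/* 1 */
	return 0;
}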
> >>
> >> Unfortunately, no! We still need to take care of lpage_info. lpage_info is
> >> used to count the number of guest page tables in the memslot.
> >>
> >> For example, there is a memslot:
> >> memslot[0].base_gfn = 0, memslot[0].npages = 100,
> >>
> >> and there is a shadow page:
> >> sp->role.direct = 0, sp->role.level = 4, sp->gfn = 10.
> >>
> >> This sp is counted in memslot[0], but it cannot be found by walking
> >> memslot[0]->rmap, since there are no last-level mappings in this shadow page.
> >>
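
(The same example in toy-code form, with invented arrays: the accounting is
keyed on sp->gfn when the guest page table is shadowed, while the rmap records
only last-level mappings, so a slot-rmap walk never reaches this sp.)

#include <stdio.h>

#define NPAGES 100			/* slot covers gfns [0, 100) */

static int lpage_count[NPAGES];		/* bumped when gfn's page table is shadowed */
static void *rmap[NPAGES];		/* last-level mappings only */

/* Shadowing the guest page table at gfn (the sp above: role.direct = 0,
 * role.level = 4, gfn = 10). */
static void shadow_guest_page_table(int gfn)
{
	lpage_count[gfn]++;
	/* Nothing is added to rmap[gfn]: the sp has no last-level sptes,
	 * so a walk of rmap[0..99] never reaches it although it is counted. */
}

int main(void)
{
	shadow_guest_page_table(10);
	printf("counted: %d, reachable via rmap: %s\n",
	       lpage_count[10], rmap[10] ? "yes" : "no");
	return 0;
}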
> > Right, so what about walking mmu_page_hash for each gfn belonging to the
> > slot that is in the process of being removed, to find those?
> 
> That would cost a lot of time. The size of the hashtable is 1 << 10. If the
> memslot has 4M of memory, it will walk all the entries; the cost is the same
> as walking active_list (maybe a little more). And a memslot with 4M of memory
> is the normal case, I think.
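
(The arithmetic behind this: a 4M slot is 4M / 4K = 1024 gfns, and the hash
table has 1 << 10 = 1024 buckets, so probing one bucket per gfn touches
roughly as many hash-chain entries as scanning the whole table. A sketch of
the probe loop under discussion, with invented structures; the real
kvm->arch.mmu_page_hash is an array of hlist heads indexed by a hash of
sp->gfn.)

#include <stdio.h>

#define HASH_BITS	10
#define HASH_SIZE	(1 << HASH_BITS)

struct sp {
	unsigned long gfn;
	struct sp *hash_next;
};

static struct sp *mmu_page_hash[HASH_SIZE];	/* invented stand-in */
static unsigned long probed;

static unsigned long hashfn(unsigned long gfn)
{
	return gfn & (HASH_SIZE - 1);
}

/* Probe one bucket per gfn in the slot; with npages == HASH_SIZE this
 * touches about as many chain entries as a full-table walk. */
static void zap_slot_via_hash(unsigned long base_gfn, unsigned long npages)
{
	unsigned long gfn;
	struct sp *sp;

	for (gfn = base_gfn; gfn < base_gfn + npages; gfn++)
		for (sp = mmu_page_hash[hashfn(gfn)]; sp; sp = sp->hash_next) {
			probed++;
			if (sp->gfn == gfn) {
				/* zap_page(sp) would go here */
			}
		}
}

int main(void)
{
	zap_slot_via_hash(0, 1024);	/* a 4M slot: 4M / 4K = 1024 gfns */
	printf("chain entries probed: %lu\n", probed);
	return 0;
}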
> 
Memslots will be much bigger with memory hotplug. Lock break should obviously
be used while walking mmu_page_hash, but iterating over the entire memslot gfn
space just to find the few gfns that may be there is still suboptimal. We
could keep a list of them in the memslot itself, as sketched below.
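
(A sketch of that idea with hypothetical field names: each accounted sp is
linked into its memslot at creation time, so the flush path walks a short
per-slot list instead of probing the hash for every gfn.)

/* Hypothetical per-memslot sp list; all field names invented. */
#include <stddef.h>

struct sp {
	unsigned long gfn;
	struct sp *slot_prev, *slot_next;	/* links on slot->sp_list */
};

struct memslot {
	unsigned long base_gfn, npages;
	struct sp *sp_list;		/* sps accounted to this slot */
};

/* Called where the sp is accounted to the slot (sp creation time). */
static void slot_add_sp(struct memslot *slot, struct sp *sp)
{
	sp->slot_prev = NULL;
	sp->slot_next = slot->sp_list;
	if (slot->sp_list)
		slot->sp_list->slot_prev = sp;
	slot->sp_list = sp;
}

/* Called when the sp is zapped, and by the slot flush path. */
static void slot_del_sp(struct memslot *slot, struct sp *sp)
{
	if (sp->slot_prev)
		sp->slot_prev->slot_next = sp->slot_next;
	else
		slot->sp_list = sp->slot_next;
	if (sp->slot_next)
		sp->slot_next->slot_prev = sp->slot_prev;
}

int main(void)
{
	struct memslot slot = { 0, 100, NULL };
	struct sp sp = { 10, NULL, NULL };

	slot_add_sp(&slot, &sp);
	slot_del_sp(&slot, &sp);
	return 0;
}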

> Another point is that lpage_info stops the mmu from using large pages. If we
> do not reset lpage_info, the mmu keeps using 4K pages until the invalid sp is
> zapped.
> 
I do not think this is a big issue. If lpage_info prevented the use of
large pages for some memory ranges before we zapped the shadow pages,
it was probably for a reason, so a new shadow page will prevent large
pages from being created for the same memory ranges.
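
(In miniature, the gating being discussed; the counter is modeled on the
write_count field of kvm_lpage_info in the code of that era, but treat the
names here as illustrative.)

#include <stdio.h>

struct lpage_info {
	int write_count;	/* > 0: range holds shadowed page tables */
};

static int can_map_large(const struct lpage_info *info)
{
	return info->write_count == 0;
}

int main(void)
{
	struct lpage_info info = { 1 };	/* old (now invalid) sp still counted */

	/* Zapping the old sp and accounting a new one for the same range
	 * nets out to the same non-zero count, so the large-page decision
	 * is unchanged either way. */
	info.write_count--;	/* invalid sp finally zapped */
	info.write_count++;	/* new sp shadows the same range */
	printf("large pages allowed: %s\n",
	       can_map_large(&info) ? "yes" : "no");
	return 0;
}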

--
			Gleb.