Message-ID: <20130529123959.GC5931@amt.cnet>
Date: Wed, 29 May 2013 09:39:59 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc: gleb@...hat.com, avi.kivity@...il.com, pbonzini@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v7 10/11] KVM: MMU: collapse TLB flushes when zap all pages
On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> >>> the pages since other vcpus may be doing lockless shadow
> >>> page walking
> >
> > Ah, yes, I agree with you.
> >
> > We can introduce a list, say kvm->arch.obsolete_pages, to link all of the
> > zapped pages; the page shrinker will free the pages on that list first.
> >
> > Marcelo, if you do not have objections to patches 1 ~ 8 and 11, could you
> > please let them be merged first? I will add comments and the TLB
> > optimization later.
>
> Exclude patch 11 please, since it depends on the "collapse" optimization.
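For concreteness, here is a minimal sketch of the list idea above. The field
and helper names (obsolete_pages, kvm_mmu_free_obsolete) are illustrative
assumptions, not code from any tree:

/*
 * Illustrative only: a per-VM list of zapped-but-not-yet-freed shadow
 * pages, so the shrinker can release them before touching live pages.
 */
struct kvm_arch {
	/* ... existing fields ... */
	struct list_head obsolete_pages;	/* zapped sp's awaiting free */
};

/* Shrinker path: drain already-zapped pages first. */
static void kvm_mmu_free_obsolete(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *tmp;

	/* caller holds kvm->mmu_lock */
	list_for_each_entry_safe(sp, tmp, &kvm->arch.obsolete_pages, link) {
		list_del(&sp->link);
		kvm_mmu_free_page(sp);
	}
}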
I'm fine with patch 1 being merged. I think the remaining patches need better
understanding or explanation. The problems I see are:
1) The magic number "10" as the batch of pages to zap before considering a
reschedule is annoying. It would be good to understand why it is needed at
all (see the sketch after this list). Then again, the testcase measures
kvm_mmu_zap_all performance alone, which we know is not a common operation,
so perhaps there is no need for a minimum-pages-to-zap-before-reschedule.
2) The problem above (retention of a large number of pages while zapping)
can be fatal: it can lead to OOM and a host crash.
3) Make sure that the introduction of obsolete pages cannot lead to a huge
number of shadow pages hanging around (the correct reason you gave for not
merging https://patchwork.kernel.org/patch/2309641/ no longer holds with
obsolete pages; see the generation-number sketch below).
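To make point 1 concrete, the zap loop in question follows roughly this
pattern. This is a simplified sketch of the patch, not the exact code, with
BATCH_ZAP_PAGES standing in for the magic "10":

#define BATCH_ZAP_PAGES	10	/* the magic number being questioned */

static void kvm_zap_obsolete_pages(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);
	int batch = 0;

restart:
	list_for_each_entry_safe_reverse(sp, node,
	    &kvm->arch.active_mmu_pages, link) {
		/*
		 * Zap at least BATCH_ZAP_PAGES pages before breaking the
		 * lock; without some minimum, mmu_lock contention could
		 * starve the zap of forward progress.
		 */
		if (batch >= BATCH_ZAP_PAGES &&
		    cond_resched_lock(&kvm->mmu_lock)) {
			batch = 0;
			goto restart;
		}

		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list)) {
			batch++;
			goto restart;	/* the list may have changed */
		}
	}
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
}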
Other than these points, I'm fine with the obsolete pages optimization to
speed up kvm_mmu_zap_all, and with the rest of the patchset.
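For reference, the generation-number mechanism the patchset builds on works
roughly as follows. The names follow the patches under review, but this is a
simplification, not the exact code:

/* A shadow page is obsolete if it predates the last invalidation. */
static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
}

static void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);
	/*
	 * Bumping the generation marks every existing shadow page
	 * obsolete at once; they are then zapped lazily in batches,
	 * which is where points 2 and 3 above bite if too many linger.
	 */
	kvm->arch.mmu_valid_gen++;
	kvm_zap_obsolete_pages(kvm);
	spin_unlock(&kvm->mmu_lock);
}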