Message-ID: <20130508104100.GU12349@redhat.com>
Date: Wed, 8 May 2013 13:41:00 +0300
From: Gleb Natapov <gleb@...hat.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
avi.kivity@...il.com, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, takuya.yoshikawa@...il.com
Subject: Re: [PATCH v4 4/6] KVM: MMU: fast invalid all shadow pages
On Tue, May 07, 2013 at 12:09:29PM -0300, Marcelo Tosatti wrote:
> On Tue, May 07, 2013 at 05:56:08PM +0300, Gleb Natapov wrote:
> > > > Yes, I am missing what Marcelo means there too. We cannot free a memslot
> > > > until we unmap its rmap one way or the other.
> > >
> > > I do not understand what are you optimizing for, given the four possible
> > > cases we discussed at
> > >
> > > https://lkml.org/lkml/2013/4/18/280
> > >
> > We are optimizing mmu_lock holding time for all of those cases.
> >
> > But you cannot just do "zap roots + sp gen number increase" on slot
> > deletion, because you need to transfer the access/dirty information from
> > the rmap that is going to be deleted to the actual page before
> > kvm_set_memory_region() returns to the caller.
> >
> > > That is, why a simple for_each_all_shadow_page(zap_page) is not sufficient.
> > With a lock break? It is. We tried to optimize that by zapping only the
> > pages that reference the memslot being deleted, and zapping all the
> > others later when recycling old sps, but if you think this is premature
> > optimization I am fine with it.
>
> If it can be shown that its not premature optimization, I am fine with
> it.
>
> AFAICS all cases are 1) rare and 2) not latency sensitive (as in there
> is no requirement for those cases to finish in a short period of time).
OK, let's start with a simple version. The one that goes through the rmap
turned out to be more complicated than we expected.
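For readers following along, the "sp gen number increase" idea discussed above can be sketched roughly as follows. This is a minimal user-space illustration of the generation-number technique, not the actual KVM code; the struct and function names (kvm_sim, sp_sim, fast_invalidate_all) are invented for the example.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the relevant kvm and shadow-page state. */
struct kvm_sim { unsigned long mmu_valid_gen; };
struct sp_sim  { unsigned long mmu_valid_gen; };

/* A shadow page is obsolete if it was created before the last
 * invalidation, i.e. its generation no longer matches the VM's. */
static bool sp_is_obsolete(struct kvm_sim *kvm, struct sp_sim *sp)
{
    return sp->mmu_valid_gen != kvm->mmu_valid_gen;
}

/* "Fast invalidate all": bump the VM-wide generation number so every
 * existing shadow page becomes stale at once and can be zapped lazily,
 * instead of walking and zapping each page while holding mmu_lock. */
static void fast_invalidate_all(struct kvm_sim *kvm)
{
    kvm->mmu_valid_gen++;
}
```

The point of the thread is that this alone is not enough on slot deletion, since the access/dirty bits still have to be transferred out of the doomed rmap before kvm_set_memory_region() returns.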
--
Gleb.