Message-ID: <20130422230233.GA3337@amt.cnet>
Date: Mon, 22 Apr 2013 20:02:34 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Takuya Yoshikawa <takuya.yoshikawa@...il.com>
Cc: Gleb Natapov <gleb@...hat.com>,
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
avi.kivity@...il.com, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages
On Mon, Apr 22, 2013 at 10:45:53PM +0900, Takuya Yoshikawa wrote:
> On Mon, 22 Apr 2013 15:39:38 +0300
> Gleb Natapov <gleb@...hat.com> wrote:
>
> > > > Do not want kvm_set_memory (cases: DELETE/MOVE/CREATES) to be
> > > > susceptible to:
> > > >
> > > > vcpu 1 | kvm_set_memory
> > > > create shadow page
> > > > nuke shadow page
> > > > create shadow page
> > > > nuke shadow page
> > > >
> > > > Which is guest-triggerable behavior with the spinlock preemption algorithm.
> > >
> > > Not only is this guest triggerable in the sense of a malicious guest;
> > > the condition above can also be induced by host workload with a
> > > non-malicious guest system.
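
To make the window explicit, here is a minimal sketch of the kind of
zap-all loop being discussed (illustrative only, not a quote of any
posted patch; kvm_mmu_prepare_zap_page/kvm_mmu_commit_zap_page are the
existing helpers, the wrapper name is made up):

static void zap_all_preemptible_sketch(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *tmp;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, tmp, &kvm->arch.active_mmu_pages, link) {
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;
		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			/*
			 * mmu_lock is dropped here: a running vcpu can
			 * fault in new shadow pages, which the next pass
			 * zaps again -- the create/nuke ping-pong quoted
			 * above, so forward progress is not guaranteed.
			 */
			cond_resched_lock(&kvm->mmu_lock);
			goto restart;
		}
	}
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}
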
> > >
> > Is the problem that newly created shadow pages are immediately zapped?
> > Shouldn't generation number/kvm_mmu_zap_all_invalid() idea described here
> > https://lkml.org/lkml/2013/4/22/111 solve this?
>
> I guess so. That's what Avi described when he tried to achieve
> lockless TLB flushes. Mixing that idea with Xiao's approach will
> achieve reasonably nice performance, I think.
Yes.
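
To make that concrete, a rough sketch of the generation-number scheme
(hypothetical, simplified names -- not the structures from Xiao's
series): invalidation just bumps a per-VM generation, existing pages
become obsolete and are reclaimed lazily, and pages created afterwards
carry the new generation, so they are never zapped by the invalidation
itself and the ping-pong above cannot happen.

/* Hypothetical, simplified structures for illustration only. */
struct mmu_page_sketch {
	unsigned long valid_gen;	/* generation recorded at creation */
};

struct mmu_sketch {
	unsigned long valid_gen;	/* current generation for the VM */
	spinlock_t lock;
};

/* "Zap all" becomes a generation bump: O(1) under the lock. */
static void invalidate_all_sketch(struct mmu_sketch *mmu)
{
	spin_lock(&mmu->lock);
	mmu->valid_gen++;
	spin_unlock(&mmu->lock);
}

/* Pages created after the bump record the new generation... */
static void init_page_sketch(struct mmu_sketch *mmu,
			     struct mmu_page_sketch *sp)
{
	sp->valid_gen = mmu->valid_gen;
}

/* ...and only stale pages are reclaimed, lazily, so fresh pages are
 * not nuked by the invalidation itself. */
static bool page_is_obsolete_sketch(struct mmu_sketch *mmu,
				    struct mmu_page_sketch *sp)
{
	return sp->valid_gen != mmu->valid_gen;
}
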
> Various improvements should be added later on top of that if needed.
>
> > > Also, kvm_set_memory being relatively fast with huge-memory guests
> > > is nice (which is what Xiao's idea allows).
>
> I agree with this point. But if so, it should actually be measured on
> such guests, even if the algorithm looks promising.
Works for me.