Message-Id: <20120412072612.592b51427b04174ca7ecdf9f@gmail.com>
Date: Thu, 12 Apr 2012 07:26:12 +0900
From: Takuya Yoshikawa <takuya.yoshikawa@...il.com>
To: Avi Kivity <avi@...hat.com>
Cc: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 00/13] KVM: MMU: fast page fault
On Wed, 11 Apr 2012 17:21:30 +0300
Avi Kivity <avi@...hat.com> wrote:
> Currently the main performance bottleneck for migration is qemu, which
> is single threaded and generally inefficient. However I am sure that
> once the qemu bottlenecks will be removed we'll encounter kvm problems,
> particularly with wide (many vcpus) and large (lots of memory) guests.
> So it's a good idea to improve in this area. I agree we'll need to
> measure each change, perhaps with a test program until qemu catches up.
I agree.
I am especially interested in XBZRLE + the current srcu-less dirty logging.
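For readers unfamiliar with it, XBZRLE (Xor Based Zero Run Length Encoding) compresses re-sent guest pages during live migration by XORing the new page against the previously sent copy and run-length-encoding the resulting zero runs. The sketch below is a toy illustration of that idea only, not QEMU's actual wire format or API; the function names and the 2-byte little-endian run lengths are my own assumptions.

```python
def xbzrle_encode(old_page: bytes, new_page: bytes) -> bytes:
    """Toy XBZRLE-style delta: XOR the pages, then store alternating
    (zero-run length, non-zero-run length + data) records."""
    xored = bytes(a ^ b for a, b in zip(old_page, new_page))
    out = bytearray()
    i, n = 0, len(xored)
    while i < n:
        # count a run of zero bytes and emit only its length
        run = 0
        while i + run < n and xored[i + run] == 0:
            run += 1
        out += run.to_bytes(2, "little")
        i += run
        # count a run of non-zero bytes and emit length + XOR data
        run = 0
        while i + run < n and xored[i + run] != 0:
            run += 1
        out += run.to_bytes(2, "little") + xored[i:i + run]
        i += run
    return bytes(out)

def xbzrle_decode(old_page: bytes, delta: bytes) -> bytes:
    """Rebuild the new page by applying the XOR delta to the old copy."""
    page = bytearray(old_page)
    i, pos = 0, 0  # i: offset in delta, pos: offset in page
    while i < len(delta):
        zrun = int.from_bytes(delta[i:i + 2], "little"); i += 2
        pos += zrun  # unchanged bytes: skip
        if i >= len(delta):
            break
        nzrun = int.from_bytes(delta[i:i + 2], "little"); i += 2
        for j in range(nzrun):
            page[pos + j] ^= delta[i + j]  # apply changed bytes
        i += nzrun
        pos += nzrun
    return bytes(page)
```

When only a few bytes of a 4 KiB page change between migration rounds, the delta is far smaller than the page itself, which is why it pairs well with faster dirty-bit harvesting.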
> > I am testing the current live migration to see when and for what it can
> > be used. I really want to see it become stable and usable for real
> > services.
> Well, it's used in production now.
For RHEL6, e.g., yes of course, and we are ...
My comment was about the current srcu-less work and whether I can make it
stable enough in this rc cycle. I think it will broaden the real use cases
to some extent.
> > So I really do not want to see drastic change now without any real need
> > or feedback from real users -- this is my point.
> It's a good point, we should avoid change for its own sake.
Yes, especially because live migration users are limited to those who run
such services.
I hope kernel developers will start using it on their desktops!
Thanks,
Takuya
--