Date:	Thu, 10 Mar 2016 07:22:38 +0000
From:	"Li, Liang Z" <liang.z.li@...el.com>
To:	Jitendra Kolhe <jitendra.kolhe@....com>,
	"amit.shah@...hat.com" <amit.shah@...hat.com>
CC:	"dgilbert@...hat.com" <dgilbert@...hat.com>,
	"ehabkost@...hat.com" <ehabkost@...hat.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"mst@...hat.com" <mst@...hat.com>,
	"quintela@...hat.com" <quintela@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"qemu-devel@...gnu.org" <qemu-devel@...gnu.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"pbonzini@...hat.com" <pbonzini@...hat.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"virtualization@...ts.linux-foundation.org" 
	<virtualization@...ts.linux-foundation.org>,
	"rth@...ddle.net" <rth@...ddle.net>,
	"mohan_parthasarathy@....com" <mohan_parthasarathy@....com>,
	"simhan@....com" <simhan@....com>
Subject: RE: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live
 migration optimization

> On 3/8/2016 4:44 PM, Amit Shah wrote:
> > On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
> >>>>
> >>>> * Liang Li (liang.z.li@...el.com) wrote:
> >>>>> The current QEMU live migration implementation marks all of the
> >>>>> guest's RAM pages as dirty in the ram bulk stage; all these
> >>>>> pages are then processed, which takes quite a lot of CPU cycles.
> >>>>>
> >>>>> From the guest's point of view, the content of free pages does
> >>>>> not matter. We can make use of this fact and skip processing the
> >>>>> free pages in the ram bulk stage; this saves a lot of CPU cycles,
> >>>>> reduces network traffic significantly, and clearly speeds up the
> >>>>> live migration process.
> >>>>>
> >>>>> This patch set is the QEMU side implementation.
> >>>>>
> >>>>> The virtio-balloon is extended so that QEMU can get the free pages
> >>>>> information from the guest through virtio.
> >>>>>
> >>>>> After getting the free page information (a bitmap), QEMU can use
> >>>>> it to filter out the guest's free pages in the ram bulk stage.
> >>>>> This makes the live migration process much more efficient.
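
For illustration, the filtering step amounts to clearing the dirty bit of
every page the guest reports free, so the bulk stage never reads or sends
those pages. A minimal sketch (the names here are hypothetical, not the
actual QEMU identifiers):

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative sketch only: the guest sends a free-page bitmap (one bit
 * per guest page), and QEMU clears the corresponding bits in the
 * migration dirty bitmap so the ram bulk stage skips those pages.
 */
static void filter_free_pages(uint64_t *dirty, const uint64_t *free_map,
                              size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        dirty[i] &= ~free_map[i];   /* a free page is no longer "dirty" */
    }
}
```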
> >>>>
> >>>> Hi,
> >>>>   An interesting solution; I know a few different people have been
> >>>> looking at how to speed up ballooned VM migration.
> >>>>
> >>>
> >>> Ooh, different solutions for the same purpose, and both based on
> >>> the balloon.
> >>
> >> We were also trying to address a similar problem, without actually
> >> needing to modify the guest driver. Please find the patch details
> >> under the mail with the subject:
> >> migration: skip sending ram pages released by virtio-balloon driver
> >
> > The scope of this patch series seems to be wider: don't send free
> > pages to a dest at all, vs. don't send pages that are ballooned out.
> >
> > 		Amit
> 
> Hi,
> 
> Thanks for your response. The scope of this patch series doesn't seem to
> take care of ballooned-out pages. To balloon out a guest RAM page, the
> guest balloon driver does an alloc_page() and then returns the guest pfn
> to QEMU, so ballooned-out pages will not be seen as free RAM pages by the
> guest. Thus we will still end up scanning ballooned-out pages (for zero
> pages) during migration. It would be ideal if we could have both solutions.
> 

Agreed. For users who care about performance, just skipping the free pages
is enough. For users who have already turned on virtio-balloon, your
solution can take effect as well.
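
Combining the two would then just mean consulting both bitmaps when deciding
whether to skip a page. A rough sketch (again with hypothetical names;
neither series necessarily exposes bitmaps in exactly this form):

```c
#include <stdint.h>

/*
 * Illustrative sketch of combining both approaches: skip a page if the
 * guest reported it free OR the balloon driver has ballooned it out.
 * Since ballooned-out pages sit behind an alloc_page() in the guest,
 * they never appear in the free-page bitmap, so both checks are needed.
 * The bitmap layout (one bit per pfn, 64 pfns per word) is an assumption.
 */
static int should_skip_page(uint64_t pfn, const uint64_t *free_map,
                            const uint64_t *balloon_map)
{
    uint64_t word = pfn / 64;
    uint64_t bit  = pfn % 64;
    return ((free_map[word] >> bit) & 1) ||
           ((balloon_map[word] >> bit) & 1);
}
```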

Liang
> Thanks,
> - Jitendra
