Message-ID: <5A587C61.2010204@intel.com>
Date: Fri, 12 Jan 2018 17:14:09 +0800
From: Wei Wang <wei.w.wang@...el.com>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>, mst@...hat.com
CC: virtio-dev@...ts.oasis-open.org, linux-kernel@...r.kernel.org,
qemu-devel@...gnu.org, virtualization@...ts.linux-foundation.org,
kvm@...r.kernel.org, linux-mm@...ck.org, mhocko@...nel.org,
akpm@...ux-foundation.org, mawilcox@...rosoft.com,
david@...hat.com, cornelia.huck@...ibm.com,
mgorman@...hsingularity.net, aarcange@...hat.com,
amit.shah@...hat.com, pbonzini@...hat.com, willy@...radead.org,
liliang.opensource@...il.com, yang.zhang.wz@...il.com,
quan.xu0@...il.com, nilal@...hat.com, riel@...hat.com
Subject: Re: [PATCH v21 2/5 RESEND] virtio-balloon: VIRTIO_BALLOON_F_SG
On 01/11/2018 07:06 PM, Tetsuo Handa wrote:
> Wei Wang wrote:
>> Michael, could we merge patch 3-5 first?
> No! I'm repeatedly asking you to propose only VIRTIO_BALLOON_F_SG changes.
> Please don't ignore me.
>
> Patch 4 depends on patch 2. Thus, back to patch 2.
There is no strict dependence per se. I plan to split the two features
into two series: I'll post patches 3-5 first, together with the
corresponding hypervisor code.
After that's done, I'll get back to the discussion of patch 2.
> Now, proceeding to patch 4.
>
> Your patch is trying to call add_one_sg() multiple times based on
>
> ----------------------------------------
> + /*
> + * This is expected to never fail: there is always at least 1 entry
> + * available on the vq, because when the vq is full the worker thread
> + * that adds the sg will be put into sleep until at least 1 entry is
> + * available to use.
> + */
This will be clearer in the new version, which will no longer be bundled
with patch 2.
>
> Now, I suspect we need to add a VIRTIO_BALLOON_F_FREE_PAGE_VQ flag. I want to see
> the patch for the hypervisor side which makes use of the VIRTIO_BALLOON_F_FREE_PAGE_VQ
> flag, because its usage becomes tricky. Between the moment the guest kernel takes a
> snapshot of the free memory blocks and the moment the hypervisor is told that some
> pages are currently free, those pages can come back into use. That is, I don't think
>
> The second feature enables the optimization of the 1st round memory
> transfer - the hypervisor can skip the transfer of guest free pages in the
> 1st round.
>
> is accurate. The hypervisor is allowed to mark the pages which the guest kernel
> reports as "currently unused" as write-protected before starting the 1st round.
> Then, the hypervisor copies all pages except the write-protected ones as the
> 1st round. The 2nd and later rounds proceed as usual. That is,
> VIRTIO_BALLOON_F_FREE_PAGE_VQ requires the hypervisor to do a 0th round as
> preparation. Thus, I want to see the patch for the hypervisor side.
>
> Now, what if all free pages in the guest kernel were reserved as ballooned pages?
> There would be no free pages for the VIRTIO_BALLOON_F_FREE_PAGE_VQ flag to help
> with. The hypervisor would have to copy all pages, because every page is either
> currently in use or currently in a balloon. After ballooning to an appropriate
> size, there will be little free memory in the guest kernel, and the hypervisor
> already knows which pages are in the balloon. Thus, the hypervisor can skip
> copying the content of the pages in the balloon without using the
> VIRTIO_BALLOON_F_FREE_PAGE_VQ flag.
>
> Then, why can't we do "inflate the balloon up to a reasonable level (e.g. no need
> to wait for reclaim and no need to deflate)" instead of "find all the free pages
> as of a specific moment"? That is, couldn't the code for
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM be reused instead of adding
> VIRTIO_BALLOON_F_FREE_PAGE_VQ?
>
I think you have misunderstood the work, and it is not easy to explain
everything from the beginning here. I would like to review patch 4 (I'll
send out a new, independent version) with Michael, if possible.
I'll discuss patch 2 with you later. Thanks.
Best,
Wei