Message-ID: <01e42346-5b4d-8ccc-d485-5d866da7cf8d@redhat.com>
Date: Tue, 4 Jan 2022 15:44:55 +0100
From: David Hildenbrand <david@...hat.com>
To: Peng Liang <liangpeng10@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: akpm@...ux-foundation.org, hughd@...gle.com,
xiexiangyou@...wei.com, zhengchuan@...wei.com,
wanghao232@...wei.com, "dgilbert@...hat.com" <dgilbert@...hat.com>
Subject: Re: [RFC 0/1] memfd: Support mapping to zero page on reading
On 22.12.21 13:33, Peng Liang wrote:
> Hi all,
>
> Recently we are working on implementing CRIU [1] for QEMU based on
> Steven's work [2]. It will use memfd to allocate guest memory in order
> to restore (inherit) it in the new QEMU process. However, memfd will
> allocate a new page on read, while anonymous memory maps to the zero
> page on read. For QEMU, memfd may cause all memory to be allocated
> during migration, because QEMU reads all pages during migration. That
> may lead to OOM if memory overcommit is enabled, which it usually is
> in public clouds.
Hi,
It's the exact same problem as when migrating a VM right after
inflating the balloon, or after reporting free memory to the hypervisor
via virtio-balloon free page reporting.
Even populating the shared zero page still wastes CPU time and, more
importantly, memory for page tables. Further, you'll end up reading the
whole page only to discover that you just populated the shared
zeropage, which is far from optimal. Instead of doing that dance, just
check whether there is anything worth reading at all.
You could simply sense whether a page is actually populated before
going ahead and reading it for migration. I actually discussed that
recently with Dave Gilbert.
For anonymous memory it's pretty straightforward via
/proc/self/pagemap. For files you can use lseek().
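For the memfd case, a rough, untested sketch of the lseek()-based
probing (scan_populated() is just a made-up name for this example)
could look like:

#define _GNU_SOURCE     /* for SEEK_DATA / SEEK_HOLE */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Walk the fd and report the ranges that are actually populated;
 * holes (never-written pages in tmpfs/memfd) are skipped entirely. */
static void scan_populated(int fd, off_t size)
{
        off_t off = 0;

        while (off < size) {
                off_t data = lseek(fd, off, SEEK_DATA);

                if (data < 0) {
                        if (errno == ENXIO) /* no data until EOF */
                                break;
                        perror("lseek(SEEK_DATA)");
                        return;
                }
                off_t hole = lseek(fd, data, SEEK_HOLE);
                if (hole < 0) {
                        perror("lseek(SEEK_HOLE)");
                        return;
                }
                printf("populated: [%lld, %lld)\n",
                       (long long)data, (long long)hole);
                off = hole;
        }
}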
https://lkml.kernel.org/r/20210923064618.157046-2-tiberiu.georgescu@nutanix.com
contains some details. There was a discussion about eventually adding a
better bulk interface for this, should it turn out to be necessary for
performance.
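For the anonymous case, a similarly rough sketch (page_is_populated()
is a made-up name; bit 63 of a pagemap entry means "present", bit 62
"swapped") could be:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return 1 if the page backing vaddr is present or swapped (i.e.,
 * worth reading), 0 if it was never populated, -1 on error. */
static int page_is_populated(void *vaddr)
{
        long page_size = sysconf(_SC_PAGESIZE);
        uint64_t entry;
        off_t off = ((uintptr_t)vaddr / page_size) * sizeof(entry);
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
                return -1;
        if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
                close(fd);
                return -1;
        }
        close(fd);

        return !!(entry & (3ULL << 62));
}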
--
Thanks,
David / dhildenb