Message-ID: <20210219163157.GF6669@xz-x1>
Date: Fri, 19 Feb 2021 11:31:57 -0500
From: Peter Xu <peterx@...hat.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Arnd Bergmann <arnd@...db.de>, Michal Hocko <mhocko@...e.com>,
Oscar Salvador <osalvador@...e.de>,
Matthew Wilcox <willy@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Minchan Kim <minchan@...nel.org>, Jann Horn <jannh@...gle.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Dave Hansen <dave.hansen@...el.com>,
Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...riel.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Richard Henderson <rth@...ddle.net>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>,
Matt Turner <mattst88@...il.com>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
Helge Deller <deller@....de>, Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, linux-alpha@...r.kernel.org,
linux-mips@...r.kernel.org, linux-parisc@...r.kernel.org,
linux-xtensa@...ux-xtensa.org, linux-arch@...r.kernel.org
Subject: Re: [PATCH RFC] mm/madvise: introduce MADV_POPULATE to
prefault/prealloc memory
On Fri, Feb 19, 2021 at 09:20:16AM +0100, David Hildenbrand wrote:
> On 18.02.21 23:59, Peter Xu wrote:
> > Hi, David,
> >
> > On Wed, Feb 17, 2021 at 04:48:44PM +0100, David Hildenbrand wrote:
> > > When we manage sparse memory mappings dynamically in user space - also
> > > sometimes involving MAP_NORESERVE - we want to dynamically populate/
> > > discard memory inside such a sparse memory region. Example users are
> > > hypervisors (especially implementing memory ballooning or similar
> > > technologies like virtio-mem) and memory allocators. In addition, we want
> > > to fail in a nice way if populating does not succeed because we are out of
> > > backend memory (which can happen easily with file-based mappings,
> > > especially tmpfs and hugetlbfs).
> >
> > Could you explain a bit more how you plan to use this new interface for
> > the virtio-balloon scenario?
>
> Sure, that will bring up an interesting point to discuss
> (MADV_POPULATE_WRITE).
>
> I'm planning on using it in virtio-mem: whenever the guests requests the
> hypervisor (via a virtio-mem device) to make specific blocks available
> ("plug"), I want to have a configurable option ("populate=on" /
> "prealloc="on") to perform safety checks ("prealloc") and populate page
> tables.
As you mentioned in the commit message, the original goal for MADV_POPULATE
should be for performance's sake, which I can understand. But for the safety
check, I'm curious whether we'd have a better way to do that besides populating
the whole memory.
E.g., can we simply ask the kernel "how much memory can this process still
allocate", and get a number back? I'm not sure whether that can already be done
with cgroups or some other facility, or maybe it's still missing. But I'd like
to raise the question, since these two requirements seem to me to be two
standalone issues to solve. It could be overkill to populate all the memory
just for a sanity check.
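Just to illustrate the kind of query I have in mind - only a rough sketch, and
the cgroup v2 path (and whether such a number is good enough for hugetlbfs) are
assumptions about the setup:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Rough sketch: compare memory.current against memory.max (cgroup v2) to
 * estimate how much more this cgroup may still charge.  The mount point
 * below is an assumption, and this says nothing about the hugetlb pool.
 */
static long long read_ll(const char *path)
{
    char buf[64];
    long long val = -1;
    FILE *f = fopen(path, "r");

    if (f && fgets(buf, sizeof(buf), f) && strncmp(buf, "max", 3))
        val = strtoll(buf, NULL, 10);
    if (f)
        fclose(f);
    return val;
}

int main(void)
{
    /* Assumes cgroup v2 and that the process sits in this cgroup. */
    long long cur = read_ll("/sys/fs/cgroup/memory.current");
    long long max = read_ll("/sys/fs/cgroup/memory.max");

    if (cur >= 0 && max >= 0)
        printf("~%lld bytes left before hitting memory.max\n", max - cur);
    else
        printf("no usable limit information\n");
    return 0;
}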
>
> This becomes especially relevant for private/shared hugetlbfs and shared
> files/shmem where we have a limited pool size (e.g., huge pages, tmpfs size,
> filesystem size). But it will also come in handy when just preallocating
> (esp. zeroing) anonymous memory.
>
> For virtio-balloon it is not applicable because it really only supports
> anonymous memory and we cannot fail requests to deflate ...
>
> --- Example ---
>
> Example: Assume the guest requests to make 128 MB available and we're using
> hugetlbfs. Assume we're out of huge pages in the hypervisor - we want to
> fail the request - I want to do some kind of preallocation.
>
> So I could do fallocate() on anything that's MAP_SHARED, but not on anything
> that's MAP_PRIVATE. hugetlbfs via memfd() cannot be preallocated without
> going via SIGBUS handlers.
>
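For reference, the fallocate() route for the MAP_SHARED case boils down to
something like this (just a sketch; the default huge page size and the 128 MB
request from the example are assumptions):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 128UL << 20;        /* the 128 MB from the example */
    /* Default huge page size; MFD_HUGE_* flags could pick another one. */
    int fd = memfd_create("guest-ram", MFD_HUGETLB);

    if (fd < 0) { perror("memfd_create"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Allocate the huge pages up front; fails cleanly if the pool is empty. */
    if (fallocate(fd, 0, 0, size) < 0) { perror("fallocate"); return 1; }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hand the mapping to the VM; a MAP_PRIVATE mapping is not covered. */
    return 0;
}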
> --- QEMU memory configurations ---
>
> I see the following combinations relevant in QEMU that I want to support
> with virtio-mem:
>
> 1) MAP_PRIVATE anonymous memory
> 2) MAP_PRIVATE on hugetlbfs (esp. via memfd)
> 3) MAP_SHARED on hugetlbfs (esp. via memfd)
> 4) MAP_SHARED on shmem (file / memfd)
> 5) MAP_SHARED on some sparse file.
>
> Other MAP_PRIVATE mappings barely make any sense to me - "read the file and
> write to page cache" is not really applicable to VM RAM (not to mention
> doing fallocate(PUNCH_HOLE) that invalidates the private copies of all other
> mappings on that file).
>
> --- Ways to populate/preallocate ---
>
> I see the following ways to populate/preallocate:
>
> a) MADV_POPULATE: write fault on writable MAP_PRIVATE, read fault on
> MAP_SHARED
> b) Writing to MAP_PRIVATE | MAP_SHARED from user space.
> c) (below) MADV_POPULATE_WRITE: write fault on writable MAP_PRIVATE |
> MAP_SHARED
>
> Especially, b) is kind of weird as implemented in QEMU
> (util/oslib-posix.c:do_touch_pages):
>
> "Read & write back the same value, so we don't corrupt existing user/app
> data ... TODO: get a better solution from kernel so we don't need to write
> at all so we don't cause wear on the storage backing the region..."
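For reference, that loop boils down to something like the following (a
simplified sketch, not the exact QEMU code; page size handling and error paths
are elided):

#include <stddef.h>

/*
 * Simplified sketch of the "read & write back the same value" trick:
 * force a write fault on every page without changing its contents.
 */
static void touch_pages(char *area, size_t npages, size_t page_size)
{
    for (size_t i = 0; i < npages; i++) {
        volatile char *p = area + i * page_size;

        *p = *p;        /* write back what is already there */
    }
}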
It's interesting to learn about commit 1e356fc14be ("mem-prealloc: reduce large
guest start-up and migration time.", 2017-03-14). It seems to be about speeding
up VM boot, but what I can't understand is why the hugetlb accounting would be
delayed at all - I thought we'd fail even earlier, at either fallocate() on the
hugetlb file (when we use /dev/hugepages) or at mmap() of the memfd which
contains the huge pages. See hugetlb_reserve_pages() and its callers. Or did
I miss something?
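What I'd expect is something like the following to fail already at mmap() time
when the pool is too small (just a sketch, assuming the default huge page size
and MAP_SHARED without MAP_NORESERVE):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Pick a size that is (assumed to be) larger than the huge page pool. */
    const size_t size = 1UL << 30;
    int fd = memfd_create("resv-test", MFD_HUGETLB);

    if (fd < 0) { perror("memfd_create"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /*
     * For MAP_SHARED without MAP_NORESERVE the reservation is taken here,
     * via hugetlb_reserve_pages(), before any page is touched.
     */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        perror("mmap");         /* expect ENOMEM when the pool is too small */
    else
        printf("reservation for %zu bytes succeeded\n", size);
    return 0;
}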
I think there's a special case if QEMU fork()s with a MAP_PRIVATE hugetlbfs
mapping, which could cause the memory accounting to be delayed until COW
happens.
However that's definitely not the case for QEMU, since QEMU won't fork() as
late as that point.
IOW, for hugetlbfs I don't know why we need to populate the pages at all if we
simply want to know "whether we still have enough space". And IIUC b) above is
the major issue you'd like to solve too.
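E.g., a cheap check along these lines (a sketch; it only covers the default
huge page size and ignores the hugetlb cgroup controller, so take it as an
approximation at best):

#include <stdio.h>
#include <string.h>

/*
 * Sketch: read HugePages_Free and HugePages_Rsvd from /proc/meminfo to
 * estimate how many default-sized huge pages are still unreserved.
 */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    long free_hp = -1, rsvd_hp = -1;
    char line[128];

    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof(line), f)) {
        sscanf(line, "HugePages_Free: %ld", &free_hp);
        sscanf(line, "HugePages_Rsvd: %ld", &rsvd_hp);
    }
    fclose(f);

    if (free_hp >= 0 && rsvd_hp >= 0)
        printf("~%ld default-sized huge pages still unreserved\n",
               free_hp - rsvd_hp);
    return 0;
}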
>
> So if we have zero, we write zero. We'll COW pages, triggering a write fault
> - and that's the only good thing about it. For example, similar to
> MADV_POPULATE, nothing stops KSM from merging anonymous pages again. So for
> anonymous memory the actual write is not helpful at all. Similarly for
> hugetlbfs, the actual write is not necessary - but there is no other way to
> really achieve the goal.
>
> --- How MADV_POPULATE is useful ---
>
> With virtio-mem, our VM will usually write to memory before it reads it.
>
> With 1) and 2) it does exactly what I want: trigger COW / allocate memory
> and trigger a write fault. The only issue with 1) is that KSM might come
> around and undo our work - but that could only be avoided by writing random
> numbers to all pages from user space. Or we could simply rather disable KSM
> in that setup ...
>
> --- How MADV_POPULATE is not perfect ---
>
> KSM can merge anonymous pages again. Just like the current QEMU
> implementation. The only way around that is writing random numbers to the
> pages or mlocking all memory. No big news.
>
> Nothing stops reclaim/swap code from depopulating when using files. Again,
> no big news - we have to mlock.
>
> --- How MADV_POPULATE_WRITE might be useful ---
>
> With 3) 4) 5) MADV_POPULATE does partially what I want: preallocate memory
> and populate page tables. But as it's a read fault, I think we'll have
> another minor fault on access. Not perfect, but better than failing with
> SIGBUS. One way around that would be having an additional
> MADV_POPULATE_WRITE, to use in cases where it makes sense (I think at least
> 3) and 4), most probably not on actual files like 5) ).
Right, it seems that when populating memory we'll read-fault on file-backed
mappings. However that'll be another performance issue to think about. So I'd
hope we can start with the current virtio-mem issue on memory accounting, and
then we can discuss them separately.
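For completeness, the userspace side of the proposal would presumably be just
an madvise() call plus error handling, roughly like this (a sketch;
MADV_POPULATE only exists in this RFC, so the constant is not in any released
uapi header yet, hence the #ifdef):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>
#include <stddef.h>

/*
 * Try to populate [area, area + len) via the proposed madvise() call and
 * report failure to the caller instead of running into SIGBUS later.
 */
static int populate_or_report(void *area, size_t len)
{
#ifdef MADV_POPULATE
    if (madvise(area, len, MADV_POPULATE)) {
        fprintf(stderr, "populate failed: errno=%d\n", errno);
        return -errno;
    }
    return 0;
#else
    (void)area;
    (void)len;
    return -EINVAL;     /* built against headers without the RFC patch */
#endif
}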
Btw, thanks for the long write-up, it definitely helps me to understand what
you wanted to achieve.
Thanks,
--
Peter Xu