Message-ID: <6bfcc500-7c11-f66a-26ea-e8b8bcc79e28@intel.com>
Date: Mon, 4 Jan 2021 15:00:31 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: David Hildenbrand <david@...hat.com>
Cc: Matthew Wilcox <willy@...radead.org>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Liang Li <liliangleo@...iglobal.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [RFC v2 PATCH 4/4] mm: pre zero out free pages to speed up page
allocation for __GFP_ZERO

On 1/4/21 12:11 PM, David Hildenbrand wrote:
>> Yeah, it certainly can't be the default, but it *is* useful for
>> things where we know that there are no cache benefits to zeroing
>> close to where the memory is allocated.
>>
>> The trick is opting into it somehow, either in a process or a VMA.
>>
> The patch set is mostly trying to optimize starting a new process. So
> process/vma doesn't really work.

Let's say you have a system-wide tunable that says: pre-zero pages and
keep 10GB of them around. Then, you opt-in a process to being allowed
to dip into that pool with a process-wide flag or an madvise() call.
You could even have the flag be inherited across execve() if you wanted
to have helper apps be able to set the policy and access the pool like
how numactl works.
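
Roughly, the opt-in could look something like this from the process
side.  MADV_PREZEROED is made up purely for illustration -- no such
advice value exists today, so on a current kernel the call just fails
with EINVAL:

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* Hypothetical advice value, not in today's uapi headers. */
#ifndef MADV_PREZEROED
#define MADV_PREZEROED 100
#endif

int main(void)
{
        size_t len = 1UL << 30;         /* e.g. 1GB of guest RAM */

        /* Anonymous memory that will back the guest. */
        void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /*
         * Hypothetical opt-in: ask that faults in this range be
         * satisfied from the pre-zeroed pool when it has pages,
         * falling back to normal zeroing otherwise.
         */
        if (madvise(mem, len, MADV_PREZEROED))
                perror("madvise(MADV_PREZEROED)");

        return 0;
}
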
Dan makes a very good point about using filesystems for this, though.
It wouldn't be rocket science to set up a special tmpfs mount just for
VM memory and pre-zero it from userspace. For qemu, you'd need to teach
the management layer to hand out zeroed files via mem-path=. Heck, if
you taught MADV_FREE how to handle tmpfs, you could even pre-zero *and*
get the memory back quickly if those files ended up over-sized somehow.
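
The userspace pre-zeroing part is pretty simple, too.  A rough sketch
(path and size are made up for the example): create a file on the
dedicated tmpfs mount and fault every page in ahead of time.  tmpfs
hands back zeroed pages on first touch, so once the loop finishes the
whole file is populated and zero, ready to be handed to qemu via
mem-path=:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* Example: a tmpfs mount reserved for VM memory. */
        const char *path = "/vm-mem/guest0.ram";
        size_t len = 1UL << 30;         /* 1GB guest RAM file */
        long page = sysconf(_SC_PAGESIZE);

        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, len)) {
                perror("open/ftruncate");
                return 1;
        }

        char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /*
         * Touch one byte per page.  Each write fault allocates a
         * fresh, already-zeroed tmpfs page, so this is the
         * "pre-zero from userspace" step.
         */
        for (size_t off = 0; off < len; off += page)
                mem[off] = 0;

        munmap(mem, len);
        close(fd);
        return 0;
}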