Message-ID: <ZCGx0unt2aofy8BW@dhcp22.suse.cz>
Date: Mon, 27 Mar 2023 17:10:10 +0200
From: Michal Hocko <mhocko@...e.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Song Liu <song@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [RFC PATCH 1/5] mm: introduce __GFP_UNMAPPED and unmapped_alloc()
On Mon 27-03-23 16:31:45, Vlastimil Babka wrote:
> On 3/27/23 15:43, Michal Hocko wrote:
> > On Sat 25-03-23 09:38:12, Mike Rapoport wrote:
> >> On Fri, Mar 24, 2023 at 09:37:31AM +0100, Michal Hocko wrote:
> >> > On Wed 08-03-23 11:41:02, Mike Rapoport wrote:
> >> > > From: "Mike Rapoport (IBM)" <rppt@...nel.org>
> >> > >
> >> > > When the set_memory or set_direct_map APIs are used to change attributes
> >> > > or permissions for chunks of several pages, the large PMD that maps these
> >> > > pages in the direct map must be split. Fragmenting the direct map in such
> >> > > a manner causes TLB pressure and, eventually, performance degradation.
> >> > >
> >> > > To avoid excessive direct map fragmentation, add the ability to allocate
> >> > > "unmapped" pages with the __GFP_UNMAPPED flag, which removes the allocated
> >> > > pages from the direct map and serves them from a cache of unmapped pages.
> >> > >
> >> > > This cache is replenished with higher order pages, preferring PMD_SIZE
> >> > > pages when possible, so that fewer large pages in the direct map need to
> >> > > be split.
> >> > >
> >> > > The cache is implemented as a buddy allocator, so it can serve high order
> >> > > allocations of unmapped pages.
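> >> > >
> >> > > For callers the intended usage is just an extra gfp flag. A minimal
> >> > > sketch (the helper name is made up for illustration):
> >> > >
> >> > > 	/*
> >> > > 	 * The returned pages are absent from the direct map, so the
> >> > > 	 * caller must establish its own mapping (e.g. via vmap())
> >> > > 	 * before accessing their contents.
> >> > > 	 */
> >> > > 	static struct page *grab_unmapped_pages(unsigned int order)
> >> > > 	{
> >> > > 		return alloc_pages(GFP_KERNEL | __GFP_UNMAPPED, order);
> >> > > 	}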
> >> >
> >> > Why do we need a dedicated gfp flag for all this when a dedicated
> >> > allocator is used anyway? What prevents users from calling
> >> > unmapped_pages_{alloc,free}?
> >>
> >> Using unmapped_pages_{alloc,free} adds complexity for the users, which IMO
> >> outweighs the cost of a dedicated gfp flag.
> >
> > Aren't those users rare and very special anyway?
>
> I think it's mostly about the freeing, which can happen from a generic
> context not aware of the special allocation, so it's not about how rare it
> is, but how complex it would be to exhaustively determine those contexts
> and do something in them.
Yes, I can see a challenge with put_page users, but that is not really
related to the gfp flag, as gfp flags are only relevant in the allocation
context.
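
To make that concrete, any generic freeing path would need a check along
these lines (a sketch only; the predicate name and the
unmapped_pages_free() signature are assumed here, not taken from the
series):

	static inline void free_pages_checked(struct page *page,
					      unsigned int order)
	{
		/* hypothetical test for pages removed from the direct map */
		if (page_is_unmapped(page)) {
			unmapped_pages_free(page, order);
			return;
		}
		__free_pages(page, order);
	}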
> >> For modules we'd have to make x86::module_{alloc,free}() take care of
> >> mapping and unmapping the allocated pages in the module virtual address
> >> range. This might also become relevant for other architectures in the
> >> future, and then we'd have several complex module_alloc()s.
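> >>
> >> With the flag, the x86 side stays roughly a one-liner in the gfp mask.
> >> A sketch, assuming the current __vmalloc_node_range() signature (this is
> >> not the actual patch):
> >>
> >> 	void *module_alloc(unsigned long size)
> >> 	{
> >> 		return __vmalloc_node_range(size, MODULE_ALIGN,
> >> 					    MODULES_VADDR, MODULES_END,
> >> 					    GFP_KERNEL | __GFP_UNMAPPED,
> >> 					    PAGE_KERNEL, VM_FLUSH_RESET_PERMS,
> >> 					    NUMA_NO_NODE,
> >> 					    __builtin_return_address(0));
> >> 	}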
> >
> > The module_alloc use is lacking any justification. More context would be
> > more than useful. Also vmalloc support for the proposed __GFP_UNMAPPED
> > likely needs more explanation as well.
> >
> >> And for secretmem, while using unmapped_pages_alloc() is easy, the free
> >> path becomes really complex because the actual page freeing for fd-based
> >> memory is deeply buried in the page cache code.
> >
> > Why is that a problem? You already hook into the page freeing path and
> > special case unmapped memory.
>
> But the proposal of unmapped_pages_free() would suggest this would no longer
> be the case?
I can see a check in the freeing path.
> But maybe we could, as a compromise, provide unmapped_pages_alloc() to get
> rid of the new __GFP flag, and provide unmapped_pages_free() to annotate
> places that are known to free unmapped memory explicitly, while the generic
> page freeing path would also keep the hook?
Honestly, I do not see a different option if those pages are to be
reference counted, unless they can use a destructor concept like hugetlb
pages. At least the secretmem use case cannot, AFAICS.
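
For reference, the hugetlb-style destructor idea means dispatching at free
time on a per-page destructor instead of checking every freed page. A
sketch only; the callback is hypothetical, and unmapped_pages_free() is the
name used earlier in this thread:

	/* registered via the compound page dtor table at allocation time,
	 * next to the existing HUGETLB_PAGE_DTOR style entries */
	static void unmapped_page_dtor(struct page *page)
	{
		/* return the pages to the unmapped cache, not the buddy */
		unmapped_pages_free(page, compound_order(page));
	}

That mechanism only covers compound allocations, though, which presumably
rules it out for secretmem's order-0 pages.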
--
Michal Hocko
SUSE Labs