Message-ID: <f2702360-d560-95da-a93d-16fae1dbf766@suse.cz>
Date: Mon, 27 Mar 2023 16:31:45 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Michal Hocko <mhocko@...e.com>, Mike Rapoport <rppt@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Song Liu <song@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [RFC PATCH 1/5] mm: introduce __GFP_UNMAPPED and unmapped_alloc()
On 3/27/23 15:43, Michal Hocko wrote:
> On Sat 25-03-23 09:38:12, Mike Rapoport wrote:
>> On Fri, Mar 24, 2023 at 09:37:31AM +0100, Michal Hocko wrote:
>> > On Wed 08-03-23 11:41:02, Mike Rapoport wrote:
>> > > From: "Mike Rapoport (IBM)" <rppt@...nel.org>
>> > >
>> > > When the set_memory or set_direct_map APIs are used to change
>> > > attributes or permissions for chunks of several pages, the large PMD
>> > > that maps these pages in the direct map must be split. Fragmenting the
>> > > direct map in this manner causes TLB pressure and, eventually,
>> > > performance degradation.
>> > >
>> > > To avoid excessive direct map fragmentation, add the ability to
>> > > allocate "unmapped" pages with a __GFP_UNMAPPED flag that causes the
>> > > allocated pages to be removed from the direct map and served from a
>> > > cache of unmapped pages.
>> > >
>> > > This cache is replenished with higher-order pages, with a preference
>> > > for PMD_SIZE pages when possible, so that there will be fewer splits
>> > > of large pages in the direct map.
>> > >
>> > > The cache is implemented as a buddy allocator, so it can serve
>> > > high-order allocations of unmapped pages.
>> >
>> > Why do we need a dedicated gfp flag for all this when a dedicated
>> > allocator is used anyway? What prevents users from calling
>> > unmapped_pages_{alloc,free}?
>>
>> Using unmapped_pages_{alloc,free} adds complexity for the users, which
>> IMO outweighs the cost of a dedicated gfp flag.
>
> Aren't those users rare and very special anyway?
I think it's mostly about the freeing that can happen from a generic
context not aware of the special allocation, so it's not about how rare it
is, but how complex it would be to exhaustively determine those contexts
and do something in them.
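
To make that concrete, a rough sketch (not code from the RFC;
page_is_unmapped() is a made-up helper standing in for however the freeing
hook actually recognizes these pages, and I'm guessing at the
unmapped_pages_free() signature):

	/* with a hook in the generic freeing path, any context can do: */
	__free_pages(page, order);	/* hook handles unmapped pages */

	/* with only a dedicated API, every freeing context would need: */
	if (page_is_unmapped(page))	/* hypothetical helper */
		unmapped_pages_free(page, order);
	else
		__free_pages(page, order);

And a context like the page cache's freeing path simply has no way of
knowing which branch it is in without such a check.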
>> For modules we'd have to make x86::module_{alloc,free}() take care of
>> mapping and unmapping the allocated pages in the module's virtual address
>> range. This might also become relevant for other architectures in the
>> future, and then we'd have several complex module_alloc()s.
>
> The module_alloc use is lacking any justification. More context would be
> more than useful. Also vmalloc support for the proposed __GFP_UNMAPPED
> likely needs more explanation as well.
>
>> And for secretmem, while using unmapped_pages_alloc() is easy, the free
>> path becomes really complex because the actual page freeing for fd-based
>> memory is deeply buried in the page cache code.
>
> Why is that a problem? You already hook into the page freeing path and
> special-case unmapped memory.
But the proposal of unmapped_pages_free() would suggest this would no longer
be the case?
But maybe we could, as a compromise, provide unmapped_pages_alloc() to get
rid of the new __GFP flag, and provide unmapped_pages_free() to annotate
the places that are known to free unmapped memory explicitly, while the
generic page freeing keeps the hook?
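
Roughly (just a sketch of the API surface, signatures guessed):

	/* explicit allocation instead of alloc_pages(gfp | __GFP_UNMAPPED) */
	struct page *unmapped_pages_alloc(gfp_t gfp, unsigned int order);

	/* explicit free for callers that know the pages are unmapped */
	void unmapped_pages_free(struct page *page, unsigned int order);

with the generic freeing path keeping its check, so contexts unaware of the
allocation keep working, and unmapped_pages_free() merely documents the
ones that are aware.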
>> My gut feeling is that for PKS using a gfp flag would save a lot of hassle
>> as well.
>
> Well, my take on this is that this is not generic page allocator
> functionality. It is clearly an allocator on top of the page allocator.
> In general, gfp flags are scarce, and the convenience argument usually
> backfires later on in hard-to-predict ways, so I've learned to be careful
> here. I am not saying this is a no-go, but right now I do not see any
> actual advantage. The vmalloc use case could be interesting in that
> regard, but it is not really clear to me whether this is a good idea in
> the first place.
>