Message-ID: <ZCGnv96/g/9HPX+p@kernel.org>
Date: Mon, 27 Mar 2023 17:27:11 +0300
From: Mike Rapoport <rppt@...nel.org>
To: linux-mm@...ck.org
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Song Liu <song@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [RFC PATCH 0/5] Prototype for direct map awareness in page
allocator
(adding Mel)
On Wed, Mar 08, 2023 at 11:41:01AM +0200, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@...nel.org>
>
> Hi,
>
> This is a third attempt to make the page allocator aware of the direct
> map layout and to allow grouping of the pages that must be unmapped from
> the direct map.
>
> This is a new implementation of __GFP_UNMAPPED, a follow-up to this set:
>
> https://lore.kernel.org/all/20220127085608.306306-1-rppt@kernel.org
>
> but instead of using a migratetype to cache the unmapped pages, the
> current implementation adds a dedicated cache to serve __GFP_UNMAPPED
> allocations.
>
> The last two patches in the series demonstrate how __GFP_UNMAPPED can be
> used in two in-tree use cases.
>
> The first one switches secretmem to the new mechanism, which is a
> straightforward optimization.
>
> The second use case enables __GFP_UNMAPPED in x86::module_alloc(), which
> is essentially used as a way to allocate code pages and thus requires
> permission changes for base pages in the direct map.
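>
> Schematically (a sketch of mine, not a quote of the diff; the actual
> change in arch/x86/kernel/module.c may differ in detail), this boils
> down to adding the flag to the gfp mask that module_alloc() hands to
> __vmalloc_node_range():
>
> 	p = __vmalloc_node_range(size, MODULE_ALIGN,
> 				 MODULES_VADDR + get_module_load_offset(),
> 				 MODULES_END, gfp_mask | __GFP_UNMAPPED,
> 				 PAGE_KERNEL, VM_FLUSH_RESET_PERMS,
> 				 NUMA_NO_NODE, __builtin_return_address(0));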
>
> This set is x86 specific at the moment because other architectures either
> do not support set_memory APIs that split the direct^w linear map (e.g.
> PowerPC) or only enable set_memory APIs when the linear map uses base
> page size (like arm64).
>
> The patches are only lightly tested.
>
> == Motivation ==
>
> There are use cases that need to remove pages from the direct map, or at
> least map them with 4K granularity. Whenever this is done, e.g. with the
> set_memory/set_direct_map APIs, the PUD- and PMD-sized mappings in the
> direct map are split into smaller pages.
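>
> For instance (illustrative only), unmapping a single 4K page that lies
> inside a PMD-sized mapping of the direct map:
>
> 	/*
> 	 * Remove one base page from the direct map; the covering
> 	 * 2M mapping must be split into 4K PTEs first.
> 	 */
> 	set_direct_map_invalid_noflush(page);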
>
> To reduce the performance hit caused by the fragmentation of the direct map
> it makes sense to group and/or cache the pages removed from the direct
> map so that the split large pages won't be all over the place.
>
> There were RFCs for grouped page allocations for vmalloc permissions [1]
> and for using PKS to protect page tables [2], as well as an attempt to
> use a pool of large pages in secretmem [3], but these suggestions address
> each use case separately, while a common mechanism at the core mm level
> could serve all of them.
>
> == Implementation overview ==
>
> The pages that need to be removed from the direct map are grouped in a
> dedicated cache. When there is a page allocation request with
> __GFP_UNMAPPED set, it is redirected from __alloc_pages() to that cache
> using a new unmapped_alloc() function.
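>
> A minimal sketch of that redirect (assuming unmapped_alloc() takes the
> gfp mask and order; the actual hook in mm/page_alloc.c may be shaped
> differently):
>
> 	struct page *__alloc_pages(gfp_t gfp, unsigned int order,
> 				   int preferred_nid, nodemask_t *nodemask)
> 	{
> 		/* ... */
> 		if (unlikely(gfp & __GFP_UNMAPPED))
> 			return unmapped_alloc(gfp, order);
> 		/* ... the usual fast/slow paths ... */
> 	}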
>
> The cache is implemented as a buddy allocator, so it can handle
> high-order requests.
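>
> The cache could have roughly the following shape (the names here are
> hypothetical, only meant to make the structure concrete):
>
> 	#define UNMAPPED_MAX_ORDER	(PMD_SHIFT - PAGE_SHIFT)
>
> 	struct unmapped_cache {
> 		spinlock_t lock;
> 		/* one free list per order, like the main buddy allocator */
> 		struct list_head free_list[UNMAPPED_MAX_ORDER + 1];
> 		unsigned long nr_free;
> 	};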
>
> The cache starts empty, and whenever it does not have enough pages to
> satisfy an allocation request, it attempts to allocate a PMD_SIZE page
> to replenish itself. If a PMD_SIZE page cannot be allocated, the cache
> is replenished with a page of the highest order available. That page is
> removed from the direct map and added to the local buddy allocator.
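>
> The replenish path, as a sketch under the same hypothetical names (not
> the code from mm/unmapped-alloc.c):
>
> 	static struct page *unmapped_cache_replenish(gfp_t gfp)
> 	{
> 		unsigned int order = UNMAPPED_MAX_ORDER;
> 		struct page *page;
> 		int i;
>
> 		/* try a PMD_SIZE block first, then fall back to lower orders */
> 		while (!(page = alloc_pages(gfp, order)) && order)
> 			order--;
> 		if (!page)
> 			return NULL;
>
> 		/* pull the block out of the direct map page by page */
> 		for (i = 0; i < (1 << order); i++)
> 			set_direct_map_invalid_noflush(page + i);
> 		/* a TLB flush is needed before these pages are handed out */
>
> 		return page;	/* then fed into the local free lists */
> 	}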
>
> There is also a shrinker that releases pages from the unmapped cache when
> there is memory pressure in the system. When the shrinker releases a
> page, it is mapped back into the direct map.
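>
> Schematically, the shrinker side could look like this (helper names are
> made up; the count/scan split follows the standard shrinker API):
>
> 	static unsigned long unmapped_shrink_count(struct shrinker *sh,
> 						   struct shrink_control *sc)
> 	{
> 		return unmapped_cache_nr_free();
> 	}
>
> 	static unsigned long unmapped_shrink_scan(struct shrinker *sh,
> 						  struct shrink_control *sc)
> 	{
> 		unsigned long freed = 0;
>
> 		while (freed < sc->nr_to_scan) {
> 			struct page *page = unmapped_cache_take_page();
>
> 			if (!page)
> 				break;
> 			/* restore the direct map entry before freeing */
> 			set_direct_map_default_noflush(page);
> 			__free_page(page);
> 			freed++;
> 		}
>
> 		return freed;
> 	}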
>
> [1] https://lore.kernel.org/lkml/20210405203711.1095940-1-rick.p.edgecombe@intel.com
> [2] https://lore.kernel.org/lkml/20210505003032.489164-1-rick.p.edgecombe@intel.com
> [3] https://lore.kernel.org/lkml/20210121122723.3446-8-rppt@kernel.org
>
> Mike Rapoport (IBM) (5):
> mm: introduce __GFP_UNMAPPED and unmapped_alloc()
> mm/unmapped_alloc: add debugfs file similar to /proc/pagetypeinfo
> mm/unmapped_alloc: add shrinker
> EXPERIMENTAL: x86: use __GFP_UNMAPPED for module_alloc()
> EXPERIMENTAL: mm/secretmem: use __GFP_UNMAPPED
>
> arch/x86/Kconfig | 3 +
> arch/x86/kernel/module.c | 2 +-
> include/linux/gfp_types.h | 11 +-
> include/linux/page-flags.h | 6 +
> include/linux/pageblock-flags.h | 28 +++
> include/trace/events/mmflags.h | 10 +-
> mm/Kconfig | 4 +
> mm/Makefile | 1 +
> mm/internal.h | 24 +++
> mm/page_alloc.c | 39 +++-
> mm/secretmem.c | 26 +--
> mm/unmapped-alloc.c | 334 ++++++++++++++++++++++++++++++++
> mm/vmalloc.c | 2 +-
> 13 files changed, 459 insertions(+), 31 deletions(-)
> create mode 100644 mm/unmapped-alloc.c
>
>
> base-commit: fe15c26ee26efa11741a7b632e9f23b01aca4cc6
> --
> 2.35.1
>
--
Sincerely yours,
Mike.