Message-ID: <YmgOFa3FUUpiANMq@kernel.org>
Date: Tue, 26 Apr 2022 18:21:57 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ira Weiny <ira.weiny@...el.com>,
Kees Cook <keescook@...omium.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [RFC PATCH 0/3] Prototype for direct map awareness in page
allocator
Hello Hyeonggon,
On Tue, Apr 26, 2022 at 05:54:49PM +0900, Hyeonggon Yoo wrote:
> On Thu, Jan 27, 2022 at 10:56:05AM +0200, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@...ux.ibm.com>
> >
> > Hi,
> >
> > This is a second attempt to make page allocator aware of the direct map
> > layout and allow grouping of the pages that must be mapped at PTE level in
> > the direct map.
> >
>
> Hello Mike, it may be a silly question...
>
> Looking at the implementation of set_memory*(), they only split
> PMD/PUD-sized entries. But why not _merge_ them when all entries
> have the same permissions after changing the permission of an entry?
>
> I think grouping __GFP_UNMAPPED allocations would help reduce
> direct map fragmentation, but IMHO merging split entries seems better
> done in those helpers than in the page allocator.
Maybe. I didn't get as far as trying to merge split entries in the direct
map. IIRC, Kirill sent a patch for collapsing huge pages in the direct map
some time ago, but something still had to initiate the collapse.
> For example:
> 1) set_memory_ro() splits 1 RW PMD entry into 511 RW PTE
> entries and 1 RO PTE entry.
>
> 2) before freeing the pages, we call set_memory_rw() and we have
> 512 RW PTE entries. Then we can merge them into 1 RW PMD entry.
For this we need to check the permissions of all 512 pages to make sure we
can use a PMD entry to map them.
I'm not sure that doing such a scan in every set_memory call won't cause an
overall slowdown.
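
Just to illustrate what that scan would involve, here is a rough, untested,
x86-only sketch of a hypothetical helper (can_collapse_pmd_range() is not an
existing kernel function, it's only here to show the cost):

/*
 * Hypothetical helper, only to show the cost of the scan: walk the 512
 * PTEs backing one PMD-sized range and check that they all map 4K pages
 * with identical protection bits, i.e. that the range could in principle
 * be collapsed back into a single PMD mapping.
 */
static bool can_collapse_pmd_range(unsigned long addr)
{
	unsigned long base = addr & PMD_MASK;
	unsigned int level;
	pgprot_t prot;
	pte_t *pte;
	int i;

	pte = lookup_address(base, &level);
	if (!pte || level != PG_LEVEL_4K)
		return false;

	prot = pte_pgprot(*pte);

	for (i = 1; i < PTRS_PER_PTE; i++) {
		pte = lookup_address(base + i * PAGE_SIZE, &level);
		if (!pte || level != PG_LEVEL_4K ||
		    pgprot_val(pte_pgprot(*pte)) != pgprot_val(prot))
			return false;
	}

	return true;
}

Even a linear walk like this touches 512 PTEs per PMD range, and doing it on
every set_memory_*() call is the overhead I'd worry about.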
> 3) after 2) we can do the same thing with PMD-sized entries
> and merge them into 1 PUD entry if the 512 PMD entries have
> the same permissions.
>
> [...]
>
> > Mike Rapoport (3):
> > mm/page_alloc: introduce __GFP_UNMAPPED and MIGRATE_UNMAPPED
> > mm/secretmem: use __GFP_UNMAPPED to allocate pages
> > EXPERIMENTAL: x86/module: use __GFP_UNMAPPED in module_alloc
> >
> > arch/Kconfig | 7 ++
> > arch/x86/Kconfig | 1 +
> > arch/x86/kernel/module.c | 2 +-
> > include/linux/gfp.h | 13 +++-
> > include/linux/mmzone.h | 11 +++
> > include/trace/events/mmflags.h | 3 +-
> > mm/internal.h | 2 +-
> > mm/page_alloc.c | 129 ++++++++++++++++++++++++++++++++-
> > mm/secretmem.c | 8 +-
> > 9 files changed, 162 insertions(+), 14 deletions(-)
> >
> >
> > base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
> > --
> > 2.34.1
> >
>
> --
> Thanks,
> Hyeonggon
--
Sincerely yours,
Mike.