Message-ID: <YmezWeMZSRNRfXyG@hyeyoo>
Date:   Tue, 26 Apr 2022 17:54:49 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Ira Weiny <ira.weiny@...el.com>,
        Kees Cook <keescook@...omium.org>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Rick Edgecombe <rick.p.edgecombe@...el.com>,
        Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
        x86@...nel.org
Subject: Re: [RFC PATCH 0/3] Prototype for direct map awareness in page
 allocator

On Thu, Jan 27, 2022 at 10:56:05AM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@...ux.ibm.com>
> 
> Hi,
> 
> This is a second attempt to make page allocator aware of the direct map
> layout and allow grouping of the pages that must be mapped at PTE level in
> the direct map.
>

Hello Mike, this may be a silly question...

Looking at the implementation of set_memory_*(), the helpers only split
PMD/PUD-sized entries. But why not _merge_ them when all entries
have the same permissions after changing the permission of an entry?

I think grouping __GFP_UNMAPPED allocations would help reduce
direct map fragmentation, but IMHO merging split entries seems better
done in those helpers than in the page allocator (a rough sketch
follows the example below).

For example:
	1) set_memory_ro() splits 1 RW PMD entry into 511 RW PTE
	entries and 1 RO PTE entry.

	2) Before freeing the pages, we call set_memory_rw() and we have
	512 RW PTE entries. Then we can merge them into 1 RW PMD entry.

	3) After 2) we can do the same thing with PMD-sized entries
	and merge them into 1 PUD entry if all 512 PMD entries have
	the same permissions.
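
To make the merge condition concrete, here is a small userspace model
of steps 1)-3), not kernel code: a "PMD" covering 512 "PTE" slots is
only folded back into one large entry when every slot carries identical
permissions. All the names (prot_t, can_merge_ptes, PROT_RW, ...) are
made up for illustration and do not correspond to the real set_memory
helpers.

/*
 * Userspace model of the merge check above, not kernel code.
 * A "PMD" covers 512 "PTE" slots; we only collapse the PTE table
 * back into a single large mapping when all slots agree.
 */
#include <stdbool.h>
#include <stdio.h>

#define PTRS_PER_PMD 512

typedef unsigned int prot_t;	/* toy permission: bit 0 = writable */
#define PROT_RO 0x0
#define PROT_RW 0x1

/* Return true when all 512 PTE-level entries share one protection. */
static bool can_merge_ptes(const prot_t pte[PTRS_PER_PMD])
{
	for (int i = 1; i < PTRS_PER_PMD; i++)
		if (pte[i] != pte[0])
			return false;
	return true;
}

int main(void)
{
	prot_t pte[PTRS_PER_PMD];

	/* 1) set_memory_ro() on one page: 511 RW PTEs + 1 RO PTE. */
	for (int i = 0; i < PTRS_PER_PMD; i++)
		pte[i] = PROT_RW;
	pte[0] = PROT_RO;
	printf("after split:   mergeable=%d\n", can_merge_ptes(pte));

	/* 2) set_memory_rw() before freeing: all 512 PTEs are RW again. */
	pte[0] = PROT_RW;
	printf("after restore: mergeable=%d\n", can_merge_ptes(pte));
	/* Here the helper could fold the PTE table back into one RW PMD. */

	return 0;
}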

[...]

> Mike Rapoport (3):
>   mm/page_alloc: introduce __GFP_UNMAPPED and MIGRATE_UNMAPPED
>   mm/secretmem: use __GFP_UNMAPPED to allocate pages
>   EXPERIMENTAL: x86/module: use __GFP_UNMAPPED in module_alloc
> 
>  arch/Kconfig                   |   7 ++
>  arch/x86/Kconfig               |   1 +
>  arch/x86/kernel/module.c       |   2 +-
>  include/linux/gfp.h            |  13 +++-
>  include/linux/mmzone.h         |  11 +++
>  include/trace/events/mmflags.h |   3 +-
>  mm/internal.h                  |   2 +-
>  mm/page_alloc.c                | 129 ++++++++++++++++++++++++++++++++-
>  mm/secretmem.c                 |   8 +-
>  9 files changed, 162 insertions(+), 14 deletions(-)
> 
> 
> base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
> -- 
> 2.34.1
> 

-- 
Thanks,
Hyeonggon
