Message-ID: <CAGsJ_4yMOC5M9rnfgv9TXWAm2aMDUVOdDYvNjzqzu_oj9DBn8Q@mail.gmail.com>
Date: Wed, 21 Sep 2022 21:00:28 +1200
From: Barry Song <21cnbao@...il.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64/mm: fold check for KFENCE into can_set_direct_map()
On Wed, Sep 21, 2022 at 8:26 PM Mike Rapoport <rppt@...nel.org> wrote:
>
> From: Mike Rapoport <rppt@...ux.ibm.com>
>
> KFENCE requires linear map to be mapped at page granularity, so that it
> is possible to protect/unprotect single pages, just like with
> rodata_full and DEBUG_PAGEALLOC.
>
> Instead of repeating
>
> can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE)
>
> make can_set_direct_map() handle the KFENCE case.
>
> This also prevents potential false positives in kernel_page_present()
> that may return true for non-present page if CONFIG_KFENCE is enabled.
>
> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
> arch/arm64/mm/mmu.c | 8 ++------
> arch/arm64/mm/pageattr.c | 8 +++++++-
> 2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e7ad44585f40..c5065abec55a 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -535,7 +535,7 @@ static void __init map_mem(pgd_t *pgdp)
> */
> BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
>
> - if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> + if (can_set_direct_map())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> @@ -1547,11 +1547,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> - /*
> - * KFENCE requires linear map to be mapped at page granularity, so that
> - * it is possible to protect/unprotect single pages in the KFENCE pool.
> - */
> - if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> + if (can_set_direct_map())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 64e985eaa52d..d107c3d434e2 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -21,7 +21,13 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
>
> bool can_set_direct_map(void)
> {
> - return rodata_full || debug_pagealloc_enabled();
> + /*
> + * rodata_full, DEBUG_PAGEALLOC and KFENCE require linear map to be
> + * mapped at page granularity, so that it is possible to
> + * protect/unprotect single pages.
> + */
> + return rodata_full || debug_pagealloc_enabled() ||
> + IS_ENABLED(CONFIG_KFENCE);
This might be irrelevant, but I wonder if rodata_full is too strict here, as
rodata_full is almost always true since RODATA_FULL_DEFAULT_ENABLED defaults
to y.
> }
>
> static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
> --
> 2.35.3
>
Thanks
Barry