Message-ID: <YyvmHT9TCwuxqD/t@kernel.org>
Date: Thu, 22 Sep 2022 07:35:41 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Anshuman Khandual <anshuman.khandual@....com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64/mm: fold check for KFENCE into can_set_direct_map()
On Thu, Sep 22, 2022 at 08:21:38AM +0530, Anshuman Khandual wrote:
>
> On 9/21/22 20:49, Mike Rapoport wrote:
> > Hi Anshuman,
> >
> > On Wed, Sep 21, 2022 at 05:09:19PM +0530, Anshuman Khandual wrote:
> >>
> >>
> >> On 9/21/22 13:18, Mike Rapoport wrote:
> >>> From: Mike Rapoport <rppt@...ux.ibm.com>
> >>>
> >>> KFENCE requires the linear map to be mapped at page granularity, so that it
> >>> is possible to protect/unprotect single pages, just like with
> >>> rodata_full and DEBUG_PAGEALLOC.
> >>>
> >>> Instead of repeating
> >>>
> >>> can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE)
> >>>
> >>> make can_set_direct_map() handle the KFENCE case.
> >>>
> >>> This also prevents potential false positives in kernel_page_present(),
> >>> which may return true for a non-present page if CONFIG_KFENCE is enabled.
> >>>
> >>> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> >>> ---
> >>> arch/arm64/mm/mmu.c | 8 ++------
> >>> arch/arm64/mm/pageattr.c | 8 +++++++-
> >>> 2 files changed, 9 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >>> index e7ad44585f40..c5065abec55a 100644
> >>> --- a/arch/arm64/mm/mmu.c
> >>> +++ b/arch/arm64/mm/mmu.c
> >>> @@ -535,7 +535,7 @@ static void __init map_mem(pgd_t *pgdp)
> >>> */
> >>> BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
> >>>
> >>> - if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> >>> + if (can_set_direct_map())
> >>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >>>
> >>> /*
> >>> @@ -1547,11 +1547,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
> >>>
> >>> VM_BUG_ON(!mhp_range_allowed(start, size, true));
> >>>
> >>> - /*
> >>> - * KFENCE requires linear map to be mapped at page granularity, so that
> >>> - * it is possible to protect/unprotect single pages in the KFENCE pool.
> >>> - */
> >>> - if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> >>> + if (can_set_direct_map())
> >>> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >>>
> >>> __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> >>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> >>> index 64e985eaa52d..d107c3d434e2 100644
> >>> --- a/arch/arm64/mm/pageattr.c
> >>> +++ b/arch/arm64/mm/pageattr.c
> >>> @@ -21,7 +21,13 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
> >>>
> >>> bool can_set_direct_map(void)
> >>> {
> >>> - return rodata_full || debug_pagealloc_enabled();
> >>> + /*
> >>> + * rodata_full, DEBUG_PAGEALLOC and KFENCE require linear map to be
> >>> + * mapped at page granularity, so that it is possible to
> >>> + * protect/unprotect single pages.
> >>> + */
> >>> + return rodata_full || debug_pagealloc_enabled() ||
> >>> + IS_ENABLED(CONFIG_KFENCE);
> >>> }
> >>
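
To illustrate the kernel_page_present() false positive the changelog
mentions, here is a simplified sketch of the pattern in
arch/arm64/mm/pageattr.c (not the exact upstream code;
linear_map_pte_valid() is only a placeholder for the page table walk):

bool kernel_page_present(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);

	/*
	 * Before the patch, CONFIG_KFENCE=y alone did not make
	 * can_set_direct_map() return true, so a KFENCE pool page that had
	 * been unmapped from the linear map was still reported as present
	 * here -- the false positive the changelog refers to.
	 */
	if (!can_set_direct_map())
		return true;

	/*
	 * Otherwise walk the linear map (pgd/p4d/pud/pmd/pte) for @addr and
	 * report whether the leaf entry is valid, as the real code does.
	 */
	return linear_map_pte_valid(addr);	/* placeholder helper */
}
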
> >> Changing can_set_direct_map() also changes behaviour for other functions such as
> >>
> >> set_direct_map_default_noflush()
> >> set_direct_map_invalid_noflush()
> >> __kernel_map_pages()
> >>
> >> Is that okay?
> >
> > Yes. Since KFENCE disables block mappings, these will actually change the
> > page tables.
> > Actually, before this change the test for can_set_direct_map() in these
> > functions was a false negative when CONFIG_KFENCE=y.
>
> Okay, but then shouldn't this have a "Fixes:" tag as well?
I feel that this is more of a theoretical bug and it's not worth
backporting to stable.
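
To make that false negative concrete, the set_direct_map_*() helpers
follow roughly this pattern (a simplified sketch modeled on
arch/arm64/mm/pageattr.c, not necessarily verbatim):

int set_direct_map_invalid_noflush(struct page *page)
{
	struct page_change_data data = {
		.set_mask	= __pgprot(0),
		.clear_mask	= __pgprot(PTE_VALID),
	};

	/*
	 * Before this change, CONFIG_KFENCE=y alone did not make
	 * can_set_direct_map() true, so we bailed out here and never
	 * invalidated the pte -- the false negative discussed above.
	 * With the check folded in, the (page-granular) linear map entry
	 * really gets updated.
	 */
	if (!can_set_direct_map())
		return 0;

	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   PAGE_SIZE, change_page_range, &data);
}

The other helpers mentioned above gate on the same check, so they behave
the same way.
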
> >>> static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
> >
--
Sincerely yours,
Mike.