Message-ID: <CAPcyv4jdxKC=6Wh+T37R_sBee0vE-yfRqdX8EbmvLiWTwYhwrw@mail.gmail.com>
Date: Fri, 4 Feb 2022 11:18:26 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: "Weiny, Ira" <ira.weiny@...el.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Fenghua Yu <fenghua.yu@...el.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V8 40/44] memremap_pages: Add pgmap_protection_flag_invalid()
On Thu, Jan 27, 2022 at 9:55 AM <ira.weiny@...el.com> wrote:
>
> From: Ira Weiny <ira.weiny@...el.com>
>
> Some systems may be using pmem in ways that are known to be incompatible
"systems"? You mean kernel code paths? Is it really plural "ways" and
not just the kmap() problem?
> with the PKS implementation. One such example is the use of kmap() to
> create 'global' mappings.
What are the other examples? I.e. besides bugs, what are the
legitimate ways for the kernel to access memory that are now invalid
in the presence of kmap() access protections? They should all be
listed here. Like the CONFIG_64BIT direct page_address() use problem.
Are there others?
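For illustration only (not from the patch): on 64BIT there is no highmem, so
a direct page_address() lookup hands back the direct-map address without going
through any kmap-side hook, and nothing re-enables access to a PKS-protected
pgmap page before the copy. A minimal sketch of that problem path:

	/* Sketch only: 'page' is a pgmap-protected (PKS-covered) pmem page. */
	static void read_pmem_directly(struct page *page, void *buf, size_t len)
	{
		void *addr = page_address(page); /* direct map, no kmap hook runs */

		memcpy(buf, addr, len);          /* faults while PKS access is disabled */
	}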
> Rather than only reporting the invalid access on fault, provide a call
> to flag those uses immediately. This allows for a much better splat for
> debugging to occur.
It does? The faulting RIP will be in the splat, that's not good enough?
It just seems like a lost cause to try to get all the potential paths
that get access protection wrong to spend the time instrumenting this
self-incriminating call. Just let the relaxed mode WARNs "name and
shame" those code paths.
>
> This is also nice because even if no invalid access actually occurs,
> the invalid mapping can be fixed with kmap_local_page() rather than
> having to look for a different solution.
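As an illustrative sketch of the conversion described above (the helper name
copy_from_pmem_page() is made up here, not part of the series):

	static int copy_from_pmem_page(struct page *page, void *dst, size_t len)
	{
		void *src;

		/*
		 * kmap(page)/kunmap(page) would create the 'global' mapping
		 * that is incompatible with PKS protection.
		 */
		src = kmap_local_page(page);	/* thread-local mapping instead */
		memcpy(dst, src, len);
		kunmap_local(src);
		return 0;
	}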
>
> Define pgmap_protection_flag_invalid() and have it follow the policy set
> by pks_fault_mode.
>
> Signed-off-by: Ira Weiny <ira.weiny@...el.com>
>
> ---
> Changes for V8
> Split this from the fault mode patch
> ---
> include/linux/mm.h | 23 +++++++++++++++++++++++
> mm/memremap.c | 9 +++++++++
> 2 files changed, 32 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index e900df563437..3c0aa686b5bd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1162,6 +1162,7 @@ static inline bool devmap_protected(struct page *page)
> return false;
> }
>
> +void __pgmap_protection_flag_invalid(struct dev_pagemap *pgmap);
> void __pgmap_mk_readwrite(struct dev_pagemap *pgmap);
> void __pgmap_mk_noaccess(struct dev_pagemap *pgmap);
>
> @@ -1178,6 +1179,27 @@ static inline bool pgmap_check_pgmap_prot(struct page *page)
> return true;
> }
>
> +/*
> + * pgmap_protection_flag_invalid - Check and flag an invalid use of a pgmap
> + * protected page
> + *
> + * There are code paths which are known to not be compatible with pgmap
> + * protections. pgmap_protection_flag_invalid() is provided as a 'relief
> + * valve' to be used in those functions which are known to be incompatible.
> + *
> + * Thus an invalid use case can be flagged with more precise data rather than
> + * just flagging a fault. Like the fault handler code, this abandons the use of
> + * the PKS key and optionally allows the calling code path to continue based on
> + * the configuration of the memremap.pks_fault_mode command line
> + * (and/or sysfs) option.
> + */
> +static inline void pgmap_protection_flag_invalid(struct page *page)
> +{
> + if (!pgmap_check_pgmap_prot(page))
> + return;
> + __pgmap_protection_flag_invalid(page->pgmap);
> +}
> +
> static inline void pgmap_mk_readwrite(struct page *page)
> {
> if (!pgmap_check_pgmap_prot(page))
> @@ -1200,6 +1222,7 @@ bool pgmap_pks_fault_callback(struct pt_regs *regs, unsigned long address,
>
> static inline void __pgmap_mk_readwrite(struct dev_pagemap *pgmap) { }
> static inline void __pgmap_mk_noaccess(struct dev_pagemap *pgmap) { }
> +static inline void pgmap_protection_flag_invalid(struct page *page) { }
> static inline void pgmap_mk_readwrite(struct page *page) { }
> static inline void pgmap_mk_noaccess(struct page *page) { }
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 783b1cd4bb42..fd4b9b83b770 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -150,6 +150,15 @@ static const struct kernel_param_ops param_ops_pks_fault_modes = {
> __param_check(name, p, pks_fault_modes)
> module_param(pks_fault_mode, pks_fault_modes, 0644);
>
> +void __pgmap_protection_flag_invalid(struct dev_pagemap *pgmap)
> +{
> + if (pks_fault_mode == PKS_MODE_STRICT)
> + return;
> +
> + WARN_ONCE(1, "Invalid page map use");
> +}
> +EXPORT_SYMBOL_GPL(__pgmap_protection_flag_invalid);
> +
> bool pgmap_pks_fault_callback(struct pt_regs *regs, unsigned long address,
> bool write)
> {
> --
> 2.31.1
>
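For reference, a hedged sketch of how a known-incompatible path could use the
new call; the caller name here is hypothetical and not taken from the series:

	/* A path that must keep a global mapping flags itself up front. */
	static void *map_page_globally(struct page *page)
	{
		/*
		 * Report the invalid use immediately; whether this WARNs or
		 * stays silent follows memremap.pks_fault_mode.
		 */
		pgmap_protection_flag_invalid(page);
		return kmap(page);
	}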