Message-ID: <CAJZ5v0jY2vqZxdD7CaGUsCb2ePodamDnneOLHZcagCODn5kmrQ@mail.gmail.com>
Date: Thu, 29 Oct 2020 17:30:06 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Albert Ou <aou@...s.berkeley.edu>,
Andy Lutomirski <luto@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Christoph Lameter <cl@...ux.com>,
"David S. Miller" <davem@...emloft.net>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
David Rientjes <rientjes@...gle.com>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"H. Peter Anvin" <hpa@...or.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Ingo Molnar <mingo@...hat.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Len Brown <len.brown@...el.com>,
Michael Ellerman <mpe@...erman.id.au>,
Mike Rapoport <rppt@...ux.ibm.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Mackerras <paulus@...ba.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Pavel Machek <pavel@....cz>, Pekka Enberg <penberg@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Thomas Gleixner <tglx@...utronix.de>,
Vasily Gorbik <gor@...ux.ibm.com>,
Will Deacon <will@...nel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux PM <linux-pm@...r.kernel.org>,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
sparclinux@...r.kernel.org,
"the arch/x86 maintainers" <x86@...nel.org>
Subject: Re: [PATCH v2 2/4] PM: hibernate: make direct map manipulations more explicit
On Thu, Oct 29, 2020 at 5:19 PM Mike Rapoport <rppt@...nel.org> wrote:
>
> From: Mike Rapoport <rppt@...ux.ibm.com>
>
> When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
> not be present in the direct map and has to be explicitly mapped before
> it can be copied.
>
> On arm64 it is possible that a page would be removed from the direct map
> using set_direct_map_invalid_noflush(), but __kernel_map_pages() will
> refuse to map this page back if DEBUG_PAGEALLOC is disabled.
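
For reference, the arm64 behavior described above looks roughly like the
sketch below. This is a paraphrase of arch/arm64/mm/pageattr.c from
around this time, not a verbatim quote; the exact guard condition may
differ between kernel versions:

    /*
     * Sketch only: with DEBUG_PAGEALLOC disabled (and rodata_full
     * clear) this returns early, so a page that was invalidated with
     * set_direct_map_invalid_noflush() is never mapped back.
     */
    void __kernel_map_pages(struct page *page, int numpages, int enable)
    {
            if (!debug_pagealloc_enabled() && !rodata_full)
                    return;

            set_memory_valid((unsigned long)page_address(page),
                             numpages, enable);
    }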
>
> Introduce hibernate_map_page() that will explicitly use
> set_direct_map_{default,invalid}_noflush() in the ARCH_HAS_SET_DIRECT_MAP
> case and debug_pagealloc_map_pages() in the DEBUG_PAGEALLOC case.
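
(For context: debug_pagealloc_map_pages() is added by patch 1/4 of this
series. Presumably it is a thin wrapper of the following shape, gating
__kernel_map_pages() behind the static-key check:

    static inline void debug_pagealloc_map_pages(struct page *page,
                                                 int numpages, int enable)
    {
            /* Only touch the direct map if debug_pagealloc is active. */
            if (debug_pagealloc_enabled_static())
                    __kernel_map_pages(page, numpages, enable);
    }

so the DEBUG_PAGEALLOC path stays a no-op unless the feature is enabled
on the kernel command line.)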
>
> The remapping of the pages in safe_copy_page() presumes that it only
> changes protection bits in an existing PTE, so it is safe to ignore the
> return value of set_direct_map_{default,invalid}_noflush().
>
> Still, add a WARN_ON() so that future changes in the set_memory APIs
> will not silently break hibernation.
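
(The "only protection bits in an existing PTE" assumption can be seen in
arm64's implementation. The sketch below is from memory, so treat the
exact pgprot masks as illustrative rather than authoritative:

    int set_direct_map_default_noflush(struct page *page)
    {
            struct page_change_data data = {
                    .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
                    .clear_mask = __pgprot(PTE_RDONLY),
            };

            /*
             * Walks the existing linear-map PTE for this page and
             * rewrites its bits in place; it can only fail if the
             * page table walk itself fails.
             */
            return apply_to_page_range(&init_mm,
                                       (unsigned long)page_address(page),
                                       PAGE_SIZE, change_page_range,
                                       &data);
    }

Since the PTE is already present, apply_to_page_range() should not need
to allocate anything here, which is why ignoring the return value is
safe in practice.)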
>
> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
From the hibernation support perspective:
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> ---
> include/linux/mm.h | 12 ------------
> kernel/power/snapshot.c | 30 ++++++++++++++++++++++++++++--
> 2 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 1fc0609056dc..14e397f3752c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
> #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> extern void __kernel_map_pages(struct page *page, int numpages, int enable);
>
> -/*
> - * When called in DEBUG_PAGEALLOC context, the call should most likely be
> - * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
> - */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable)
> -{
> - __kernel_map_pages(page, numpages, enable);
> -}
> -
> static inline void debug_pagealloc_map_pages(struct page *page,
> int numpages, int enable)
> {
> @@ -2948,8 +2938,6 @@ static inline void debug_pagealloc_map_pages(struct page *page,
> extern bool kernel_page_present(struct page *page);
> #endif /* CONFIG_HIBERNATION */
> #else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable) {}
> static inline void debug_pagealloc_map_pages(struct page *page,
> int numpages, int enable) {}
> #ifdef CONFIG_HIBERNATION
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..054c8cce4236 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> static inline void hibernate_restore_unprotect_page(void *page_address) {}
> #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
>
> +static inline void hibernate_map_page(struct page *page, int enable)
> +{
> + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> + unsigned long addr = (unsigned long)page_address(page);
> + int ret;
> +
> + /*
> + * This should not fail because remapping a page here means
> + * that we only update protection bits in an existing PTE.
> + * It is still worth having a WARN_ON() here in case something
> + * changes and this is no longer the case.
> + */
> + if (enable)
> + ret = set_direct_map_default_noflush(page);
> + else
> + ret = set_direct_map_invalid_noflush(page);
> +
> + if (WARN_ON(ret))
> + return;
> +
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> + } else {
> + debug_pagealloc_map_pages(page, 1, enable);
> + }
> +}
> +
> static int swsusp_page_is_free(struct page *);
> static void swsusp_set_page_forbidden(struct page *);
> static void swsusp_unset_page_forbidden(struct page *);
> @@ -1355,9 +1381,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
> if (kernel_page_present(s_page)) {
> do_copy_page(dst, page_address(s_page));
> } else {
> - kernel_map_pages(s_page, 1, 1);
> + hibernate_map_page(s_page, 1);
> do_copy_page(dst, page_address(s_page));
> - kernel_map_pages(s_page, 1, 0);
> + hibernate_map_page(s_page, 0);
> }
> }
>
> --
> 2.28.0
>