Message-ID: <20250407140138.162383-2-jfalempe@redhat.com>
Date: Mon, 7 Apr 2025 15:42:25 +0200
From: Jocelyn Falempe <jfalempe@...hat.com>
To: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Ryosuke Yasuoka <ryasuoka@...hat.com>,
Javier Martinez Canillas <javierm@...hat.com>,
Wei Yang <richard.weiyang@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
John Ogness <john.ogness@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
linux-mm@...ck.org,
dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Cc: Jocelyn Falempe <jfalempe@...hat.com>,
Simona Vetter <simona.vetter@...ll.ch>
Subject: [PATCH v3 1/2] mm/kmap: Add kmap_local_page_try_from_panic()

kmap_local_page() can be unsafe to call from a panic handler if
CONFIG_HIGHMEM is set and the page is in the highmem zone.
So add kmap_local_page_try_from_panic() to handle this case.

Suggested-by: Simona Vetter <simona.vetter@...ll.ch>
Reviewed-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Jocelyn Falempe <jfalempe@...hat.com>
---
v3:
* Add a comment in kmap_local_page_try_from_panic() (Thomas Gleixner)
include/linux/highmem-internal.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
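
For reference, a minimal sketch of how a panic handler could use the new
helper. The caller below (panic_draw_u32) is hypothetical and not part of
this patch; it simply skips drawing when the page cannot be mapped safely:

#include <linux/highmem.h>

/* Hypothetical caller, for illustration only. */
static void panic_draw_u32(struct page *page, unsigned int offset, u32 color)
{
	u32 *map;

	/* May return NULL if CONFIG_HIGHMEM is set and the page is in highmem. */
	map = kmap_local_page_try_from_panic(page);
	if (!map)
		return;

	map[offset / sizeof(u32)] = color;
	kunmap_local(map);
}
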
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index dd100e849f5e0..9a7683d79a4b1 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -73,6 +73,14 @@ static inline void *kmap_local_page(struct page *page)
 	return __kmap_local_page_prot(page, kmap_prot);
 }
 
+static inline void *kmap_local_page_try_from_panic(struct page *page)
+{
+	if (!PageHighMem(page))
+		return page_address(page);
+	/* If the page is in HighMem, it's not safe to kmap it. */
+	return NULL;
+}
+
 static inline void *kmap_local_folio(struct folio *folio, size_t offset)
 {
 	struct page *page = folio_page(folio, offset / PAGE_SIZE);
@@ -180,6 +188,11 @@ static inline void *kmap_local_page(struct page *page)
 	return page_address(page);
 }
 
+static inline void *kmap_local_page_try_from_panic(struct page *page)
+{
+	return page_address(page);
+}
+
 static inline void *kmap_local_folio(struct folio *folio, size_t offset)
 {
 	return page_address(&folio->page) + offset;
--
2.49.0