Message-ID: <20240607083711.62833-2-david@redhat.com>
Date: Fri, 7 Jun 2024 10:37:10 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Yang <richard.weiyang@...il.com>
Subject: [PATCH v1 1/2] mm/highmem: reimplement totalhigh_pages() by walking zones
Can we get rid of the highmem ifdef in adjust_managed_page_count()?
Likely yes: we don't have that many totalhigh_pages() users, and none of
them seem to be particularly performance critical.
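
For context, here is a rough, from-memory sketch (not a verbatim copy) of
one such user, si_meminfo(), which fills struct sysinfo for /proc/meminfo
and friends; it only reads the value once per invocation, so a walk over
the populated zones is perfectly acceptable there:

	/*
	 * Rough sketch of si_meminfo(); some fields are omitted. Reading
	 * totalhigh_pages() here is clearly not a hot path.
	 */
	void si_meminfo(struct sysinfo *val)
	{
		val->totalram  = totalram_pages();
		val->freeram   = global_zone_page_state(NR_FREE_PAGES);
		val->totalhigh = totalhigh_pages();	/* will now walk populated zones */
		val->freehigh  = nr_free_highpages();	/* already walks populated zones */
		val->mem_unit  = PAGE_SIZE;
	}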
So let's implement totalhigh_pages() like nr_free_highpages(),
collecting information from all zones. This is now similar to what we do
in si_meminfo_node() to collect the per-node highmem page count.
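
A simplified sketch of that per-node pattern follows; the helper name
node_highmem_pages() is made up purely for illustration, si_meminfo_node()
open-codes something along these lines under CONFIG_HIGHMEM:

	/* Illustrative only: sum the managed pages of a node's highmem zones. */
	static unsigned long node_highmem_pages(pg_data_t *pgdat)
	{
		unsigned long managed_highpages = 0;
		int zone_type;

		for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {
			struct zone *zone = &pgdat->node_zones[zone_type];

			if (is_highmem(zone))
				managed_highpages += zone_managed_pages(zone);
		}
		return managed_highpages;
	}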
In the common case (single node, 3-4 zones), we really shouldn't care.
We could optimize a bit further (only walk ZONE_HIGHMEM and ZONE_MOVABLE
if required), but there doesn't seem to be a real need for that.
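
If that optimization were ever needed, it could look roughly like the
sketch below; this is purely illustrative and not part of this patch:

	/*
	 * Illustrative only: look at ZONE_HIGHMEM and ZONE_MOVABLE of each
	 * node instead of testing every populated zone.
	 */
	static unsigned long totalhigh_pages_optimized(void)
	{
		unsigned long pages = 0;
		pg_data_t *pgdat;

		for_each_online_pgdat(pgdat) {
			struct zone *high = &pgdat->node_zones[ZONE_HIGHMEM];
			struct zone *movable = &pgdat->node_zones[ZONE_MOVABLE];

			/* Unpopulated zones have zone_managed_pages() == 0. */
			pages += zone_managed_pages(high);
			/* ZONE_MOVABLE only counts if it sits in highmem. */
			if (is_highmem(movable))
				pages += zone_managed_pages(movable);
		}
		return pages;
	}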
Signed-off-by: David Hildenbrand <david@...hat.com>
---
include/linux/highmem-internal.h | 9 ++-------
mm/highmem.c | 16 +++++++++++++---
mm/page_alloc.c | 4 ----
3 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index a3028e400a9c6..65f865fbbac04 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -132,7 +132,7 @@ static inline void __kunmap_atomic(const void *addr)
}
unsigned int __nr_free_highpages(void);
-extern atomic_long_t _totalhigh_pages;
+unsigned long __totalhigh_pages(void);
static inline unsigned int nr_free_highpages(void)
{
@@ -141,12 +141,7 @@ static inline unsigned int nr_free_highpages(void)
static inline unsigned long totalhigh_pages(void)
{
- return (unsigned long)atomic_long_read(&_totalhigh_pages);
-}
-
-static inline void totalhigh_pages_add(long count)
-{
- atomic_long_add(count, &_totalhigh_pages);
+ return __totalhigh_pages();
}
static inline bool is_kmap_addr(const void *x)
diff --git a/mm/highmem.c b/mm/highmem.c
index bd48ba445dd41..3c4e9f8c26dcd 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -111,9 +111,6 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
}
#endif
-atomic_long_t _totalhigh_pages __read_mostly;
-EXPORT_SYMBOL(_totalhigh_pages);
-
unsigned int __nr_free_highpages(void)
{
struct zone *zone;
@@ -127,6 +124,19 @@ unsigned int __nr_free_highpages(void)
return pages;
}
+unsigned long __totalhigh_pages(void)
+{
+ unsigned long pages = 0;
+ struct zone *zone;
+
+ for_each_populated_zone(zone) {
+ if (is_highmem(zone))
+ pages += zone_managed_pages(zone);
+ }
+
+ return pages;
+}
+
static int pkmap_count[LAST_PKMAP];
static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kmap_lock);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fc98082a9cf9c..2224965ada468 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5794,10 +5794,6 @@ void adjust_managed_page_count(struct page *page, long count)
{
atomic_long_add(count, &page_zone(page)->managed_pages);
totalram_pages_add(count);
-#ifdef CONFIG_HIGHMEM
- if (PageHighMem(page))
- totalhigh_pages_add(count);
-#endif
}
EXPORT_SYMBOL(adjust_managed_page_count);
--
2.45.1