Message-ID: <20240926050647.5653-1-zhaoyang.huang@unisoc.com>
Date: Thu, 26 Sep 2024 13:06:47 +0800
From: "zhaoyang.huang" <zhaoyang.huang@...soc.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand
<david@...hat.com>,
Matthew Wilcox <willy@...radead.org>, Yu Zhao
<yuzhao@...gle.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Zhaoyang Huang <huangzhaoyang@...il.com>, <steve.kang@...soc.com>
Subject: [PATCHv2] mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
The bits of LRU_REFS_MASK are not inherited during migration, which leads
the new folio to start from tier 0 when MGLRU is enabled. Carry over as
many bits of folio->flags as possible, since migrations introduced by
compaction and alloc_contig_range do happen from time to time.
Suggested-by: Yu Zhao <yuzhao@...gle.com>
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
---
v2: modified as suggested by Yu Zhao
---
include/linux/mm_inline.h | 10 ++++++++++
mm/migrate.c | 1 +
2 files changed, 11 insertions(+)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f4fe593c1400..6f801c7b36e2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -291,6 +291,12 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
return true;
}
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+ unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
+
+ set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
+}
#else /* !CONFIG_LRU_GEN */
static inline bool lru_gen_enabled(void)
@@ -313,6 +319,10 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
return false;
}
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+
+}
#endif /* CONFIG_LRU_GEN */
static __always_inline
diff --git a/mm/migrate.c b/mm/migrate.c
index 923ea80ba744..60c97e235ae7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -618,6 +618,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
if (folio_test_idle(folio))
folio_set_idle(newfolio);
+ folio_migrate_refs(newfolio, folio);
/*
* Copy NUMA information to the new page, to prevent over-eager
* future migrations of this same page.
--
2.25.1