Message-Id: <20220814140534.363348-2-haiyue.wang@intel.com>
Date: Sun, 14 Aug 2022 22:05:32 +0800
From: Haiyue Wang <haiyue.wang@...el.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: akpm@...ux-foundation.org, david@...hat.com, linmiaohe@...wei.com,
ying.huang@...el.com, songmuchun@...edance.com,
naoya.horiguchi@...ux.dev, alex.sierra@....com,
Haiyue Wang <haiyue.wang@...el.com>
Subject: [PATCH v2 1/3] mm: revert handling Non-LRU pages returned by follow_page

The commit
3218f8712d6b ("mm: handling Non-LRU pages returned by vm_normal_pages")
does not handle follow_page() with the FOLL_GET flag correctly: FOLL_GET
makes follow_page() take a reference on the returned page via get_page(),
so a caller must not skip the page and return without calling put_page(),
otherwise the reference is leaked.

Revert the related changes to prepare for a clean patch that handles
non-LRU pages returned by follow_page().
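
For reference, the leaked-reference pattern looks like this (an
illustrative sketch, not part of this patch, mirroring the callers
changed below):

	/*
	 * With FOLL_GET, follow_page() takes a reference on the
	 * returned page via get_page(); the caller owns that
	 * reference and must release it with put_page().
	 */
	page = follow_page(vma, addr, FOLL_GET);
	if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
		goto out;	/* leaks the reference for device pages */

	/* A correct skip has to drop the reference first: */
	if (is_zone_device_page(page)) {
		put_page(page);
		goto out;
	}
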
Signed-off-by: Haiyue Wang <haiyue.wang@...el.com>
---
 mm/huge_memory.c | 2 +-
 mm/ksm.c         | 6 +++---
 mm/migrate.c     | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a7c1b344abe..2ee6d38a1426 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2963,7 +2963,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		/* FOLL_DUMP to ignore special (like zero) pages */
 		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
 
-		if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+		if (IS_ERR_OR_NULL(page))
 			continue;
 
 		if (!is_transparent_hugepage(page))
diff --git a/mm/ksm.c b/mm/ksm.c
index 42ab153335a2..fe3e0a39f73a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -475,7 +475,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 		cond_resched();
 		page = follow_page(vma, addr,
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
-		if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
@@ -560,7 +560,7 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 		goto out;
 
 	page = follow_page(vma, addr, FOLL_GET);
-	if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
+	if (IS_ERR_OR_NULL(page))
 		goto out;
 	if (PageAnon(page)) {
 		flush_anon_page(vma, page, addr);
@@ -2308,7 +2308,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 			if (ksm_test_exit(mm))
 				break;
 			*page = follow_page(vma, ksm_scan.address, FOLL_GET);
-			if (IS_ERR_OR_NULL(*page) || is_zone_device_page(*page)) {
+			if (IS_ERR_OR_NULL(*page)) {
 				ksm_scan.address += PAGE_SIZE;
 				cond_resched();
 				continue;
diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..3d5f0262ab60 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1672,7 +1672,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		goto out;
 
 	err = -ENOENT;
-	if (!page || is_zone_device_page(page))
+	if (!page)
 		goto out;
 
 	err = 0;
@@ -1863,7 +1863,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 		if (IS_ERR(page))
 			goto set_status;
 
-		if (page && !is_zone_device_page(page)) {
+		if (page) {
 			err = page_to_nid(page);
 			put_page(page);
 		} else {
--
2.37.2