Message-Id: <20230117231632.2734737-2-minchan@kernel.org>
Date: Tue, 17 Jan 2023 15:16:31 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>, SeongJae Park <sj@...nel.org>,
Minchan Kim <minchan@...nel.org>
Subject: [PATCH 2/3] mm: return boolean for deactivate_page

Make deactivate_page() return true if the page was successfully deactivated.
The return value will be used for statistics in the next patch.

Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 include/linux/swap.h | 2 +-
 mm/swap.c            | 6 ++++--
 2 files changed, 5 insertions(+), 3 deletions(-)
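
For illustration only (not part of this series): a minimal sketch of how a
caller might use the new return value, roughly what the follow-up statistics
patch is expected to do with it. The helper and counter names below are
hypothetical.

	#include <linux/mm.h>
	#include <linux/swap.h>

	/* hypothetical counter; the real accounting lands in the next patch */
	static unsigned long nr_deactivated;

	static void cold_one_page(struct page *page)
	{
		/*
		 * deactivate_page() now reports whether @page was queued for
		 * the move to the inactive list, so the caller can count
		 * successful deactivations.
		 */
		if (deactivate_page(page))
			nr_deactivated++;
	}
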
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0ada46b595cd..803e5fa4cd86 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -409,7 +409,7 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+extern bool deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);
 
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..52532859c05b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -725,9 +725,9 @@ void deactivate_file_folio(struct folio *folio)
  *
  * deactivate_page() moves @page to the inactive list if @page was on the active
  * list and was not an unevictable page. This is done to accelerate the reclaim
- * of @page.
+ * of @page. If @page was deactivated successfully, returns true.
  */
-void deactivate_page(struct page *page)
+bool deactivate_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);
 
@@ -740,7 +740,9 @@ void deactivate_page(struct page *page)
 		fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
 		folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
 		local_unlock(&cpu_fbatches.lock);
+		return true;
 	}
+	return false;
 }
 
 /**
--
2.39.0.314.g84b9a713c41-goog