Message-Id: <20200918030051.650890-4-yuzhao@google.com>
Date: Thu, 17 Sep 2020 21:00:41 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>
Cc: Alex Shi <alex.shi@...ux.alibaba.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Yafang Shao <laoar.shao@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
Huang Ying <ying.huang@...el.com>,
Pankaj Gupta <pankaj.gupta.linux@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Konstantin Khlebnikov <koct9i@...il.com>,
Minchan Kim <minchan@...nel.org>,
Jaewon Kim <jaewon31.kim@...sung.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Yu Zhao <yuzhao@...gle.com>
Subject: [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru()

Now we have a total of three places that free LRU pages when their
reference counts drop to zero (after we drop the reference taken at
isolation). Before this patch, they all do:

	__ClearPageLRU()
	page_off_lru()
	del_page_from_lru_list()

After this patch, __ClearPageLRU() is called from within page_off_lru(),
so they become:

	page_off_lru()
		__ClearPageLRU()
	del_page_from_lru_list()

This change should have no side effects.
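For illustration only (not part of the patch itself), here is a sketch of
how one of the three free sites changes; it mirrors the
__page_cache_release() hunk below. Note that the "After" ordering holds
because page_off_lru(page) is an argument to del_page_from_lru_list() and
is therefore evaluated, clearing PG_lru, before the list removal runs:

	/* Before: every caller clears PG_lru by hand. */
	VM_BUG_ON_PAGE(!PageLRU(page), page);
	__ClearPageLRU(page);
	del_page_from_lru_list(page, lruvec, page_off_lru(page));

	/* After: page_off_lru() clears PG_lru on behalf of all callers. */
	VM_BUG_ON_PAGE(!PageLRU(page), page);
	del_page_from_lru_list(page, lruvec, page_off_lru(page));
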
Signed-off-by: Yu Zhao <yuzhao@...gle.com>
---
 include/linux/mm_inline.h | 1 +
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8fc71e9d7bb0..be9418425e41 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -92,6 +92,7 @@ static __always_inline enum lru_list page_off_lru(struct page *page)
 {
 	enum lru_list lru;
 
+	__ClearPageLRU(page);
 	if (PageUnevictable(page)) {
 		__ClearPageUnevictable(page);
 		lru = LRU_UNEVICTABLE;
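
[Not part of the patch: for reference, page_off_lru() with this hunk
applied would read roughly as below. The lines outside the hunk are
reconstructed from mm_inline.h of this era and may differ slightly in
the tree you apply against.]

	static __always_inline enum lru_list page_off_lru(struct page *page)
	{
		enum lru_list lru;

		/* Moved here so callers no longer clear PG_lru themselves. */
		__ClearPageLRU(page);
		if (PageUnevictable(page)) {
			__ClearPageUnevictable(page);
			lru = LRU_UNEVICTABLE;
		} else {
			lru = page_lru_base_type(page);
			if (PageActive(page)) {
				__ClearPageActive(page);
				lru += LRU_ACTIVE;
			}
		}
		return lru;
	}
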
diff --git a/mm/swap.c b/mm/swap.c
index 40bf20a75278..8362083f00c9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,6 @@ static void __page_cache_release(struct page *page)
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
@@ -895,7 +894,6 @@ void release_pages(struct page **pages, int nr)
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f257d2f61574..f9a186a96410 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 
 			if (unlikely(PageCompound(page))) {
--
2.28.0.681.g6f77f65b4e-goog