Message-Id: <20211123174658.1728753-1-shakeelb@google.com>
Date: Tue, 23 Nov 2021 09:46:58 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: David Hildenbrand <david@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Shakeel Butt <shakeelb@...gle.com>
Subject: [PATCH] mm: thp: update split_queue_len correctly
Deferred THPs are split under memory pressure through the shrinker
callback, and splitting a THP during reclaim can fail for several
reasons: the THP cannot be locked, it is under writeback, or it has an
unexpected number of pins. Such pages are put back on the deferred
split list for later consideration. However, the kernel does not update
the deferred queue length when putting back pages whose split failed.
This patch fixes that.
Fixes: 364c1eebe453 ("mm: thp: extract split_queue_* into a struct")
Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
---
mm/huge_memory.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
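Not kernel code, but a minimal user-space model of the accounting the
hunks below are after. All names here (struct queue, push(), scan()) are
made up for illustration only; this sketch decrements the counter when
entries are detached onto the private list, so splicing the failures
back has to add num - split, mirroring the fix:

#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int splittable;		/* models "split would succeed" */
};

struct queue {
	struct node *head;
	unsigned long len;	/* models ds_queue->split_queue_len */
};

static void push(struct queue *q, int splittable)
{
	struct node *n = malloc(sizeof(*n));

	n->splittable = splittable;
	n->next = q->head;
	q->head = n;
	q->len++;
}

/*
 * Models the scan: move entries to a private list, try to split each
 * one, then splice the failures back and fix up the queue length.
 */
static unsigned long scan(struct queue *q)
{
	struct node *list = NULL, *n;
	unsigned long split = 0, num = 0;

	/* Detach everything onto a local list; in this model the
	 * detached entries no longer count toward the queue length. */
	while ((n = q->head)) {
		q->head = n->next;
		n->next = list;
		list = n;
		num++;
	}
	q->len -= num;

	/* Try splitting; successes are freed, failures go back. */
	while ((n = list)) {
		list = n->next;
		if (n->splittable) {
			free(n);
			split++;
		} else {
			n->next = q->head;
			q->head = n;
		}
	}
	q->len += num - split;	/* re-account the failed splits */
	return split;
}

int main(void)
{
	struct queue q = { NULL, 0 };

	push(&q, 1);
	push(&q, 0);
	push(&q, 1);
	printf("split %lu, queue len now %lu\n", scan(&q), q.len);
	return 0;
}

With the num - split adjustment the printed length is 1 (the one entry
that failed to split); without it the counter would drift away from the
number of entries actually on the queue.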
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..4fff9584815b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2809,7 +2809,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
unsigned long flags;
LIST_HEAD(list), *pos, *next;
struct page *page;
- int split = 0;
+ unsigned long split = 0, num = 0;
#ifdef CONFIG_MEMCG
if (sc->memcg)
@@ -2823,6 +2823,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
page = compound_head(page);
if (get_page_unless_zero(page)) {
list_move(page_deferred_list(page), &list);
+ num++;
} else {
/* We lost race with put_compound_page() */
list_del_init(page_deferred_list(page));
@@ -2847,6 +2848,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
list_splice_tail(&list, &ds_queue->split_queue);
+ ds_queue->split_queue_len += (num - split);
spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
/*
--
2.34.0.rc2.393.gf8c9666880-goog