Message-Id: <20211123190916.1738458-1-shakeelb@google.com>
Date: Tue, 23 Nov 2021 11:09:16 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: David Hildenbrand <david@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Shakeel Butt <shakeelb@...gle.com>
Subject: [PATCH v2] mm: thp: update split_queue_len correctly

Deferred THPs are split under memory pressure through the shrinker
callback, and splitting a THP during reclaim can fail for several
reasons, e.g. the THP cannot be locked, is under writeback, or has an
unexpected number of pins. Such pages are put back on the deferred
split list to be considered again later. However, the kernel does not
update the deferred queue size when putting back the pages whose split
failed. This patch fixes that.

Without this patch, split_queue_len can underflow. Since it is
unsigned, the shrinker will then always see THPs to split even when
there are none, and waste CPU scanning the empty list.

Fixes: 364c1eebe453 ("mm: thp: extract split_queue_* into a struct")
Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
---
Changes since v1:
- updated commit message
- incorporated Yang Shi's suggestion
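
For reviewers less familiar with the shrinker interface, here is a toy
userspace illustration (not kernel code; the names only mirror the ones
in mm/huge_memory.c and the numbers are made up) of why an underflowed
unsigned split_queue_len keeps the shrinker busy: the count callback
reports a huge number of objects, so scanning never stops even though
the list is empty.

#include <stdio.h>

/* stands in for ds_queue->split_queue_len, which is an unsigned long */
static unsigned long split_queue_len;

/* stands in for the shrinker's count callback reading split_queue_len */
static unsigned long count_objects(void)
{
	return split_queue_len;
}

int main(void)
{
	split_queue_len = 2;	/* two THPs queued for deferred split */
	split_queue_len -= 3;	/* one decrement too many: unsigned underflow */

	/* prints a huge value, so the shrinker keeps scanning an empty list */
	printf("objects to scan: %lu\n", count_objects());
	return 0;
}
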
mm/huge_memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..d393028681e2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2809,7 +2809,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 	unsigned long flags;
 	LIST_HEAD(list), *pos, *next;
 	struct page *page;
-	int split = 0;
+	unsigned long split = 0;
 
 #ifdef CONFIG_MEMCG
 	if (sc->memcg)
@@ -2847,6 +2847,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	list_splice_tail(&list, &ds_queue->split_queue);
+	ds_queue->split_queue_len -= split;
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 
 	/*
--
2.34.0.rc2.393.gf8c9666880-goog