Message-Id: <20210930163258.3114404-1-willy@infradead.org>
Date: Thu, 30 Sep 2021 17:32:58 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Subject: [RFC] mm: Optimise put_pages_list()

Instead of calling put_page() one page at a time, pop pages off
the list if other references to them are still held, and pass the
remainder to free_unref_page_list() to be freed in one batch. This
should be a speed improvement, but I have no measurements to support
that. The function is also not very widely used today, so I can't
say I've tested it thoroughly. I'm only bothering with this patch
because I'd like the IOMMU code to use it:

https://lore.kernel.org/lkml/20210930162043.3111119-1-willy@infradead.org/

Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
---
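As context for the change above, a minimal sketch of how a caller
might batch frees through this function (free_page_array() and its
arguments are hypothetical, purely for illustration; the IOMMU series
linked above is the intended real user):

	#include <linux/list.h>
	#include <linux/mm.h>

	/* Hypothetical helper, for illustration only. */
	static void free_page_array(struct page **pages, unsigned int count)
	{
		unsigned int i;
		LIST_HEAD(list);

		for (i = 0; i < count; i++)
			list_add(&pages[i]->lru, &list);

		/*
		 * Drops one reference per page.  Pages still referenced
		 * elsewhere are unlinked; the rest stay on the list and
		 * are freed in one batch by free_unref_page_list().
		 */
		put_pages_list(&list);
	}

(One assumption in this sketch: the caller discards or reinitialises
the list afterwards rather than relying on it being left empty.)
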
mm/swap.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index af3cad4e5378..f6b38398fa6f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -139,13 +139,14 @@ EXPORT_SYMBOL(__put_page);
  */
 void put_pages_list(struct list_head *pages)
 {
-	while (!list_empty(pages)) {
-		struct page *victim;
+	struct page *page, *next;
 
-		victim = lru_to_page(pages);
-		list_del(&victim->lru);
-		put_page(victim);
+	list_for_each_entry_safe(page, next, pages, lru) {
+		if (!put_page_testzero(page))
+			list_del(&page->lru);
 	}
+
+	free_unref_page_list(pages);
 }
 EXPORT_SYMBOL(put_pages_list);
--
2.32.0