Message-Id: <20230329011712.3242298-7-zi.yan@sent.com>
Date: Tue, 28 Mar 2023 21:17:11 -0400
From: Zi Yan <zi.yan@...t.com>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
Yang Shi <shy828301@...il.com>, Yu Zhao <yuzhao@...gle.com>,
linux-mm@...ck.org
Cc: Zi Yan <ziy@...dia.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Ryan Roberts <ryan.roberts@....com>,
Michal Koutný <mkoutny@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
"Zach O'Keefe" <zokeefe@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: [PATCH v2 6/7] mm: truncate: split huge page cache page to a non-zero order if possible.
From: Zi Yan <ziy@...dia.com>
To minimize the number of pages after truncating a huge page, we do not
need to split it all the way down to order-0. The huge page has at most
three parts: the part before the offset, the part to be truncated, and
the part remaining at the end. Compute the greatest common divisor of
the three sizes and derive the new page order from it, so the huge page
is split to that order and the remaining pages stay as large and as few
as possible.
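As an illustrative example (hypothetical numbers, assuming 4KB base
pages): truncating bytes [16KB, 48KB) of a 2MB (order-9) page cache
folio gives offset = 16KB, length = 32KB, and remaining = 2MB - 48KB =
2000KB. gcd(16KB, 32KB, 2000KB) = 16KB, so new_order = ilog2(16KB /
PAGE_SIZE) = 2 and the folio is split into 128 order-2 (16KB) folios
instead of 512 order-0 pages.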
Signed-off-by: Zi Yan <ziy@...dia.com>
---
mm/truncate.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index 86de31ed4d32..817efd5e94b4 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -22,6 +22,7 @@
#include <linux/buffer_head.h> /* grr. try_to_release_page */
#include <linux/shmem_fs.h>
#include <linux/rmap.h>
+#include <linux/gcd.h>
#include "internal.h"
/*
@@ -211,7 +212,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
{
loff_t pos = folio_pos(folio);
- unsigned int offset, length;
+ unsigned int offset, length, remaining;
+ unsigned int new_order = folio_order(folio);
if (pos < start)
offset = start - pos;
@@ -222,6 +224,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
length = length - offset;
else
length = end + 1 - pos - offset;
+ remaining = folio_size(folio) - offset - length;
folio_wait_writeback(folio);
if (length == folio_size(folio)) {
@@ -236,11 +239,24 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
*/
folio_zero_range(folio, offset, length);
+ /*
+ * Use the greatest common divisor of offset, length, and remaining as
+ * the new subpage size and derive the new order from it, so that the
+ * folio is split into subpages that are as large as possible. Round up
+ * the gcd to PAGE_SIZE, otherwise ilog2 gives -1 when gcd/PAGE_SIZE is 0.
+ */
+ new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
+ PAGE_SIZE) / PAGE_SIZE);
+
+ /* order-1 THP not supported, downgrade to order-0 */
+ if (new_order == 1)
+ new_order = 0;
+
if (folio_has_private(folio))
folio_invalidate(folio, offset, length);
if (!folio_test_large(folio))
return true;
- if (split_folio(folio) == 0)
+ if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
return true;
if (folio_test_dirty(folio))
return false;
--
2.39.2