Message-ID: <1C274CBB-C809-442E-9575-858460C0F62D@nvidia.com>
Date: Wed, 14 Feb 2024 11:19:18 -0500
From: Zi Yan <ziy@...dia.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>,
linux-mm@...ck.org, "Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>, Yang Shi <shy828301@...il.com>,
Yu Zhao <yuzhao@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
"Michal Koutný" <mkoutny@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
"Zach O'Keefe" <zokeefe@...gle.com>, Hugh Dickins <hughd@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v4 6/7] mm: truncate: split huge page cache page to a
non-zero order if possible.
On 14 Feb 2024, at 5:43, Ryan Roberts wrote:
> On 13/02/2024 21:55, Zi Yan wrote:
>> From: Zi Yan <ziy@...dia.com>
>>
>> To minimize the number of pages after a huge page truncation, we do not
>> need to split it all the way down to order-0. The huge page has at most
>> three parts: the part before the offset, the part to be truncated, and
>> the part remaining at the end. Take the greatest common divisor of the
>> three sizes and compute the new page order from it, so we can split the
>> huge page to this order and keep the remaining pages as large and as
>> few as possible.
>>
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> ---
>> mm/truncate.c | 21 +++++++++++++++++++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/truncate.c b/mm/truncate.c
>> index 725b150e47ac..49ddbbf7a617 100644
>> --- a/mm/truncate.c
>> +++ b/mm/truncate.c
>> @@ -21,6 +21,7 @@
>> #include <linux/task_io_accounting_ops.h>
>> #include <linux/shmem_fs.h>
>> #include <linux/rmap.h>
>> +#include <linux/gcd.h>
>> #include "internal.h"
>>
>> /*
>> @@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
>> bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>> {
>> loff_t pos = folio_pos(folio);
>> - unsigned int offset, length;
>> + unsigned int offset, length, remaining;
>> + unsigned int new_order = folio_order(folio);
>>
>> if (pos < start)
>> offset = start - pos;
>> @@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>> length = length - offset;
>> else
>> length = end + 1 - pos - offset;
>> + remaining = folio_size(folio) - offset - length;
>>
>> folio_wait_writeback(folio);
>> if (length == folio_size(folio)) {
>> @@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
>> */
>> folio_zero_range(folio, offset, length);
>>
>> + /*
>> + * Use the greatest common divisor of offset, length, and remaining
>> + * as the smallest page size and compute the new order from it. So we
>> + * can truncate a subpage as large as possible. Round up gcd to
>> + * PAGE_SIZE, otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
>> + */
>> + new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
>> + PAGE_SIZE) / PAGE_SIZE);
>
> Given you have up to 2 regions remaining, isn't it possible that you want a
> different order for both those regions (or even multiple orders within the same
> region)? I guess you just choose gcd for simplicity?
Right. You raise the same concern as Hugh[1]. I am minimizing the number of
calls to split_huge_page_to_list_to_order(), whereas you and Hugh want to
minimize the number of folios after the split. Your approach gives a better
outcome after the split, but requires either multiple calls or a more
sophisticated implementation of page split[2]. We can probably revisit this
once splitting to any order gets wider use.
>> +
>> + /* order-1 THP not supported, downgrade to order-0 */
>> + if (new_order == 1)
>> + new_order = 0;
>
> I guess this would need to change if supporting order-1 file folios?
Right.
>> +
>> +
>> if (folio_has_private(folio))
>> folio_invalidate(folio, offset, length);
>> if (!folio_test_large(folio))
>> return true;
>> - if (split_folio(folio) == 0)
>> + if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
>
> I know you are discussing removing this patch, but since you created
> split_folio_to_order() wouldn't that be better here?
Sure. Will change the patch locally.
[1] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@google.com/
[2] https://lore.kernel.org/linux-mm/0AC0520E-1BD2-497E-A7ED-05394400BFC9@nvidia.com/
--
Best Regards,
Yan, Zi