Message-ID: <20241008223748.555845-1-ziy@nvidia.com>
Date: Tue, 8 Oct 2024 18:37:47 -0400
From: Zi Yan <ziy@...dia.com>
To: linux-mm@...ck.org,
"Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: Ryan Roberts <ryan.roberts@....com>,
Hugh Dickins <hughd@...gle.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Yang Shi <yang@...amperecomputing.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Yu Zhao <yuzhao@...gle.com>,
John Hubbard <jhubbard@...dia.com>,
linux-kernel@...r.kernel.org,
Zi Yan <ziy@...dia.com>
Subject: [RFC PATCH 0/1] Buddy allocator like folio split
Hi all,
Matthew and I have discussed a different way of splitting large folios.
Instead of splitting one folio uniformly into smaller folios of the same
order, a buddy-allocator-like split can reduce the total number of
resulting folios and the amount of memory needed for the multi-index
xarray split, and it keeps more large folios around after a split. In
addition, both Hugh[1] and Ryan[2] have made similar suggestions before.
The patch is an initial implementation. It passes simple order-9 to
lower-order split tests for anonymous and pagecache folios. There are
still a lot of TODOs before it can go upstream, but I would like to
gather feedback first.
Design
===
folio_split() splits a large folio in the same way as the buddy
allocator splits a large free page for allocation. The purpose is to
minimize the number of folios after the split. For example, if the user
wants to free the 3rd subpage of an order-9 folio, folio_split() will
split the order-9 folio as:
O-0, O-0, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is anon
O-1, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is pagecache,
since anon folios do not support order-1 yet.
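To make the example concrete, below is a small userspace sketch
(illustrative only, not code from the patch; buddy_split_orders() is a
made-up name) that derives the resulting orders by walking down from
the original order and keeping, at each step, the buddy half that does
not contain the target subpage:

#include <stdio.h>

/*
 * Print the orders produced by a buddy-allocator-like split of an
 * order-@order folio down to @new_order around subpage @index: at each
 * step the half that does not contain @index stays intact as a
 * lower-order folio and the other half is split further.
 */
static void buddy_split_orders(unsigned int order, unsigned int new_order,
			       unsigned long index)
{
	while (order > new_order) {
		unsigned long half = 1UL << (order - 1);

		order--;
		if (index >= half)	/* target is in the right half */
			index -= half;	/* so the left buddy stays intact */
		printf("O-%u ", order);	/* the buddy that stays intact */
	}
	printf("+ target O-%u\n", new_order);
}

int main(void)
{
	/* the example above: order-9 folio, 3rd subpage (index 2), order-0 */
	buddy_split_orders(9, 0, 2);
	return 0;
}

This prints "O-8 O-7 O-6 O-5 O-4 O-3 O-2 O-1 O-0 + target O-0", i.e. the
pagecache sequence above listed from the top down; for anon, the
remaining O-1 is split once more into two O-0 folios.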
The split process is similar to the existing approach:
1. Unmap all page mappings (split PMD mappings if they exist);
2. Split metadata like memcg, page owner, and page alloc tag;
3. Copy metadata in struct folio to the subpages, but instead of
   splitting the whole folio into multiple smaller ones of the same
   order in one shot, this approach splits the folio iteratively. Taking
   the example above, it first splits the original order-9 folio into
   two order-8 folios, then splits the left order-8 folio into two
   order-7 folios, and so on (see the sketch after this list);
4. Post-process the split folios, e.g. write mapping->i_pages for
   pagecache folios, adjust folio refcounts, and add the split folios to
   the corresponding lists;
5. Remap the split folios;
6. Unlock the split folios.
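Here is the iteration in step 3 as a trace, again a userspace sketch
with made-up names rather than the patch's code; each pass splits only
the folio that still contains the target subpage and leaves its buddy
as a finished folio:

#include <stdio.h>

/* Trace a buddy-style split of an order-@order folio around subpage @index. */
static void trace_split(unsigned int order, unsigned int new_order,
			unsigned long index)
{
	unsigned long start = 0;	/* first subpage of the folio being split */

	while (order > new_order) {
		unsigned long half = 1UL << (order - 1);

		printf("split the O-%u at subpage %lu into two O-%u buddies\n",
		       order, start, order - 1);
		if (index - start >= half)
			start += half;	/* keep the left buddy, descend right */
		/* else keep the right buddy, descend left */
		order--;
	}
	printf("done: the target is the O-%u folio at subpage %lu\n",
	       new_order, start);
}

int main(void)
{
	trace_split(9, 0, 2);	/* the order-9 example from the Design section */
	return 0;
}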
TODOs
===
1. For anon folios, the code needs to check the enabled mTHP orders and
   only split a large folio to enabled orders. But this might be up for
   debate, since if no mTHP order is enabled, the folio will be split to
   order-0 folios anyway.
2. Use xas_nomem() instead of xas_split_alloc() as Matthew suggested.
   The issue I am having is that when I use the xas_set_order(),
   xas_store(), xas_nomem(), xas_split() pattern, xas->xa_alloc is NULL
   instead of pointing to allocated memory. I must be getting something
   wrong and need some help from Matthew here (see the sketch after this
   list).
3. Use folio_split() in pagecache truncate and do more testing.
4. Add shmem support if it is needed.
5. Currently, the inputs of folio_split() are the original folio, the
   new order, and a page pointer that tells where to split to the new
   order. For truncate, better inputs might be two page pointers marking
   the start and end of the split, with folio_split() figuring out the
   new order itself.
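For item 2 above, the generic xas_nomem() retry idiom (as used for
plain stores elsewhere in the kernel) looks roughly like the sketch
below; it is for reference only, store_with_retry() is a made-up name,
and where xas_split() belongs in such a loop is exactly the part I have
not figured out yet:

#include <linux/xarray.h>
#include <linux/pagemap.h>

/*
 * Reference sketch of the xas_nomem() retry idiom, not the patch's
 * code.  Note that xas_nomem() only allocates a node when the previous
 * operation left the xa_state with an -ENOMEM error.
 */
static void store_with_retry(struct address_space *mapping,
			     struct folio *folio, unsigned int new_order)
{
	XA_STATE(xas, &mapping->i_pages, folio->index);

	xas_set_order(&xas, folio->index, new_order);
	do {
		xas_lock_irq(&xas);
		/* where does xas_split(&xas, folio, old_order) go? */
		xas_store(&xas, folio);
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, GFP_KERNEL));
}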
Any comments and/or suggestions are welcome. Thanks.
[1] https://lore.kernel.org/linux-mm/9dd96da-efa2-5123-20d4-4992136ef3ad@google.com/
[2] https://lore.kernel.org/linux-mm/cbb1d6a0-66dd-47d0-8733-f836fe050374@arm.com/
Zi Yan (1):
mm/huge_memory: buddy allocator like folio_split()
mm/huge_memory.c | 648 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 647 insertions(+), 1 deletion(-)
--
2.45.2