Message-ID: <2AE055C4-BFFE-4B61-A96A-6DE227422C7B@nvidia.com>
Date: Fri, 11 Jul 2025 10:37:08 -0400
From: Zi Yan <ziy@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: Balbir Singh <balbirs@...dia.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/huge_memory: move unrelated code out of
__split_unmapped_folio()
On 11 Jul 2025, at 2:41, David Hildenbrand wrote:
> On 11.07.25 05:02, Zi Yan wrote:
>> remap(), folio_ref_unfreeze(), lru_add_split_folio() are not related to
>> splitting an unmapped folio. Move them out to the caller, so that
>> __split_unmapped_folio() only splits unmapped folios. This makes
>> __split_unmapped_folio() reusable.
>>
>> Convert VM_BUG_ON(mapping) to use VM_WARN_ON_ONCE_FOLIO().
>>
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> ---
>> Based on the prior discussion[1], this patch makes
>> __split_unmapped_folio() reusable for splitting unmapped folios without
>> adding a new boolean unmapped parameter to guard mapping related code.
>>
>> Another potential benefit is that __split_unmapped_folio() could be
>> called on after-split folios by __folio_split() to implement new split
>> methods. For example, at deferred split time, unmapped subpages can
>> scatter arbitrarily within a large folio; neither uniform nor
>> non-uniform split can maximize after-split folio orders for the mapped
>> subpages. Hopefully, applying __split_unmapped_folio() multiple times
>> can achieve the optimal split result.
>>
>> It passed mm selftests.
>>
>> [1] https://lore.kernel.org/linux-mm/94D8C1A4-780C-4BEC-A336-7D3613B54845@nvidia.com/
>> ---
>>
>> mm/huge_memory.c | 275 ++++++++++++++++++++++++-----------------------
>> 1 file changed, 139 insertions(+), 136 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3eb1c34be601..d97145dfa6c8 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3396,10 +3396,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> * order - 1 to new_order).
>> * @split_at: in buddy allocator like split, the folio containing @split_at
>> * will be split until its order becomes @new_order.
>> - * @lock_at: the folio containing @lock_at is left locked for caller.
>> - * @list: the after split folios will be added to @list if it is not NULL,
>> - * otherwise to LRU lists.
>> - * @end: the end of the file @folio maps to. -1 if @folio is anonymous memory.
>> * @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
>> * @mapping: @folio->mapping
>> * @uniform_split: if the split is uniform or not (buddy allocator like split)
>> @@ -3425,51 +3421,27 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> * @page, which is split in next for loop.
>> *
>> * After splitting, the caller's folio reference will be transferred to the
>> - * folio containing @page. The other folios may be freed if they are not mapped.
>> - *
>> - * In terms of locking, after splitting,
>> - * 1. uniform split leaves @page (or the folio contains it) locked;
>> - * 2. buddy allocator like (non-uniform) split leaves @folio locked.
>> - *
>> + * folio containing @page. The caller needs to unlock and/or free after-split
>> + * folios if necessary.
>> *
>> * For !uniform_split, when -ENOMEM is returned, the original folio might be
>> * split. The caller needs to check the input folio.
>> */
>> static int __split_unmapped_folio(struct folio *folio, int new_order,
>> - struct page *split_at, struct page *lock_at,
>> - struct list_head *list, pgoff_t end,
>> - struct xa_state *xas, struct address_space *mapping,
>> - bool uniform_split)
>> + struct page *split_at, struct xa_state *xas,
>> + struct address_space *mapping,
>> + bool uniform_split)
>
> Use a two-tab indent please (like we already do, I assume).
OK. I was using clang-format. It gave me this indentation.
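Will switch to the usual two-tab continuation indent in v2, i.e. something
like:

static int __split_unmapped_folio(struct folio *folio, int new_order,
		struct page *split_at, struct xa_state *xas,
		struct address_space *mapping, bool uniform_split)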
>
> [...]
>
>> @@ -3706,11 +3599,14 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> {
>> struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>> XA_STATE(xas, &folio->mapping->i_pages, folio->index);
>> + struct folio *next_folio = folio_next(folio);
>> bool is_anon = folio_test_anon(folio);
>> struct address_space *mapping = NULL;
>> struct anon_vma *anon_vma = NULL;
>> int order = folio_order(folio);
>> + struct folio *new_folio, *next;
>> int extra_pins, ret;
>> + int nr_shmem_dropped = 0;
>> pgoff_t end;
>> bool is_hzp;
>> @@ -3833,13 +3729,18 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> */
>> xas_lock(&xas);
>> xas_reset(&xas);
>> - if (xas_load(&xas) != folio)
>> + if (xas_load(&xas) != folio) {
>> + ret = -EAGAIN;
>> goto fail;
>> + }
>> }
>> /* Prevent deferred_split_scan() touching ->_refcount */
>> spin_lock(&ds_queue->split_queue_lock);
>> if (folio_ref_freeze(folio, 1 + extra_pins)) {
>> + struct address_space *swap_cache = NULL;
>> + struct lruvec *lruvec;
>> +
>> if (folio_order(folio) > 1 &&
>> !list_empty(&folio->_deferred_list)) {
>> ds_queue->split_queue_len--;
>> @@ -3873,18 +3774,120 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> }
>> }
>> - ret = __split_unmapped_folio(folio, new_order,
>> - split_at, lock_at, list, end, &xas, mapping,
>> - uniform_split);
>> + if (folio_test_swapcache(folio)) {
>> + if (mapping) {
>> + VM_WARN_ON_ONCE_FOLIO(mapping, folio);
>> + ret = -EINVAL;
>> + goto fail;
>> + }
>> +
>> + /*
>> + * a swapcache folio can only be uniformly split to
>> + * order-0
>> + */
>> + if (!uniform_split || new_order != 0) {
>> + ret = -EINVAL;
>> + goto fail;
>> + }
>> +
>> + swap_cache = swap_address_space(folio->swap);
>> + xa_lock(&swap_cache->i_pages);
>> + }
>> +
>> + /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
>> + lruvec = folio_lruvec_lock(folio);
>> +
>> + ret = __split_unmapped_folio(folio, new_order, split_at, &xas,
>> + mapping, uniform_split);
>> +
>> + /*
>> + * Unfreeze after-split folios and put them back on the right
>> + * list. @folio should be kept frozen until page cache entries
>> + * are updated with all the other after-split folios to prevent
>> + * others from seeing stale page cache entries.
>> + */
>> + for (new_folio = folio_next(folio); new_folio != next_folio;
>> + new_folio = next) {
>> + next = folio_next(new_folio);
>> +
>> + folio_ref_unfreeze(
>> + new_folio,
>> + 1 + ((mapping || swap_cache) ?
>> + folio_nr_pages(new_folio) :
>> + 0));
>
> While we are at it, is a way to make this look less than an artistic masterpiece? :)
>
> expected_refs = ...
> folio_ref_unfreeze(new_folio, expected_refs).
>
>
> Can we already make use of folio_expected_ref_count() at that point? Mapcount should be 0 and the folio should be properly setup (e.g., anon, swapcache) IIRC.
>
> So maybe
>
> expected_refs = folio_expected_ref_count(new_folio) + 1;
> folio_ref_unfreeze(new_folio, expected_refs).
>
> Would do?
I think so. Even further, I think we can probably get rid of
can_split_folio()'s pextra_pins and use folio_expected_ref_count() there too.

Before split:

	if (!can_split_folio(folio, 1)) {
		ret = -EAGAIN;
		goto out_unlock;
	}
	unmap_folio(folio);
	extra_pins = folio_expected_ref_count(folio) + 1;

After split:

1. new folio:

	expected_refs = folio_expected_ref_count(new_folio) + 1;
	folio_ref_unfreeze(new_folio, expected_refs);

2. original folio (it might have been split, so its reference count needs
   to be recomputed):

	expected_refs = folio_expected_ref_count(folio) + 1;
	folio_ref_unfreeze(folio, expected_refs);
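With your suggestion applied, the unfreeze loop in __folio_split() would
then look something like this (untested sketch; expected_refs is a new
local variable):

	int expected_refs;

	for (new_folio = folio_next(folio); new_folio != next_folio;
	     new_folio = next) {
		next = folio_next(new_folio);

		expected_refs = folio_expected_ref_count(new_folio) + 1;
		folio_ref_unfreeze(new_folio, expected_refs);

		lru_add_split_folio(folio, new_folio, lruvec, list);
		/* ... the rest of the loop body stays unchanged ... */
	}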
>
>> +
>> + lru_add_split_folio(folio, new_folio, lruvec, list);
>> +
>> + /* Some pages can be beyond EOF: drop them from cache */
>> + if (new_folio->index >= end) {
>> + if (shmem_mapping(mapping))
>> + nr_shmem_dropped +=
>> + folio_nr_pages(new_folio);
>
> Keep that on a single line.
OK.
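Will fold it back onto a single line in v2:

	if (shmem_mapping(mapping))
		nr_shmem_dropped += folio_nr_pages(new_folio);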
>
>> + else if (folio_test_clear_dirty(new_folio))
>> + folio_account_cleaned(
>> + new_folio,
>> + inode_to_wb(mapping->host));
>> + __filemap_remove_folio(new_folio, NULL);
>> + folio_put_refs(new_folio,
>> + folio_nr_pages(new_folio));
>> + } else if (mapping) {
>> + __xa_store(&mapping->i_pages, new_folio->index,
>> + new_folio, 0);
>> + } else if (swap_cache) {
>> + __xa_store(&swap_cache->i_pages,
>> + swap_cache_index(new_folio->swap),
>> + new_folio, 0);
>> + }
>> + }
>> + /*
>> + * Unfreeze @folio only after all page cache entries that used
>> + * to point to it have been updated with new folios.
>> + * Otherwise, a parallel folio_try_get() can grab @folio
>> + * and its caller can see stale page cache entries.
>> + */
>> + folio_ref_unfreeze(folio, 1 +
>> + ((mapping || swap_cache) ? folio_nr_pages(folio) : 0));
>
> Same as above probably.
Sure.
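So the final unfreeze of the original folio would become something like:

	expected_refs = folio_expected_ref_count(folio) + 1;
	folio_ref_unfreeze(folio, expected_refs);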
Thank you for the feedback. Will make all these changes and send v2.
Best Regards,
Yan, Zi