Message-ID: <65b6c32a-7eb4-4023-94c0-968735b784f6@bytedance.com>
Date: Mon, 22 Sep 2025 19:36:08 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: David Hildenbrand <david@...hat.com>, hannes@...xchg.org,
hughd@...gle.com, mhocko@...e.com, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, lorenzo.stoakes@...cle.com,
ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
baohua@...nel.org, lance.yang@...ux.dev, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH 3/4] mm: thp: use folio_batch to handle THP splitting in
deferred_split_scan()

Hi David,

On 9/22/25 4:43 PM, David Hildenbrand wrote:
> On 19.09.25 05:46, Qi Zheng wrote:
>> From: Muchun Song <songmuchun@...edance.com>
>>
>> The maintenance of folio->_deferred_list is intricate because the
>> list node is reused for a local on-stack list.
>>
>> Here are some peculiarities:
>>
>> 1) When a folio is removed from its split queue and added to a local
>>    on-stack list in deferred_split_scan(), the ->split_queue_len isn't
>>    updated, leading to an inconsistency between it and the actual
>>    number of folios in the split queue.
>
> deferred_split_count() will now return "0" even though there might be
> concurrent scanning going on. I assume that's okay because we are not
> returning SHRINK_EMPTY (which is a difference).
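
For reference, the count side is basically just the following (quoting
current mainline from memory, so please double-check):

        static unsigned long deferred_split_count(struct shrinker *shrink,
                                                  struct shrink_control *sc)
        {
                struct pglist_data *pgdata = NODE_DATA(sc->nid);
                struct deferred_split *ds_queue = &pgdata->deferred_split_queue;

        #ifdef CONFIG_MEMCG
                if (sc->memcg)
                        ds_queue = &sc->memcg->deferred_split_queue;
        #endif
                return READ_ONCE(ds_queue->split_queue_len);
        }

so a transient 0 only makes the shrinker skip this round, whereas
SHRINK_EMPTY would additionally let shrink_slab_memcg() clear the
per-memcg shrinker bit.
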
>
>>
>> 2) When the folio is split via split_folio() later, it's removed from
>>    the local list while holding the split queue lock. At this point,
>>    the lock protects the local list, not the split queue.
>>
>> 3) To handle the race condition with a third party freeing or migrating
>>    the preceding folio, we must ensure there's always one safe folio
>>    (with a raised refcount) before it by delaying its folio_put(). More
>>    details can be found in commit e66f3185fa04 ("mm/thp: fix deferred
>>    split queue not partially_mapped"). It's rather tricky.
>>
>> We can use the folio_batch infrastructure to handle this clearly. In this
>> case, ->split_queue_len will be consistent with the real number of folios
>> in the split queue. If list_empty(&folio->_deferred_list) returns false,
>> it's clear the folio must be in its split queue (not in a local list
>> anymore).
>>
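
(For anyone skimming: the batched flow boils down to something like the
sketch below. This is only a simplified illustration of the idea, not
the patch itself; locking, the partially_mapped accounting and the
requeue-on-failure path are elided.)

        struct folio_batch fbatch;
        struct folio *folio, *next;
        int i;

        folio_batch_init(&fbatch);

        /* Under the split queue lock: pin each folio and unqueue it. */
        list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
                                 _deferred_list) {
                if (folio_try_get(folio))
                        folio_batch_add(&fbatch, folio);
                list_del_init(&folio->_deferred_list);
                ds_queue->split_queue_len--;    /* stays accurate */
                if (!folio_batch_space(&fbatch))
                        break;
        }

        /* After dropping the lock: each pinned folio is ours alone. */
        for (i = 0; i < folio_batch_count(&fbatch); i++) {
                folio = fbatch.folios[i];
                /* try to split; requeue under the lock on failure */
        }
        folio_batch_release(&fbatch);   /* drop the references we took */
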
>> In the future, we will reparent LRU folios during memcg offline to
>> eliminate dying memory cgroups, which requires reparenting the split
>> queue to its parent first. So this patch prepares for using
>> folio_split_queue_lock_irqsave(), as the memcg may change then.
>>
>> Signed-off-by: Muchun Song <songmuchun@...edance.com>
>> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
>> ---
>> mm/huge_memory.c | 88 +++++++++++++++++++++++-------------------------
>> 1 file changed, 42 insertions(+), 46 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d34516a22f5bb..ab16da21c94e0 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3760,21 +3760,22 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>                  struct lruvec *lruvec;
>>                  int expected_refs;
>> 
>> -                if (folio_order(folio) > 1 &&
>> -                    !list_empty(&folio->_deferred_list)) {
>> -                        ds_queue->split_queue_len--;
>> +                if (folio_order(folio) > 1) {
>> +                        if (!list_empty(&folio->_deferred_list)) {
>> +                                ds_queue->split_queue_len--;
>> +                                /*
>> +                                 * Reinitialize page_deferred_list after removing the
>> +                                 * page from the split_queue, otherwise a subsequent
>> +                                 * split will see list corruption when checking the
>> +                                 * page_deferred_list.
>> +                                 */
>> +                                list_del_init(&folio->_deferred_list);
>> +                        }
>>                          if (folio_test_partially_mapped(folio)) {
>>                                  folio_clear_partially_mapped(folio);
>>                                  mod_mthp_stat(folio_order(folio),
>>                                                MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>>                          }
>> -                        /*
>> -                         * Reinitialize page_deferred_list after removing the
>> -                         * page from the split_queue, otherwise a subsequent
>> -                         * split will see list corruption when checking the
>> -                         * page_deferred_list.
>> -                         */
>> -                        list_del_init(&folio->_deferred_list);
>>                  }
>
> BTW I am not sure about holding the split_queue_lock before freezing the
> refcount (comment above the freeze):
>
> freezing should properly sync against the folio_try_get(): one of them
> would fail.
>
> So not sure if that is still required. But I recall something nasty
> regarding that :)
I'm not sure either; this needs some investigation.
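
FWIW, my mental model of the refcount side (just an illustration of the
synchronization, not a claim about why the lock is held there):

        /* splitter */
        if (folio_ref_freeze(folio, expected_refs)) {
                /*
                 * The refcount was exactly expected_refs and is now 0,
                 * so any concurrent folio_try_get() fails from here on.
                 */
        }

        /* scanner */
        if (folio_try_get(folio)) {
                /*
                 * We hold an extra reference, so a concurrent
                 * folio_ref_freeze(folio, expected_refs) must fail.
                 */
        }

So the question is only whether something else still depends on the lock
being held across the freeze.
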
>
>
>>                  split_queue_unlock(ds_queue);
>>                  if (mapping) {
>> @@ -4173,40 +4174,48 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>>          struct pglist_data *pgdata = NODE_DATA(sc->nid);
>>          struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
>>          unsigned long flags;
>> -        LIST_HEAD(list);
>> -        struct folio *folio, *next, *prev = NULL;
>> -        int split = 0, removed = 0;
>> +        struct folio *folio, *next;
>> +        int split = 0, i;
>> +        struct folio_batch fbatch;
>> +        bool done;
>
> Is "done" really required? Can't we just use sc->nr_to_scan tos ee if
> there is work remaining to be done so we retry?
I think you are right, will do in the next version.
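
Roughly like this, I guess (untested, just the shape of it):

        folio_batch_init(&fbatch);
retry:
        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
                                 _deferred_list) {
                /* ... pin and unqueue as above ... */
                if (!--sc->nr_to_scan)
                        break;
                if (!folio_batch_space(&fbatch))
                        break;
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

        /* ... split the folios in fbatch ... */

        if (sc->nr_to_scan && !list_empty(&ds_queue->split_queue))
                goto retry;

(the unlocked list_empty() check is only a heuristic to avoid retrying
on an empty queue)
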
>
>>  #ifdef CONFIG_MEMCG
>>          if (sc->memcg)
>>                  ds_queue = &sc->memcg->deferred_split_queue;
>>  #endif
>> 
>> +        folio_batch_init(&fbatch);
>> +retry:
>> +        done = true;
>>          spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>>          /* Take pin on all head pages to avoid freeing them under us */
>>          list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
>>                                   _deferred_list) {
>>                  if (folio_try_get(folio)) {
>> -                        list_move(&folio->_deferred_list, &list);
>> -                } else {
>> +                        folio_batch_add(&fbatch, folio);
>> +                } else if (folio_test_partially_mapped(folio)) {
>>                          /* We lost race with folio_put() */
>> -                        if (folio_test_partially_mapped(folio)) {
>> -                                folio_clear_partially_mapped(folio);
>> -                                mod_mthp_stat(folio_order(folio),
>> -                                              MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>> -                        }
>> -                        list_del_init(&folio->_deferred_list);
>> -                        ds_queue->split_queue_len--;
>> +                        folio_clear_partially_mapped(folio);
>> +                        mod_mthp_stat(folio_order(folio),
>> +                                      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>>                  }
>> +                list_del_init(&folio->_deferred_list);
>> +                ds_queue->split_queue_len--;
>>                  if (!--sc->nr_to_scan)
>>                          break;
>> +                if (folio_batch_space(&fbatch) == 0) {
>
> Nit: if (!folio_batch_space(&fbatch)) {
OK, will do.
Thanks,
Qi
>
>