Message-ID: <20251019081954.luz3mp5ghdhii3vr@master>
Date: Sun, 19 Oct 2025 08:19:54 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Balbir Singh <balbirs@...dia.com>
Cc: linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linux-mm@...ck.org, akpm@...ux-foundation.org,
David Hildenbrand <david@...hat.com>, Zi Yan <ziy@...dia.com>,
Joshua Hahn <joshua.hahnjy@...il.com>, Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>
Subject: Re: [v7 11/16] mm/migrate_device: add THP splitting during migration
On Wed, Oct 01, 2025 at 04:57:02PM +1000, Balbir Singh wrote:
[...]
> static int __folio_split(struct folio *folio, unsigned int new_order,
> 		struct page *split_at, struct page *lock_at,
>-		struct list_head *list, bool uniform_split)
>+		struct list_head *list, bool uniform_split, bool unmapped)
> {
> 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
>@@ -3765,13 +3757,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> 		 * is taken to serialise against parallel split or collapse
> 		 * operations.
> 		 */
>-		anon_vma = folio_get_anon_vma(folio);
>-		if (!anon_vma) {
>-			ret = -EBUSY;
>-			goto out;
>+		if (!unmapped) {
>+			anon_vma = folio_get_anon_vma(folio);
>+			if (!anon_vma) {
>+				ret = -EBUSY;
>+				goto out;
>+			}
>+			anon_vma_lock_write(anon_vma);
> 		}
> 		mapping = NULL;
>-		anon_vma_lock_write(anon_vma);
> 	} else {
> 		unsigned int min_order;
> 		gfp_t gfp;
>@@ -3838,7 +3832,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> 		goto out_unlock;
> 	}
>
>-	unmap_folio(folio);
>+	if (!unmapped)
>+		unmap_folio(folio);
>
> 	/* block interrupt reentry in xa_lock and spinlock */
> 	local_irq_disable();
>@@ -3925,10 +3920,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>
> 			next = folio_next(new_folio);
>
>+			zone_device_private_split_cb(folio, new_folio);
>+
> 			expected_refs = folio_expected_ref_count(new_folio) + 1;
> 			folio_ref_unfreeze(new_folio, expected_refs);
>
>-			lru_add_split_folio(folio, new_folio, lruvec, list);
>+			if (!unmapped)
>+				lru_add_split_folio(folio, new_folio, lruvec, list);
>
> 			/*
> 			 * Anonymous folio with swap cache.
>@@ -3959,6 +3957,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> 			__filemap_remove_folio(new_folio, NULL);
> 			folio_put_refs(new_folio, nr_pages);
> 		}
>+
>+		zone_device_private_split_cb(folio, NULL);
> 		/*
> 		 * Unfreeze @folio only after all page cache entries, which
> 		 * used to point to it, have been updated with new folios.
>@@ -3982,6 +3982,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>
> 	local_irq_enable();
>
>+	if (unmapped)
>+		return ret;
As the comments for __folio_split() and __split_huge_page_to_list_to_order()
mention:

 * The large folio must be locked
 * After splitting, the after-split folio containing @lock_at remains locked

But here we seem to change that locking contract: when @unmapped is true we
return right after local_irq_enable(), so we skip the tail of the function
that unlocks every after-split folio except the one containing @lock_at.
The caller therefore gets all the after-split folios back still locked.

Hmm.. I am not sure this is correct.
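
Just to illustrate how I read that contract, here is a minimal sketch of a
hypothetical caller (not code from the tree); note that
split_huge_page_to_list_to_order() passes @page as both @split_at and
@lock_at:

	/*
	 * Hypothetical caller, for illustration only. Assumes the
	 * caller already holds a reference on the large folio.
	 */
	static int split_and_keep_subpage_locked(struct page *page,
						 struct list_head *list)
	{
		struct folio *folio = page_folio(page);
		int ret;

		folio_lock(folio);	/* precondition: large folio locked */
		ret = split_huge_page_to_list_to_order(page, list, 0);
		/*
		 * Documented postcondition: only the (after-split) folio
		 * containing @page is still locked here; __folio_split()
		 * unlocks the other after-split folios before returning.
		 * With the early "return ret" above for the unmapped
		 * case, that unlock step never runs.
		 */
		folio_unlock(page_folio(page));
		return ret;
	}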
>+
> 	if (nr_shmem_dropped)
> 		shmem_uncharge(mapping->host, nr_shmem_dropped);
>
--
Wei Yang
Help you, Help me