Message-ID: <aHfSTdoi/M9ORrXE@lstrano-desk.jf.intel.com>
Date: Wed, 16 Jul 2025 09:24:45 -0700
From: Matthew Brost <matthew.brost@...el.com>
To: Zi Yan <ziy@...dia.com>
CC: Balbir Singh <balbirs@...dia.com>, <linux-mm@...ck.org>,
	<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
	Karol Herbst <kherbst@...hat.com>, Lyude Paul <lyude@...hat.com>,
	Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
	Simona Vetter <simona@...ll.ch>, Jérôme Glisse <jglisse@...hat.com>,
	Shuah Khan <shuah@...nel.org>, David Hildenbrand <david@...hat.com>,
	Barry Song <baohua@...nel.org>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
	Ryan Roberts <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>,
	Peter Xu <peterx@...hat.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
	Jane Chu <jane.chu@...cle.com>, Alistair Popple <apopple@...dia.com>,
	Donet Tom <donettom@...ux.ibm.com>
Subject: Re: [v1 resend 08/12] mm/thp: add split during migration support

On Wed, Jul 16, 2025 at 07:19:10AM -0400, Zi Yan wrote:
> On 16 Jul 2025, at 1:34, Matthew Brost wrote:
> 
> > On Sun, Jul 06, 2025 at 11:47:10AM +1000, Balbir Singh wrote:
> >> On 7/6/25 11:34, Zi Yan wrote:
> >>> On 5 Jul 2025, at 21:15, Balbir Singh wrote:
> >>>
> >>>> On 7/5/25 11:55, Zi Yan wrote:
> >>>>> On 4 Jul 2025, at 20:58, Balbir Singh wrote:
> >>>>>
> >>>>>> On 7/4/25 21:24, Zi Yan wrote:
> >>>>>>>
> >>>>>>> s/pages/folio
> >>>>>>>
> >>>>>>
> >>>>>> Thanks, will make the changes
> >>>>>>
> >>>>>>> Why name it isolated if the folio is unmapped? Isolated folios often mean
> >>>>>>> they have been removed from the LRU lists; "isolated" here causes confusion.
> >>>>>>>
> >>>>>>
> >>>>>> Ack, will change the name
> >>>>>>
> >>>>>>
> >>>>>>>>   *
> >>>>>>>>   * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
> >>>>>>>>   * It is in charge of checking whether the split is supported or not and
> >>>>>>>> @@ -3800,7 +3799,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> >>>>>>>>   */
> >>>>>>>>  static int __folio_split(struct folio *folio, unsigned int new_order,
> >>>>>>>>  		struct page *split_at, struct page *lock_at,
> >>>>>>>> -		struct list_head *list, bool uniform_split)
> >>>>>>>> +		struct list_head *list, bool uniform_split, bool isolated)
> >>>>>>>>  {
> >>>>>>>>  	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> >>>>>>>>  	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
> >>>>>>>> @@ -3846,14 +3845,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> >>>>>>>>  		 * is taken to serialise against parallel split or collapse
> >>>>>>>>  		 * operations.
> >>>>>>>>  		 */
> >>>>>>>> -		anon_vma = folio_get_anon_vma(folio);
> >>>>>>>> -		if (!anon_vma) {
> >>>>>>>> -			ret = -EBUSY;
> >>>>>>>> -			goto out;
> >>>>>>>> +		if (!isolated) {
> >>>>>>>> +			anon_vma = folio_get_anon_vma(folio);
> >>>>>>>> +			if (!anon_vma) {
> >>>>>>>> +				ret = -EBUSY;
> >>>>>>>> +				goto out;
> >>>>>>>> +			}
> >>>>>>>> +			anon_vma_lock_write(anon_vma);
> >>>>>>>>  		}
> >>>>>>>>  		end = -1;
> >>>>>>>>  		mapping = NULL;
> >>>>>>>> -		anon_vma_lock_write(anon_vma);
> >>>>>>>>  	} else {
> >>>>>>>>  		unsigned int min_order;
> >>>>>>>>  		gfp_t gfp;
> >>>>>>>> @@ -3920,7 +3921,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> >>>>>>>>  		goto out_unlock;
> >>>>>>>>  	}
> >>>>>>>>
> >>>>>>>> -	unmap_folio(folio);
> >>>>>>>> +	if (!isolated)
> >>>>>>>> +		unmap_folio(folio);
> >>>>>>>>
> >>>>>>>>  	/* block interrupt reentry in xa_lock and spinlock */
> >>>>>>>>  	local_irq_disable();
> >>>>>>>> @@ -3973,14 +3975,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> >>>>>>>>
> >>>>>>>>  		ret = __split_unmapped_folio(folio, new_order,
> >>>>>>>>  				split_at, lock_at, list, end, &xas, mapping,
> >>>>>>>> -				uniform_split);
> >>>>>>>> +				uniform_split, isolated);
> >>>>>>>>  	} else {
> >>>>>>>>  		spin_unlock(&ds_queue->split_queue_lock);
> >>>>>>>>  fail:
> >>>>>>>>  		if (mapping)
> >>>>>>>>  			xas_unlock(&xas);
> >>>>>>>>  		local_irq_enable();
> >>>>>>>> -		remap_page(folio, folio_nr_pages(folio), 0);
> >>>>>>>> +		if (!isolated)
> >>>>>>>> +			remap_page(folio, folio_nr_pages(folio), 0);
> >>>>>>>>  		ret = -EAGAIN;
> >>>>>>>>  	}
> >>>>>>>
> >>>>>>> This "isolated" special handling does not look good; I wonder if there
> >>>>>>> is a way of letting the split code handle device private folios more
> >>>>>>> gracefully. It also causes confusion: why do "isolated/unmapped" folios
> >>>>>>> not need unmap_folio(), remap_page(), or unlock?
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>> There are two reasons for going down the current code path
> >>>>>
> >>>>> After thinking more, I think adding isolated/unmapped is not the right
> >>>>> way, since an unmapped folio is a very generic concept. If you add it,
> >>>>> one can easily misuse the folio split code by first unmapping a folio
> >>>>> and then trying to split it with unmapped = true. I do not think that is
> >>>>> supported, and your patch does not prevent it from happening in the future.
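> >>>>>
> >>>>> i.e., nothing would structurally prevent a future caller from doing
> >>>>> something like this (hypothetical misuse, using the new
> >>>>> __folio_split() signature from this patch):
> >>>>>
> >>>>> 	unmap_folio(folio);
> >>>>> 	/* folio is now unmapped, so pass isolated = true */
> >>>>> 	__folio_split(folio, 0, &folio->page, &folio->page, NULL,
> >>>>> 			true, true);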
> >>>>>
> >>>>
> >>>> I don't understand the misuse case you mention; I assume you mean someone
> >>>> can get the usage wrong? The responsibility is on the caller to do the
> >>>> right thing when calling the API with unmapped set.
> >>>
> >>> Before your patch, there was no use case of splitting unmapped folios.
> >>> Your patch only adds support for device private page split, not arbitrary
> >>> unmapped folio split. So using a generic isolated/unmapped parameter is not OK.
> >>>
> >>
> >> There is a use for splitting unmapped folios (see below)
> >>
> >>>>
> >>>>> You should teach different parts of folio split code path to handle
> >>>>> device private folios properly. Details are below.
> >>>>>
> >>>>>>
> >>>>>> 1. If the isolated check is not present, folio_get_anon_vma() will fail
> >>>>>>    and cause the split routine to return with -EBUSY.
> >>>>>
> >>>>> You do something below instead.
> >>>>>
> >>>>> if (!anon_vma && !folio_is_device_private(folio)) {
> >>>>> 	ret = -EBUSY;
> >>>>> 	goto out;
> >>>>> } else if (anon_vma) {
> >>>>> 	anon_vma_lock_write(anon_vma);
> >>>>> }
> >>>>>
> >>>>
> >>>> folio_get_anon_vma() cannot be called for unmapped folios. In our case the
> >>>> page has already been unmapped. Is there a reason why you mix
> >>>> anon_vma_lock_write() with the check for device private folios?
> >>>
> >>> Oh, I did not notice that anon_vma = folio_get_anon_vma(folio) is also
> >>> in the if (!isolated) branch. In that case, just do
> >>>
> >>> if (folio_is_device_private(folio)) {
> >>> ...
> >>> } else if (is_anon) {
> >>> ...
> >>> } else {
> >>> ...
> >>> }
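> >>>
> >>> Roughly like this (untested, just to show the shape; the locking
> >>> details need checking against the existing code):
> >>>
> >>> 	if (folio_is_device_private(folio)) {
> >>> 		/*
> >>> 		 * Already unmapped by the migration code; no
> >>> 		 * anon_vma lookup or write lock is needed.
> >>> 		 */
> >>> 		end = -1;
> >>> 		mapping = NULL;
> >>> 	} else if (folio_test_anon(folio)) {
> >>> 		anon_vma = folio_get_anon_vma(folio);
> >>> 		if (!anon_vma) {
> >>> 			ret = -EBUSY;
> >>> 			goto out;
> >>> 		}
> >>> 		end = -1;
> >>> 		mapping = NULL;
> >>> 		anon_vma_lock_write(anon_vma);
> >>> 	} else {
> >>> 		/* file-backed: existing mapping/i_mmap path */
> >>> 	}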
> >>>
> >>>>
> >>>>> People can know device private folio split needs a special handling.
> >>>>>
> >>>>> BTW, why can a device private folio also be anonymous? Does it mean that
> >>>>> if a page cache folio is migrated to device private, the kernel also
> >>>>> sees it as both device private and file-backed?
> >>>>>
> >>>>
> >>>> FYI: device private folios only work with anonymous private pages, hence
> >>>> the name device private.
> >>>
> >>> OK.
> >>>
> >>>>
> >>>>>
> >>>>>> 2. Going through unmap_folio() and remap_page() causes a full page table
> >>>>>>    walk, which the migrate_device API has already just done as part of the
> >>>>>>    migration. The entries under consideration are already migration entries
> >>>>>>    in this case. This is wasteful and in some cases unexpected.
> >>>>>
> >>>>> unmap_folio() already adds TTU_SPLIT_HUGE_PMD to try to split the
> >>>>> PMD mapping, which you did in migrate_vma_split_pages(). You could
> >>>>> probably teach either try_to_migrate() or try_to_unmap() to just split
> >>>>> a device private PMD mapping. Or, if that is not preferred,
> >>>>> you can simply call split_huge_pmd_address() when unmap_folio()
> >>>>> sees a device private folio.
> >>>>>
> >>>>> For remap_page(), you can simply return for device private folios,
> >>>>> as it currently does for non-anonymous folios.
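> >>>>>
> >>>>> e.g. (untested sketch):
> >>>>>
> >>>>> static void remap_page(struct folio *folio, unsigned long nr, int flags)
> >>>>> {
> >>>>> 	/* device private folios stay unmapped until migration completes */
> >>>>> 	if (!folio_test_anon(folio) || folio_is_device_private(folio))
> >>>>> 		return;
> >>>>> 	...
> >>>>> }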
> >>>>>
> >>>>
> >>>> Doing a full rmap walk does not make sense with unmap_folio() and
> >>>> remap_page(), because
> >>>>
> >>>> 1. We need to do a page table walk/rmap walk again.
> >>>> 2. We'll need special handling of migration <-> migration entries
> >>>>    in the rmap handling (set/remove migration ptes).
> >>>> 3. In this context, the code is already in the middle of migration,
> >>>>    so trying to do that again does not make sense.
> >>>
> >>> Why do the split in the middle of migration? The existing split code
> >>> assumes to-be-split folios are mapped.
> >>>
> >>> What prevents doing the split before migration?
> >>>
> >>
> >> The code does do a split prior to migration if THP selection fails.
> >>
> >> Please see https://lore.kernel.org/lkml/20250703233511.2028395-5-balbirs@nvidia.com/
> >> and the fallback part which calls split_folio()
> >>
> >> But the case under consideration is special, since the device needs to
> >> allocate the corresponding PFNs as well. The changelog mentions it:
> >>
> >> "The common case that arises is that after setup, during migrate
> >> the destination might not be able to allocate MIGRATE_PFN_COMPOUND
> >> pages."
> >>
> >> I can expand on it, because migrate_vma() is a multi-phase operation
> >>
> >> 1. migrate_vma_setup()
> >> 2. migrate_vma_pages()
> >> 3. migrate_vma_finalize()
> >>
> >> It can happen that when the destination PFNs are allocated, the destination
> >> is not able to allocate a large page, so we do the split in migrate_vma_pages().
> >>
> >> The pages have been unmapped and collected in migrate_vma_setup().
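> >>
> >> From the driver's point of view the flow is roughly (simplified,
> >> error handling omitted):
> >>
> >> 	ret = migrate_vma_setup(&args);	/* unmaps and collects src pages */
> >> 	/* driver allocates dst pages and copies; a THP-sized dst
> >> 	 * allocation may fail here */
> >> 	migrate_vma_pages(&args);	/* so the split has to happen here */
> >> 	migrate_vma_finalize(&args);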
> >>
> >> The next patch in the series, 9/12 (https://lore.kernel.org/lkml/20250703233511.2028395-10-balbirs@nvidia.com/),
> >> emulates a failure on the device side to allocate large pages in order to
> >> exercise the split, and 10/12 (https://lore.kernel.org/lkml/20250703233511.2028395-11-balbirs@nvidia.com/)
> >> tests it.
> >>
> >
> > Another use case I’ve seen is when a previously allocated high-order
> > folio, now in the free memory pool, is reallocated as a lower-order
> > page. For example, a 2MB fault allocates a folio, the memory is later
> 
> That is different. If the high-order folio is free, it should be split
> using split_page() from mm/page_alloc.c.
> 

Ah, ok. Let me see if that works - it would be easier.
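
Something like this, I think (untested; this assumes the pages sitting
in the pool are non-compound at that point, since split_page() rejects
compound pages):

	/* page/order are the pool entry's page and allocation order */
	split_page(page, order);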

> > freed, and then a 4KB fault reuses a page from that previously allocated
> > folio. This will actually be quite common in Xe / GPU SVM. In such
> > cases, the folio in an unmapped state needs to be split. I'd suggest a
> 
> This folio is unused, so ->flags, ->mapping, etc. are not set;
> __split_unmapped_folio() is not for it, unless you mean something
> different by a free folio.
> 

This is right, those fields should be clear.

Thanks for the tip.

Matt

> > migrate_device_* helper built on top of the core MM __split_folio
> > function added here.
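> >
> > Something like this (name and signature hypothetical, just to sketch
> > the idea):
> >
> > 	int migrate_device_split_folio(struct folio *folio)
> > 	{
> > 		/* folio was already unmapped by migrate_vma_setup() */
> > 		return __split_unmapped_folio(folio, 0, ...);
> > 	}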
> >
> 
> --
> Best Regards,
> Yan, Zi
