Message-ID: <0D97A437-56A9-4C1D-9759-EAF1F7DA5AE7@nvidia.com>
Date: Tue, 04 Mar 2025 12:18:49 -0500
From: Zi Yan <ziy@...dia.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Liu Shixin <liushixin2@...wei.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>, Barry Song <baohua@...nel.org>,
 David Hildenbrand <david@...hat.com>,
 Kefeng Wang <wangkefeng.wang@...wei.com>, Lance Yang <ioworker0@...il.com>,
 Ryan Roberts <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>,
 Charan Teja Kalla <quic_charante@...cinc.com>, linux-kernel@...r.kernel.org,
 Shivank Garg <shivankg@....com>
Subject: Re: [PATCH v2] mm/migrate: fix shmem xarray update during migration

On 4 Mar 2025, at 4:47, Hugh Dickins wrote:

> On Fri, 28 Feb 2025, Zi Yan wrote:
>
>> Pagecache uses multi-index entries for large folios, and so does shmem.
>> Only the swap cache still stores multiple entries for a single large folio.
>> Commit fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>> fixed the swap cache but got shmem wrong by storing multiple entries for
>> a large shmem folio. Fix it by storing a single entry for a shmem
>> folio.
>>
>> Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>> Reported-by: Liu Shixin <liushixin2@...wei.com>
>> Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> Reviewed-by: Shivank Garg <shivankg@....com>
>
> It's a great find (I think), and your commit message is okay:
> but unless I'm much mistaken, NAK to the patch itself.

Got it. Thank you for the review.
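(Side note for the archives: the distinction the commit message relies on
can be sketched like this. These are the real XArray helpers, but this is
a rough sketch rather than actual kernel code; locking, allocation retries,
and error handling are all omitted.)

#include <linux/xarray.h>

/* Page cache and shmem: ONE multi-index entry covering 2^order slots. */
static void store_as_multi_index(struct xarray *xa, pgoff_t index,
                                 unsigned int order, struct folio *folio)
{
        XA_STATE_ORDER(xas, xa, index, order);

        xas_store(&xas, folio);         /* one store covers the whole range */
}

/* Swap cache: 2^order plain entries, one per page. */
static void store_per_index(struct xarray *xa, pgoff_t index,
                            unsigned int order, struct folio *folio)
{
        XA_STATE(xas, xa, index);
        unsigned long i;

        for (i = 0; i < (1UL << order); i++) {
                xas_store(&xas, folio);
                xas_next(&xas);
        }
}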

>
> First, I say "(I think)" there, because I don't actually know what the
> loop writing the same folio nr times to the multi-index entry does to
> the xarray: I can imagine it as being completely harmless, just nr
> times more work than was needed.
>
> But I guess it does something bad, since Matthew was horrified,
> and we have all found that your patch appears to improve behaviour
> (or at least improve behaviour in the context of your folio_split()
> series: none of us noticed a problem before that, but it may be
> that your new series is widening our exposure to existing bugs).
>
> Maybe your original patch, with the shmem_mapping(mapping) check there,
> was good, and it's only wrong when changed to !folio_test_anon(folio);
> but TBH I find it too confusing, with the conditionals the way they are.
> See my preferred alternative below.
>
> The vital point is that multi-index entries are not used in swap cache:
> whether the folio in question originates from anon or from shmem.  And
> it's easier to understand once you remember that a shmem folio is never
> in both page cache and swap cache at the same time (well, there may be an
> instant of transition from one to the other while that folio is held locked) -
> once it's in swap cache, folio->mapping is NULL and it's no longer
> recognizable as from a shmem mapping.

Got it. Now it all makes sense to me. Thank you for the explanation.
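To restate the invariant in code form, this is the model I now have (a
hypothetical helper for illustration only, not anything in the tree):

static bool folio_uses_multi_index(struct folio *folio)
{
        /* The swap cache stores nr plain entries, never a multi-index one. */
        if (folio_test_swapcache(folio))
                return false;
        /* The page cache, shmem included, uses one multi-index entry. */
        return folio_test_large(folio);
}

And since folio->mapping is NULL once the folio is in swap cache, anon
vs. shmem origin cannot (and need not) enter into it.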

>
> The way I read your patch originally, I thought it meant that shmem
> folios go into the swap cache as multi-index, but anon folios do not;
> which seemed a worrying mixture to me.  But crashes on the
> VM_BUG_ON_PAGE(entry != folio, entry) in __delete_from_swap_cache()
> yesterday (with your patch in) led me to see how add_to_swap_cache()
> inserts multiple non-multi-index entries, whether for anon or for shmem.

Thanks for the pointer.
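For the archives, the relevant part of add_to_swap_cache() is its per-page
store loop, paraphrased here from memory with the accounting, shadow-entry,
and error handling stripped out:

        XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio));
        unsigned long i, nr = folio_nr_pages(folio);

        xas_create_range(&xas);
        for (i = 0; i < nr; i++) {
                VM_BUG_ON_FOLIO(xas.xa_index != idx + i, folio);
                /* nr plain entries, whether the folio is anon or shmem */
                xas_store(&xas, folio);
                xas_next(&xas);
        }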

>
> If this patch really is needed in old releases, then I suspect that
> mm/huge_memory.c needs correction there too; but let me explain in
> a response to your folio_split() series.
>
>> ---
>>  mm/migrate.c | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 365c6daa8d1b..2c9669135a38 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -524,7 +524,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>>  			folio_set_swapcache(newfolio);
>>  			newfolio->private = folio_get_private(folio);
>>  		}
>> -		entries = nr;
>> +		/* shmem uses high-order entry */
>> +		if (!folio_test_anon(folio))
>> +			entries = 1;
>> +		else
>> +			entries = nr;
>>  	} else {
>>  		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>>  		entries = 1;
>> -- 
>> 2.47.2
>
> NAK to that patch above, here's how I think it should be:

OK. I will resend your fix, together with the __split_huge_page() fixes,
against Linus’s tree. My folio_split() series will conflict with the fix,
but the merge fixup should be simple, since the related patch just deletes
__split_huge_page() entirely.

>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> ---
>  mm/migrate.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index fb19a18892c8..822776819ca6 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -518,12 +518,12 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>  	if (folio_test_anon(folio) && folio_test_large(folio))
>  		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
>  	folio_ref_add(newfolio, nr); /* add cache reference */
> -	if (folio_test_swapbacked(folio)) {
> +	if (folio_test_swapbacked(folio))
>  		__folio_set_swapbacked(newfolio);
> -		if (folio_test_swapcache(folio)) {
> -			folio_set_swapcache(newfolio);
> -			newfolio->private = folio_get_private(folio);
> -		}
> +
> +	if (folio_test_swapcache(folio)) {
> +		folio_set_swapcache(newfolio);
> +		newfolio->private = folio_get_private(folio);
>  		entries = nr;
>  	} else {
>  		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> -- 
> 2.43.0
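
For anyone reading along: the entries value computed in both diffs above
feeds the store loop further down in __folio_migrate_mapping(), which
looks roughly like this (quoting from memory):

        for (i = 0; i < entries; i++) {
                xas_store(&xas, newfolio);
                xas_next(&xas);
        }

With a multi-index entry, the first xas_store() already replaces the whole
range, so anything other than entries = 1 keeps re-storing the same folio
over slots that the multi-index entry spans.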


Best Regards,
Yan, Zi
