Message-ID: <655d2d29-8fe6-4684-aba4-4803bed0d4d0@redhat.com>
Date: Tue, 9 Sep 2025 13:50:51 +0200
From: David Hildenbrand <david@...hat.com>
To: Will Deacon <will@...nel.org>, Hugh Dickins <hughd@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 Keir Fraser <keirf@...gle.com>, Jason Gunthorpe <jgg@...pe.ca>,
 John Hubbard <jhubbard@...dia.com>, Frederick Mayle <fmayle@...gle.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Peter Xu <peterx@...hat.com>,
 Rik van Riel <riel@...riel.com>, Vlastimil Babka <vbabka@...e.cz>,
 Ge Yang <yangge1116@....com>
Subject: Re: [PATCH] mm/gup: Drain batched mlock folio processing before
 attempting migration

On 09.09.25 13:39, Will Deacon wrote:
> On Fri, Aug 29, 2025 at 08:46:52AM -0700, Hugh Dickins wrote:
>> On Fri, 29 Aug 2025, Will Deacon wrote:
>>> On Thu, Aug 28, 2025 at 01:47:14AM -0700, Hugh Dickins wrote:
>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>> index adffe663594d..9f7c87f504a9 100644
>>>> --- a/mm/gup.c
>>>> +++ b/mm/gup.c
>>>> @@ -2291,6 +2291,8 @@ static unsigned long collect_longterm_unpinnable_folios(
>>>>   	struct folio *folio;
>>>>   	long i = 0;
>>>>   
>>>> +	lru_add_drain();
>>>> +
>>>>   	for (folio = pofs_get_folio(pofs, i); folio;
>>>>   	     folio = pofs_next_folio(folio, pofs, &i)) {
>>>>   
>>>> @@ -2307,7 +2309,8 @@ static unsigned long collect_longterm_unpinnable_folios(
>>>>   			continue;
>>>>   		}
>>>>   
>>>> -		if (!folio_test_lru(folio) && drain_allow) {
>>>> +		if (drain_allow && folio_ref_count(folio) !=
>>>> +				   folio_expected_ref_count(folio) + 1) {
>>>>   			lru_add_drain_all();
>>>
>>> How does this synchronise with the folio being added to the mlock batch
>>> on another CPU?
>>>
>>> need_mlock_drain(), which is what I think lru_add_drain_all() ends up
>>> using to figure out which CPU batches to process, just looks at the
>>> 'nr' field in the batch and I can't see anything in mlock_folio() to
>>> ensure any ordering between adding the folio to the batch and
>>> incrementing its refcount.
>>>
>>> Then again, my hack to use folio_test_mlocked() would have a similar
>>> issue because the flag is set (albeit with barrier semantics) before
>>> adding the folio to the batch, meaning the drain could miss the folio.
>>>
>>> I guess there's some higher-level synchronisation making this all work,
>>> but it would be good to understand it, as I can't see that
>>> collect_longterm_unpinnable_folios() can rely on much other than the pin.
>>
>> No such strict synchronization: you've been misled if people have told
>> you that this pinning migration stuff is deterministically successful;
>> it's best effort - or will others on the Cc disagree?
>>
>> Just as there's no synchronization between the calculation inside
>> folio_expected_ref_count() and the reading of folio's refcount.
>>
>> It wouldn't make sense for this unpinnable collection to anguish over
>> such synchronization, when a moment later the migration is liable to
>> fail (on occasion) for other transient reasons.  All of it ending up
>> reported as -ENOMEM, apparently? That looks unhelpful.
> 
> I see this was tangentially discussed with David on the patches you sent,
> and I agree that it's a distinct issue from what we're solving here.
> However, -ENOMEM is a particularly problematic way to report transient
> migration errors caused by a race. For KVM, the -ENOMEM will bubble
> back up to userspace and the VMM is likely to destroy the VM altogether,
> whereas -EAGAIN would return to the guest and retry the faulting
> instruction.

Migration code itself will retry multiple times, which usually takes 
care of most races.

Not all of them, of course.
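
Roughly along these lines (an illustrative sketch only, not the actual 
mm/migrate.c loop; the retry bound and the per-pass callback are made up 
for the example):

#include <linux/errno.h>
#include <linux/list.h>

/*
 * Sketch: folios that fail transiently (-EAGAIN), e.g. because their
 * refcount is briefly elevated, get another chance on a later pass.
 */
static int migrate_folios_sketch(struct list_head *from,
				 int (*one_pass)(struct list_head *))
{
	int pass, rc = -EAGAIN;

	for (pass = 0; pass < 10 && rc == -EAGAIN; pass++)
		rc = one_pass(from);

	return rc;
}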

Now, I recall John was working on that at some point (there was an RFC 
patch, but I might be daydreaming), and there were discussions at LSF/MM 
around improving the handling when a flood of short-term GUP is mixed 
with a single long-term GUP that wants to migrate these (short-term 
pinned) pages.

Essentially, we would have to temporarily prevent new short-term GUP 
pins in order to make the long-term GUP-pin succeed in migrating the folio.
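
Very roughly, and purely as an illustration (none of these pin-blocking 
helpers or the flag exist today), something like:

/*
 * Purely illustrative sketch of the idea above; kernel context assumed.
 * folio_test_set_pin_blocked()/folio_clear_pin_blocked() and
 * migrate_unpinnable_folio() are hypothetical names, not existing APIs.
 */
static int longterm_pin_migrate_sketch(struct folio *folio)
{
	int ret;

	/* Make new short-term GUP pins back off with -EAGAIN for a while. */
	if (folio_test_set_pin_blocked(folio))
		return -EAGAIN;	/* someone else is already migrating it */

	/* Flush per-CPU LRU/mlock batches that may hold extra references. */
	lru_add_drain_all();

	ret = migrate_unpinnable_folio(folio);

	folio_clear_pin_blocked(folio);
	return ret;
}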

-- 
Cheers

David / dhildenb

