Message-ID: <eaba0af3-c531-47f4-bc4e-714a25dfc686@redhat.com>
Date: Fri, 29 Aug 2025 10:56:09 +0200
From: David Hildenbrand <david@...hat.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Will Deacon <will@...nel.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, Keir Fraser <keirf@...gle.com>,
 Jason Gunthorpe <jgg@...pe.ca>, John Hubbard <jhubbard@...dia.com>,
 Frederick Mayle <fmayle@...gle.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Peter Xu <peterx@...hat.com>,
 Rik van Riel <riel@...riel.com>, Vlastimil Babka <vbabka@...e.cz>,
 Ge Yang <yangge1116@....com>
Subject: Re: [PATCH] mm/gup: Drain batched mlock folio processing before
 attempting migration

>>>
>>>> :/ If so, I really, really
>>>> hope that we can find another way and not go back to that old handling.
>>>
>>> For what reason?  It sounded like a nice "invariant" to keep in mind,
>>> but it's a limitation, and what purpose was it actually serving?
>>
>> I liked the semantics that if !lru, there could be at most one reference from
>> the LRU caches.
>>
>> That is, if there are two references, you don't even have to bother with
>> flushing anything.
> 
> If that assumption is being put into practice anywhere (not that I know of),
> then it's currently wrong and needs correcting.

We do make use of that property in wp_can_reuse_anon_folio(), to just 
fail fast -- that's what Linus argued for back then: give up fast and 
just create a page copy.

It's essentially hidden in the "folio_ref_count(folio) > 3" and 
"!folio_test_lru(folio)" code path; the mlock caches would not be 
considered right now.

Now, in contrast to page pinning, that code is pure best effort, and we 
don't care that much about always getting it right (e.g., no remote 
draining; we only drain when there is a clear indication that it might help).

> 
> It would be nice and simple if it worked, but I couldn't get it to work
> with mlock/munlock batching, so it seemed better to give up on the
> pretence.

Thanks for trying!

[...]

>> Regarding the issue at hand:
>>
>> There were discussions at LSF/MM about also putting (some) large folios onto
>> the LRU caches.
>>
>> In that context, GUP could take multiple references on the same folio, and a
>> simple folio_expected_ref_count() + 1 would no longer do the trick.
>>
>> I thought about this today, and likely it could be handled by scanning the
>> page array for same folios etc. A bit nasty when wanting to cover all corner
>> cases (folio pages need not be consecutive in the passed array) ...
> 
> I haven't thought about that problem at all (except when working around
> a similar issue in mm/mempolicy.c's folio queueing), but can sympathize.
> 
> It had irritated me to notice how the flush-immediately code for 512-page
> folios now extends to 2-page folios: you've enlightened me why that remains
> so, I hadn't thought of the implications.  Perhaps even more reason to
> allow for multiple references on the pagevecs/batches?

Not sure TBH if multiple references from pagevecs/batches really play a 
role here :)

> 
>>
>> Apart from that issue, I liked the idea of a "single entry in the cache" for
>> other reasons: it gives clear semantics. We cannot end up in a scenario where
>> someone performs OPX and later someone performs OPY on a folio, but the way the lru
>> caches are flushed we might end up processing OPX after OPY -- this should be
>> able to happen in case of local or remote flushes IIRC.
> 
> It's been that way for many years, that's how they are.

Yeah, but I enjoyed clearer semantics: like a folio_activate() actually 
resulting in an activation, not getting swallowed by a 
folio_deactivate() stranded somewhere.

>>
>> Maybe we just want to limit it to !folio_test_large(), because flushing there
>> really doesn't give us any chance of succeeding right now? Not sure if it
>> makes any difference in practice, though, we'll likely be flushing remote LRU
>> caches now more frequently either way.
> 
> Ah, good idea, with or without my changes.  Maybe material for a separate
> patch.  I wonder if we would do better to add a folio_is_batchable() and
> use that consistently in all of the places which refuse to batch when
> folio_test_large() - I wonder if a bare !folio_test_large() here will
> get missed if there's a change there.

You read my mind.

-- 
Cheers

David / dhildenb

