Message-ID: <9e7d31b9-1eaf-4599-ce42-b80c0c4bb25d@google.com>
Date: Sun, 24 Aug 2025 18:25:14 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Will Deacon <will@...nel.org>, David Hildenbrand <david@...hat.com>
cc: Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org, 
    linux-kernel@...r.kernel.org, Keir Fraser <keirf@...gle.com>, 
    Jason Gunthorpe <jgg@...pe.ca>, John Hubbard <jhubbard@...dia.com>, 
    Frederick Mayle <fmayle@...gle.com>, 
    Andrew Morton <akpm@...ux-foundation.org>, Peter Xu <peterx@...hat.com>, 
    Rik van Riel <riel@...riel.com>, Vlastimil Babka <vbabka@...e.cz>, 
    Ge Yang <yangge1116@....com>
Subject: Re: [PATCH] mm/gup: Drain batched mlock folio processing before
 attempting migration

On Mon, 18 Aug 2025, Will Deacon wrote:
> On Mon, Aug 18, 2025 at 02:31:42PM +0100, Will Deacon wrote:
> > On Fri, Aug 15, 2025 at 09:14:48PM -0700, Hugh Dickins wrote:
> > > I think replace the folio_test_mlocked(folio) part of it by
> > > (folio_test_mlocked(folio) && !folio_test_unevictable(folio)).
> > > That should reduce the extra calls to a much more reasonable
> > > number, while still solving your issue.
> > 
> > Alas, I fear that the folio may be unevictable by this point (which
> > seems to coincide with the readahead fault adding it to the LRU above)
> > but I can try it out.
> 
> I gave this a spin but I still see failures with this change.

Many thanks, Will, for the precisely relevant traces (in which,
by the way, mapcount=0 really means _mapcount=0 hence mapcount=1).

Yes, those do indeed illustrate a case which my suggested
(folio_test_mlocked(folio) && !folio_test_unevictable(folio))
failed to cover.  Very helpful to have an example of that.
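
For concreteness, here is roughly where that test was meant to
slot in.  I am assuming the shape of the surrounding drain check
in mm/gup.c's collect_longterm_unpinnable_folios(), so read this
as a sketch rather than as the actual patch:

	if (drain_allow &&
	    (!folio_test_lru(folio) ||
	     (folio_test_mlocked(folio) &&
	      !folio_test_unevictable(folio)))) {
		/* flush the per-CPU LRU and mlock folio batches */
		lru_add_drain_all();
		drain_allow = false;
	}

Will's traces show a folio which is mlocked and already marked
unevictable while still sitting in a batch: exactly the case
that this test fails to cover.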

And many thanks, David, for your reminder of commit 33dfe9204f29
("mm/gup: clear the LRU flag of a page before adding to LRU batch").

Yes, I strongly agree with your suggestion that the mlock batch
be brought into line with that commit's change to the ordinary
LRU batches, and agree that doing so is likely to solve Will's
issue (and similar cases elsewhere, without needing to modify them).
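
To spell out the rule from that commit as I understand it: a
folio may only be put on a per-CPU batch by the path which has
successfully cleared its lru flag, so that it can sit on at most
one batch at a time.  Applied to the mlock batch in mm/mlock.c,
the core of it might look something like this; the placement
inside mlock_folio() is my assumption, not a tested patch:

	/*
	 * Only batch a folio whose lru flag we have taken: then
	 * it is on at most one batch, and a remote CPU seeing
	 * !folio_test_lru() knows that draining batches may help.
	 */
	if (folio_test_clear_lru(folio)) {
		folio_get(folio);
		if (!folio_batch_add(fbatch, mlock_lru(folio)))
			mlock_folio_batch(fbatch);	/* batch full: drain */
	}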

Now I just have to cool my head and get back down into those
mlock batches.  I am fearful that making a change there to suit
this case will turn out later to break another case (and I just
won't have time to redevelop as thorough a grasp of the races as
I had back then).  But if we're lucky, applying that "one batch
at a time" rule will actually make it all more comprehensible.

(I so wish we had spare room in struct page to keep the address
of that one batch entry, or the CPU to which that one batch
belongs: then, although that wouldn't eliminate all uses of
lru_add_drain_all(), it would allow us to efficiently extract
a target page from its LRU batch without a remote drain.)
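
Purely hypothetical, since there is no such spare room, but to
illustrate: with a field recording which CPU's batch holds the
folio, the remote drain could be targeted instead of global
(both the field and the helper below are invented names):

	/* hypothetical: struct folio has no room for this today */
	int cpu = READ_ONCE(folio->lru_batch_cpu);

	if (cpu >= 0)
		drain_lru_batches_on_cpu(cpu);	/* instead of lru_add_drain_all() */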

I have not yet begun to write such a patch, and I'm not yet sure
that it's even feasible: this mail is sent to get the polite
thank-yous out of my mind, to help clear it for getting down to work.

Hugh
