Message-Id: <20240601140917.43562-1-ioworker0@gmail.com>
Date: Sat, 1 Jun 2024 22:09:17 +0800
From: Lance Yang <ioworker0@...il.com>
To: david@...hat.com,
akpm@...ux-foundation.org,
yjnworkstation@...il.com
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
willy@...radead.org,
00107082@....com,
libang.li@...group.com,
Lance Yang <ioworker0@...il.com>
Subject: Re: [PATCH] mm: init_mlocked_on_free_v3
Completely agree with David's point[1]. I'm also not convinced that this is the
right approach :)
It seems like this patch won't handle all cases, as David previously mentioned[1].
folio_remove_rmap_ptes() will immediately munlock a large folio via
munlock_vma_folio() once it is fully unmapped (large folios are not allowed to be
batch-added to the LRU list), so this patch does not cover that case. Even worse,
if we encounter a COW mlocked folio, we would run into trouble (data corruption).
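
Just to make the ordering problem concrete, here is a rough userspace toy model
(toy_folio, toy_remove_rmap and toy_free are made up for illustration; this is
not the actual mm/rmap.c or page-free code):

/*
 * Toy model (not kernel code) of the ordering issue: on full unmap, a
 * large folio is munlocked immediately, so by the time the free path
 * runs, the "mlocked" state that init_mlocked_on_free keys off is
 * already gone and the contents are never cleared.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct toy_folio {
	bool large;	/* models a large (multi-page) folio */
	bool mlocked;	/* models folio_test_mlocked()       */
	char data[16];	/* stand-in for the folio's contents */
};

/* Models the tail of folio_remove_rmap_ptes() once the folio is fully
 * unmapped: a large folio is munlocked right away instead of being
 * deferred via a batch. */
static void toy_remove_rmap(struct toy_folio *folio)
{
	if (folio->large && folio->mlocked) {
		folio->mlocked = false;	/* munlock_vma_folio()-like step */
		printf("large folio munlocked on full unmap\n");
	}
}

/* Models the free path with init_mlocked_on_free enabled: contents are
 * only cleared if the folio still looks mlocked at this point. */
static void toy_free(struct toy_folio *folio)
{
	if (folio->mlocked) {
		memset(folio->data, 0, sizeof(folio->data));
		printf("contents cleared on free\n");
	} else {
		printf("folio no longer mlocked -> contents NOT cleared\n");
	}
}

int main(void)
{
	struct toy_folio folio = { .large = true, .mlocked = true };

	strcpy(folio.data, "secret");
	toy_remove_rmap(&folio);	/* happens before the free */
	toy_free(&folio);		/* too late: mlocked is already clear */
	return 0;
}

Obviously oversimplified, but that is the gap I'm worried about: by the time the
free path runs, the mlocked state of the large folio is already gone.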
Hi Andrew, I just noticed that this patch has become part of v6.10-rc1, but it
has not been acked/reviewed yet. Is there any chance of reverting it?
[1] https://lore.kernel.org/linux-mm/8118eabf-de9c-41a7-9892-3a1a5bd2071c@redhat.com/
[2] https://lore.kernel.org/linux-mm/20240517192239.9285edd85f8ef893bb508a61@linux-foundation.org/
Thanks,
Lance