Message-ID: <aD79vg-jQQU69raX@localhost.localdomain>
Date: Tue, 3 Jun 2025 15:50:54 +0200
From: Oscar Salvador <osalvador@...e.de>
To: Peter Xu <peterx@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>,
David Hildenbrand <david@...hat.com>,
James Houghton <jthoughton@...gle.com>,
Gavin Guo <gavinguo@...lia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/3] mm, hugetlb: Clean up locking in hugetlb_fault
and hugetlb_wp
On Mon, Jun 02, 2025 at 05:30:19PM -0400, Peter Xu wrote:
> Right, and thanks for the git digging as usual. I would agree hugetlb is
> more challenging than many other modules when it comes to git archaeology. :)
>
> Even though I mentioned the invalidate_lock, I don't think I thought deeper
> than that. I just wish that, whenever possible, we keep moving hugetlb code
> closer to generic code, so if that's the goal we may still want to one day
> have a closer look at whether hugetlb can also use invalidate_lock. Maybe
> it isn't worthwhile in the end: invalidate_lock is currently a rwsem, which
> normally at least allows concurrent faults, but that's exactly what isn't
> allowed in hugetlb anyway..
>
> If we start to remove finer-grained locks, that work will be even harder,
> and removing the folio lock from the fault path in this case also moves
> hugetlbfs even further away from other filesystems. That might be slightly
> against what we have wished to do, which is to make it closer to the others.
> Meanwhile, I'm also not yet sure about the benefit of not taking the folio
> lock at all; e.g., I don't expect perf would change at all even if the lock
> is avoided. We may want to think about that too when doing so.

Ok, I have to confess I was not looking at things from this perspective,
but when doing so, yes, you are right: we should strive to find
replacements wherever we can so we do not rely on hugetlb-specific code.
I do not know about this particular case though; I am not sure what other
options we have for shutting out concurrent faults while performing another
operation. But it is something we should definitely look at.
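
For reference, this is the generic pattern other filesystems use to shut
out faults while truncating or punching holes (just a sketch from memory,
using the filemap_invalidate_lock() helpers from include/linux/fs.h):

	/* writer side, e.g. truncate / hole punch */
	filemap_invalidate_lock(inode->i_mapping);
	truncate_pagecache(inode, new_size);
	filemap_invalidate_unlock(inode->i_mapping);

	/*
	 * Reader side, the fault path. It is a shared rwsem, so faults
	 * can run concurrently, unlike with hugetlb's fault mutex.
	 */
	filemap_invalidate_lock_shared(mapping);
	/* ... install the page into the page tables ... */
	filemap_invalidate_unlock_shared(mapping);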

Wrt the locks:

There were two locks: the old_folio one (taken in hugetlb_fault) and the
pagecache_folio one.
The thing was not about worrying how much perf we leave on the table
because of these locks, as I am pretty sure it is next to 0; my drive
was to understand what they protect and why, because, as the discussion
showed, none of us really had a good idea about it, and it turns out that
this goes back more than ~20 years.

Another topic regarding the lock on old_folio (the folio we copy from):
when we compare it to generic code, we do not take that lock there.
Looking at do_wp_page(), we do __get__ a reference on the folio we copy
from, but we do not take the lock, so AFAIU the lock seems to exist only
to please folio_move_anon_rmap() in hugetlb_wp.
Taking a look at do_wp_page()->wp_can_reuse_anon_folio(), which also
calls folio_move_anon_rmap() in case we can re-use the folio, it only
takes the lock right before the call to folio_move_anon_rmap(), and then
unlocks it.
Which, I think, hugetlb should also do.

do_wp_page
  wp_can_reuse_anon_folio ?
    yes: folio_lock ; folio_move_anon_rmap ; folio_unlock
         bail out
    no:  get a reference on the folio and call wp_page_copy

So, this is the lead that hugetlb should follow.
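
For reference, this is roughly the tail of upstream
wp_can_reuse_anon_folio() (trimmed from memory; the early refcount
optimizations are omitted):

	if (!folio_trylock(folio))
		return false;
	if (folio_test_swapcache(folio))
		folio_free_swap(folio);
	if (folio_test_ksm(folio) || folio_ref_count(folio) != 1) {
		folio_unlock(folio);
		return false;
	}
	/* the lock is held only across the anon_vma rewrite */
	folio_move_anon_rmap(folio, vma);
	folio_unlock(folio);
	return true;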

As I said, it is not about performance, and I agree that relying on
finer-granularity locks is the way to go, but we need to understand
where and why, and with the current upstream code that is not clear
at all.
That is why I wanted to reduce the scope of the old_folio lock to what
is actually needed, which is this snippet:

	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
		if (!PageAnonExclusive(&old_folio->page)) {
			folio_move_anon_rmap(old_folio, vma);
			SetPageAnonExclusive(&old_folio->page);
		}
		if (likely(!unshare))
			set_huge_ptep_maybe_writable(vma, vmf->address,
						     vmf->pte);

		delayacct_wpcopy_end();
		return 0;
	}

I think it is important to 1) reduce the lock's scope to wrap only what
actually needs to be within it and 2) document why, so no one has to put
on the gloves and start digging through the history again.
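
Something like the following is what I have in mind (just a sketch,
mirroring wp_can_reuse_anon_folio(); error handling and the rest of
hugetlb_wp are omitted):

	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
		if (!PageAnonExclusive(&old_folio->page)) {
			/*
			 * The folio lock is only needed across the
			 * anon_vma rewrite, as in do_wp_page().
			 */
			folio_lock(old_folio);
			folio_move_anon_rmap(old_folio, vma);
			SetPageAnonExclusive(&old_folio->page);
			folio_unlock(old_folio);
		}
		...
	}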

> Thanks! I hope that'll also help whatever patch we end up with to land
> sooner, once it can be verified to fix the issue.

So, my plan is:

1) Fix the pagecache_folio issue in one patch (test for anon; I still
   need to check, but it should work)
2) Implement the 'filemap_get_hugetlb_folio' thing to get a reference
   without taking the folio lock
3) Reduce the scope of the old_folio lock

I want to make it clear that while I still want to add filemap_get_hugetlb_folio
and stop using the locking version, the reason is not to give more power to the
mutex, but to bring hugetlb closer to what do_wp_page does.
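
Something along these lines (just a sketch; it mirrors how
filemap_lock_hugetlb_folio() scales the index, but uses the non-locking
filemap_get_folio(), which returns the folio with a reference held):

	static inline struct folio *filemap_get_hugetlb_folio(struct hstate *h,
			struct address_space *mapping, pgoff_t idx)
	{
		/* hugetlb indexes the page cache in huge-page-sized units */
		return filemap_get_folio(mapping, idx << huge_page_order(h));
	}
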
What do you think about it?
--
Oscar Salvador
SUSE Labs