Message-ID: <2ae8be1f-7b07-4967-b40c-2e4a85080639@redhat.com>
Date: Fri, 26 Jul 2024 16:45:53 +0200
From: David Hildenbrand <david@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Muchun Song <muchun.song@...ux.dev>, Peter Xu <peterx@...hat.com>,
Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH v1 0/2] mm/hugetlb: fix hugetlb vs. core-mm PT locking
On 26.07.24 11:19, David Hildenbrand wrote:
> On 25.07.24 22:41, Andrew Morton wrote:
>> On Thu, 25 Jul 2024 20:39:53 +0200 David Hildenbrand <david@...hat.com> wrote:
>>
>>> Working on another generic page table walker that tries to avoid
>>> special-casing hugetlb, I found a page table locking issue with hugetlb
>>> folios that are not mapped using a single PMD/PUD.
>>>
>>> For some hugetlb folio sizes, GUP will take different page table locks
>>> when walking the page tables than hugetlb when modifying the page tables.
>>>
>>> I did not actually try to reproduce an issue, but looking at
>>> follow_pmd_mask(), where we might be rereading a PMD value multiple
>>> times, it's rather clear that concurrent modifications would be
>>> unpleasant.
>>>
>>> In follow_page_pte() we might fare better in that regard -- ptep_get() does
>>> a READ_ONCE() -- but who knows what else could happen concurrently in
>>> some weird corner cases (e.g., a hugetlb folio getting unmapped and freed).
>>>
>>> I did some basic sanity testing with various hugetlb sizes on x86-64 and
>>> arm64. Maybe I'll find some time to actually write a simple reproducer in
>>> the coming weeks, so this doesn't have to remain purely theoretical.
>>
>> When can we be confident that this change is merge-worthy?
>
> I'm convinced that it is the right thing to do, but I don't think we
> have to rush this.
>
> As Baolin notes, we fixed the same issue in the past, unfortunately also
> without a reproducer IIUC. I'll try to reproduce the race, but I'm not
> 100% sure I'll manage to do so.
Okay, running the reproducer I pulled out of my magic hat against this
series, it does seem to properly fix the issue.
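
For anyone following along, here's a rough sketch of the lock mismatch
described above (simplified and from memory, not the exact upstream code;
the helper names are mine, and I'm assuming a cont-PTE-sized hugetlb folio,
e.g. 64K with 4K base pages on arm64):

#include <linux/hugetlb.h>
#include <linux/mm.h>

/*
 * What the hugetlb side effectively does (simplified paraphrase of the
 * pre-series huge_pte_lockptr()): anything that is not PMD-sized falls
 * back to the per-MM page_table_lock.
 */
static spinlock_t *hugetlb_side_lockptr(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
{
	if (huge_page_size(h) == PMD_SIZE)
		return pmd_lockptr(mm, (pmd_t *)pte);
	/* cont-PTE sizes end up here: the per-MM lock */
	return &mm->page_table_lock;
}

/*
 * What GUP's follow_page_pte() effectively does: map and lock the PTE
 * table via pte_offset_map_lock(), i.e. the split PTE lock of the page
 * table page when split PTE locks are enabled.
 */
static pte_t *gup_side_lock(struct mm_struct *mm, pmd_t *pmd,
			    unsigned long address, spinlock_t **ptl)
{
	return pte_offset_map_lock(mm, pmd, address, ptl);
}

So with split PTE locks enabled, the two paths can serialize on different
spinlocks while touching the very same PTEs, which is what this series
sorts out.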
--
Cheers,
David / dhildenb