Message-ID: <da433043-d17c-43e0-ab6f-c4897061b4a1@redhat.com>
Date: Mon, 29 Jul 2024 19:46:26 +0200
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>,
 Muchun Song <muchun.song@...ux.dev>, Oscar Salvador <osalvador@...e.de>,
 Qi Zheng <zhengqi.arch@...edance.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH v1 1/2] mm: let pte_lockptr() consume a pte_t pointer

On 29.07.24 18:39, Peter Xu wrote:
> On Mon, Jul 29, 2024 at 12:26:00PM -0400, Peter Xu wrote:
>> On Fri, Jul 26, 2024 at 11:48:01PM +0200, David Hildenbrand wrote:
>>> On 26.07.24 23:28, Peter Xu wrote:
>>>> On Fri, Jul 26, 2024 at 06:02:17PM +0200, David Hildenbrand wrote:
>>>>> On 26.07.24 17:36, Peter Xu wrote:
>>>>>> On Thu, Jul 25, 2024 at 08:39:54PM +0200, David Hildenbrand wrote:
>>>>>>> pte_lockptr() is the only *_lockptr() function that doesn't consume
>>>>>>> what would be expected: it consumes a pmd_t pointer instead of a pte_t
>>>>>>> pointer.
>>>>>>>
>>>>>>> Let's change that. The two callers in pgtable-generic.c are easily
>>>>>>> adjusted. Adjust khugepaged.c:retract_page_tables() to simply do a
>>>>>>> pte_offset_map_nolock() to obtain the lock, even though we won't actually
>>>>>>> be traversing the page table.
>>>>>>>
>>>>>>> This makes the code more similar to the other variants and avoids other
>>>>>>> hacks to make the new pte_lockptr() version happy. pte_lockptr() users
>>>>>>> now reside only in pgtable-generic.c.
>>>>>>>
>>>>>>> Maybe using pte_offset_map_nolock() is the right thing to do because
>>>>>>> the PTE table could have been removed in the meantime? At least it sounds
>>>>>>> more future-proof if we ever have other means of page table reclaim.
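
(To illustrate the inconsistency called out above, a rough sketch of the
before/after, assuming split PTE locks and !CONFIG_HIGHPTE for simplicity;
the real definitions live in include/linux/mm.h:)

/* Before: the odd one out, taking a pmd_t pointer and going through the
 * PMD entry to find the PTE table page. */
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
}

/* After: consume a pointer into the PTE table itself, just like the
 * other *_lockptr() variants consume pointers into their tables. */
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
	/* A PTE page table doesn't span multiple pages. */
	return ptlock_ptr(virt_to_ptdesc(pte));
}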
>>>>>>
>>>>>> I think it can't change, because anyone who wants to race against this
>>>>>> should try to take the pmd lock first (which was held already)?
>>>>>
>>>>> That doesn't explain why it is safe for us to assume that, after we took
>>>>> the PMD lock, the PMD actually still points at a completely empty page
>>>>> table. Likely it currently works by accident, because we only have a single
>>>>> such user that makes this assumption. It might well be a different story
>>>>> once we asynchronously reclaim page tables.
>>>>
>>>> I think it's safe because find_pmd_or_thp_or_none() returned SUCCEED, and
>>>> we're holding the i_mmap lock for read.  I don't see any way that this pmd
>>>> can become a non-pgtable-page.
>>>>
>>>> I meant, AFAIU tearing down a pgtable in whatever sane way will need to at
>>>> least take both the mmap write lock and the i_mmap write lock (in this
>>>> case, a file mapping), no?
>>>
>>> Skimming over [1], where I still owe a review, I think we can now do it
>>> purely under the read locks, with the PMD lock held.
>>
>> Err, how did I miss that.. yeah, you're definitely right, and that's the
>> context here where we're collapsing.  I think I somehow forgot all of Hugh's
>> work when I replied there, sorry.
>>
>>>
>>> I think this is also what collapse_pte_mapped_thp() ends up doing: replace a
>>> PTE table that maps a folio with a PMD (present or none, depending) that maps
>>> a folio, while holding only the mmap lock in read mode. Of course, here the
>>> table is not empty, but we need similar ways of making PT walkers aware of
>>> concurrent page table retraction.
>>>
>>> IIRC, that was the magic added to __pte_offset_map(), such that
>>> pte_offset_map_nolock/pte_offset_map_lock can fail on races.
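
(As a rough sketch of the usual caller shape, not lifted from any one call
site: the map can fail, so a NULL check is mandatory before taking the lock.)

pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
if (!pte)
	return;		/* the PTE table was freed or replaced under us */
spin_lock(ptl);
/* ... walk or modify the PTE table ... */
spin_unlock(ptl);
pte_unmap(pte);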
>>
>> That said, I still think the current code (before this patch) is safe, the
>> same as a hard-coded line taking the pte pgtable lock.  Again, I'm fine if
>> you prefer pte_offset_map_nolock(), but I just think the rcu read lock and
>> such can be avoided.
>>
>> I think that's because such a collapse can so far only happen in a path
>> where we need to hold the large folio (PMD-level) lock first.  That means
>> anyone who could change this pmd entry into something that is not a pte
>> pgtable is already blocked, hence it must keep being a pte pgtable page
>> even if we don't take any rcu.
>>
>>>
>>>
>>> But if we hold the PMD lock, nothing should actually change (my
>>> understanding so far) -- we cannot suddenly rip out a page table.
>>>
>>> [1]
>>> https://lkml.kernel.org/r/cover.1719570849.git.zhengqi.arch@bytedance.com
>>>
>>>>
>>>>>
>>>>> But yes, the PMD cannot get modified while we hold the PMD lock, otherwise
>>>>> we'd be in trouble.
>>>>>
>>>>>>
>>>>>> I wonder whether an open-coded "ptlock_ptr(page_ptdesc(pmd_page(*pmd)))"
>>>>>> would be nicer here, but only if my understanding is correct.
>>>>>
>>>>> I really don't like open-coding that. Fortunately we were able to limit the
>>>>> use of ptlock_ptr to a single user outside of arch/x86/xen/mmu_pv.c so far.
>>>>
>>>> I'm fine if you prefer it like that; it's not a huge deal to me.
>>>
>>> Let's keep it like that, unless we can come up with something neater. At
>>> least it also makes the code more consistent with similar code in that file,
>>> and the overhead should be minimal.
>>>
>>> I was briefly thinking about actually testing if the PT is full of
>>> pte_none(), either as a debugging check or to also handle what is currently
>>> handled via:
>>>
>>> if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
>>>
>>> It seems wasteful to let all other parts suffer just because some part of a
>>> VMA might have a private page mapped / uffd-wp active.
>>>
>>> Will think about whether that is really worth it.
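
(Something like the following hypothetical helper is what I have in mind;
pte_table_is_empty() does not exist anywhere, it's purely a sketch:)

/* Hypothetical, for illustration only: scan a PTE table for any
 * non-none entry, with the PTE table lock held by the caller. */
static bool pte_table_is_empty(pte_t *pte)
{
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++)
		if (!pte_none(ptep_get(pte + i)))
			return false;
	return true;
}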
>>>
>>> ... also because I still want to understand why the PTL of the PMD table is
>>> required at all. What if we lock it first and somebody else wants to lock it
>>> after us while we have already ripped it out? Surely there must be some
>>> reason for the lock, I just don't understand it yet :/.
>>
>> IIUC the pte pgtable lock will be needed for checking anon_vma safely.
>>
>> E.g., consider that if we don't take the pte pgtable lock, I think it's
>> possible some thread tries to inject a private pte (and prepares anon_vma
>> before that) concurrently with this thread trying to collapse the pgtable
>> into a huge pmd.  I mean, without the pte pgtable lock held, I think it's
>> racy to check this line:
>>
>>          if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
>>                  ...
>>          }
>>
>> On the 1st condition.
> 
> Hmm, right after I replied, I think it also guarantees safety on the 2nd
> condition..
> 
> Note that one reason I still prefer a hard-coded line over
> pte_offset_map_nolock() is that the new code seems to say we can treat the
> pte pgtable page separately from the pte pgtable lock.  But I think
> they're really in the same realm.
> 
> In short, AFAIU the rcu lock not only protects the pte pgtable's existence,
> but also protects the pte lock.
> 
>  From that POV, the new code below in this patch:
> 
> -               ptl = pte_lockptr(mm, pmd);
> +
> +               /*
> +                * No need to check the PTE table content, but we'll grab the
> +                * PTE table lock while we zap it.
> +                */
> +               pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
> +               if (!pte)
> +                       goto unlock_pmd;
>                  if (ptl != pml)
>                          spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +               pte_unmap(pte);
> 
> Could be very misleading: it seems to say that it's fine to use the
> pte pgtable lock even after the rcu unlock.  It might make the code harder
> to understand.
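
(Concretely, the pairing in question, sketched under the current scheme where
__pte_offset_map() enters an RCU read-side section that pte_unmap() leaves:)

pte = pte_offset_map_nolock(mm, pmd, addr, &ptl); /* rcu_read_lock() inside */
/* Both the PTE table page and its ptlock are guaranteed to stay around
 * here, courtesy of the RCU read-side section ... */
pte_unmap(pte);                                 /* rcu_read_unlock() inside */
/* ... whereas using ptl down here relies on something else, e.g. the held
 * PMD lock, keeping the table alive. */
spin_lock(ptl);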

I see what you mean, but this is a very similar pattern to the one used in
collapse_pte_mapped_thp(), no? There we have

start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
...
if (!pml)
	spin_lock(ptl);
...
pte_unmap(start_pte);
if (!pml)
	spin_unlock(ptl);


Again, I don't have a strong opinion on this, but obtaining the locks in a
way more similar to collapse_pte_mapped_thp() makes it clearer to me. But
if I am missing something obvious, please shout and I'll change it.
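
For reference, the resulting locking sequence in retract_page_tables() would
look roughly like this (heavily simplified; error handling and the actual
zapping elided):

pml = pmd_lock(mm, pmd);

/*
 * No need to check the PTE table content, but we'll grab the PTE table
 * lock while we zap it. pte_offset_map_nolock() can fail if the table
 * was concurrently removed.
 */
pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
if (!pte)
	goto unlock_pmd;
if (ptl != pml)
	spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
pte_unmap(pte);

/* ... clear the PMD entry and free the PTE table ... */

if (ptl != pml)
	spin_unlock(ptl);
unlock_pmd:
	spin_unlock(pml);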

-- 
Cheers,

David / dhildenb

