Message-ID: <e2917ef8-43bb-4f85-8f0f-712133b88481@redhat.com>
Date: Mon, 24 Feb 2025 19:03:58 +0100
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>, Barry Song <21cnbao@...il.com>
Cc: Liam.Howlett@...cle.com, aarcange@...hat.com, akpm@...ux-foundation.org,
 axelrasmussen@...gle.com, bgeffon@...gle.com, brauner@...nel.org,
 hughd@...gle.com, jannh@...gle.com, kaleshsingh@...gle.com,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, lokeshgidra@...gle.com,
 mhocko@...e.com, ngeoffray@...gle.com, rppt@...nel.org,
 ryan.roberts@....com, shuah@...nel.org, surenb@...gle.com,
 v-songbaohua@...o.com, viro@...iv.linux.org.uk, willy@...radead.org,
 zhangpeng362@...wei.com, zhengtangquan@...o.com, yuzhao@...gle.com,
 stable@...r.kernel.org
Subject: Re: [PATCH RFC] mm: Fix kernel BUG when userfaultfd_move encounters
 swapcache

On 24.02.25 18:50, Peter Xu wrote:
> On Sun, Feb 23, 2025 at 10:31:37AM +1300, Barry Song wrote:
>> On Fri, Feb 21, 2025 at 2:49 PM Peter Xu <peterx@...hat.com> wrote:
>>>
>>> On Fri, Feb 21, 2025 at 01:07:24PM +1300, Barry Song wrote:
>>>> On Fri, Feb 21, 2025 at 12:32 PM Peter Xu <peterx@...hat.com> wrote:
>>>>>
>>>>> On Thu, Feb 20, 2025 at 10:21:01PM +1300, Barry Song wrote:
>>>>>> 2. src_anon_vma and its lock – the swapcache doesn’t require it (the folio is not mapped)
>>>>>
>>>>> Could you help explain what guarantees that the rmap walk cannot happen
>>>>> on a swapcache page?
>>>>>
>>>>> I'm not familiar with this path, though at least I see DAMON can start an
>>>>> rmap walk on a PageAnon folio with almost no locking..  some explanation
>>>>> would be appreciated.
>>>>
>>>> I am observing the following in folio_referenced(), which the anon_vma lock
>>>> was originally intended to protect.
>>>>
>>>>          if (!pra.mapcount)
>>>>                  return 0;
>>>>
>>>> I assume all other rmap walks should do the same?
>>>
>>> Yes, normally there'll be a folio_mapcount() check, however..
>>>
>>>>
>>>> int folio_referenced(struct folio *folio, int is_locked,
>>>>                       struct mem_cgroup *memcg, unsigned long *vm_flags)
>>>> {
>>>>
>>>>          bool we_locked = false;
>>>>          struct folio_referenced_arg pra = {
>>>>                  .mapcount = folio_mapcount(folio),
>>>>                  .memcg = memcg,
>>>>          };
>>>>
>>>>          struct rmap_walk_control rwc = {
>>>>                  .rmap_one = folio_referenced_one,
>>>>                  .arg = (void *)&pra,
>>>>                  .anon_lock = folio_lock_anon_vma_read,
>>>>                  .try_lock = true,
>>>>                  .invalid_vma = invalid_folio_referenced_vma,
>>>>          };
>>>>
>>>>          *vm_flags = 0;
>>>>          if (!pra.mapcount)
>>>>                  return 0;
>>>>          ...
>>>> }
>>>>
>>>> By the way, since the folio has been under reclamation in this case and
>>>> isn't on the LRU, this should also prevent the rmap walk, right?
>>>
>>> .. I'm not sure whether it's always working.
>>>
>>> The thing is, anon folios don't even require the folio lock to be held
>>> during (1) checking the mapcount and (2) doing the rmap walk, in all cases
>>> similar to the above.  I see nothing that stops a concurrent thread from
>>> zapping the last mapping in between:
>>>
>>>                 thread 1                         thread 2
>>>                 --------                         --------
>>>          [whatever scanner]
>>>             check folio_mapcount(), non-zero
>>>                                                  zap the last map.. then mapcount==0
>>>             rmap_walk()
>>>
>>> Not sure if I missed something.
>>>
>>> The other thing is, IIUC a swapcache page can also be faulted in, but only
>>> on a read, not a write.  I actually had a feeling that your reproducer
>>> triggered that exact path: a read swap-in reused the swapcache page and
>>> somehow hit the sanity check there (even though, as mentioned in the other
>>> reply, I don't yet know why the 1st check didn't seem to work.. as we do
>>> check folio->index twice..).
>>>
>>> That said, I'm not sure if the above concern applies in this specific
>>> case, as UFFDIO_MOVE is pretty special: we check the exclusive bit in the
>>> swp entry first, so we know it's definitely not mapped elsewhere, and
>>> meanwhile we hold the pgtable lock, so maybe it can't get mapped back.. it
>>> is still tricky, though; at least we do some dances all over, releasing
>>> and retaking locks.
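>>>
>>> Roughly the kind of check I mean (just a sketch, not the exact
>>> mm/userfaultfd.c code; the variable names are made up):
>>>
>>>         /* with the src pgtable lock held */
>>>         pte_t orig_src_pte = ptep_get(src_pte);
>>>
>>>         /*
>>>          * The swap pte's exclusive bit marks this mapping as the sole
>>>          * owner of the swap entry, so the folio cannot currently be
>>>          * mapped anywhere else.
>>>          */
>>>         if (!pte_swp_exclusive(orig_src_pte))
>>>                 return -EBUSY;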
>>>
>>> We could either justify that it's safe, or, maybe still OK and simpler, we
>>> could take the anon_vma write lock, making sure nobody will be able to
>>> read folio->index while it's prone to an update.
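>>>
>>> Something along these lines (only a sketch; dst_vma/dst_addr are
>>> placeholders, error handling is omitted, and whether the NULL return
>>> can be handled this way is exactly the open question):
>>>
>>>         struct anon_vma *anon_vma = folio_get_anon_vma(folio);
>>>
>>>         if (anon_vma) {
>>>                 anon_vma_lock_write(anon_vma);
>>>                 /* no rmap walker can read folio->index here */
>>>                 folio_move_anon_rmap(folio, dst_vma);
>>>                 folio->index = linear_page_index(dst_vma, dst_addr);
>>>                 anon_vma_unlock_write(anon_vma);
>>>                 put_anon_vma(anon_vma);
>>>         }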
>>
>> What prompted me to do the former is that folio_get_anon_vma() returns
>> NULL for an unmapped folio. As for the latter, we need to carefully evaluate
>> whether the change below is safe.
>>
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -505,7 +505,7 @@ struct anon_vma *folio_get_anon_vma(const struct folio *folio)
>>          anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
>>          if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
>>                  goto out;
>>
>> -       if (!folio_mapped(folio))
>> +       if (!folio_mapped(folio) && !folio_test_swapcache(folio))
>>                  goto out;
>>
>>          anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
>> @@ -521,7 +521,7 @@ struct anon_vma *folio_get_anon_vma(const struct folio *folio)
>>           * SLAB_TYPESAFE_BY_RCU guarantees that - so the atomic_inc_not_zero()
>>           * above cannot corrupt).
>>           */
> 
> [1]
> 
>>
>> -       if (!folio_mapped(folio)) {
>> +       if (!folio_mapped(folio) && !folio_test_swapcache(folio)) {
>>                  rcu_read_unlock();
>>                  put_anon_vma(anon_vma);
>>                  return NULL;
> 
> Hmm, this led me to go back and re-read how we manage the anon_vma lifespan,
> and then I noticed this may not work.
> 
> See the comment right above [1], here's a full version:
> 
> 	/*
> 	 * If this folio is still mapped, then its anon_vma cannot have been
> 	 * freed.  But if it has been unmapped, we have no security against the
> 	 * anon_vma structure being freed and reused (for another anon_vma:
> 	 * SLAB_TYPESAFE_BY_RCU guarantees that - so the atomic_inc_not_zero()
> 	 * above cannot corrupt).
> 	 */
> 
> So afaiu that means we very much rely upon the folio_mapped() check to make
> sure the anon_vma we fetched from folio->mapping is valid at all, not to
> mention for the rmap walk afterwards.
> 
> Then the above diff in folio_get_anon_vma() should be problematic: when
> "folio_mapped()==false && folio_test_swapcache()==true", the change will
> start to return an anon_vma pointer even though that anon_vma could have
> been freed and reused by another VMA.
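> 
> To spell the pattern out (paraphrasing the mm/rmap.c code, not quoting
> it verbatim):
> 
> 	anon_vma = (struct anon_vma *)(anon_mapping - PAGE_MAPPING_ANON);
> 	if (!atomic_inc_not_zero(&anon_vma->refcount))
> 		goto out;	/* refcount already hit zero */
> 
> 	/*
> 	 * SLAB_TYPESAFE_BY_RCU only guarantees that, under
> 	 * rcu_read_lock(), this memory is still *an* anon_vma; it does
> 	 * not guarantee it is still *this folio's* anon_vma.  Only the
> 	 * folio_mapped() check ties the two together.
> 	 */
> 	if (!folio_mapped(folio))
> 		goto out;	/* may belong to someone else by now */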

When splitting a folio, we use folio_get_anon_vma(). That seems to work 
as long as we have the folio locked.
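
Roughly this pattern (a sketch from memory, not the exact code; see
split_huge_page_to_list_to_order() for the real thing):

	/* the caller must hold the folio lock */
	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

	anon_vma = folio_get_anon_vma(folio);
	if (!anon_vma)
		return -EBUSY;
	/*
	 * The refcount taken by folio_get_anon_vma() keeps the anon_vma
	 * alive, and the write lock keeps rmap walkers out while we
	 * split.
	 */
	anon_vma_lock_write(anon_vma);
	...
	anon_vma_unlock_write(anon_vma);
	put_anon_vma(anon_vma);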

-- 
Cheers,

David / dhildenb

