Date: Fri, 22 Mar 2024 09:52:36 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: 黄朝阳 (Zhaoyang Huang) <zhaoyang.huang@...soc.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	康纪滨 (Steve Kang) <Steve.Kang@...soc.com>
Subject: Re: summarize all information again at bottom//reply: reply: [PATCH]
 mm: fix a race scenario in folio_isolate_lru

On Thu, Mar 21, 2024 at 8:36 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Thu, Mar 21, 2024 at 04:25:07PM +0800, Zhaoyang Huang wrote:
> > ok. Could the scenario below be suspected of leaving an orphan folio
> > in step 7 and introducing the bug in step 8? In this scenario,
> > Thread_filemap acts as a backdoor for Thread_madv by creating the
> > PTE after Thread_truncate finishes cleaning all the page tables.
> >
> > 0. Thread_bad gets the folio by folio_get_entry, stores it in its
> > local fbatch_bad, and goes to sleep
>
> There's no function called folio_get_entry(), but clearly thread_bad
> should have a refcount on it at this point.
>
> > 1. Thread_filemap gets the folio via
> > filemap_map_pages->next_uptodate_folio->xas_next_entry and gets
> > preempted
> >     refcnt == 1(page_cache), PG_lru == true
>
> so the refcount should be 2 here.
>
> > 2. Thread_truncate gets the folio via
> > truncate_inode_pages_range->find_lock_entries
> >     refcnt == 2(fbatch_trunc, page_cache), PG_lru == true
> >
> > 3. Thread_truncate proceeds to truncate_cleanup_folio
> >     refcnt == 2(fbatch_trunc, page_cache), PG_lru == true
> >
> > 4. Thread_truncate proceeds to delete_from_page_cache_batch
> >     refcnt == 1(fbatch_trunc), PG_lru == true
> >
> > 5. Thread_filemap schedules back and proceeds to set up a PTE,
> > making folio->_mapcnt = 0 & folio->refcnt += 1
> >     refcnt == 2(pte, fbatch_temp), PG_lru == true
> >
> > 6. Thread_madv clears the folio's PG_lru by
> > madvise_xxx_pte_range->folio_isolate_lru->folio_test_clear_lru
> >     refcnt == 2(pte,fbatch_temp), PG_lru == false
> >
> > 7. Thread_truncate calls folio_batch_release and fails to free the
> > folio as its refcnt has not reached 0
> >     refcnt == 1(pte), PG_lru == false
> > ******** the folio becomes an orphan here: it is no longer in the
> > page cache but is still mapped in the task's VM ********
> >
> > 8. Thread_xxx is scheduled back from step 0, does
> > release_pages(fbatch_bad), and the folio triggers the bug.
>
> ... because if these steps happen as 7, 8, 6, you hit the BUG in
> folio_isolate_lru().
Thanks for the comments. I have fixed the typo and updated the timing
sequence, adding the possible preemption points so that the refcnt
transitions make sense.
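
For reference, folio_isolate_lru() looks roughly like this (paraphrased
from mm/vmscan.c around v6.8; presumably the VM_BUG_ON_FOLIO() on a
zero refcnt is the BUG referred to above, and folio_test_clear_lru() is
what Thread_madv does in step 6 below):

bool folio_isolate_lru(struct folio *folio)
{
    bool ret = false;

    /* Fires if the caller does not already hold a reference. */
    VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

    if (folio_test_clear_lru(folio)) {
        struct lruvec *lruvec;

        /* Take an extra reference on behalf of the isolator. */
        folio_get(folio);
        lruvec = folio_lruvec_lock_irq(folio);
        lruvec_del_folio(lruvec, folio);
        unlock_page_lruvec_irq(lruvec);
        ret = true;
    }

    return ret;
}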

0. Thread_bad gets the folio via find_get_entry and is preempted before
taking the refcnt (this could be the second-round scan of
truncate_inode_pages_range)
    refcnt == 1(page_cache), PG_lru == true, PG_lock == false
    find_get_entry
        folio = xas_find
        <preempted>
        folio_try_get_rcu
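
The window in question, roughly as in find_get_entry() in mm/filemap.c
around v6.8 (abridged paraphrase):

static void *find_get_entry(struct xa_state *xas, pgoff_t max,
                            xa_mark_t mark)
{
    struct folio *folio;

retry:
    folio = xas_find(xas, max);    /* or xas_find_marked() */
    if (xas_retry(xas, folio))
        goto retry;
    if (!folio || xa_is_value(folio))
        return folio;

    /* <-- preemption point in step 0: no refcnt held yet */
    if (!folio_try_get_rcu(folio))
        goto reset;

    /* Has the folio been moved or replaced meanwhile? */
    if (unlikely(folio != xas_reload(xas))) {
        folio_put(folio);
        goto reset;
    }
    return folio;
reset:
    xas_reset(xas);
    goto retry;
}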

1. Thread_filemap gets the folio via
filemap_map_pages->next_uptodate_folio->xas_next_entry and is preempted
    refcnt == 1(page_cache), PG_lru == true, PG_lock == false
    filemap_map_pages
        next_uptodate_folio
           xas_next_entry
           <preempted>
           folio_try_get_rcu
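
The corresponding window in next_uptodate_folio() in mm/filemap.c
around v6.8 (abridged paraphrase; the rechecks done after the refcnt is
taken are elided):

static struct folio *next_uptodate_folio(struct xa_state *xas,
        struct address_space *mapping, pgoff_t end_pgoff)
{
    struct folio *folio = xas_next_entry(xas, end_pgoff);

    do {
        if (!folio)
            return NULL;
        if (xas_retry(xas, folio) || xa_is_value(folio))
            continue;
        if (folio_test_locked(folio))
            continue;
        /* <-- preemption point in step 1: no refcnt held yet */
        if (!folio_try_get_rcu(folio))
            continue;
        /* rechecks of xas_reload(), uptodate, trylock and
           folio->mapping elided */
        return folio;
    } while ((folio = xas_next_entry(xas, end_pgoff)) != NULL);

    return NULL;
}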

2. Thread_truncate gets the folio via
truncate_inode_pages_range->find_lock_entries
    refcnt == 2(page_cache, fbatch_truncate), PG_lru == true, PG_lock == true

3. Thread_truncate proceeds to truncate_cleanup_folio
    refcnt == 2(page_cache, fbatch_truncate), PG_lru == true, PG_lock == true

4. Thread_truncate proceeds to delete_from_page_cache_batch
    refcnt == 1(fbatch_truncate), PG_lru == true, PG_lock == true

4.1 folio_unlock
    refcnt == 1(fbatch_truncate), PG_lru == true, PG_lock == false
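
Steps 2, 3, 4, 4.1 and 7 all happen inside the first-pass loop of
truncate_inode_pages_range(), roughly (paraphrased from mm/truncate.c
around v6.8):

folio_batch_init(&fbatch);
index = start;
while (index < end && find_lock_entries(mapping, &index, end - 1,
                &fbatch, indices)) {                    /* step 2 */
    truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
    for (i = 0; i < folio_batch_count(&fbatch); i++)
        truncate_cleanup_folio(fbatch.folios[i]);       /* step 3 */
    delete_from_page_cache_batch(mapping, &fbatch);     /* step 4 */
    for (i = 0; i < folio_batch_count(&fbatch); i++)
        folio_unlock(fbatch.folios[i]);                 /* step 4.1 */
    folio_batch_release(&fbatch);                       /* step 7 */
    cond_resched();
}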

5. Thread_filemap is scheduled back from '1' and proceeds to set up a
PTE, making folio->_mapcnt = 0 & folio->refcnt += 1
    refcnt == 1->2(+fbatch_filemap)->3->2(pte, fbatch_truncate),
PG_lru == true, PG_lock == true->false

6. Thread_madv clears the folio's PG_lru by
madvise_xxx_pte_range->folio_isolate_lru->folio_test_clear_lru
    refcnt == 2(pte,fbatch_truncate), PG_lru == false, PG_lock == false

7. Thread_truncate calls folio_batch_release and fails to free the
folio as its refcnt has not reached 0
    refcnt == 1(pte), PG_lru == false, PG_lock == false
******** the folio becomes an orphan here: it is no longer in the page
cache but is still mapped in the task's VM ********

8. Thread_bad is scheduled back from '0' and the folio is collected
into fbatch_bad
    refcnt == 2(pte, fbatch_bad), PG_lru == false, PG_lock == true

9. Thread_bad wrongly drops one refcnt when doing filemap_remove_folio,
taking this refcnt as the page cache one
    refcnt == 1(fbatch_bad), PG_lru == false, PG_lock == true->false
    truncate_inode_folio
        filemap_remove_folio
             filemap_free_folio
****** the refcnt is wrongly decreased here, being taken as the page
cache reference ******
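
filemap_free_folio() unconditionally drops the reference(s) it assumes
belong to the page cache, roughly (paraphrased from mm/filemap.c around
v6.8):

void filemap_free_folio(struct address_space *mapping,
                        struct folio *folio)
{
    void (*free_folio)(struct folio *);
    int refs = 1;

    free_folio = mapping->a_ops->free_folio;
    if (free_folio)
        free_folio(folio);

    if (folio_test_large(folio) && !folio_test_hugetlb(folio))
        refs = folio_nr_pages(folio);
    /* In this scenario the folio has already left the page cache, so
       this in effect drops the pte's reference from step 5. */
    folio_put_refs(folio, refs);
}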

10. Thread_bad calls release_pages(fbatch_bad) and the folio triggers
the bug.
    release_pages
        folio_put_testzero == true
        folio_test_lru == false
        list_add(folio->lru, pages_to_free)
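
That corresponds to this part of release_pages() (abridged paraphrase
of mm/swap.c around v6.8); since PG_lru was already cleared in step 6,
the lruvec removal is skipped and the folio goes straight to the free
list while the PTE from step 5 still maps it:

/* inside release_pages(), for each folio in the batch: */
if (!folio_put_testzero(folio))
    continue;    /* refcnt still > 0, nothing to free */

if (folio_test_lru(folio)) {
    /* not taken here: PG_lru was cleared by folio_isolate_lru() */
    lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
    lruvec_del_folio(lruvec, folio);
    __folio_clear_lru_flags(folio);
}

list_add(&folio->lru, &pages_to_free);
/* ...pages_to_free is later handed to free_unref_page_list() */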
