Message-ID: <CAGWkznHQLoU48Wx5kP64LN-ord6J2kvopBzpOLno4PDKTnQsiQ@mail.gmail.com>
Date: Mon, 18 Mar 2024 14:15:56 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: 黄朝阳 (Zhaoyang Huang) <zhaoyang.huang@...soc.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	康纪滨 (Steve Kang) <Steve.Kang@...soc.com>
Subject: Re: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

On Mon, Mar 18, 2024 at 11:28 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Mon, Mar 18, 2024 at 01:37:04AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > >On Sun, Mar 17, 2024 at 12:07:40PM +0800, Zhaoyang Huang wrote:
> > >> Could it be this scenario, where the folio is reached via the pte
> > >> (thread 0), a local fbatch (thread 1) and the page cache (thread 2)
> > >> concurrently, and the three proceed intermixed without any lock's
> > >> protection? Actually, IMO, thread 1 could also see the folio with
> > >> refcnt == 1, since it doesn't care whether the page is in the page
> > >> cache or not.
> > >>
> > >> madivise_cold_and_pageout does no explicit folio_get, since the
> > >> folio comes from the pte, which implies it has one refcnt from the
> > >> page cache
> > >
> > >Mmm, no.  It's implicit, but madvise_cold_or_pageout_pte_range()
> > >does guarantee that the folio has at least one refcount.
> > >
> > >Since we get the folio from vm_normal_folio(vma, addr, ptent), we know that
> > >there is at least one mapcount on the folio.  refcount is always >= mapcount.
> > >Since we hold pte_offset_map_lock(), we know that mapcount (and therefore
> > >refcount) cannot be decremented until we call pte_unmap_unlock(), which we
> > >don't do until we have called folio_isolate_lru().
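> > >
> > >Schematically, the ordering looks like this (a simplified sketch of
> > >madvise_cold_or_pageout_pte_range(), not the exact code):
> > >
> > >	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
> > >	for (; addr < end; pte++, addr += PAGE_SIZE) {
> > >		ptent = ptep_get(pte);
> > >		folio = vm_normal_folio(vma, addr, ptent);
> > >		if (!folio)
> > >			continue;
> > >		/* mapcount >= 1 here, so refcount >= 1 until the unlock */
> > >		if (folio_isolate_lru(folio))
> > >			list_add(&folio->lru, &folio_list);
> > >	}
> > >	pte_unmap_unlock(start_pte, ptl);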
> > >
> > >Good try though, took me a few minutes of looking at it to convince myself that
> > >it was safe.
> > >
> > >Something to bear in mind is that if the race you outline is real, failing to hold a
> > >refcount on the folio leaves the caller susceptible to the
> > >VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio); if the other thread calls
> > >folio_put().
> > Resending the chart via Outlook.
> > I think the problem relies on a special timing which is rare; I will list the steps below in time sequence.
> >
> > 1. thread 0 calls folio_isolate_lru with refcnt == 1
>
> (i assume you mean refcnt == 2 here, otherwise none of this makes sense)
>
> > 2. thread 1 calls release_pages with refcnt == 2. (IMO, it could be 1, as release_pages doesn't care whether the folio is used by the page cache or fs)
> > 3. thread 2 decreases refcnt to 1 by calling filemap_free_folio. (As I mentioned in 2, thread 2 is not mandatory here)
> > 4. thread 1 calls folio_put_testzero and passes. (lruvec->lock has not been taken here)
>
> But there's already a bug here.
>
> Rearrange the order of this:
>
> 2. thread 1 calls release_pages with refcount == 2 (decreasing refcount to 1)
> 3. thread 2 decreases refcount to 0 by calling filemap_free_folio
> 1. thread 0 calls folio_isolate_lru() and hits the BUG().
>
> > 5. thread 0 clears the folio's PG_lru by calling folio_test_clear_lru. The folio_get that follows has no meaning there.
> > 6. thread 1 fails folio_test_lru and leaves the folio on the LRU.
> > 7. thread 1 wrongly adds the folio to pages_to_free, which could break the LRU's list and make the next folio experience list_del_invalid
> >
> > #thread 0(madivise_cold_and_pageout)        #1(lru_add_drain->fbatch_release_pages)       #2(read_pages->filemap_remove_folios)
> > refcnt == 1(represent page cache)             refcnt==2(another one represent LRU)          folio comes from page cache
>
> This is still illegible.  Try it this way:
>
> Thread 0        Thread 1        Thread 2
> madvise_cold_or_pageout_pte_range
>                 lru_add_drain
>                 fbatch_release_pages
>                                 read_pages
>                                 filemap_remove_folio
Thread 0        Thread 1        Thread 2
madvise_cold_or_pageout_pte_range
                truncate_inode_pages_range
                fbatch_release_pages
                                truncate_inode_pages_range
                                filemap_remove_folio
Sorry for the confusion. I have rearranged the timing chart above
according to the real panic's stack trace. Threads 1 and 2 both come
from truncate_inode_pages_range (I think thread 2 (read_pages) is not
mandatory here, as threads 0 and 1 could rely on the same refcnt == 1).
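
To make the window concrete, here is a simplified sketch of the two
racing paths (abridged from folio_isolate_lru() and release_pages(),
not the exact code; the step numbers refer to my earlier list):

	/* thread 0: folio_isolate_lru(), simplified */
	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
	if (folio_test_clear_lru(folio)) {	/* clears PG_lru: step 5 */
		folio_get(folio);
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);
		unlock_page_lruvec_irq(lruvec);
	}

	/* thread 1: release_pages() loop body, simplified */
	if (!folio_put_testzero(folio))		/* passes: step 4 */
		continue;
	if (folio_test_lru(folio)) {		/* fails: step 6 */
		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
		lruvec_del_folio(lruvec, folio);
		__folio_clear_lru_flags(folio);
	}
	list_add(&folio->lru, &pages_to_free);	/* step 7 */

Nothing serializes thread 1's folio_put_testzero()/folio_test_lru()
pair against thread 0's folio_test_clear_lru(); the lruvec lock is
only taken after each check.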
>
> Some accuracy in your report would also be appreciated.  There's no
> function called madivise_cold_and_pageout, nor is there a function called
> filemap_remove_folios().  It's a little detail, but it's annoying for
> me to try to find which function you're actually referring to.  I have
> to guess, and it puts me in a bad mood.
>
> At any rate, these three functions cannot do what you're proposing.
> In read_pages(), when we call filemap_remove_folio(), the folio in
> question will not have the uptodate flag set, so can never have been
> put in the page tables, so cannot be found by madvise().
>
> Also, as I said in my earlier email, madvise_cold_or_pageout_pte_range()
> does guarantee that the refcount on the folio is held and can never
> decrease to zero while folio_isolate_lru() is running.  So that's two
> ways this scenario cannot happen.
The madvise_xxx scenario came from my presumption, for which I have no
proof. Whereas truncate_inode_pages_range looks like it cares only
about the page cache refcnt via folio_put_testzero, without noticing
any task's VM state. Furthermore, I notice that move_folios_to_lru is
safe, as it runs while holding lruvec->lock.
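
For comparison, a simplified sketch of move_folios_to_lru() (abridged;
the caller holds lruvec->lru_lock around the whole loop):

	while (!list_empty(list)) {
		struct folio *folio = lru_to_folio(list);

		list_del(&folio->lru);
		folio_set_lru(folio);	/* PG_lru is set under the lock */

		if (unlikely(folio_put_testzero(folio))) {
			/* the drop to zero is also under the lock */
			__folio_clear_lru_flags(folio);
			continue;	/* freeing path elided */
		}
		lruvec_add_folio(lruvec, folio);
	}

So the PG_lru flag and the refcount change together under the lock
there, unlike the release_pages path above.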
>
