Date: Sun, 17 Mar 2024 12:07:40 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, steve.kang@...soc.com
Subject: Re: [PATCH] mm: fix a race scenario in folio_isolate_lru

On Sat, Mar 16, 2024 at 10:59 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Sat, Mar 16, 2024 at 04:53:09PM +0800, Zhaoyang Huang wrote:
> > On Fri, Mar 15, 2024 at 8:46 PM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > On Thu, Mar 14, 2024 at 04:39:21PM +0800, zhaoyang.huang wrote:
> > > > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > > >
> > > > The panic in [1] is caused by breakage of lruvec->list. Fix the race
> > > > between folio_isolate_lru and release_pages.
> > > >
> > > > race condition:
> > > > release_pages could meet a non-referenced folio which escaped being
> > > > deleted from the LRU but was added to another list_head
> > >
> > > I don't think the bug is in folio_isolate_lru() but rather in its
> > > caller.
> > >
> > >  * Context:
> > >  *
> > >  * (1) Must be called with an elevated refcount on the folio. This is a
> > >  *     fundamental difference from isolate_lru_folios() (which is called
> > >  *     without a stable reference).
> > >
> > > So when release_pages() runs, it must not see a refcount decremented to
> > > zero, because the caller of folio_isolate_lru() is supposed to hold one.
> > >
> > > Your stack trace is for the thread which is calling release_pages(), not
> > > the one calling folio_isolate_lru(), so I can't help you debug further.
> > Thanks for the comments.  According to my understanding,
> > folio_put_testzero decrements before it tests, which makes it possible
> > for release_pages to see the refcount hit zero and proceed further
> > (the folio_get in folio_isolate_lru has not run yet).
>
> No, that's not possible.
>
> In the scenario below, at entry to folio_isolate_lru(), the folio has
> refcount 2.  It has one refcount from thread 0 (because it must own one
> before calling folio_isolate_lru()) and it has one refcount from thread 1
> (because it's about to call release_pages()).  If release_pages() were
> not running, the folio would have refcount 3 when folio_isolate_lru()
> returned.
Could it be the following scenario, where the folio is reached from the
pte (thread 0), a local fbatch (thread 1) and the page cache (thread 2)
concurrently, and the three paths proceed interleaved without any lock
protection? Actually, IMO, thread 1 could also see the folio with
refcnt==1 since it doesn't care whether the page is in the page cache or
not.

madvise_cold_and_pageout takes no explicit folio_get since the folio
comes from the pte, which implies it already holds one refcount from the
page cache.
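
To make this concrete, the pageout path has roughly the following shape
(a heavily simplified sketch of madvise_cold_or_pageout_pte_range() as I
read it, not the verbatim code; error handling and the large-folio cases
are omitted):

	/* inside the pte walk of madvise_cold_or_pageout_pte_range() */
	folio = vm_normal_folio(vma, addr, ptent); /* taken straight from the pte */
	if (!folio)
		continue;
	/*
	 * note: no folio_get() here, we rely on the reference the folio
	 * already has (e.g. the page cache's)
	 */
	if (pageout && folio_isolate_lru(folio))
		list_add(&folio->lru, &folio_list); /* later handed to reclaim_pages() */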

#thread 0: madvise_cold_and_pageout
#thread 1: lru_add_drain->fbatch_release_pages
#thread 2: read_pages->filemap_remove_folios

initial state: refcnt == 1 (representing the page cache)
               refcnt == 2 (the other one representing the LRU)

thread 2: folio looked up in the page cache
thread 0: folio_isolate_lru
thread 1: release_pages
thread 2: filemap_free_folio
          -> refcnt == 1 (the page cache reference is dropped)
thread 1: folio_put_testzero == true
          <No lruvec_del_folio>
          list_add(folio->lru, pages_to_free)
          // the current folio will break the LRU's integrity since it
          // has not been deleted from the LRU list
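
For reference, the two code paths I am talking about have roughly the
following shape (a simplified sketch of the release_pages() per-folio
loop and of folio_isolate_lru(), not the exact kernel code; locking
details are trimmed):

	/* release_pages(), per-folio loop, simplified */
	if (!folio_put_testzero(folio))         /* decrement first, then test */
		continue;
	if (folio_test_lru(folio)) {            /* false once PG_lru is cleared... */
		lruvec_del_folio(lruvec, folio);    /* ...so this unlink is skipped */
		__folio_clear_lru_flags(folio);
	}
	list_add(&folio->lru, &pages_to_free);  /* reuses folio->lru even though the
	                                           folio may still be linked on the LRU */

	/* folio_isolate_lru(), simplified */
	if (folio_test_clear_lru(folio)) {      /* clears PG_lru */
		folio_get(folio);
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);    /* unlinks the folio from the LRU list */
		unlock_page_lruvec_irq(lruvec);
	}
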
>
> >    #0 folio_isolate_lru          #1 release_pages
> > BUG_ON(!folio_refcnt)
> >                                  if (folio_put_testzero())
> >    folio_get(folio)
> >    if (folio_test_clear_lru())
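
For completeness, the caller contract Matthew quotes above would look
something like this in a caller (an illustrative sketch only; the
"isolated" list and the surrounding code are made up, and how the caller
safely obtains its reference (pte lock, rcu, etc.) depends on the call
site):

	if (folio_try_get(folio)) {             /* the caller's own reference */
		if (folio_isolate_lru(folio)) {
			/*
			 * The folio is now off the LRU and pinned by the extra
			 * reference folio_isolate_lru() took internally.
			 */
			list_add(&folio->lru, &isolated);
		}
		folio_put(folio);               /* drop the caller's reference */
	}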
