Message-ID: <YN1ZjHx74KUzA4Rs@kroah.com>
Date:   Thu, 1 Jul 2021 07:58:36 +0200
From:   "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
To:     권오훈 <ohoono.kwon@...sung.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
        "ohkwon1043@...il.com" <ohkwon1043@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: cleancache: fix potential race in cleancache apis

On Thu, Jul 01, 2021 at 02:06:44PM +0900, 권오훈 wrote:
> On Thu, Jul 1, 2021 at 02:06:45PM +0900, 권오훈 wrote:
> > On Wed, Jun 30, 2021 at 12:26:57PM +0100, Matthew Wilcox wrote:
> > > On Wed, Jun 30, 2021 at 10:13:28AM +0200, gregkh@...uxfoundation.org wrote:
> > > > On Wed, Jun 30, 2021 at 04:33:10PM +0900, 권오훈 wrote:
> > > > > The current cleancache API implementation has a potential race, as
> > > > > follows, which might lead to corruption in filesystems using cleancache.
> > > > > 
> > > > > thread 0                thread 1                        thread 2
> > > > > 
> > > > >                         in put_page
> > > > >                         get pool_id K for fs1
> > > > > invalidate_fs on fs1
> > > > > frees pool_id K
> > > > >                                                         init_fs for fs2
> > > > >                                                         allocates pool_id K
> > > > >                         put_page puts page
> > > > >                         which belongs to fs1
> > > > >                         into cleancache pool for fs2
> > > > > 
> > > > > At this point, a file cache which originally belongs to fs1 might be
> > > > > copied back to cleancache pool of fs2, which might be later used as if
> > > > > it were normal cleancache of fs2, and could eventually corrupt fs2 when
> > > > > flushed back.
> > > > > 
> > > > > Add an rwlock to synchronize invalidate_fs with the other cleancache
> > > > > operations.
> > > > > 
> > > > > In normal situations, where filesystems are not frequently mounted or
> > > > > unmounted, there will be little performance impact, since the common
> > > > > operations only take read_lock/read_unlock.
> > > > > 
> > > > > Signed-off-by: Ohhoon Kwon <ohoono.kwon@...sung.com>
> > > > 
> > > > What commit does this fix?  Should it go to stable kernels?
> > > 
> > > I have a commit I haven't submitted yet with this changelog:
> > > 
> > >     Remove cleancache
> > > 
> > >     The last cleancache backend was deleted in v5.3 ("xen: remove tmem
> > >     driver"), so it has been unused since.  Remove all its filesystem hooks.
> > > 
> > >     Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> >  
> > That's even better!
> >  
> > But if so, how is the above reported problem even a problem if no one is
> > using cleancache?
> >  
> > thanks,
> >  
> > greg k-h
> > 
> Dear all.
> 
> We are using the cleancache APIs for a proprietary feature at Samsung.
> As Wilcox mentioned, however, there is no cleancache backend in the current
> mainline kernel.
> So if the race-fix patch is accepted, then it seems unnecessary to backport
> it to previous stable kernels.
> 
> Meanwhile, I personally think the cleancache API still has the potential to
> be useful when combined with emerging technologies such as pmem or NVMe.
> 
> So I suggest postponing the removal of cleancache for a while.

If there are no in-kernel users, it needs to be removed.  If you rely on
this, wonderful, please submit your code as soon as possible.

thanks,

greg k-h
