Message-ID: <20210701085650epcms1p381d6d9c0052408c2ba011777fe3e74ba@epcms1p3>
Date: Thu, 01 Jul 2021 17:56:50 +0900
From: 권오훈 <ohoono.kwon@...sung.com>
To: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
권오훈 <ohoono.kwon@...sung.com>
CC: Matthew Wilcox <willy@...radead.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"ohkwon1043@...il.com" <ohkwon1043@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] mm: cleancache: fix potential race in cleancache apis
On Thu, Jul 01, 2021 at 07:58:36AM +0200, gregkh@...uxfoundation.org wrote:
> On Thu, Jul 01, 2021 at 02:06:44PM +0900, 권오훈 wrote:
> > On Wed, Jun 30, 2021 at 02:29:23PM +0200, gregkh@...uxfoundation.org wrote:
> > > On Wed, Jun 30, 2021 at 12:26:57PM +0100, Matthew Wilcox wrote:
> > > > On Wed, Jun 30, 2021 at 10:13:28AM +0200, gregkh@...uxfoundation.org wrote:
> > > > > On Wed, Jun 30, 2021 at 04:33:10PM +0900, 권오훈 wrote:
> > > > > > The current cleancache API implementation has a potential race, shown
> > > > > > below, which might lead to corruption in filesystems using cleancache.
> > > > > >
> > > > > > thread 0                thread 1                      thread 2
> > > > > >
> > > > > >                         in put_page
> > > > > >                         get pool_id K for fs1
> > > > > > invalidate_fs on fs1
> > > > > > frees pool_id K
> > > > > >                                                       init_fs for fs2
> > > > > >                                                       allocates pool_id K
> > > > > >                         put_page puts page
> > > > > >                         which belongs to fs1
> > > > > >                         into cleancache pool for fs2
> > > > > >
> > > > > > At this point, a page cache page that originally belonged to fs1 might
> > > > > > be copied into the cleancache pool of fs2, later used as if it were a
> > > > > > normal cleancache page of fs2, and could eventually corrupt fs2 when
> > > > > > flushed back.
> > > > > >
> > > > > > Add an rwlock in order to synchronize invalidate_fs with the other
> > > > > > cleancache operations.
> > > > > >
> > > > > > In normal situations, where filesystems are not frequently mounted or
> > > > > > unmounted, there will be little performance impact, since the
> > > > > > read_lock/read_unlock APIs are used.
> > > > > >
> > > > > > Signed-off-by: Ohhoon Kwon <ohoono.kwon@...sung.com>
> > > > >
> > > > > What commit does this fix? Should it go to stable kernels?
> > > >
> > > > I have a commit I haven't submitted yet with this changelog:
> > > >
> > > > Remove cleancache
> > > >
> > > > The last cleancache backend was deleted in v5.3 ("xen: remove tmem
> > > > driver"), so it has been unused since. Remove all its filesystem hooks.
> > > >
> > > > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > >
> > > That's even better!
> > >
> > > But if so, how is the above reported problem even a problem if no one is
> > > using cleancache?
> > >
> > > thanks,
> > >
> > > greg k-h
> > >
> > Dear all.
> >
> > We are using Cleancache APIs for our proprietary feature in Samsung.
> > As Matthew Wilcox mentioned, however, there is no cleancache backend in the
> > current mainline kernel.
> > So if the race-fix patch is accepted, it seems unnecessary to backport it
> > to previous stable kernels.
> >
> > Meanwhile, I personally think the cleancache API still has the potential to
> > be useful when combined with emerging technologies such as pmem or NVMe.
> >
> > So I suggest postponing the removal of cleancache for a while.
>
> If there are no in-kernel users, it needs to be removed. If you rely on
> this, wonderful, please submit your code as soon as possible.
>
> thanks,
>
> greg k-h
>
We will discuss this internally and see if we can submit our feature.
Thanks,
Ohhoon Kwon