Message-ID: <20220923021012.GZ3600936@dread.disaster.area>
Date: Fri, 23 Sep 2022 12:10:12 +1000
From: Dave Chinner <david@...morbit.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Jason Gunthorpe <jgg@...dia.com>, akpm@...ux-foundation.org,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
"Darrick J. Wong" <djwong@...nel.org>,
Christoph Hellwig <hch@....de>,
John Hubbard <jhubbard@...dia.com>,
linux-fsdevel@...r.kernel.org, nvdimm@...ts.linux.dev,
linux-xfs@...r.kernel.org, linux-mm@...ck.org,
linux-ext4@...r.kernel.org
Subject: Re: [PATCH v2 05/18] xfs: Add xfs_break_layouts() to the inode
eviction path

On Thu, Sep 22, 2022 at 05:41:08PM -0700, Dan Williams wrote:
> Dave Chinner wrote:
> > On Wed, Sep 21, 2022 at 07:28:51PM -0300, Jason Gunthorpe wrote:
> > > On Thu, Sep 22, 2022 at 08:14:16AM +1000, Dave Chinner wrote:
> > >
> > > > Where are these DAX page pins that don't require the pin holder to
> > > > also hold active references to the filesystem objects coming from?
> > >
> > > O_DIRECT and things like it.
> >
> > O_DIRECT IO to a file holds a reference to a struct file which holds
> > an active reference to the struct inode. Hence you can't reclaim an
> > inode while an O_DIRECT IO is in progress to it.
> >
> > Similarly, file-backed pages pinned from user vmas have the inode
> > pinned by the VMA having a reference to the struct file passed to
> > them when they are instantiated. Hence anything using mmap() to pin
> > file-backed pages (i.e. applications using FSDAX access from
> > userspace) should also have a reference to the inode that prevents
> > the inode from being reclaimed.
> >
> > So I'm at a loss to understand what "things like it" might actually
> > mean. Can you actually describe a situation where we actually permit
> > (even temporarily) these use-after-free scenarios?
>
> Jason mentioned a scenario here:
>
> https://lore.kernel.org/all/YyuoE8BgImRXVkkO@nvidia.com/
>
> Multi-threaded process where thread1 does open(O_DIRECT)+mmap()+read()
> and thread2 does munmap()+close() while the read() is inflight.
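
In concrete terms, the reported pattern is something like the
following. This is a hypothetical, untested sketch: the paths, the
size and the usleep() timing hack are made up, and all error
handling is elided:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN	(1 << 20)

static int fd, dfd;
static void *buf;

static void *teardown(void *arg)
{
	usleep(100);		/* "while the read() is inflight" */
	munmap(buf, LEN);	/* VMA and its struct file ref go away */
	close(fd);		/* drop the last fd-based inode reference */
	return NULL;
}

int main(void)
{
	pthread_t t;

	/* thread1: map a DAX file, start a direct read into the mapping */
	fd = open("/mnt/dax/foo", O_RDWR);
	buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	dfd = open("/mnt/dax/src", O_RDONLY | O_DIRECT);

	pthread_create(&t, NULL, teardown, NULL);
	read(dfd, buf, LEN);	/* GUP pins the DAX pages backing buf */
	pthread_join(t, NULL);
	return 0;
}
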
And, ah, what production application does this and expects to be
able to process the result of the read() operation without getting a
SEGV?

There's a huge difference between an unlikely scenario that we need
to make work correctly (such as O_DIRECT IO to/from a mmap() buffer
at a different offset on the same file; see the sketch below) and
this sort of scenario where, even if we handle it correctly, the
application can't do anything with the result and will crash
immediately....
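
For reference, that legitimate variant looks something like this
hypothetical fragment (path made up, error handling elided, O_DIRECT
needs _GNU_SOURCE on Linux):

	size_t len = 1 << 20;
	int fd = open("/mnt/dax/foo", O_RDWR | O_DIRECT);
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);

	/* read bytes [len, 2 * len) into the mapping of [0, len) */
	pread(fd, buf, len, len);	/* source offset != mapped offset */
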
> Sounds plausible to me, but I have not tried to trigger it with a
> focused test.

If there really are applications this .... broken, then it's not the
responsibility of the filesystem to paper over the low-level page
reference tracking issues that cause it.

i.e. the underlying problem here is that munmap() frees the VMA
while there are still active task-based references to the pages in
that VMA. IOWs, the VMA should not be torn down until the O_DIRECT
read has released all the references to the pages mapped into the
task address space.
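
The references in question are the GUP pins the direct IO path takes
on the user pages. Roughly, as a simplified sketch of the pin
lifecycle rather than the exact call chain:

	/* IO submission: take task-based references on the pages */
	pin_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);

	/*
	 * ... the bio is built and submitted; these pins, not the
	 * VMA, are what keep the pages alive while IO is in flight.
	 */

	/* IO completion: drop the references */
	unpin_user_pages(pages, nr_pages);

It is the release of those pins that VMA teardown would need to wait
for.
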
This just doesn't seem like an issue that we should be trying to fix
by adding band-aids to the inode life-cycle management.

-Dave.
--
Dave Chinner
david@...morbit.com