Message-ID: <ZqsN5ouQTEc1KAzV@casper.infradead.org>
Date: Thu, 1 Aug 2024 05:24:06 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Zhang Yi <yi.zhang@...weicloud.com>
Cc: linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, djwong@...nel.org, hch@...radead.org,
brauner@...nel.org, david@...morbit.com, jack@...e.cz,
yi.zhang@...wei.com, chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [PATCH 5/6] iomap: drop unnecessary state_lock when setting ifs uptodate bits
On Thu, Aug 01, 2024 at 09:52:49AM +0800, Zhang Yi wrote:
> On 2024/8/1 0:52, Matthew Wilcox wrote:
> > On Wed, Jul 31, 2024 at 05:13:04PM +0800, Zhang Yi wrote:
> >> Commit 1cea335d1db1 ("iomap: fix sub-page uptodate handling") fixed a
> >> race when submitting multiple read bios for a page that spans more
> >> than one file system block, by adding a spinlock (now called
> >> state_lock) to keep the page's uptodate state consistent. However, the
> >> race can only happen between the read I/O submission and completion
> >> threads; the page lock is sufficient to protect the other paths,
> >> e.g. the buffered write path. Now that large folios are supported,
> >> the spinlock could affect buffered write performance more, so dropping
> >> it could reduce some unnecessary locking overhead.
> >
> > This patch doesn't work. If we get two read completions at the same
> > time for blocks belonging to the same folio, they will both write to
> > the uptodate array at the same time.
> >
> This patch just drops the state_lock in the buffered write path; it
> doesn't affect the read path. The uptodate setting in the read
> completion path is still protected by the state_lock, please see
> iomap_finish_folio_read(). So I think this patch doesn't affect the
> case you mentioned, or am I missing something?
Oh, I see. So the argument for locking correctness (sketched below) is that:
A. If ifs_set_range_uptodate() is called from iomap_finish_folio_read(),
the state_lock is held.
B. If ifs_set_range_uptodate() is called from iomap_set_range_uptodate(),
either we know:
B1. The caller of iomap_set_range_uptodate() holds the folio lock, and this
is the only place that can call ifs_set_range_uptodate() for this folio
B2. The caller of iomap_set_range_uptodate() holds the state lock
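Roughly, as a call-site sketch (paraphrasing my understanding of the
code after this patch, not quoting it exactly):

	/* Case A: read completion takes the state_lock explicitly */
	iomap_finish_folio_read()
		spin_lock_irqsave(&ifs->state_lock, flags);
		ifs_set_range_uptodate(folio, ifs, off, len);
		spin_unlock_irqrestore(&ifs->state_lock, flags);

	/* Case B: iomap_set_range_uptodate() no longer takes the
	 * state_lock itself, so it relies on its caller holding
	 * either the folio lock (B1) or the state_lock (B2). */
	iomap_set_range_uptodate()
		ifs_set_range_uptodate(folio, ifs, off, len);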
But I think you've assigned iomap_read_inline_data() to case B1 when I
think it's B2. erofs can certainly have a file which consists of various
blocks elsewhere in the file and then a tail that is stored inline.
__iomap_write_begin() is case B1 because it holds the folio lock, and
submits its read(s) synchronously.  Likewise __iomap_write_end() is
case B1.
But, um. Why do we need to call iomap_set_range_uptodate() in both
write_begin() and write_end()?
And I think this is actively buggy:
	if (iomap_block_needs_zeroing(iter, block_start)) {
		if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
			return -EIO;
		folio_zero_segments(folio, poff, from, to, poff + plen);
	...
	iomap_set_range_uptodate(folio, poff, plen);
because we zero from 'poff' to 'from', then from 'to' to 'poff+plen',
but mark the entire range as uptodate. And once a range is marked
as uptodate, it can be read from.
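That is, schematically:

	poff           from                   to          poff+plen
	  |--- zeroed ---|---- NOT zeroed ----|--- zeroed ---|
	  [================ marked uptodate =================]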
So we can do this:
- Get a write request for bytes 1-4094 over a hole
- allocate single page folio
- zero bytes 0 and 4095
- mark 0-4095 as uptodate
- take page fault while trying to access the user address
- read() of bytes 0-4095 now succeeds even though we haven't written
  bytes 1-4094 yet
And that page fault can be a uffd fault, or the user buffer can be in
an mmap that's been paged out to disc. Plenty of time to make this
race happen, and we leak 4094 of that folio's 4096 bytes of previous
contents to userspace.
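To spell out the interleaving (a hypothetical timeline; the copy-in
step is illustrative, not a specific function):

	writer                                  concurrent reader
	------                                  -----------------
	__iomap_write_begin()
	  folio_zero_segments()    /* zeroes bytes 0 and 4095 */
	  iomap_set_range_uptodate()   /* whole folio uptodate */
	copy from userspace faults
	  (uffd, or buffer paged out)
	                                        read() sees the folio
	                                        uptodate, copies bytes
	                                        1-4094 of stale data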
Or did I miss something?