Date: Wed, 5 Jun 2024 09:38:18 +1000
From: Dave Chinner <david@...morbit.com>
To: Brian Foster <bfoster@...hat.com>
Cc: Zhang Yi <yi.zhang@...weicloud.com>, linux-xfs@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	djwong@...nel.org, hch@...radead.org, brauner@...nel.org,
	chandanbabu@...nel.org, jack@...e.cz, willy@...radead.org,
	yi.zhang@...wei.com, chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [RFC PATCH v4 1/8] iomap: zeroing needs to be pagecache aware

On Mon, Jun 03, 2024 at 10:37:48AM -0400, Brian Foster wrote:
> On Mon, Jun 03, 2024 at 05:07:02PM +0800, Zhang Yi wrote:
> > On 2024/6/2 19:04, Brian Foster wrote:
> > > On Wed, May 29, 2024 at 05:51:59PM +0800, Zhang Yi wrote:
> > >> From: Dave Chinner <dchinner@...hat.com>
> > >>
> > >> Unwritten extents can have page cache data over the range being
> > >> zeroed so we can't just skip them entirely. Fix this by checking for
> > >> an existing dirty folio over the unwritten range we are zeroing
> > >> and only performing zeroing if the folio is already dirty.
> > >>
> > >> XXX: how do we detect an iomap containing a cow mapping over a hole
> > >> in iomap_zero_iter()? The XFS code implies this case also needs to
> > >> zero the page cache if there is data present, so the page cache
> > >> lookup trigger in iomap_zero_iter() needs to handle this case as
> > >> well.
> > >>
> > >> Before:
> > >>
> > >> $ time sudo ./pwrite-trunc /mnt/scratch/foo 50000
> > >> path /mnt/scratch/foo, 50000 iters
> > >>
> > >> real    0m14.103s
> > >> user    0m0.015s
> > >> sys     0m0.020s
> > >>
> > >> $ sudo strace -c ./pwrite-trunc /mnt/scratch/foo 50000
> > >> path /mnt/scratch/foo, 50000 iters
> > >> % time     seconds  usecs/call     calls    errors syscall
> > >> ------ ----------- ----------- --------- --------- ----------------
> > >>  85.90    0.847616          16     50000           ftruncate
> > >>  14.01    0.138229           2     50000           pwrite64
> > >> ....
> > >>
> > >> After:
> > >>
> > >> $ time sudo ./pwrite-trunc /mnt/scratch/foo 50000
> > >> path /mnt/scratch/foo, 50000 iters
> > >>
> > >> real    0m0.144s
> > >> user    0m0.021s
> > >> sys     0m0.012s
> > >>
> > >> $ sudo strace -c ./pwrite-trunc /mnt/scratch/foo 50000
> > >> path /mnt/scratch/foo, 50000 iters
> > >> % time     seconds  usecs/call     calls    errors syscall
> > >> ------ ----------- ----------- --------- --------- ----------------
> > >>  53.86    0.505964          10     50000           ftruncate
> > >>  46.12    0.433251           8     50000           pwrite64
> > >> ....
> > >>
> > >> Yup, we get back all the performance.
> > >>
> > >> As for the "mmap write beyond EOF" data exposure aspect
> > >> documented here:
> > >>
> > >> https://lore.kernel.org/linux-xfs/20221104182358.2007475-1-bfoster@redhat.com/
> > >>
> > >> With this command:
> > >>
> > >> $ sudo xfs_io -tfc "falloc 0 1k" -c "pwrite 0 1k" \
> > >>   -c "mmap 0 4k" -c "mwrite 3k 1k" -c "pwrite 32k 4k" \
> > >>   -c fsync -c "pread -v 3k 32" /mnt/scratch/foo
> > >>
> > >> Before:
> > >>
> > >> wrote 1024/1024 bytes at offset 0
> > >> 1 KiB, 1 ops; 0.0000 sec (34.877 MiB/sec and 35714.2857 ops/sec)
> > >> wrote 4096/4096 bytes at offset 32768
> > >> 4 KiB, 1 ops; 0.0000 sec (229.779 MiB/sec and 58823.5294 ops/sec)
> > >> 00000c00:  58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
> > >> XXXXXXXXXXXXXXXX
> > >> 00000c10:  58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
> > >> XXXXXXXXXXXXXXXX
> > >> read 32/32 bytes at offset 3072
> > >> 32.000000 bytes, 1 ops; 0.0000 sec (568.182 KiB/sec and 18181.8182
> > >>    ops/sec)
> > >>
> > >> After:
> > >>
> > >> wrote 1024/1024 bytes at offset 0
> > >> 1 KiB, 1 ops; 0.0000 sec (40.690 MiB/sec and 41666.6667 ops/sec)
> > >> wrote 4096/4096 bytes at offset 32768
> > >> 4 KiB, 1 ops; 0.0000 sec (150.240 MiB/sec and 38461.5385 ops/sec)
> > >> 00000c00:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > >> ................
> > >> 00000c10:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > >> ................
> > >> read 32/32 bytes at offset 3072
> > >> 32.000000 bytes, 1 ops; 0.0000 sec (558.036 KiB/sec and 17857.1429
> > >>    ops/sec)
> > >>
> > >> We see that this post-eof unwritten extent dirty page zeroing is
> > >> working correctly.
> > >>
> > > 
> > > I've pointed this out in the past, but IIRC this implementation is racy
> > > vs. reclaim. Specifically, relying on the folio lookup that follows the
> > > mapping lookup doesn't take reclaim into account, so if we look up an
> > > unwritten mapping and a folio in that range is flushed and reclaimed by
> > > the time the scan reaches its offset, the code incorrectly treats that
> > > subrange as already zero when it actually isn't (because the extent is
> > > actually stale by that point, but the stale extent check is skipped).
> > > 
> > 
> > Hello, Brian!
> > 
> > I'm confused, how could that happen? We do the stale check under the
> > folio lock; if the folio is flushed and reclaimed before we get and lock
> > that folio in iomap_zero_iter()->iomap_write_begin(), ->iomap_valid()
> > would catch the stale mapping and we would zero again in the next
> > iteration. Am I missing something?
> > 
> 
> Hi Yi,
> 
> Yep, that is my understanding of how the revalidation thing works in
> general as well. The nuance in this particular case is that no folio
> exists at the associated offset. Therefore, the reval is skipped in
> iomap_write_begin(), iomap_zero_iter() skips over the range as well, and
> the operation carries on as normal.
> 
> Have you tried the test sequence above? I just retried on latest master
> plus this series and it still trips for me. I haven't redone the
> low-level analysis, but in general what this is trying to show is
> something like the following...
> 
> Suppose we start with an unwritten block on disk with a dirty folio in
> cache:
> 
> - iomap looks up the extent and finds the unwritten mapping.
> - Reclaim kicks in and writes back the page and removes it from cache.

To be pedantic, reclaim doesn't write folios back - we haven't done
that for the best part of a decade on XFS. We don't even have a
->writepage method for reclaim to write back pages anymore.

Hence writeback has to come from the background flusher threads
hitting that specific folio, then IO completion running and converting
the unwritten extent, then reclaim hitting that folio, all while the
current iomap is being walked and zeroed.

That makes it an extremely rare and difficult condition to hit. Yes,
it's possible, but it's also something we can easily detect. So as
long as detection is low cost, the cost of resolution when such a
rare event is detected isn't going to be noticed by anyone.

>   The underlying block is no longer unwritten (current mapping is now
>   stale).
> - iomap_zero_iter() processes the associated offset. iomap_get_folio()
>   clears FGP_CREAT, no folio is found.

Actually, this is really easy to fix - we simply revalidate the
mapping at this point rather than just skipping the folio range. If
the mapping has changed because it's now written, the zeroing code
backs out and gets a new mapping that reflects the changed state of
this range.
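
Roughly, as an untested sketch (the helper is hypothetical; only the
->iomap_valid()/IOMAP_F_STALE plumbing mirrors what iomap_write_begin()
already does for the folio-present case):

/*
 * Hypothetical helper, called from iomap_zero_iter() when no folio is
 * found over an unwritten range. Rather than assuming "no folio means
 * no dirty data", revalidate the cached mapping; if it has gone stale
 * (written back, converted and reclaimed behind us), mark it so the
 * iterator backs out and refetches a mapping reflecting the new state
 * of this range.
 */
static bool iomap_zero_range_revalidate(struct iomap_iter *iter)
{
	const struct iomap_folio_ops *folio_ops = iter->iomap.folio_ops;

	if (folio_ops && folio_ops->iomap_valid &&
	    !folio_ops->iomap_valid(iter->inode, &iter->iomap)) {
		iter->iomap.flags |= IOMAP_F_STALE;
		return false;	/* stop zeroing this iomap */
	}
	return true;		/* still valid, safe to skip the range */
}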

However, with the above cost analysis in mind, a lower-overhead
alternative for the common case might be to revalidate the mapping
only at ->iomap_end() time. If the mapping has changed while zeroing,
we return -EBUSY/-ESTALE and that triggers the zeroing to restart from
the offset at the beginning of the "stale" iomap.  The runtime cost is
one extra mapping revalidation call per mapping, and the resolution
cost is refetching and zeroing the range of a single unwritten iomap.
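
Again only a sketch of the shape of that: example_iomap_valid() stands
in for the filesystem's existing cookie/sequence comparison, and the
zeroing iterator would still need to turn the error return into a
restart of this iomap.

/* Stand-in for the filesystem's existing validity cookie comparison. */
static bool example_iomap_valid(struct inode *inode,
		const struct iomap *iomap);

static int example_zero_iomap_end(struct inode *inode, loff_t pos,
		loff_t length, ssize_t written, unsigned flags,
		struct iomap *iomap)
{
	/* Only the zeroing path needs the extra revalidation. */
	if (!(flags & IOMAP_ZERO))
		return 0;

	/*
	 * The mapping changed underneath the zeroing walk (e.g. the
	 * unwritten extent was converted), so tell the caller to
	 * refetch it and redo the zeroing over this range.
	 */
	if (!example_iomap_valid(inode, iomap))
		return -ESTALE;
	return 0;
}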

-Dave.
-- 
Dave Chinner
david@...morbit.com
