Date: Mon, 3 Jun 2024 17:07:02 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: Brian Foster <bfoster@...hat.com>
Cc: linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org, djwong@...nel.org, hch@...radead.org,
 brauner@...nel.org, david@...morbit.com, chandanbabu@...nel.org,
 jack@...e.cz, willy@...radead.org, yi.zhang@...wei.com,
 chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [RFC PATCH v4 1/8] iomap: zeroing needs to be pagecache aware

On 2024/6/2 19:04, Brian Foster wrote:
> On Wed, May 29, 2024 at 05:51:59PM +0800, Zhang Yi wrote:
>> From: Dave Chinner <dchinner@...hat.com>
>>
>> Unwritten extents can have page cache data over the range being
>> zeroed so we can't just skip them entirely. Fix this by checking for
>> an existing dirty folio over the unwritten range we are zeroing
>> and only performing zeroing if the folio is already dirty.
>>
>> XXX: how do we detect an iomap containing a COW mapping over a hole
>> in iomap_zero_iter()? The XFS code implies this case also needs to
>> zero the page cache if there is data present, so the page cache
>> lookup trigger in iomap_zero_iter() needs to handle this case as
>> well.
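
[Not part of the original commit message -- a minimal sketch of the
dirty-folio check the patch describes, using mainline page cache
helpers; the function name iomap_zero_needs_folio() is hypothetical:]

	/*
	 * Only zero an unwritten range if the page cache already holds
	 * a folio with data over it; with no folio (or a clean one) an
	 * unwritten mapping already reads back as zeros.
	 */
	static bool iomap_zero_needs_folio(struct address_space *mapping,
					   loff_t pos)
	{
		struct folio *folio;
		bool dirty;

		/* filemap_get_folio() returns ERR_PTR(-ENOENT) on a miss */
		folio = filemap_get_folio(mapping, pos >> PAGE_SHIFT);
		if (IS_ERR(folio))
			return false;

		/* dirty or under writeback means there is data to zero */
		dirty = folio_test_dirty(folio) || folio_test_writeback(folio);
		folio_put(folio);
		return dirty;
	}
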
>>
>> Before:
>>
>> $ time sudo ./pwrite-trunc /mnt/scratch/foo 50000
>> path /mnt/scratch/foo, 50000 iters
>>
>> real    0m14.103s
>> user    0m0.015s
>> sys     0m0.020s
>>
>> $ sudo strace -c ./pwrite-trunc /mnt/scratch/foo 50000
>> path /mnt/scratch/foo, 50000 iters
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  85.90    0.847616          16     50000           ftruncate
>>  14.01    0.138229           2     50000           pwrite64
>> ....
>>
>> After:
>>
>> $ time sudo ./pwrite-trunc /mnt/scratch/foo 50000
>> path /mnt/scratch/foo, 50000 iters
>>
>> real    0m0.144s
>> user    0m0.021s
>> sys     0m0.012s
>>
>> $ sudo strace -c ./pwrite-trunc /mnt/scratch/foo 50000
>> path /mnt/scratch/foo, 50000 iters
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  53.86    0.505964          10     50000           ftruncate
>>  46.12    0.433251           8     50000           pwrite64
>> ....
>>
>> Yup, we get back all the performance.
>>
>> As for the "mmap write beyond EOF" data exposure aspect
>> documented here:
>>
>> https://lore.kernel.org/linux-xfs/20221104182358.2007475-1-bfoster@redhat.com/
>>
>> With this command:
>>
>> $ sudo xfs_io -tfc "falloc 0 1k" -c "pwrite 0 1k" \
>>   -c "mmap 0 4k" -c "mwrite 3k 1k" -c "pwrite 32k 4k" \
>>   -c fsync -c "pread -v 3k 32" /mnt/scratch/foo
>>
>> Before:
>>
>> wrote 1024/1024 bytes at offset 0
>> 1 KiB, 1 ops; 0.0000 sec (34.877 MiB/sec and 35714.2857 ops/sec)
>> wrote 4096/4096 bytes at offset 32768
>> 4 KiB, 1 ops; 0.0000 sec (229.779 MiB/sec and 58823.5294 ops/sec)
>> 00000c00:  58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58  XXXXXXXXXXXXXXXX
>> 00000c10:  58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58  XXXXXXXXXXXXXXXX
>> read 32/32 bytes at offset 3072
>> 32.000000 bytes, 1 ops; 0.0000 sec (568.182 KiB/sec and 18181.8182 ops/sec)
>>
>> After:
>>
>> wrote 1024/1024 bytes at offset 0
>> 1 KiB, 1 ops; 0.0000 sec (40.690 MiB/sec and 41666.6667 ops/sec)
>> wrote 4096/4096 bytes at offset 32768
>> 4 KiB, 1 ops; 0.0000 sec (150.240 MiB/sec and 38461.5385 ops/sec)
>> 00000c00:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> 00000c10:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> read 32/32 bytes at offset 3072
>> 32.000000 bytes, 1 ops; 0.0000 sec (558.036 KiB/sec and 17857.1429 ops/sec)
>>
>> We see that this post-eof unwritten extent dirty page zeroing is
>> working correctly.
>>
> 
> I've pointed this out in the past, but IIRC this implementation is racy
> vs. reclaim. Specifically, relying on folio lookup after the mapping
> lookup doesn't take reclaim into account: if we look up an unwritten
> mapping and a folio over it is then flushed and reclaimed by the time
> the scan reaches that offset, we incorrectly treat that subrange as
> already zero when it actually isn't (the extent is stale by that point,
> but the stale extent check is skipped because there is no folio left to
> lock).
> 

Hello, Brian!

I'm confused; how could that happen? We do the stale check under the
folio lock. If the folio is flushed and reclaimed before we get and lock
it in iomap_zero_iter()->iomap_write_begin(), ->iomap_valid() would
catch the stale mapping and we would zero again in the next iteration.
Am I missing something?
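
For reference, the check I mean is roughly this (simplified from the
mainline iomap_write_begin(); a sketch, not the exact code):

	/* ...the folio is looked up and locked above this point... */
	if (folio_ops && folio_ops->iomap_valid) {
		/*
		 * The iomap was sampled before we locked the folio, so
		 * concurrent writeback/reclaim may have invalidated it.
		 * If it is no longer valid, mark it stale; iomap_iter()
		 * then redoes the mapping lookup and we retry the
		 * zeroing against the current extent state.
		 */
		if (!folio_ops->iomap_valid(iter->inode, &iter->iomap)) {
			iter->iomap.flags |= IOMAP_F_STALE;
			goto out_unlock;
		}
	}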

Thanks,
Yi.

> A simple example to demonstrate this is something like the following:
> 
> # looping truncate zeroing
> while [ true ]; do
> 	xfs_io -fc "truncate 0" -c "falloc 0 32K" -c "pwrite 0 4k" -c "truncate 2k" <file>
> 	xfs_io -c "mmap 0 4k" -c "mread -v 2k 16" <file> | grep cd && break
> done
> 
> vs.
> 
> # looping writeback and reclaim
> while [ true ]; do
> 	xfs_io -c "sync_range -a 0 0" -c "fadvise -d 0 0" <file>
> done
> 
> If I run that against this patch, the first loop eventually detects
> stale data exposed past eof.
> 
> Brian
> 

