Date:   Mon,  7 Aug 2023 10:38:23 +0800
From:   Wu Bo <bo.wu@...o.com>
To:     chao@...nel.org
Cc:     bo.wu@...o.com, daehojeong@...gle.com, jaegeuk@...nel.org,
        linux-f2fs-devel@...ts.sourceforge.net,
        linux-kernel@...r.kernel.org, wubo.oduw@...il.com
Subject: Re: [f2fs-dev] [PATCH 1/1] f2fs: move fiemap to use iomap framework

On 2023/8/6 10:05, Chao Yu wrote:

> On 2023/7/31 9:26, Wu Bo wrote:
>> This patch has been tested with xfstests by running 'kvm-xfstests -c
>> f2fs -g auto' with and without this patch; no regressions were seen.
>>
>> Some tests fail both before and after, and the test results are:
>> f2fs/default: 683 tests, 9 failures, 226 skipped, 30297 seconds
>>    Failures: generic/050 generic/064 generic/250 generic/252 generic/459
>>        generic/506 generic/563 generic/634 generic/635
>
> Can you please take a look at generic/473?

The generic/473 case fails on xfs too. It's an issue in iomap.

>
> generic/473 1s ... - output mismatch (see
> /media/fstests/results//generic/473.out.bad)
>     --- tests/generic/473.out    2022-11-10 08:42:19.231395230 +0000
>     +++ /media/fstests/results//generic/473.out.bad    2023-08-04
> 02:02:01.000000000 +0000
>     @@ -6,7 +6,7 @@
>      1: [256..287]: hole
>      Hole + Data
>      0: [0..127]: hole
>     -1: [128..255]: data
>     +1: [128..135]: data
>      Hole + Data + Hole
>      0: [0..127]: hole
>     ...
>     (Run 'diff -u /media/fstests/tests/generic/473.out
> /media/fstests/results//generic/473.out.bad'  to see the entire diff)

The layout of the test file is:
fiemap.473:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        hole               128
   1: [128..255]:      5283840..5283967   128 0x1000
   2: [256..383]:      hole               128
   3: [384..511]:      5283968..5284095   128 0x1000

And the test command is:
xfs_io -c "fiemap -v 0 65k" fiemap.473

So the difference is about when to stop traversing the extents.
iomap stops once the traversal goes past the length requested by the fiemap command:
...
xfs_io-7399    [001] .....  1385.656328: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 15, start blkaddr = 0x0, len = 0x0, flags = 0, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0                                                                                       
xfs_io-7399    [001] .....  1385.656328: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 16, start blkaddr = 0x3400, len = 0x1, flags = 2, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0            

While the previous logic keeps traversing until the next data extent is found:
...
xfs_io-2194    [000] .....   116.046690: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 15, start blkaddr = 0x0, len = 0x0, flags = 0, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0
xfs_io-2194    [000] .....   116.046690: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 16, start blkaddr = 0xa1400, len = 0x10, flags = 2, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0
xfs_io-2194    [000] .....   116.046691: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 32, start blkaddr = 0x0, len = 0x0, flags = 0, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0
...
xfs_io-2194    [000] .....   116.046706: f2fs_map_blocks: dev = (254,48), ino = 5, file offset = 48, start blkaddr = 0xa1410, len = 0x10, flags = 2, seg_type = 8, may_create = 0, multidevice = 0, flag = 1, err = 0
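The two stop conditions can be sketched with a toy model (hypothetical Python for illustration, not the kernel code; `fiemap`, `data_extents`, and `clamp_to_request` are made-up names). It uses the fiemap.473 layout above, 512-byte sectors, and a 4 KiB block size; `65k` bytes is 130 sectors:

```python
# Toy model of the two stop conditions (not the kernel code).
# Sector units are 512 bytes; blocks are 4 KiB = 8 sectors.
BLOCK_SECTORS = 8

# Data extents of fiemap.473, in sectors: (start, end inclusive).
data_extents = [(128, 255), (384, 511)]

def fiemap(req_sectors, clamp_to_request):
    """Return the data ranges a fiemap of [0, req_sectors) would report."""
    out = []
    for start, end in data_extents:
        if start >= req_sectors:
            break  # extent begins past the requested range
        if clamp_to_request:
            # iomap-style: stop at the last block covering the request
            last_block = (req_sectors - 1) // BLOCK_SECTORS
            end = min(end, last_block * BLOCK_SECTORS + BLOCK_SECTORS - 1)
        out.append((start, end))
    return out

# 65k = 66560 bytes = 130 sectors requested
print(fiemap(130, clamp_to_request=True))   # iomap: [(128, 135)]
print(fiemap(130, clamp_to_request=False))  # previous: [(128, 255)]
```

This reproduces the mismatch: iomap reports data only up to sector 135 (the last block touching the 65k request, matching the `len = 0x1` trace above), while the previous behavior reports the whole extent [128..255] that generic/473 expects.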

>
> Other concern is, it needs to test this implementation on compressed
> file,
> since the logic is a little bit complicated.

To be honest, all the complex logic is there to handle the compressed file case.

I used the enwik8 dataset to test compressed files:
    mkfs.f2fs -f -O extra_attr,compression f2fs.img
    mount f2fs.img f2fs -o compress_algorithm=lz4,compress_log_size=3,compress_mode=user
    touch compressed_file
    f2fs_io setflags compression compressed_file
    cat enwiki8 > compressed_file
    f2fs_io compress compressed_file
    f2fs_io release_cblocks compressed_file
    xfs_io -c fiemap compressed_file | awk '{print $2 $3}'

enwik8 download URL: http://mattmahoney.net/dc/enwik8.zip

And the result is:
--- a/orig
+++ b/new
@@ -1750,8 +1750,8 @@
 [111872..111935]:323448..323511
 [111936..111999]:323488..323551
 [112000..112063]:323520..323583
-[112064..112087]:323560..323583
-[112088..112127]:53248..53287
+[112064..112095]:323560..323591
+[112096..112127]:53248..53279
 [112128..112191]:53256..53319
 [112192..112255]:53288..53351
 [112256..112319]:53328..53391
@@ -2078,10 +2078,8 @@
 [132800..132863]:65408..65471
 [132864..132927]:65448..65511
 [132928..132991]:65488..65551
-[132992..132999]:65528..65535
-[133000..133007]:65528..65535
-[133008..133039]:69632..69663
-[133040..133055]:hole
+[132992..133007]:65528..65543
+[133008..133055]:69632..69679
 [133056..133119]:69664..69727
 [133120..133183]:69704..69767
 [133184..133247]:69744..69807

The first diff is because I account the space of COMPRESS_ADDR at the head of a
compressed cluster, while the previous code counted it at the rear of the cluster.
The second diff shows that the previous code printed a 'hole' inside one cluster. I
think a compressed cluster should not contain a 'hole', so there may have been a bug before.
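The kind of merging the new output reflects can be sketched as follows (a generic extent-coalescing model, not the f2fs code; the per-block mappings are hypothetical values chosen to reproduce the new extents of the second hunk):

```python
# Generic fiemap-style extent coalescing (illustrative, not f2fs code):
# adjacent per-block mappings that are contiguous both in file offset and
# on disk are merged into one reported extent, so no spurious 'hole'
# appears inside a compressed cluster.
def coalesce(mappings):
    """mappings: list of (file_start, disk_start, length) in sectors."""
    out = []
    for f, d, n in mappings:
        if out:
            pf, pd, pn = out[-1]
            # contiguous in both file and disk space -> extend previous
            if pf + pn == f and pd + pn == d:
                out[-1] = (pf, pd, pn + n)
                continue
        out.append((f, d, n))
    return out

# Hypothetical per-block mappings mirroring the second diff hunk.
m = [(132992, 65528, 8), (133000, 65536, 8),
     (133008, 69632, 8), (133016, 69640, 40)]
print(coalesce(m))  # [(132992, 65528, 16), (133008, 69632, 48)]
```

The two merged ranges correspond to the `+[132992..133007]` and `+[133008..133055]` lines in the new output above.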

Also, as discussed in this thread:
https://lore.kernel.org/linux-f2fs-devel/ZJmBmt3WmUpWR3+2@casper.infradead.org/T/#t
if f2fs can support async buffered writes, performance can be greatly improved
when using io_uring.

I think it's time to move f2fs to the iomap framework, and I'm really looking
forward to hearing your opinion on this.

Thanks
