Message-ID: <ZNB2kORYiKdl3vSq@fedora19.localdomain>
Date:   Mon, 7 Aug 2023 14:44:00 +1000
From:   Ian Wienand <iwienand@...hat.com>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Petr Vorel <pvorel@...e.cz>, ltp@...ts.linux.it,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, Nitin Gupta <ngupta@...are.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Jens Axboe <axboe@...nel.dk>,
        OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
        Martin Doucha <mdoucha@...e.cz>,
        Yang Xu <xuyang2018.jy@...itsu.com>
Subject: Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat

After thinking it through, I think I might have an explanation...

On Fri, Aug 04, 2023 at 04:37:11PM +1000, Ian Wienand wrote:
> To recap; this test [1] creates a zram device, makes a filesystem on
> it, and fills it with sequential 1k writes from /dev/zero via dd.  The
> problem is that it sees the mem_used_total for the zram device as zero
> in the sysfs stats after the writes; this causes a divide by zero
> error in the script calculation.
> 
> An annotated extract:
> 
>  zram01 3 TINFO: /sys/block/zram1/disksize = '26214400'
>  zram01 3 TPASS: test succeeded
>  zram01 4 TINFO: set memory limit to zram device(s)
>  zram01 4 TINFO: /sys/block/zram1/mem_limit = '25M'
>  zram01 4 TPASS: test succeeded
>  zram01 5 TINFO: make vfat filesystem on /dev/zram1
> 
>  >> at this point a cat of /sys/block/zram1/mm_stat shows
>  >>   65536      527    65536 26214400    65536        0        0        0
> 
>  zram01 5 TPASS: zram_makefs succeeded

So I think the thing to note is that mem_used_total is the current
number of pages used by the zsmalloc allocator to store compressed
data (reported as pages * PAGE_SIZE, i.e. in bytes).
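
For reference, a minimal sketch of how the columns line up (field
names as in the kernel's zram documentation; the shell here is mine,
only the device name is taken from the test):

  # mem_used_total is the third column of mm_stat; _rest absorbs any
  # trailing fields newer kernels may add
  read orig_data_size compr_data_size mem_used_total mem_limit \
       mem_used_max same_pages pages_compacted huge_pages _rest \
       < /sys/block/zram1/mm_stat
  echo "mem_used_total=${mem_used_total} same_pages=${same_pages}"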

So we have made the filesystem, which is now quiescent and just holds
the basic vfat metadata; this has been compressed and stored, and
there's one page allocated for it (arm64, 64k pages).

>  zram01 6 TINFO: mount /dev/zram1
>  zram01 6 TPASS: mount of zram device(s) succeeded
>  zram01 7 TINFO: filling zram1 (it can take long time)
>  zram01 7 TPASS: zram1 was filled with '25568' KB
>
>  >> however, /sys/block/zram1/mm_stat shows
>  >>   9502720        0        0 26214400   196608      145        0        0
>  >> the script reads this zero value and tries to calculate the
>  >> compression ratio
> 
>  ./zram01.sh: line 145: 100 * 1024 * 25568 / 0: division by 0 (error token is "0")

At this point, because this test fills from /dev/zero, the zsmalloc
pool doesn't actually have anything in it.  The filesystem metadata is
still dirty from the writes and has not yet been written out as
compressed data.  The zram same-page de-duplication has kicked in, and
instead of handles to zsmalloc objects for the data we just have "this
is a page of zeros" recorded.  So this is correctly reflecting the
fact that we don't actually have anything compressed stored at this
time.
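
A standalone way to see this (a rough sketch; the device number, size
and dd invocation are my assumptions, not taken from the test): write
nothing but zero pages to a zram device and same_pages grows while
mem_used_total stays at zero, because no zsmalloc objects are needed
for de-duplicated pages:

  modprobe zram num_devices=1
  echo 32M > /sys/block/zram0/disksize
  dd if=/dev/zero of=/dev/zram0 bs=1M count=32 oflag=direct status=none
  # expect column 3 (mem_used_total) ~ 0 and column 6 (same_pages) to
  # be roughly the number of pages written
  cat /sys/block/zram0/mm_stat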

>  >> If we do a "sync" then redisplay the mm_stat afterwards, we get
>  >>   26214400     2842    65536 26214400   196608      399        0        0

Now that we've finished writing all our zeros and have synced, the
vfat allocations etc. have also been updated and written back.  So
this metadata gets compressed and stored, and we're back to having a
small amount of FS metadata compressed in our one page of zsmalloc
allocations.
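
Something like the following (the mount point name is my assumption)
is what that "sync" step above amounts to:

  # flush the dirty vfat metadata for this mount, then re-read the stats
  sync -f zram01/ && cat /sys/block/zram1/mm_stat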

I think what is probably "special" about this reproducer system is
that it is slow enough for this zero-pages-only state to persist
between the end of the test writes and the reading of the stats.

I'd be happy to hear any thoughts on the likelihood of this!

If we think this is right, then the point of the end of this test [1]
is to ensure a high reported compression ratio on the device,
presumably to check that the compression is working.  Filling it with
urandom would be unreliable in this regard.  I think what we want to
do is write something highly compressible, like alternating runs of
0x00 and 0xFF.  This will avoid the same-page detection and ensure we
actually have compressed data, and we can continue to assert on the
high compression ratio reliably.  I'm happy to propose this if we
generally agree.
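
A minimal sketch of what I mean (the helper name and target path are
placeholders, not an actual patch): emit 512 bytes of 0x00 followed by
512 bytes of 0xff, repeated, so no page is a single repeated value
(and so won't be de-duplicated as a same-filled page) but the stream
still compresses extremely well:

  fill_pattern()
  {
      while true; do
          head -c 512 /dev/zero                   # 512 bytes of 0x00
          head -c 512 /dev/zero | tr '\0' '\377'  # 512 bytes of 0xff
      done
  }
  # ~25 MB of pattern; conv=fsync makes sure the data reaches the zram
  # device before the stats are read
  fill_pattern | dd of="$mount_dir/file" bs=1024 count=25568 \
                    iflag=fullblock conv=fsync status=none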

Thanks,

-i

> [1] https://github.com/linux-test-project/ltp/blob/8c201e55f684965df2ae5a13ff439b28278dec0d/testcases/kernel/device-drivers/zram/zram01.sh
