Date:   Mon, 7 Aug 2023 14:19:17 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Ian Wienand <iwienand@...hat.com>
Cc:     Minchan Kim <minchan@...nel.org>, Petr Vorel <pvorel@...e.cz>,
        ltp@...ts.linux.it, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        Nitin Gupta <ngupta@...are.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Jens Axboe <axboe@...nel.dk>,
        OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
        Martin Doucha <mdoucha@...e.cz>,
        Yang Xu <xuyang2018.jy@...itsu.com>
Subject: Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat

On (23/08/07 14:44), Ian Wienand wrote:
[..]
> 
> At this point, because this test fills from /dev/zero, the zsmalloc
> pool doesn't actually have anything in it.  The filesystem metadata is
> in use from the writes, and is not written out as compressed data.
> The zram page de-duplication has kicked in, and instead of handles to
> zsmalloc areas for data we just have "this is a page of zeros"
> recorded.  So this is correctly reflecting the fact that we don't
> actually have anything compressed stored at this time.
> 
> >  >> If we do a "sync" then redisplay the mm_stat after, we get
> >  >>   26214400     2842    65536 26214400   196608      399        0        0
> 
> Now that we've finished writing all our zeros and have synced, we will
> have finished updating the vfat allocations, etc.  So this gets
> compressed and written, and we're back to having some small FS metadata
> compressed in our 1 page of zsmalloc allocations.
> 
> I think what is probably "special" about this reproducer system is
> that it is slow enough to allow the zero allocation to persist between
> the end of the test writes and examining the stats.
> 
> I'd be happy for any thoughts on the likelihood of this!

Thanks for looking into this.

Yes, the fact that /dev/urandom shows non-zero values in mm_stat means
that we don't have anything fishy going on in zram, but instead very
likely have ZRAM_SAME pages, which never reach the zsmalloc pool and
don't use any physical pages.

And that is what the 145 in the mm_stat posted earlier means: we have
145 pages that are filled with the same byte pattern:

> >  >> however, /sys/block/zram1/mm_stat shows
> >  >>   9502720        0        0 26214400   196608      145        0        0
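
(For reference, per Documentation/admin-guide/blockdev/zram.rst the
mm_stat columns are orig_data_size compr_data_size mem_used_total
mem_limit mem_used_max same_pages pages_compacted huge_pages, so the
sixth column above, 145, is same_pages.)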

> If we think this is right, then the point of the end of this test [1]
> is to ensure a high reported compression ratio on the device, presumably
> to ensure the compression is working.  Filling it with urandom would
> be unreliable in this regard.  I think what we want to do is write
> something highly compressible, like alternating lengths of 0x00 and 0xFF.

So variable-length 0x00/0xff runs should work; just make sure that you
don't end up with a repeating pattern of sizeof(unsigned long) length.
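
The reason for that caveat: zram detects same-filled pages by scanning
the page as an array of unsigned longs and only takes the ZRAM_SAME
path when every word is identical. A simplified sketch along the lines
of page_same_filled() in drivers/block/zram/zram_drv.c (the exact code
differs between kernel versions):

static bool page_same_filled(const void *ptr, unsigned long *element)
{
        const unsigned long *page = ptr;
        unsigned long val = page[0];
        unsigned int pos, words = PAGE_SIZE / sizeof(unsigned long);

        /* Bail out on the first word that differs from the first. */
        for (pos = 1; pos < words; pos++) {
                if (page[pos] != val)
                        return false;
        }

        /* Same-filled: record the repeating word, no zsmalloc handle. */
        *element = val;
        return true;
}

So if the runs repeat with a period that divides sizeof(unsigned long)
(e.g. 4 bytes of 0x00 followed by 4 bytes of 0xff, on 64-bit), every
word in the page holds the same value and the whole page is counted as
ZRAM_SAME instead of being compressed.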

I think fio has an option to generate binary data with a certain level
of compressibility. If that option works, then maybe you can just use
fio with a static buffer_compress_percentage configuration.
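
Something like this job file might do it (an untested sketch;
buffer_compress_percentage and refill_buffers are documented fio
options, but how the generated buffers interact with zram's same-page
detection would need checking):

[zram-compress-test]
filename=/dev/zram1
bs=4k
rw=write
direct=1
buffer_compress_percentage=75
refill_buffers

That asks fio to generate write buffers that are roughly 75%
compressible, which should give the test a stable compression ratio
to assert on.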
