Message-ID: <9489dd1c-012c-8b5d-b670-a27213da287a@suse.cz>
Date:   Tue, 22 Nov 2022 15:56:25 +0100
From:   Martin Doucha <mdoucha@...e.cz>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Minchan Kim <minchan@...nel.org>, Petr Vorel <pvorel@...e.cz>,
        ltp@...ts.linux.it, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        Nitin Gupta <ngupta@...are.org>, Jens Axboe <axboe@...nel.dk>,
        OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
        Yang Xu <xuyang2018.jy@...itsu.com>
Subject: Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat

On 11. 11. 22 1:48, Sergey Senozhatsky wrote:
> On (22/11/10 15:29), Martin Doucha wrote:
>> I've tried to debug the issue and collected some interesting data (all
>> values come from zram device with 25M size limit and zstd compression
>> algorithm):
>> - mm_stat values are correct after mkfs.vfat:
>> 65536      220    65536 26214400    65536        0        0        0
>>
>> - mm_stat values stay correct after mount:
>> 65536      220    65536 26214400    65536        0        0        0
>>
>> - the bug is triggered by filling the filesystem to capacity (using dd):
>> 4194304        0        0 26214400   327680       64        0        0
> 
> Can you try using /dev/urandom for dd, not /dev/zero?
> Do you still see zeroes in sysfs output or some random values?
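
(For reference, the quoted numbers come from a sequence roughly like the
one below; the device number, mount point and fill size are illustrative,
not the exact LTP test steps.)

  modprobe zram num_devices=1
  echo zstd > /sys/block/zram0/comp_algorithm
  echo 25M > /sys/block/zram0/disksize
  echo 25M > /sys/block/zram0/mem_limit
  mkfs.vfat /dev/zram0
  # mm_stat columns: orig_data_size compr_data_size mem_used_total
  # mem_limit mem_used_max same_pages pages_compacted huge_pages
  cat /sys/block/zram0/mm_stat
  mount /dev/zram0 /mnt
  dd if=/dev/zero of=/mnt/fill bs=1M    # fills until ENOSPC
  cat /sys/block/zram0/mm_stat          # compr_data_size and mem_used_total now 0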

After 50 test runs on a kernel where the issue was confirmed, I could 
not reproduce the failure when filling the device from /dev/urandom 
instead of /dev/zero. The test reported a compression ratio of around 
1.8-2.5, which means the memory usage reported by mm_stat was 10-13 MB.
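
(If the ratio is computed from mm_stat, it is presumably orig_data_size
over compr_data_size, i.e. the first two columns; a guard like the one
below avoids the division by zero we would hit in the /dev/zero case:)

  awk '$2 > 0 { printf "compression ratio: %.2f\n", $1 / $2 }' \
      /sys/block/zram0/mm_stat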

Note that I had to disable the other filesystems in the test because 
some of them kept failing with a compression ratio < 1.

-- 
Martin Doucha   mdoucha@...e.cz
QA Engineer for Software Maintenance
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic
