Message-ID: <Y3tHuWygsBqmmpwV@pevik>
Date:   Mon, 21 Nov 2022 10:41:13 +0100
From:   Petr Vorel <pvorel@...e.cz>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Martin Doucha <mdoucha@...e.cz>, Minchan Kim <minchan@...nel.org>,
        ltp@...ts.linux.it, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        Nitin Gupta <ngupta@...are.org>, Jens Axboe <axboe@...nel.dk>,
        OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
        Yang Xu <xuyang2018.jy@...itsu.com>
Subject: Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat

Hi Sergey,

> On (22/11/10 15:29), Martin Doucha wrote:
> > The new version of the LTP test zram01 found a sysfs issue with zram devices
> > mounted with the VFAT filesystem. When all available space is filled, e.g.
> > by `dd if=/dev/zero of=/mnt/zram0/file`, the corresponding sysfs file
> > /sys/block/zram0/mm_stat reports that both the compressed data size on the
> > device and the total memory usage are 0. LTP test zram01 uses these values
> > to calculate the compression ratio, which results in division by zero.
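
(For context, the ratio computation in question boils down to something like
the sketch below. This is illustrative shell, not the actual zram01.sh code;
the field positions follow the documented mm_stat layout.)

    # first field: orig_data_size, second field: compr_data_size
    read orig compr rest < /sys/block/zram0/mm_stat
    # shell arithmetic aborts with "division by 0" when compr is reported as 0
    ratio=$((orig * 100 / compr))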

> > The issue is specific to the ppc64le architecture and the VFAT filesystem.
> > No other tested filesystem has this issue and I could not reproduce it on
> > other archs (s390 not tested). The issue appears randomly, roughly once
> > every 3 test runs on SLE-15SP2 and 15SP3 (kernel 5.3). It appears less
> > frequently on SLE-12SP5 (kernel 4.12). Other SLE versions have not been
> > tested with the new test version yet. The previous version of the test did
> > not check the VFAT filesystem on zram devices.
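
(For anyone trying to reproduce this, the setup boils down to roughly the
following; the device index, mount point, and the way the 25M limit is applied
are my assumptions, matching the values quoted below.)

    echo zstd > /sys/block/zram0/comp_algorithm   # algorithm must be set before disksize
    echo 25M  > /sys/block/zram0/disksize
    echo 25M  > /sys/block/zram0/mem_limit        # matches the 26214400 limit in mm_stat
    mkfs.vfat /dev/zram0
    mount /dev/zram0 /mnt/zram0
    dd if=/dev/zero of=/mnt/zram0/file            # runs until ENOSPC
    cat /sys/block/zram0/mm_stat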

> Whoooaa...

> > I've tried to debug the issue and collected some interesting data (all
> > values come from a zram device with a 25M size limit and the zstd
> > compression algorithm):
> > - mm_stat values are correct after mkfs.vfat:
> > 65536      220    65536 26214400    65536        0        0        0

> > - mm_stat values stay correct after mount:
> > 65536      220    65536 26214400    65536        0        0        0

> > - the bug is triggered by filling the filesystem to capacity (using dd):
> > 4194304        0        0 26214400   327680       64        0        0
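
(Column key, assuming the mm_stat layout from
Documentation/admin-guide/blockdev/zram.rst applies to these kernels:)

    # mm_stat fields, left to right:
    #   orig_data_size compr_data_size mem_used_total mem_limit
    #   mem_used_max same_pages pages_compacted huge_pages
    cat /sys/block/zram0/mm_stat

Note same_pages going from 0 to 64 in the bad state above, which would fit
zero-filled pages being tracked as same-element pages rather than compressed.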

> Can you try using /dev/urandom for dd, not /dev/zero?
> Do you still see zeroes in sysfs output or some random values?
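
(A sketch of that variant, with the same assumed paths as above:)

    dd if=/dev/urandom of=/mnt/zram0/file bs=1M   # incompressible fill, runs until ENOSPC
    cat /sys/block/zram0/mm_stat                  # does compr_data_size stay 0 now?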

I'm not sure whether Martin has had time to rerun the test. I was no longer
able to reproduce the problem on the machine where the test was failing, but
I'll look into it this week.

Kind regards,
Petr
