Date:   Tue, 31 Mar 2020 21:45:28 -0700
From:   "Darrick J. Wong" <darrick.wong@...cle.com>
To:     Qian Cai <cai@....pw>
Cc:     Chandan Rajendra <chandan@...ux.ibm.com>,
        Christoph Hellwig <hch@....de>, linux-xfs@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: linux-next: xfs metadata corruption since 30 March

On Wed, Apr 01, 2020 at 12:15:32AM -0400, Qian Cai wrote:
> 
> 
> > On Apr 1, 2020, at 12:14 AM, Chandan Rajendra <chandan@...ux.ibm.com> wrote:
> > 
> > On Wednesday, April 1, 2020 3:27 AM Qian Cai wrote: 
> >> For the past two days, linux-next has been triggering xfs metadata corruption
> >> during compilation workloads on both powerpc and arm64,
> > 
> > Can you please provide the filesystem geometry information?
> > You can get that by executing "xfs_info <mount-point>" command.
> > 

Hmm.   Do the arm/ppc systems have 64k pages?  kconfigs might be a good
starting place.  Also, does the xfs for-next branch exhibit this
problem, or is it just the big -next branch that Stephen Rothwell puts
out?
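
Something like this would tell us (just a sketch -- the config path is
distro-dependent and might instead be /proc/config.gz):

  # runtime page size; 65536 means 64k pages
  getconf PAGE_SIZE
  # kernel build config, if present
  grep -E 'CONFIG_ARM64_64K_PAGES|CONFIG_PPC_64K_PAGES' /boot/config-$(uname -r)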

--D

> == arm64 ==
> # xfs_info /home/
> meta-data=/dev/mapper/rhel_hpe--apollo--cn99xx--11-home isize=512    agcount=4, agsize=113568256 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1
> data     =                       bsize=4096   blocks=454273024, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=221813, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> 
> == powerpc ==
> # xfs_info /home/
> meta-data=/dev/mapper/rhel_ibm--p9wr--01-home isize=512    agcount=4, agsize=118489856 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1
> data     =                       bsize=4096   blocks=473959424, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=231425, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> == x86 (not yet reproduced) ==
> meta-data=/dev/mapper/rhel_hpe--dl380gen9--01-home isize=512    agcount=16, agsize=3283776 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1
> data     =                       bsize=4096   blocks=52540416, imaxpct=25
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=25664, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
