Message-ID: <4198657.JbNDGbLXiX@h2o.as.studentenwerk.mhn.de>
Date:   Tue, 03 Sep 2019 23:37:30 +0200
From:   Wolfgang Walter <linux@...m.de>
To:     Jason L Tibbitts III <tibbs@...h.uh.edu>
Cc:     "J. Bruce Fields" <bfields@...ldses.org>,
        linux-nfs@...r.kernel.org, km@...all.com,
        linux-kernel@...r.kernel.org
Subject: Re: Regression in 5.1.20: Reading long directory fails

On Tuesday, 3 September 2019 at 14:06:33, Jason L Tibbitts III wrote:
> >>>>> "WW" == Wolfgang Walter <linux@...m.de> writes:
> WW> What filesystem do you use on the server? xfs?
> 
> Yeah, it's XFS.
> 
> WW> If yes, does it use 64bit inodes (or started to use them)?
> 
> These filesystems aren't super old, and were all created with the
> default RHEL7 options.  I'm not sure how to check that 64 bit inodes are
> being used, though.  xfs_info says:
> 
> meta-data=/dev/mapper/nas-faculty--08 isize=256    agcount=4, agsize=3276800 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0 spinodes=0
> data     =                       bsize=4096   blocks=13107200, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=6400, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> WW> Do you set a fsid when you export the filesystem?
> 
> I have never done so on any server.
> 
> And note that the servers are basically unchanged for quite some time,
> while the problem I'm having is new.  I want to find some server-related
> cause for this but so far I haven't been able to do so.  It seems my
> best option now seems to be to migrate all data off of this server and
> then wipe, reinstall and see if the problem reoccurs.
> 
>  - J<

I'm not familiar with RHEL7. But kernel 5.1.20 uses the inode64 mount option 
by default, as far as I know (see Documentation/filesystems/xfs.txt). So 
unless you mount with the inode32 option, your xfs may now be using inode64 
for newly created files.
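
A rough, untested sketch (the default path is only a placeholder) to check 
whether any inode numbers on the export no longer fit into 32 bits:

#!/usr/bin/env python3
# Walk an exported directory tree and report entries whose inode number
# does not fit into 32 bits. The default path is just a placeholder;
# pass the real export as an argument.
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "/export"
limit = 1 << 32

for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        path = os.path.join(dirpath, name)
        try:
            ino = os.lstat(path).st_ino
        except OSError:
            continue
        if ino >= limit:
            print(f"{ino}  {path}")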

We had similar problems some time ago, and inode64 was indeed the cause of 
the problem. With inode64 it seems that only little room is left in the nfs4 
handle for the fsid. When nfs mangles the fsid and the xfs inode number into 
an nfs4 handle, it seems that in large directories different files may end 
up with the same handle if their inode numbers do not fit into 32 bits.
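
To illustrate what I mean, here is a toy model only (the real handle 
encoding lives in fs/nfsd and fs/xfs, this is not the actual layout): if the 
handle keeps just the low 32 bits of the inode, two distinct inodes collide.

# Toy illustration, not the kernel's handle format: truncating a 64-bit
# inode number to 32 bits makes distinct inodes indistinguishable.
def toy_handle(fsid: int, ino: int) -> bytes:
    return fsid.to_bytes(4, "big") + (ino & 0xFFFFFFFF).to_bytes(4, "big")

ino_a = 0x0000000012345678           # fits in 32 bits
ino_b = 0x0000000112345678           # different inode, same low 32 bits
print(toy_handle(500, ino_a) == toy_handle(500, ino_b))  # True -> collision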

You may try setting a rather small fsid (say 500), re-exporting the 
filesystem, and then seeing if the problem disappears. I think our problems 
disappeared then, but I do not remember exactly. We now use inode32 to avoid 
the problem.
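
For example (the export path and client spec below are placeholders, not 
your actual configuration):

# /etc/exports: pin a small fsid on the export
/export  *.example.edu(rw,sync,fsid=500)

# or, in /etc/fstab, force 32-bit inode allocation on the xfs filesystem
/dev/mapper/nas-faculty--08  /export  xfs  defaults,inode32  0 0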


Regards,
-- 
Wolfgang Walter
Studentenwerk München
Anstalt des öffentlichen Rechts
