Message-ID: <20190711171046.GA13966@mit.edu>
Date:   Thu, 11 Jul 2019 13:10:46 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Geoffrey Thomas <Geoffrey.Thomas@...sigma.com>
Cc:     "'Jan Kara'" <jack@...e.cz>,
        Thomas Walker <Thomas.Walker@...sigma.com>,
        "'linux-ext4@...r.kernel.org'" <linux-ext4@...r.kernel.org>,
        "Darrick J. Wong" <darrick.wong@...cle.com>
Subject: Re: Phantom full ext4 root filesystems on 4.1 through 4.14 kernels

Can you try running "df -i" when the file system looks full, then
rebooting and looking at the results of "df -i" afterwards?
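
Something along these lines would do it (a rough sketch; adjust the
mount point, and save the output somewhere that will survive the
reboot):

     # while the file system looks mysteriously full:
     df -i / ; df /
     # ... reboot, then compare:
     df -i / ; df /

If the used inode and block counts drop sharply across the reboot,
that points at deleted-but-still-open inodes rather than space that
is genuinely in use.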

Also interesting would be to grab a metadata-only snapshot of the file
system when it is in its mysteriously full state, writing that
snapshot to some file system *other* than /dev/sda3:

     e2image -r /dev/sda3 /mnt/sda3.e2i

Then run e2fsck on it:

     e2fsck -fy /mnt/sda3.e2i

What I'm curious about is how many "orphaned inodes" are reported, and
how much space they are taking up.  That will look like this:

% gunzip < /usr/src/e2fsprogs/tests/f_orphan/image.gz  > /tmp/foo.img
% e2fsck -fy /tmp/foo.img
e2fsck 1.45.2 (27-May-2019)
Clearing orphaned inode 15 (uid=0, gid=0, mode=040755, size=1024)
Clearing orphaned inode 17 (uid=0, gid=0, mode=0100644, size=0)
Clearing orphaned inode 16 (uid=0, gid=0, mode=040755, size=1024)
Clearing orphaned inode 14 (uid=0, gid=0, mode=0100644, size=69)
Clearing orphaned inode 13 (uid=0, gid=0, mode=040755, size=1024)
...
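
If there are a lot of those, a quick way to total them up is to save
the output of the e2fsck run and sum the size= fields.  A rough
sketch (/tmp/fsck.out is just a scratch name):

     e2fsck -fy /mnt/sda3.e2i | tee /tmp/fsck.out
     grep 'Clearing orphaned inode' /tmp/fsck.out | \
         sed -e 's/.*size=//' -e 's/)$//' | \
         awk '{ total += $1 } END { print NR " orphans, " total " bytes" }'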

It's been theorized that the bug is in overlayfs, where it's holding
inodes open so the space isn't released.  IIRC someone had reported a
similar problem with overlayfs on top of xfs.  (BTW, are you using
overlayfs or aufs with your Docker setup?)
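
If you're not sure which, "docker info" will report the storage
driver in use; the exact name will of course vary with your setup:

     % docker info | grep -i 'storage driver'
     Storage Driver: overlay2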

		     	       	      - Ted
