Date:   Thu, 26 Dec 2019 08:09:35 -0500
From:   "Theodore Y. Ts'o" <tytso@....edu>
To:     xiaohui li <lixiaohui1@...omi.corp-partner.google.com>
Cc:     Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: the side effect of enlarging max mount count in ext4 superblock

On Thu, Dec 26, 2019 at 06:25:01PM +0800, xiaohui li wrote:
> so I wonder why EXT4_DFL_MAX_MNT_COUNT is set to 20 in
> fs/ext4/ext4.h rather than to a larger value?

It sounds like you're still using the old make_ext4fs program that is
in the older versions of AOSP?  More recently, AOSP uses mke2fs to
create the file system, in combination with e2fsdroid.  Newer versions
of mke2fs set the max mount count to 0, which means the file system is
not automatically forced through a full check after every 20 mounts.
This is for the reason that you stated: on larger storage devices, a
forced e2fsck run can take a long time, and if it's not necessary we
can skip it.
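
If it helps to see where these knobs live on disk, here is a minimal,
read-only sketch in C that dumps the relevant superblock fields from an
unmounted device or image.  It assumes only the well-known ext2/3/4
superblock layout (primary superblock at byte offset 1024; s_mnt_count
at offset 52, s_max_mnt_cnt at 54, s_magic at 56, s_state at 58, all
little-endian); in practice you would just run dumpe2fs -h or
tune2fs -l instead.

/* sb_dump.c - print the superblock fields that control periodic fsck.
 * Build: cc -o sb_dump sb_dump.c
 * Usage: ./sb_dump /path/to/unmounted-device-or-image
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint16_t le16(const unsigned char *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

int main(int argc, char **argv)
{
	unsigned char sb[1024];
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <device-or-image>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* The primary superblock starts 1024 bytes into the device. */
	if (pread(fd, sb, sizeof(sb), 1024) != (ssize_t) sizeof(sb)) {
		perror("pread");
		return 1;
	}
	close(fd);

	if (le16(sb + 56) != 0xEF53) {		/* s_magic */
		fprintf(stderr, "not an ext2/3/4 file system\n");
		return 1;
	}

	printf("mount count     (s_mnt_count)   : %u\n", le16(sb + 52));
	printf("max mount count (s_max_mnt_cnt) : %d  (0 or -1 = count-based check disabled)\n",
	       (int16_t) le16(sb + 54));
	printf("state flags     (s_state)       : 0x%04x  (0x0001 clean, 0x0002 errors seen)\n",
	       le16(sb + 58));
	return 0;
}

(dumpe2fs -h or tune2fs -l on the same device reports the same values
as "Mount count" and "Maximum mount count", so the program above is
only useful as a map of where the fields live.)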

> Is there any reason or condition under which file system data errors
> or stability problems happen but ext4 can't detect them, can't set
> the error flag in the superblock, and so a full e2fsck check will not
> be run at the next opportunity?  And is it because of that reason or
> condition that a periodic full e2fsck check is still needed?

The reason we used to set the max mount count to 20 is that there
are indeed many kinds of file system inconsistencies which the kernel
can not detect at runtime or when it tries to mount the file system,
and that can lead to data loss or corruption.  So setting a max mount
count of 20 was a way of trying to catch that early, hopefully before
*too* much data was lost.
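
To make the trade-off concrete, here is a simplified sketch of the
decision the boot-time fsck pass effectively makes from those
superblock fields.  This is an illustration, not the actual e2fsck
code; the structure and function names are made up for the example,
but the field meanings and the 0x0001/0x0002 state bits match the
on-disk superblock (EXT2_VALID_FS / EXT2_ERROR_FS in e2fsprogs).

/* fsck_policy.c - illustration of when a full check gets forced at boot.
 * Build: cc -o fsck_policy fsck_policy.c
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define EXT2_VALID_FS 0x0001	/* s_state: unmounted cleanly */
#define EXT2_ERROR_FS 0x0002	/* s_state: kernel flagged corruption */

struct sb_fields {
	uint16_t state;		/* s_state */
	uint16_t mnt_count;	/* s_mnt_count: mounts since last full check */
	int16_t  max_mnt_count;	/* s_max_mnt_cnt: 0 or -1 disables the count check */
	uint32_t lastcheck;	/* s_lastcheck: time of last full check */
	uint32_t checkinterval;	/* s_checkinterval: 0 disables the time check */
};

/* Returns 1 if a full check should be forced, 0 if it can be skipped. */
static int full_check_needed(const struct sb_fields *sb, time_t now)
{
	if (!(sb->state & EXT2_VALID_FS))
		return 1;		/* not unmounted cleanly */
	if (sb->state & EXT2_ERROR_FS)
		return 1;		/* kernel flagged corruption */
	if (sb->max_mnt_count > 0 &&
	    sb->mnt_count >= (uint16_t) sb->max_mnt_count)
		return 1;		/* the "every N mounts" policy */
	if (sb->checkinterval &&
	    now >= (time_t) sb->lastcheck + sb->checkinterval)
		return 1;		/* the "every N days" policy */
	return 0;
}

int main(void)
{
	/* Hypothetical example: clean file system, 21 mounts, old
	 * 20-mount policy, time-based check disabled. */
	struct sb_fields sb = { EXT2_VALID_FS, 21, 20, 0, 0 };

	printf("full check needed: %s\n",
	       full_check_needed(&sb, time(NULL)) ? "yes" : "no");

	sb.max_mnt_count = 0;	/* what newer mke2fs writes by default */
	printf("full check needed: %s\n",
	       full_check_needed(&sb, time(NULL)) ? "yes" : "no");
	return 0;
}

The point of the old 20-mount default is visible in the first branch
that never fires: if the kernel never noticed the corruption, the error
bit never gets set, and the mount-count trigger is the only thing left
that will eventually drag the file system through a full check.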

Metadata inconsistencies should *not* be happening normally.  Typical
causes of inconsistencies are kernel bugs or media problems (e.g.,
eMMC, HDD, SSD failures of some sort; sometimes because they don't do
the right thing on power drops).

Unfortunately, many Android devices, especially the cheaper ones, are
using older SoCs, with older kernels, which are missing a lot of bug
fixes.  Even if bugs have been fixed upstream, a kernel coming from an
old Board Support Package may not have those fixes.  This is one of
the reasons my personal advice to friends is to get higher-end Pixels
and not some of the cheaper, low-quality Android devices coming out of
Asia.  (Sorry.)

If you're using one of those older, crappier BSP kernels, one way to
find out how bad it is is to see how many tests fail when you run
something like android-xfstests[1].  In some cases, especially with an
older kernel (for example, a 3.10 or 3.18 kernel), running file system
stress tests can cause the kernel to crash.

[1] https://thunk.org/android-xfstests

If you are using high-quality eMMC flash (as opposed to the cheapest
possible grade of flash to maximize profits), and you have tested your
flash to make sure it handles power drops correctly (e.g., that the
FTL metadata never gets corrupted on a power drop, and all data
written before a FLUSH CACHE command completes is retained after a
power drop), and you are using a kernel which is regularly getting
updated with the latest security and bug fixes, then there is no need
to set the max mount count to a non-zero value.
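
In that case, the usual way to turn the periodic check off on an
existing file system is simply "tune2fs -c 0" (and "-i 0" for the
time-based interval).  Purely as an illustration, a sketch that does
the same thing through the libext2fs library from e2fsprogs might look
like the following; treat the exact API usage as my assumption and
prefer tune2fs in practice.

/* no_periodic_check.c - illustration only: disable the mount-count and
 * time-based fsck triggers, i.e. roughly what "tune2fs -c 0 -i 0" does.
 * Assumes the libext2fs API from e2fsprogs; build with:
 *   cc -o no_periodic_check no_periodic_check.c -lext2fs -lcom_err
 * Run it only against an unmounted device or a scratch image.
 */
#include <ext2fs/ext2fs.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	ext2_filsys fs;
	errcode_t err;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <unmounted-device-or-image>\n",
			argv[0]);
		return 1;
	}

	err = ext2fs_open(argv[1], EXT2_FLAG_RW, 0, 0, unix_io_manager, &fs);
	if (err) {
		fprintf(stderr, "ext2fs_open failed: %ld\n", (long) err);
		return 1;
	}

	fs->super->s_max_mnt_cnt   = 0;	/* no check based on mount count */
	fs->super->s_checkinterval = 0;	/* no check based on elapsed time */
	ext2fs_mark_super_dirty(fs);	/* write the superblock back on close */

	err = ext2fs_close(fs);
	if (err) {
		fprintf(stderr, "ext2fs_close failed: %ld\n", (long) err);
		return 1;
	}
	return 0;
}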

If you are not in that ideal state, then the question really boils
down to "do you feel lucky?"  Although that's probably true with or
without the max mount count set to 20.   :-)

Cheers,

					- Ted
