Date:   Wed, 29 May 2019 11:13:03 -0400
From:   "Theodore Ts'o" <>
To:     Sahitya Tummala <>
Cc:     Andreas Dilger <>,
Subject: Re: fsync_mode mount option for ext4

On Wed, May 29, 2019 at 04:18:09PM +0530, Sahitya Tummala wrote:
> Yes, benchmarks for random write/fsync show huge improvement.
> For ex, without issuing flush in the ext4 fsync() the
> random write score improves from 13MB/s to 62MB/s on eMMC,
> using Androbench.
> And fsync_mode=nobarrier is enabled by default on pixel phones
> where f2fs is used.
> We have been getting requests to evaluate the same for EXT4 and
> hence, I was checking with the community on its feasibility.
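
[For context, the f2fs behavior being described is a mount option; a
minimal sketch of how it is enabled — the device and mount point below
are placeholders:

```shell
# Mount f2fs with fsync_mode=nobarrier (placeholder device/mountpoint).
# In this mode fsync() skips the device cache-flush command, trading
# power-fail durability for write/fsync throughput.
mount -t f2fs -o fsync_mode=nobarrier /dev/mmcblk0p1 /data
```
]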

Have you run some tests to see how much power fail robustness was
impacted with f2fs's fsync_mode=nobarrier?  Say, run fsstress on real
hardware then yank the power 100 times; how many times is the file
system corrupted?  And of those corruptions, how many result in:

* Unrecoverable failures --- e.g., requires a factory reset losing all
  user data?  (Possibly because f2fs's fsck crashes or refuses to fix things?)

* Failures which corrupt the data, but can be fixed by fsck?  (And in
  how many cases with data loss?)
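
A harness along those lines might look like the sketch below.  The
power-cut step is hardware-specific, so cut_power and the device path
are hypothetical placeholders; the exit-status classification follows
e2fsck's documented convention (0 = no errors, 1/2 = errors corrected,
anything else = errors left uncorrected or fsck itself failed):

```shell
#!/bin/bash
# Sketch of a power-fail robustness harness.  NOT runnable as-is:
# cut_power and /dev/sdX are placeholders for real lab hardware.

classify() {
    # Map an e2fsck exit status onto the categories above,
    # per the exit codes documented in e2fsck(8).
    case "$1" in
        0)   echo clean ;;          # no corruption found
        1|2) echo fixed ;;          # corruption found, fsck repaired it
        *)   echo unrecoverable ;;  # errors left, or fsck itself failed
    esac
}

power_fail_trials() {
    local dev=/dev/sdX              # placeholder test device
    for i in $(seq 1 100); do
        mount "$dev" /mnt
        fsstress -d /mnt/stress -n 1000 -p 4 &
        sleep $((RANDOM % 30))
        cut_power                   # hypothetical relay/PDU hook
        # ...power restored, machine reboots, harness resumes...
        classify "$(e2fsck -fy "$dev" >/dev/null 2>&1; echo $?)"
    done
}
```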

I'll note that for a long time in the early days of linux, we ran with
ext2 w/o a journal and without CACHE FLUSH, and it was very surprising
how often the corruption could be fixed with fsck (back in those very
early days, we did a lot of work to make e2fsck do as good a job as
possible at not losing data, and if you run with -y, it will try to
automatically recover even if accepting some data loss.)
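
As a concrete illustration of that -y mode, here is a small sketch run
against a throwaway image file (paths arbitrary), which avoids touching
any real device:

```shell
# Exercise e2fsck non-interactively on a scratch ext4 image:
# -f forces a full check even if the fs looks clean,
# -y answers "yes" to every repair question (may accept data loss).
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=8 status=none
mke2fs -q -F -t ext4 /tmp/scratch.img
e2fsck -fy /tmp/scratch.img
echo "e2fsck exit status: $?"   # 0 = clean, 1/2 = errors corrected
```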

So if your goal is "some file system corruption and some complete user
data loss is OK", feel free to use nobarrier.  After all, all the
user's data that we should care about is sync'ed to the cloud, right?  :-)
And winning the benchmarketing game can mean millions and millions of dollars
to companies, and that's _obviously_ more important than user data.... :-/

Also, for people who are wondering how reliable/robust f2fs is in the
face of corruption / SSD failures, I call your attention to this
Usenix paper, which will be presented at the upcoming Usenix ATC
conference in July:

It's not available yet, but in a week or two, it should be available
to people who have registered for Usenix ATC 2019, and if you care
about user data, and you are using f2fs, it's worth the price of
admission all by itself IMHO.

					- Ted

P.S.  I have considered adding tuning knobs to make fsync/fdatasync be
tunable perhaps on a per-uid basis, maybe on a root vs non-root basis,
mostly to protect mutually suspicious, possibly hostile Docker users
on a shared system from each other.  The problem is that it can also be used for
benchmarketing wars, which I really dislike, and I know there are
enterprise distros who hate these features because clueless sysadmins
turn them on, and then they lose data, and then they turn up at the
Distribution's Help Desk asking for help / complaining.

So if you really want a patch which does something like
fsync_mode=nobarrier, it's really not hard.  To quote Shakespeare
(when Hamlet was pondering how easy it would be to commit suicide), it
can be done "with a bare bodkin".  The question is whether it is a
*good* thing to do, not whether it can be done.  And a lot of this
depends on your morals --- after all, companies have been known to
disable the CPU thermal safeties in order to win benchmarketing
