Date:   Sat, 1 Jun 2019 11:01:42 +0300
From:   Amir Goldstein <amir73il@...il.com>
To:     Dave Chinner <david@...morbit.com>
Cc:     "Theodore Ts'o" <tytso@....edu>, Jan Kara <jack@...e.cz>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Chris Mason <clm@...com>, Al Viro <viro@...iv.linux.org.uk>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        linux-xfs <linux-xfs@...r.kernel.org>,
        Ext4 <linux-ext4@...r.kernel.org>,
        Linux Btrfs <linux-btrfs@...r.kernel.org>,
        Linux API <linux-api@...r.kernel.org>
Subject: Re: [RFC][PATCH] link.2: AT_ATOMIC_DATA and AT_ATOMIC_METADATA

On Sat, Jun 1, 2019 at 2:28 AM Dave Chinner <david@...morbit.com> wrote:
>
> On Sat, Jun 01, 2019 at 08:45:49AM +1000, Dave Chinner wrote:
> > Given that we can already use AIO to provide this sort of ordering,
> > and AIO is vastly faster than synchronous IO, I don't see any point
> > in adding complex barrier interfaces that can be /easily implemented
> > in userspace/ using existing AIO primitives. You should start
> > thinking about expanding libaio with stuff like
> > "link_after_fdatasync()" and suddenly the whole problem of
> > filesystem data vs metadata ordering goes away because the
> > application directly controls all ordering without blocking and
> > doesn't need to care what the filesystem under it does....
>
> And let me point out that this is also how userspace can do an
> efficient atomic rename - rename_after_fdatasync(). i.e. on
> completion of the AIO_FSYNC, run the rename. This guarantees that
> the application will see either the old file or the complete new
> file, and it *doesn't have to wait for the operation to complete*.
> Once it is in flight, the file will contain the old data until some
> point in the near future when it will contain the new data....
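
For concreteness, a rename_after_fdatasync() along those lines could look
roughly like this in userspace with libaio (a minimal sketch, assuming a
kernel where IOCB_CMD_FDSYNC is supported; the helper name is made up,
error handling is trimmed, and the blocking io_getevents() wait stands in
for a proper completion callback):

#include <libaio.h>
#include <stdio.h>

static int rename_after_fdatasync(io_context_t ctx, int fd,
                                  const char *tmp, const char *dst)
{
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;

        io_prep_fdsync(&cb, fd);        /* queue an async fdatasync */
        if (io_submit(ctx, 1, cbs) != 1)
                return -1;

        /* Wait for the fdatasync completion, then expose the new file. */
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
                return -1;

        return rename(tmp, dst);
}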

What I am looking for is a way to isolate the effects of "atomic rename/link"
from the rest of the users. Sure, there is shared I/O bandwidth and queued
bios, but at least isolate other threads working on other files or metadata
from contending with the journal flushes and the like that the "atomic
rename" thread triggers. Actually, one of my use cases is "atomic rename" of
files with no data (looking for atomicity w.r.t. xattr and mtime), so this
"atomic rename" thread should not be interfering with other workloads at all.

>
> Seriously, sit down and work out all the "atomic" data vs metadata
> behaviours you want, and then tell me how many of them cannot be
> implemented as "AIO_FSYNC w/ completion callback function" in
> userspace. This mechanism /guarantees ordering/ at the application
> level, the application does not block waiting for these data
> integrity operations to complete, and you don't need any new kernel
> side functionality to implement this.
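
To make the "completion callback" part concrete: the submission side could
tie the async fdatasync completion to an eventfd that the application's
event loop already polls, and run the rename/link from that callback (a
sketch only; submit_fdsync_event() is a hypothetical helper, and the event
loop that reaps the eventfd and does the rename is left out):

#include <libaio.h>
#include <sys/eventfd.h>

static int submit_fdsync_event(io_context_t ctx, struct iocb *cb,
                               int fd, int efd, void *rename_args)
{
        struct iocb *cbs[1] = { cb };

        io_prep_fdsync(cb, fd);
        io_set_eventfd(cb, efd);        /* completion makes efd readable */
        cb->data = rename_args;         /* e.g. the tmp/dst names to rename */

        return io_submit(ctx, 1, cbs) == 1 ? 0 : -1;
}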

So I think what I could have used is AIO_BATCH_FSYNC, an interface
that was proposed by Ric Wheeler and discussed at LSF:
https://lwn.net/Articles/789024/
Ric was looking for a way to efficiently fsync a "bunch of files".
Submitting several AIO_FSYNC calls is not an efficient way of doing that.
So it is either a new AIO_BATCH_FSYNC with a kernel implementation
that flushes the inodes and then calls ->sync_fs(), or a new AIO operation
that just does the ->sync_fs() bit, with sync_file_range() used for the inodes.
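
A synchronous userspace analogue of that second option would be something
along these lines (a rough sketch; batch_fsync() is a made-up name,
sync_file_range() by itself gives no durability guarantee, and syncfs()
flushes the whole filesystem, not just these files):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int batch_fsync(int *fds, int nr)
{
        int i;

        /* Start data writeback for every file in the batch. */
        for (i = 0; i < nr; i++)
                if (sync_file_range(fds[i], 0, 0, SYNC_FILE_RANGE_WRITE) < 0)
                        return -1;

        /* One journal/cache flush for the whole batch. */
        return syncfs(fds[0]);
}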

To be more accurate, the AIO operation that would emulate my
proposed API more closely is AIO_WAIT_FOR_SYNCFS: I do not wish
to impose excessive journal flushes, I just need a completion callback
when they happen, so that I can then perform the rename/link.

>
> Fundamentally, the assertion that disk cache flushes are not what
> causes fsync "to be slow" is incorrect. It's the synchronous

Too many double negatives. I am not sure I parsed this correctly.
But I think by now you understand that I don't care that fsync is "slow".
I care about frequent fsyncs making the entire system slow down.

Heck, xfs even has a mitigation in place to improve the performance of
too-frequent fsyncs, but that mitigation is partly gone since
commit 47c7d0b19502 ("xfs: fix incorrect log_flushed on fsync").

The situation with frequent fsync on ext4 at the moment is probably
worse.

I am trying to reduce the number of fsyncs issued by applications,
and converting fsync to AIO_FSYNC is not going to help with that.

Thanks,
Amir.
