Date:   Thu, 11 Apr 2019 21:05:24 +1000
From:   Dave Chinner <david@...morbit.com>
To:     Jens Axboe <axboe@...nel.dk>
Cc:     Chris Mason <clm@...com>, Christoph Hellwig <hch@...radead.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] io_uring: add support for barrier fsync

On Tue, Apr 09, 2019 at 12:46:15PM -0600, Jens Axboe wrote:
> On 4/9/19 12:42 PM, Chris Mason wrote:
> > On 9 Apr 2019, at 14:23, Jens Axboe wrote:
> > 
> >> On 4/9/19 12:17 PM, Christoph Hellwig wrote:
> >>> On Tue, Apr 09, 2019 at 10:27:43AM -0600, Jens Axboe wrote:
> >>>> It's a quite common use case to issue a bunch of writes, then an
> >>>> fsync or fdatasync when they complete. Since io_uring doesn't
> >>>> guarantee any type of ordering, the application must track issued
> >>>> writes and wait with the fsync issue until they have completed.
> >>>>
> >>>> Add an IORING_FSYNC_BARRIER flag that helps with this so the
> >>>> application doesn't have to do this manually. If this flag is set
> >>>> for the fsync request, we won't issue it until pending IO has
> >>>> already completed.
> >>>
> >>> I think we need a much more detailed explanation of the semantics,
> >>> preferably in man page format.
> >>>
> >>> Barrier at least in Linux traditionally means all previously
> >>> submitted requests have finished and no new ones are started until
> >>> the barrier request finishes, which is very heavy handed.  Is that
> >>> what this is supposed to do?  If not, what are the exact guarantees
> >>> vs ordering and/or barrier semantics?
> >>
> >> The patch description isn't that great, and maybe the naming isn't
> >> that intuitive either. The way it's implemented, the fsync will NOT
> >> be issued until previously issued IOs have completed. That means
> >> both reads and writes, since there's no way to wait for just one.
> >> In terms of semantics, any previously submitted writes will have
> >> completed before this fsync is issued. The barrier fsync has no
> >> ordering wrt future writes, no ordering is implied there. Hence:
> >>
> >> W1, W2, W3, FSYNC_W_BARRIER, W4, W5
> >>
> >> W1..3 will have been completed by the hardware side before we start
> >> FSYNC_W_BARRIER. We don't wait with issuing W4..5 until after the
> >> fsync completes, no ordering is provided there.
> > 
> > Looking at the patch, why is fsync special?  Seems like you could add 
> > this ordering bit to any write?
> 
> It's really not; the exact same technique could be used on any type of
> command to imply ordering. My initial idea was to have an explicit
> barrier/ordering command, but I didn't think that separating it from an
> actual command would be needed/useful.
> 
> > While you're here, do you want to add a way to FUA/cache flush?  
> > Basically the rest of what user land would need to make their own 
> > write-back-cache-safe implementation.
> 
> FUA would be a WRITEV/WRITE_FIXED flag, that should be trivially doable.

We already have plumbing to make pwritev2 and AIO issue FUA writes
via the RWF_DSYNC flag through the fs/iomap.c direct IO path. FUA is
only valid if the file does not have dirty metadata (e.g. because of
block allocation) and that requires the filesystem block mapping to
tell the IO path if FUA can be used. Otherwise a journal flush is
also required to make the data stable and there's no point in doing
a FUA write for the data in that case...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
