Date:   Tue, 31 Oct 2017 21:47:07 -0600
From:   Ross Zwisler <ross.zwisler@...ux.intel.com>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
        Ross Zwisler <ross.zwisler@...ux.intel.com>,
        Christoph Hellwig <hch@...radead.org>,
        linux-ext4 <linux-ext4@...r.kernel.org>,
        "linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        linux-xfs@...r.kernel.org, Linux API <linux-api@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 17/17] xfs: support for synchronous DAX faults

On Tue, Oct 31, 2017 at 02:50:01PM -0700, Dan Williams wrote:
> On Tue, Oct 31, 2017 at 8:19 AM, Jan Kara <jack@...e.cz> wrote:
> > On Fri 27-10-17 12:08:34, Jan Kara wrote:
> >> On Fri 27-10-17 08:16:11, Dave Chinner wrote:
> >> > On Thu, Oct 26, 2017 at 05:48:04PM +0200, Jan Kara wrote:
> >> > > > > diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> >> > > > > index f179bdf1644d..b43be199fbdf 100644
> >> > > > > --- a/fs/xfs/xfs_iomap.c
> >> > > > > +++ b/fs/xfs/xfs_iomap.c
> >> > > > > @@ -33,6 +33,7 @@
> >> > > > >  #include "xfs_error.h"
> >> > > > >  #include "xfs_trans.h"
> >> > > > >  #include "xfs_trans_space.h"
> >> > > > > +#include "xfs_inode_item.h"
> >> > > > >  #include "xfs_iomap.h"
> >> > > > >  #include "xfs_trace.h"
> >> > > > >  #include "xfs_icache.h"
> >> > > > > @@ -1086,6 +1087,10 @@ xfs_file_iomap_begin(
> >> > > > >               trace_xfs_iomap_found(ip, offset, length, 0, &imap);
> >> > > > >       }
> >> > > > >
> >> > > > > +     if ((flags & IOMAP_WRITE) && xfs_ipincount(ip) &&
> >> > > > > +         (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
> >> > > > > +             iomap->flags |= IOMAP_F_DIRTY;
> >> > > >
> >> > > > This is the very definition of an inode that is "fdatasync dirty".
> >> > > >
> >> > > > Hmmmm, shouldn't this also be set for read faults, too?
> >> > >
> >> > > No, read faults don't need to set IOMAP_F_DIRTY since the user cannot write
> >> > > any data to the page which they'd then like to be persistent. The only reason
> >> > > why I thought it could be useful for a while was that it would be nice to make
> >> > > a MAP_SYNC mapping provide the guarantee that the data you see now is the data
> >> > > you'll see after a crash
> >> >
> >> > Isn't that the entire point of MAP_SYNC? i.e. that when we return
> >> > from a page fault, the app knows that the data and its underlying
> >> > extent are on persistent storage?
> >> >
> >> > > but we cannot provide that guarantee for an RO
> >> > > mapping anyway if someone else has the page mapped as well. So I just
> >> > > decided not to return IOMAP_F_DIRTY for read faults.
> >> >
> >> > If there are multiple MAP_SYNC mappings to the inode, I would have
> >> > expected that they all sync all of the data/metadata on every page
> >> > fault, regardless of who dirtied the inode. An RO mapping doesn't
> >>
> >> Well, they all do sync on every *write* fault, regardless of who dirtied
> >> the inode.
> >>
> >> > mean the data/metadata on the inode can't change, it just means it
> >> > can't change through that mapping.  Running fsync() to guarantee the
> >> > persistence of that data/metadata doesn't actually change any
> >> > data....
> >> >
> >> > IOWs, if read faults don't guarantee the mapped range has stable
> >> > extents on a MAP_SYNC mapping, then I think MAP_SYNC is broken
> >> > because it's not giving consistent guarantees to userspace. Yes, it
> >> > works fine when only one MAP_SYNC mapping is modifying the inode,
> >> > but the moment we have concurrent operations on the inode that
> >> > aren't MAP_SYNC or O_SYNC this goes out the window....
> >>
> >> MAP_SYNC as I've implemented it provides guarantees only for data the
> >> process has actually written. I agree with that and it was a conscious
> >> decision. In my opinion that covers most usecases, provides reasonably
> >> simple semantics (i.e., if you write data through a MAP_SYNC mapping, you
> >> can persist it just using CPU instructions), and reasonable performance.
> >>
> >> Now you seem to suggest the semantics should be: "Data you have read from or
> >> written to a MAP_SYNC mapping can be persisted using CPU instructions." And
> >> from an implementation POV we can do that rather easily (just rip out the
> >> IOMAP_WRITE checks). But I'm unsure whether this additional guarantee would
> >> be useful enough to justify the slowdown of read faults. I was not able to
> >> come up with a good usecase and so I've decided on the current semantics.
> >> What do other people think?
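
For reference, the write-fault-only semantics described above look roughly like
this from userspace.  This is just a minimal sketch, not part of the patch
series: it assumes an x86 CPU with CLWB support (build with -mclwb), a file on
a DAX-capable filesystem, and a made-up /mnt/pmem path; the #defines mirror the
UAPI flag values in case older libc headers don't provide them.

/* Minimal sketch: write through a MAP_SYNC mapping, then persist the data
 * with CPU instructions only.  Build with something like:
 *     gcc -O2 -mclwb -o map_sync_demo map_sync_demo.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <immintrin.h>          /* _mm_clwb(), _mm_sfence() */

#ifndef MAP_SHARED_VALIDATE     /* UAPI values, in case libc headers lack them */
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x080000
#endif

int main(void)
{
	size_t len = 4096;
	int fd = open("/mnt/pmem/data", O_RDWR);        /* illustrative path */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* MAP_SHARED_VALIDATE makes the kernel reject flags it cannot honour,
	 * so if this mmap() succeeds, MAP_SYNC is really in effect. */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* The write fault triggered by this store is synchronous: by the time
	 * it returns, any block allocation/metadata needed for the page is on
	 * stable storage.  Only the data itself is still in the CPU caches. */
	strcpy(p, "hello, MAP_SYNC");

	_mm_clwb(p);            /* write the dirty cache line back to media */
	_mm_sfence();           /* order the write-back before moving on */

	munmap(p, len);
	close(fd);
	return 0;
}
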
> >
> > Nobody has commented on this for a couple of days, so how do we proceed? I
> > would prefer to go with just a guarantee for written data; we can always make
> > the guarantee stronger (i.e. apply it also to read data) when some user
> > comes along with a good usecase.
> 
> I think it is easier to strengthen the guarantee than to loosen it later,
> especially since it is not yet clear that we have a use case for the
> stronger semantic. At least the initial motivation for MAP_SYNC was
> for writers.

I agree.  It seems like all threads/processes in a given application need to
use MAP_SYNC consistently so they can be sure that data that is written (and
then possibly read) will be durable on media.  I think what you have is a good
starting point, and we can adjust later if necessary.
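
One related note on using MAP_SYNC "consistently": since the flag is only
accepted together with MAP_SHARED_VALIDATE, an application can detect at
mmap() time whether the guarantee is actually in effect and fall back to
explicit flushing otherwise.  A rough sketch, with the helper name and the
fallback policy being illustrative rather than anything from this series:

#include <sys/mman.h>

#ifndef MAP_SHARED_VALIDATE            /* values from the kernel UAPI headers */
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x080000
#endif

/* Map 'len' bytes of 'fd'; report whether synchronous faults are in effect. */
static void *map_pmem(int fd, size_t len, int *sync_ok)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p != MAP_FAILED) {
		*sync_ok = 1;   /* metadata is durable once a write fault returns */
		return p;
	}

	/* The kernel rejected MAP_SYNC (e.g. no DAX); the caller must persist
	 * its writes with msync()/fdatasync() instead of CPU flushes alone. */
	*sync_ok = 0;
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}
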
