Message-ID: <20170824123126.GA21282@infradead.org>
Date: Thu, 24 Aug 2017 05:31:26 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Jan Kara <jack@...e.cz>
Cc: linux-fsdevel@...r.kernel.org, linux-nvdimm@...ts.01.org,
Andy Lutomirski <luto@...nel.org>, linux-ext4@...r.kernel.org,
linux-xfs@...r.kernel.org, Christoph Hellwig <hch@...radead.org>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Boaz Harrosh <boazh@...app.com>
Subject: Re: [PATCH 13/13] ext4: Support for synchronous DAX faults
On Thu, Aug 17, 2017 at 06:08:15PM +0200, Jan Kara wrote:
> We return the IOMAP_F_NEEDDSYNC flag from ext4_iomap_begin() for a
> synchronous write fault when the inode has some uncommitted metadata
> changes. In the fault handler ext4_dax_fault() we then detect this case,
> call vfs_fsync_range() to make sure all metadata is committed, and call
> dax_pfn_mkwrite() to mark the PTE as writeable. Note that this will also
> dirty the corresponding radix tree entry, which is what we want - fsync(2)
> will still provide data integrity guarantees for applications not using
> userspace flushing. And applications using userspace flushing can avoid
> calling fsync(2) and thus avoid the performance overhead.
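
[For context, a minimal sketch of the fault-side flow the quoted commit
message describes - not the patch itself. The helper name below and the
way the handler learns that IOMAP_F_NEEDDSYNC was set (the need_dsync
parameter) are assumptions for illustration only.]

    /*
     * Sketch only: how a DAX write-fault handler could act on the
     * IOMAP_F_NEEDDSYNC case described above.  How the flag reaches
     * the handler is an assumed detail (need_dsync).
     */
    static int ext4_dax_fault_sync_path(struct vm_fault *vmf, bool need_dsync)
    {
            struct file *file = vmf->vma->vm_file;
            loff_t pos = (loff_t)vmf->pgoff << PAGE_SHIFT;

            if (!need_dsync)
                    return 0;       /* nothing extra to do in this sketch */

            /* Commit any metadata the faulting extent depends on. */
            if (vfs_fsync_range(file, pos, pos + PAGE_SIZE - 1, 1))
                    return VM_FAULT_SIGBUS;

            /*
             * Only now make the PTE writeable; this also dirties the
             * corresponding radix tree entry, so fsync(2) keeps its
             * data integrity guarantees.
             */
            return dax_pfn_mkwrite(vmf);
    }
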
Why is this only wired up for the huge_fault handler and not the
regular fault handler?