Message-ID: <20200708065127.GM2005@dread.disaster.area>
Date: Wed, 8 Jul 2020 16:51:27 +1000
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@....de>
Cc: Matthew Wilcox <willy@...radead.org>,
	Goldwyn Rodrigues <rgoldwyn@...e.de>,
	linux-fsdevel@...r.kernel.org, linux-btrfs@...r.kernel.org,
	fdmanana@...il.com, dsterba@...e.cz, darrick.wong@...cle.com,
	cluster-devel@...hat.com, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org
Subject: Re: always fall back to buffered I/O after invalidation failures,
 was: Re: [PATCH 2/6] iomap: IOMAP_DIO_RWF_NO_STALE_PAGECACHE return if
 page invalidation fails

On Tue, Jul 07, 2020 at 03:00:30PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 07, 2020 at 01:57:05PM +0100, Matthew Wilcox wrote:
> > On Tue, Jul 07, 2020 at 07:43:46AM -0500, Goldwyn Rodrigues wrote:
> > > On 9:53 01/07, Christoph Hellwig wrote:
> > > > On Mon, Jun 29, 2020 at 02:23:49PM -0500, Goldwyn Rodrigues wrote:
> > > > > From: Goldwyn Rodrigues <rgoldwyn@...e.com>
> > > > >
> > > > > For direct I/O, add the flag IOMAP_DIO_RWF_NO_STALE_PAGECACHE to
> > > > > indicate that if the page invalidation fails, control is returned
> > > > > to the filesystem so it may fall back to buffered mode.
> > > > >
> > > > > Reviewed-by: Darrick J. Wong <darrick.wong@...cle.com>
> > > > > Signed-off-by: Goldwyn Rodrigues <rgoldwyn@...e.com>
> > > >
> > > > I'd like to start a discussion of whether this shouldn't really be
> > > > the default behavior. If we have page cache that can't be
> > > > invalidated, it actually makes a whole lot of sense to not do
> > > > direct I/O, avoid the warnings, etc.
> > > >
> > > > Adding all the relevant lists.
> > >
> > > Since no one responded so far, let me see if I can stir the cauldron :)
> > >
> > > What error should be returned in case of such an error? I think the
> >
> > Christoph's message is ambiguous. I don't know if he means "fail the
> > I/O with an error" or "satisfy the I/O through the page cache". I'm
> > strongly in favour of the latter.
>
> Same here. Sorry if my previous mail was unclear.
>
> > Indeed, I'm in favour of not invalidating the page cache at all for
> > direct I/O. For reads, I think the page cache should be used to
> > satisfy any portion of the read which is currently cached. For
> > writes, I think we should write into the page cache pages which
> > currently exist, and then force those pages to be written back, but
> > left in cache.
>
> Something like that, yes.

So are we really willing to take the performance regression that
occurs from copying out of the page cache consuming lots more CPU
than an actual direct IO read? Or that direct IO writes suddenly
serialise because there are page cache pages and now we have to do
buffered IO?

Direct IO should be a deterministic, zero-copy IO path to/from
storage. Using the CPU to copy data during direct IO is the complete
opposite of the intended functionality, not to mention the behaviour
that many applications have been carefully designed and tuned for.
Hence I think that forcing iomap to use cached pages for DIO is a
non-starter.

I have no problems with providing infrastructure that allows
filesystems to -opt in- to using buffered IO for the direct IO path.
However, the change in IO behaviour caused by unpredictably switching
between direct IO and buffered IO (e.g. suddenly DIO writes serialise
-all IO-) will cause unacceptable performance regressions for many
applications and be -very difficult to diagnose- in production
systems.
IOWs, we need to let the individual filesystems decide how they want
to use the page cache for direct IO. Just because we have new direct
IO infrastructure (i.e. iomap), it does not mean we can just make
wholesale changes to the direct IO path behaviour...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
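For reference, the decision point under discussion is the page cache
invalidation step in the DIO write path. A condensed sketch, assuming
the flag works as the patch describes; the helper name and its exact
placement are illustrative, not taken from the posted series:

/*
 * Condensed sketch of the invalidation step before a DIO write.
 * The IOMAP_DIO_RWF_NO_STALE_PAGECACHE branch shows the proposed
 * behaviour; the warn-and-continue branch is today's behaviour.
 */
static int dio_invalidate_pagecache(struct kiocb *iocb, loff_t pos,
				    loff_t len, unsigned int dio_flags)
{
	struct address_space *mapping = iocb->ki_filp->f_mapping;
	int ret;

	ret = invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
					    (pos + len - 1) >> PAGE_SHIFT);
	if (!ret)
		return 0;
	if (dio_flags & IOMAP_DIO_RWF_NO_STALE_PAGECACHE)
		return -ENOTBLK;	/* hand control back to the fs */
	dio_warn_stale_pagecache(iocb->ki_filp);
	return 0;			/* warn, then proceed anyway */
}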