Message-ID: <1075260.1703004686@warthog.procyon.org.uk>
Date: Tue, 19 Dec 2023 16:51:26 +0000
From: David Howells <dhowells@...hat.com>
To: Jeff Layton <jlayton@...nel.org>
Cc: dhowells@...hat.com, Steve French <smfrench@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Marc Dionne <marc.dionne@...istor.com>,
Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>, Tom Talpey <tom@...pey.com>,
Dominique Martinet <asmadeus@...ewreck.org>,
Eric Van Hensbergen <ericvh@...nel.org>,
Ilya Dryomov <idryomov@...il.com>,
Christian Brauner <christian@...uner.io>, linux-cachefs@...hat.com,
linux-afs@...ts.infradead.org, linux-cifs@...r.kernel.org,
linux-nfs@...r.kernel.org, ceph-devel@...r.kernel.org,
v9fs@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 36/39] netfs: Implement a write-through caching option

Jeff Layton <jlayton@...nel.org> wrote:
> > This can't be used with content encryption as that may require expansion of
> > the write RPC beyond the write being made.
> >
> > This doesn't affect writes via mmap - those are written back in the normal
> > way; similarly failed writethrough writes are marked dirty and left to
> > writeback to retry. Another option would be to simply invalidate them, but
> > the contents can be simultaneously accessed by read() and through mmap.
> >
>
> I do wish Linux were less of a mess in this regard. Different
> filesystems behave differently when writeback fails.

Cifs is particularly, um, entertaining in this regard: it allows the write
to fail on the server with a checksum failure if the source data changes
whilst the write is in progress, and then it just retries the write later.
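
To illustrate what I mean by leaving it to writeback, redirtying the failed
range might look something like this (just a sketch - the helper is made up
and isn't what cifs actually does):

	#include <linux/pagemap.h>
	#include <linux/pagevec.h>

	/* Sketch: the server rejected the write because the source data
	 * changed under it; redirty the folios covering the failed range
	 * so that a later writeback pass retries them.
	 */
	static void redirty_failed_range(struct address_space *mapping,
					 pgoff_t first, pgoff_t last)
	{
		struct folio_batch fbatch;
		unsigned int i;

		folio_batch_init(&fbatch);
		while (filemap_get_folios(mapping, &first, last, &fbatch)) {
			for (i = 0; i < folio_batch_count(&fbatch); i++)
				folio_mark_dirty(fbatch.folios[i]);
			folio_batch_release(&fbatch);
		}
	}
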
> That said, the modern consensus with local filesystems is to just leave
> the pages clean when buffered writeback fails, but set a writeback error
> on the inode. That at least keeps dirty pages from stacking up in the
> cache. In the case of something like a netfs, we usually invalidate the
> inode and the pages -- netfs's usually have to spontaneously deal with
> that anyway, so we might as well.
>
> Marking the pages dirty here should mean that they'll effectively get a
> second try at writeback, which is a change in behavior from most
> filesystems. I'm not sure it's a bad one, but writeback can take a long
> time if you have a laggy network.

I'm not sure what the best thing to do is. If everything is doing
O_DSYNC/writethrough I/O on an inode and there is no mmap, then invalidating
the pages is probably not a bad way to deal with failure here.
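
Something like the following, perhaps (a sketch only - the handler is
hypothetical, but mapping_set_error() and mapping_mapped() are the usual
helpers for latching the error and checking for userspace mappings):

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/* Hypothetical failure handler, not what the patch does today. */
	static void writethrough_failed(struct address_space *mapping,
					loff_t start, loff_t end, int error)
	{
		/* Latch the error so that fsync()/close() sees it. */
		mapping_set_error(mapping, error);

		if (!mapping_mapped(mapping)) {
			/* No mmap: drop the pages and force a reread from
			 * the server on the next access.
			 */
			invalidate_inode_pages2_range(mapping,
						      start >> PAGE_SHIFT,
						      end >> PAGE_SHIFT);
			return;
		}

		/* Otherwise leave the folios dirty for writeback to retry. */
	}
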
> When a write has already failed once, why do you think it'll succeed on
> a second attempt (and probably with page-aligned I/O, I guess)?

See above with cifs. I wonder if the pages being written to should be made RO
and page_mkwrite() forced to lock against DSYNC writethrough.
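
Roughly this sort of thing in the fault path, maybe (very much a sketch; a
real handler would also need sb_start_pagefault() and the various sanity
checks):

	#include <linux/mm.h>
	#include <linux/pagemap.h>

	static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
	{
		struct folio *folio = page_folio(vmf->page);

		folio_lock(folio);
		/* Don't let the page become writable whilst an O_DSYNC
		 * writethrough still has it on the wire, so that the source
		 * data can't change mid-RPC.
		 */
		folio_wait_writeback(folio);
		folio_mark_dirty(folio);
		return VM_FAULT_LOCKED;
	}
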
> Another question: when the writeback is (re)attempted, will it end up
> just doing page-aligned I/O, or is the byte range still going to be
> limited to the written range?

At the moment, it then happens exactly as it would if it weren't doing
writethrough - so it will write partial folios if it's doing a streaming
write and full folios otherwise.
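
In other words, the span to write gets picked per-folio, something along
these lines (sketch; I'm assuming the struct netfs_folio info attached by
the streaming-write patch earlier in this series):

	#include <linux/netfs.h>

	static void get_write_span(struct folio *folio,
				   size_t *offset, size_t *len)
	{
		struct netfs_folio *finfo = netfs_folio_info(folio);

		if (finfo) {
			/* Streaming write: only part of the folio holds
			 * valid dirty data, so write just that span.
			 */
			*offset = finfo->dirty_offset;
			*len = finfo->dirty_len;
		} else {
			/* Fully cached folio: write out the whole thing. */
			*offset = 0;
			*len = folio_size(folio);
		}
	}
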
> The more I consider it, I think it might be a lot simpler to just "fail
> fast" here rather than remarking the write dirty.

You may be right - but, again, mmap :-/

David