Message-ID: <3edf1d47e2856572191c74d231f0bff4406adee6.camel@kernel.org>
Date: Tue, 19 Dec 2023 12:19:35 -0500
From: Jeff Layton <jlayton@...nel.org>
To: David Howells <dhowells@...hat.com>
Cc: Steve French <smfrench@...il.com>, Matthew Wilcox <willy@...radead.org>,
  Marc Dionne <marc.dionne@...istor.com>, Paulo Alcantara
 <pc@...guebit.com>, Shyam Prasad N <sprasad@...rosoft.com>, Tom Talpey
 <tom@...pey.com>, Dominique Martinet <asmadeus@...ewreck.org>, Eric Van
 Hensbergen <ericvh@...nel.org>, Ilya Dryomov <idryomov@...il.com>,
 Christian Brauner <christian@...uner.io>,  linux-cachefs@...hat.com,
 linux-afs@...ts.infradead.org,  linux-cifs@...r.kernel.org,
 linux-nfs@...r.kernel.org,  ceph-devel@...r.kernel.org,
 v9fs@...ts.linux.dev, linux-fsdevel@...r.kernel.org,  linux-mm@...ck.org,
 netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 36/39] netfs: Implement a write-through caching option

On Tue, 2023-12-19 at 16:51 +0000, David Howells wrote:
> Jeff Layton <jlayton@...nel.org> wrote:
> 
> > > This can't be used with content encryption as that may require expansion of
> > > the write RPC beyond the write being made.
> > > 
> > > This doesn't affect writes via mmap - those are written back in the normal
> > > way; similarly failed writethrough writes are marked dirty and left to
> > > writeback to retry.  Another option would be to simply invalidate them, but
> > > the contents can be simultaneously accessed by read() and through mmap.
> > > 
> > 
> > I do wish Linux were less of a mess in this regard. Different
> > filesystems behave differently when writeback fails.
> 
> Cifs is particularly, um, entertaining in this regard as it allows the write
> to fail on the server due to a checksum failure if the source data changes
> during the write and then just retries it later.
> 

Should cifs be using bounce pages here? Then again, skipping the copy is
more efficient in the common case, so maybe the retry is worth the extra
hit if checksum failures happen seldom enough.

> > That said, the modern consensus with local filesystems is to just leave
> > the pages clean when buffered writeback fails, but set a writeback error
> > on the inode. That at least keeps dirty pages from stacking up in the
> > cache. In the case of something like a netfs, we usually invalidate the
> > inode and the pages -- netfses usually have to spontaneously deal with
> > that anyway, so we might as well.
> > 
> > Marking the pages dirty here should mean that they'll effectively get a
> > second try at writeback, which is a change in behavior from most
> > filesystems. I'm not sure it's a bad one, but writeback can take a long
> > time if you have a laggy network.
> 
> I'm not sure what the best thing to do is.  If everything is doing
> O_DSYNC/writethrough I/O on an inode and there is no mmap, then invalidating
> the pages is probably not a bad way to deal with failure here.
> 

That's a big if ;)

> > When a write has already failed once, why do you think it'll succeed on
> > a second attempt (and probably with page-aligned I/O, I guess)?
> 
> See above with cifs.  I wonder if the pages being written to should be made RO
> and page_mkwrite() forced to lock against DSYNC writethrough.
> 

That sounds pretty heavy-handed, particularly if the server goes offline
for a bit. Now you're stuck in some locking call in page_mkwrite...

> > Another question: when the writeback is (re)attempted, will it end up
> > just doing page-aligned I/O, or is the byte range still going to be
> > limited to the written range?
> 
> At the moment, it then happens exactly as it would if it wasn't doing
> writethrough - so it will write partial folios if it's doing a streaming write
> and will do full folios otherwise.
> 
>
> > The more I consider it, I think it might be a lot simpler to just "fail
> > fast" here rather than remarking the write dirty.
> 
> You may be right - but, again, mmap:-/
> 

There's nothing we can do about mmap -- we're stuck with page-sized I/Os
there.

With normal buffered I/O I still think just leaving the pages clean is
probably the least bad option. I think it's also sort of the Linux
"standard" behavior (for better or worse).

Willy, do you have any thoughts here?
-- 
Jeff Layton <jlayton@...nel.org>
