Message-ID: <458060.1714582859@warthog.procyon.org.uk>
Date: Wed, 01 May 2024 18:00:59 +0100
From: David Howells <dhowells@...hat.com>
To: Christian Brauner <christian@...uner.io>
Cc: dhowells@...hat.com, Jeff Layton <jlayton@...nel.org>,
Gao Xiang <hsiangkao@...ux.alibaba.com>,
Dominique Martinet <asmadeus@...ewreck.org>,
Matthew Wilcox <willy@...radead.org>,
Steve French <smfrench@...il.com>,
Marc Dionne <marc.dionne@...istor.com>,
Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>, Tom Talpey <tom@...pey.com>,
Eric Van Hensbergen <ericvh@...nel.org>,
Ilya Dryomov <idryomov@...il.com>, netfs@...ts.linux.dev,
linux-cachefs@...hat.com, linux-afs@...ts.infradead.org,
linux-cifs@...r.kernel.org, linux-nfs@...r.kernel.org,
ceph-devel@...r.kernel.org, v9fs@...ts.linux.dev,
linux-erofs@...ts.ozlabs.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Latchesar Ionkov <lucho@...kov.net>,
Christian Schoenebeck <linux_oss@...debyte.com>
Subject: Re: [PATCH v2 14/22] netfs: New writeback implementation

This needs the attached change to allow for netfs_perform_write() changing
i_size whilst we're doing writeback.  The issue is that i_size is cached in
the netfs_io_request struct (as that's what we're going to tell the server
the new i_size should be), but we don't update that cached copy if i_size
moves between the point at which we create the request and the point at
which we decide to write out the folio that held i_size when the request
was created.

This can lead to the folio_zero_segment() that can be seen in the patch
below clearing the wrong amount of the final page - assuming it's still
the final page.
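
To illustrate with made-up numbers (this is a standalone userspace sketch,
not kernel code, and all the offsets are entirely hypothetical):

#include <stdio.h>

int main(void)
{
	unsigned long long fpos = 0x3000;	/* folio position */
	unsigned long long fsize = 0x1000;	/* folio size */
	unsigned long long req_i_size = 0x3800;	/* i_size cached in the request */
	unsigned long long cur_i_size = 0x3c00;	/* i_size after a buffered write extended it */

	/* Writeback derives the in-folio data length from the cached value
	 * and zeroes the folio tail from that offset onwards...
	 */
	unsigned long long stale_flen = req_i_size - fpos;	/* 0x800 */

	/* ...but the freshly written data now extends further into the
	 * folio, so bytes 0x800-0xbff of real data would get cleared.
	 */
	unsigned long long fresh_flen = cur_i_size - fpos;	/* 0xc00 */

	printf("zeroing folio bytes [0x%llx..0x%llx), but data now runs to 0x%llx\n",
	       stale_flen, fsize, fresh_flen);
	return 0;
}

With the change below, flen is instead computed from i_size_read() done
under the folio lock, so the zeroed tail starts at the live EOF.
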
David
---
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 69c50f4cbf41..e190043bc0da 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -315,13 +315,19 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 	struct netfs_group *fgroup; /* TODO: Use this with ceph */
 	struct netfs_folio *finfo;
 	size_t fsize = folio_size(folio), flen = fsize, foff = 0;
-	loff_t fpos = folio_pos(folio);
+	loff_t fpos = folio_pos(folio), i_size;
 	bool to_eof = false, streamw = false;
 	bool debug = false;
 
 	_enter("");
 
-	if (fpos >= wreq->i_size) {
+	/* netfs_perform_write() may shift i_size around the page or from out
+	 * of the page to beyond it, but cannot move i_size into or through the
+	 * page since we have it locked.
+	 */
+	i_size = i_size_read(wreq->inode);
+
+	if (fpos >= i_size) {
 		/* mmap beyond eof. */
 		_debug("beyond eof");
 		folio_start_writeback(folio);
@@ -332,6 +338,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 		return 0;
 	}
 
+	if (fpos + fsize > wreq->i_size)
+		wreq->i_size = i_size;
+
 	fgroup = netfs_folio_group(folio);
 	finfo = netfs_folio_info(folio);
 	if (finfo) {
@@ -342,14 +351,14 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 
 	if (wreq->origin == NETFS_WRITETHROUGH) {
 		to_eof = false;
-		if (flen > wreq->i_size - fpos)
-			flen = wreq->i_size - fpos;
-	} else if (flen > wreq->i_size - fpos) {
-		flen = wreq->i_size - fpos;
+		if (flen > i_size - fpos)
+			flen = i_size - fpos;
+	} else if (flen > i_size - fpos) {
+		flen = i_size - fpos;
 		if (!streamw)
 			folio_zero_segment(folio, flen, fsize);
 		to_eof = true;
-	} else if (flen == wreq->i_size - fpos) {
+	} else if (flen == i_size - fpos) {
 		to_eof = true;
 	}
 	flen -= foff;