Message-ID: <2885897.1676990364@warthog.procyon.org.uk>
Date: Tue, 21 Feb 2023 14:39:24 +0000
From: David Howells <dhowells@...hat.com>
To: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: dhowells@...hat.com, Matthew Wilcox <willy@...radead.org>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Steve French <smfrench@...il.com>,
Steve French <stfrench@...rosoft.com>,
Shyam Prasad N <nspmangalore@...il.com>,
Rohith Surabattula <rohiths.msft@...il.com>,
Tom Talpey <tom@...pey.com>, Paulo Alcantara <pc@....nz>,
Jeff Layton <jlayton@...nel.org>, linux-cifs@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-next@...r.kernel.org
Subject: Re: linux-next: manual merge of the mm-stable tree with the cifs tree

Stephen Rothwell <sfr@...b.auug.org.au> wrote:
> Andrew has already asked for it to be merged, so it's up to Linus.
>
> You could fetch it yourself and do a trial merge and send me your
> resolution ..
>
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm tags/mm-stable-2023-02-20-13-37

Okay, did that. See attached. Also here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-cifs-mm

David
---
commit 71ad4f67439e60fe04bbf7aed8870e6f83a5d15e
Author: David Howells <dhowells@...hat.com>
Date: Tue Feb 21 13:23:05 2023 +0000

cifs: Handle transition to filemap_get_folios_tag()

filemap_get_folios_tag() is being added and find_get_pages_range_tag() is
being removed in effectively a single event. This causes a problem for
the:

cifs: Change the I/O paths to use an iterator rather than a page list

patch[1] on the cifs/for-next branch as it's adding a new user of the
latter (which is going away), but can't yet be converted to using the
former (which doesn't yet exist upstream).
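
For reference, the shape of the conversion is roughly as follows (a
minimal sketch only; process_folio() is a hypothetical stand-in for the
per-folio work and error handling is omitted):

	/* Old: find_get_pages_range_tag() returns up to nr_pages
	 * refcounted page pointers; each one must be put individually.
	 */
	struct page *page;
	unsigned int n;

	n = find_get_pages_range_tag(mapping, &index, end_index,
				     PAGECACHE_TAG_DIRTY, 1, &page);
	if (n) {
		process_folio(page_folio(page)); /* hypothetical helper */
		put_page(page);
	}

	/* New: filemap_get_folios_tag() fills a folio_batch, and a
	 * single folio_batch_release() drops all the refs at once.
	 */
	struct folio_batch fbatch;
	unsigned int i;

	folio_batch_init(&fbatch);
	n = filemap_get_folios_tag(mapping, &index, end_index,
				   PAGECACHE_TAG_DIRTY, &fbatch);
	for (i = 0; i < n; i++)
		process_folio(fbatch.folios[i]); /* hypothetical helper */
	folio_batch_release(&fbatch);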
Here's a conversion patch that could be applied at merge time to deal with
this. The new cifs_writepages_region() is based directly on
afs_writepages_region() and the AFS changes in the mm tree[2]:

commit acc8d8588cb7e3e64b0d2fa611dad06574cd67b1
Author: Vishal Moola (Oracle) <vishal.moola@...il.com>

afs: convert afs_writepages_region() to use filemap_get_folios_tag()

can be replicated in cifs almost exactly.
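
Condensed, the loop this produces in cifs_writepages_region() has
roughly this shape (a sketch only; the locking, revalidation and skip
handling are exactly as in the diff below):

	folio_batch_init(&fbatch);

	do {
		pgoff_t index = start / PAGE_SIZE;

		n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
					   PAGECACHE_TAG_DIRTY, &fbatch);
		if (!n)
			break;

		for (i = 0; i < n; i++) {
			folio = fbatch.folios[i];
			start = folio_pos(folio);
			/* Lock the folio, revalidate it against the
			 * mapping, write it back or skip it, then
			 * advance start by the amount written.
			 */
		}

		folio_batch_release(&fbatch);
		cond_resched();
	} while (wbc->nr_to_write > 0);
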
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Stephen Rothwell <sfr@...b.auug.org.au>
cc: Steve French <sfrench@...ba.org>
cc: Shyam Prasad N <nspmangalore@...il.com>
cc: Rohith Surabattula <rohiths.msft@...il.com>
cc: Tom Talpey <tom@...pey.com>
cc: Paulo Alcantara <pc@....nz>
cc: Jeff Layton <jlayton@...nel.org>
cc: linux-cifs@...r.kernel.org
cc: Vishal Moola (Oracle) <vishal.moola@...il.com>
Link: https://lore.kernel.org/r/20230216214745.3985496-15-dhowells@redhat.com/ [1]
Link: https://lore.kernel.org/r/20230104211448.4804-6-vishal.moola@gmail.com/ [2]

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 58801d39213a..52af9cf93c65 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2856,78 +2856,85 @@ static int cifs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
loff_t start, loff_t end, loff_t *_next)
{
+ struct folio_batch fbatch;
struct folio *folio;
- struct page *head_page;
+ unsigned int i;
ssize_t ret;
int n, skips = 0;
+ folio_batch_init(&fbatch);
+
do {
pgoff_t index = start / PAGE_SIZE;
- n = find_get_pages_range_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, 1, &head_page);
+ n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
+ PAGECACHE_TAG_DIRTY, &fbatch);
if (!n)
break;
- folio = page_folio(head_page);
- start = folio_pos(folio); /* May regress with THPs */
+ for (i = 0; i < n; i++) {
+ folio = fbatch.folios[i];
+ start = folio_pos(folio); /* May regress with THPs */
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_put(folio);
- return ret;
- }
- } else {
- if (!folio_trylock(folio)) {
- folio_put(folio);
- return 0;
+ /* At this point we hold neither the i_pages lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
+ */
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ ret = folio_lock_killable(folio);
+ if (ret < 0) {
+ folio_batch_release(&fbatch);
+ return ret;
+ }
+ } else {
+ if (!folio_trylock(folio))
+ continue;
}
- }
- if (folio_mapping(folio) != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- folio_put(folio);
- continue;
- }
+ if (folio->mapping != mapping ||
+ !folio_test_dirty(folio)) {
+ start += folio_size(folio);
+ folio_unlock(folio);
+ continue;
+ }
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
+ if (folio_test_writeback(folio) ||
+ folio_test_fscache(folio)) {
+ folio_unlock(folio);
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ folio_wait_writeback(folio);
#ifdef CONFIG_CIFS_FSCACHE
- folio_wait_fscache(folio);
+ folio_wait_fscache(folio);
#endif
- } else {
- start += folio_size(folio);
- }
- folio_put(folio);
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched())
- break;
- skips++;
+ } else {
+ start += folio_size(folio);
+ }
+ if (wbc->sync_mode == WB_SYNC_NONE) {
+ if (skips >= 5 || need_resched()) {
+ *_next = start;
+ return 0;
+ }
+ skips++;
+ }
+ continue;
}
- continue;
- }
- if (!folio_clear_dirty_for_io(folio))
- /* We hold the page lock - it should've been dirty. */
- WARN_ON(1);
+ if (!folio_clear_dirty_for_io(folio))
+ /* We hold the page lock - it should've been dirty. */
+ WARN_ON(1);
- ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
- folio_put(folio);
- if (ret < 0)
- return ret;
+ ret = cifs_write_back_from_locked_folio(mapping, wbc,
+ folio, start, end);
+ if (ret < 0) {
+ folio_batch_release(&fbatch);
+ return ret;
+ }
+
+ start += ret;
+ }
- start += ret;
+ folio_batch_release(&fbatch);
cond_resched();
} while (wbc->nr_to_write > 0);