Message-ID: <162879971699.3306668.8977537647318498651.stgit@warthog.procyon.org.uk>
Date: Thu, 12 Aug 2021 21:21:57 +0100
From: David Howells <dhowells@...hat.com>
To: willy@...radead.org
Cc: "Darrick J. Wong" <darrick.wong@...cle.com>,
Seth Jennings <sjenning@...ux.vnet.ibm.com>,
Bob Liu <bob.liu@...cle.com>, linux-nfs@...r.kernel.org,
Christoph Hellwig <hch@....de>,
Dan Magenheimer <dan.magenheimer@...cle.com>,
Trond Myklebust <trond.myklebust@...merspace.com>,
Trond Myklebust <trond.myklebust@...marydata.com>,
Minchan Kim <minchan@...nel.org>, dhowells@...hat.com,
dhowells@...hat.com, trond.myklebust@...marydata.com,
darrick.wong@...cle.com, hch@....de, viro@...iv.linux.org.uk,
jlayton@...nel.org, sfrench@...ba.org,
torvalds@...ux-foundation.org, linux-nfs@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH v2 0/5] mm: Fix NFS swapfiles and use DIO for swapfiles

Hi Willy, Trond,

Here's v2 of a change to make reads from and writes to a swapfile use
async DIO, via the ->direct_IO() method, rather than ->readpage(), as
requested by Willy.
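
To give an idea of the general shape of this, here's a sketch of reading a
swap page through ->direct_IO() (illustrative only, not the actual patch
code: the function name is made up and error handling, page flag updates
and async completion are omitted):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/uio.h>

/* Hypothetical helper: read one swap page through ->direct_IO(). */
static int swap_readpage_dio_sketch(struct swap_info_struct *sis,
                                    struct page *page)
{
        struct file *swap_file = sis->swap_file;
        struct address_space *mapping = swap_file->f_mapping;
        struct bio_vec bv = {
                .bv_page        = page,
                .bv_len         = PAGE_SIZE,
                .bv_offset      = 0,
        };
        struct iov_iter iter;
        struct kiocb kiocb;
        ssize_t ret;

        /* Wrap the page in a bvec iterator and point a synchronous kiocb
         * at the corresponding file offset. */
        iov_iter_bvec(&iter, READ, &bv, 1, PAGE_SIZE);
        init_sync_kiocb(&kiocb, swap_file);
        kiocb.ki_flags |= IOCB_DIRECT;
        kiocb.ki_pos = page_file_offset(page);

        /* Let the filesystem's DIO path do the read. */
        ret = mapping->a_ops->direct_IO(&kiocb, &iter);
        return ret == PAGE_SIZE ? 0 : -EIO;
}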

The consensus seems to be that this is probably the wrong approach and
that ->direct_IO() needs replacing with a swap-specific method - but that
will require a bunch of filesystems to be modified as well.

Note that I'm refcounting the kiocb struct around the call to
->direct_IO().  This is required in cachefiles, where I go in through the
->read_iter/->write_iter methods, as both the core routines and the
filesystems touch the kiocb *after* calling the completion routine.
Should this practice be disallowed?
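
For reference, the refcounting I mean is roughly the following (again a
sketch with invented struct and function names, not the patch itself):

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical wrapper: a kiocb plus a refcount shared between the
 * submitter and the ->ki_complete() handler. */
struct swap_kiocb {
        struct kiocb    kiocb;
        refcount_t      ref;
};

static void swap_kiocb_put(struct swap_kiocb *ski)
{
        if (refcount_dec_and_test(&ski->ref))
                kfree(ski);
}

static void swap_dio_complete(struct kiocb *kiocb, long ret, long ret2)
{
        struct swap_kiocb *ski = container_of(kiocb, struct swap_kiocb, kiocb);

        /* ... end the page I/O here ... */
        swap_kiocb_put(ski);            /* drop the completion's ref */
}

static ssize_t swap_dio_submit(struct address_space *mapping,
                               struct swap_kiocb *ski, struct iov_iter *iter)
{
        ssize_t ret;

        refcount_set(&ski->ref, 2);     /* one for us, one for completion */
        ski->kiocb.ki_complete = swap_dio_complete;

        ret = mapping->a_ops->direct_IO(&ski->kiocb, iter);

        /* The kiocb can't vanish under us here, even if ->ki_complete()
         * has already run and dropped its ref. */
        swap_kiocb_put(ski);
        return ret;
}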

I've also added an additional patch that tries to remove the
bio-submission path entirely from the swap_readpage/writepage code and to
go down only the ->direct_IO route, but this fails spectacularly (from I/O
errors to ATA failure messages on the test disk when using a normal
swapspace).  This was suggested by Willy, as the bio-submission code is a
potential data corruptor if it's asked to write out a compound page that
crosses extent boundaries.

Whilst trying to make this work, I found that NFS's support for swapfiles
seems to have been non-functional since Aug 2019 (I think), so the first
patch fixes that.  The question is: do we actually *want* to keep this
functionality, given that it seems no one has tested it with an upstream
kernel in the last couple of years?
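
For anyone not following along, the thing that appears to have broken it
is the IS_SWAPFILE() rejection that went into generic_write_checks()
around that time.  Paraphrased (this is not the exact upstream code), it's
of this shape, and it catches NFS's swap-out writes along with ordinary
writes:

/* Paraphrase of the relevant bit of generic_write_checks(): a write
 * aimed at an active swapfile is refused outright, which the NFS
 * swap-out path runs into because it goes through the normal write
 * checks. */
ssize_t generic_write_checks(struct kiocb *iocb, struct iov_iter *from)
{
        struct inode *inode = iocb->ki_filp->f_mapping->host;

        if (IS_SWAPFILE(inode))
                return -ETXTBSY;

        /* ... limit checking etc. omitted ... */
}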

My patches can also be found here:

https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=swap-dio

I tested this using the procedure and program outlined in the first patch.
I also encountered occasional instances of the following warning, so I'm
wondering if there's a scheduling problem somewhere:

BUG: workqueue lockup - pool cpus=0-3 flags=0x5 nice=0 stuck for 34s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
in-flight: 1565:fill_page_cache_func
workqueue events_highpri: flags=0x10
pwq 3: cpus=1 node=0 flags=0x1 nice=-20 active=1/256 refcnt=2
in-flight: 1547:fill_page_cache_func
pwq 1: cpus=0 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2
in-flight: 1811:fill_page_cache_func
workqueue events_unbound: flags=0x2
pwq 8: cpus=0-3 flags=0x5 nice=0 active=3/512 refcnt=5
pending: fsnotify_connector_destroy_workfn, fsnotify_mark_destroy_workfn, cleanup_offline_cgwbs_workfn
workqueue events_power_efficient: flags=0x82
pwq 8: cpus=0-3 flags=0x5 nice=0 active=4/256 refcnt=6
pending: neigh_periodic_work, neigh_periodic_work, check_lifetime, do_cache_clean
workqueue writeback: flags=0x4a
pwq 8: cpus=0-3 flags=0x5 nice=0 active=1/256 refcnt=4
in-flight: 433(RESCUER):wb_workfn
workqueue rpciod: flags=0xa
pwq 8: cpus=0-3 flags=0x5 nice=0 active=38/256 refcnt=40
in-flight: 7:rpc_async_schedule, 1609:rpc_async_schedule, 1610:rpc_async_schedule, 912:rpc_async_schedule, 1613:rpc_async_schedule, 1631:rpc_async_schedule, 34:rpc_async_schedule, 44:rpc_async_schedule
pending: rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule
workqueue ext4-rsv-conversion: flags=0x2000a
pool 1: cpus=0 node=0 flags=0x0 nice=-20 hung=59s workers=2 idle: 6
pool 3: cpus=1 node=0 flags=0x1 nice=-20 hung=43s workers=2 manager: 20
pool 6: cpus=3 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 498 29
pool 8: cpus=0-3 flags=0x5 nice=0 hung=34s workers=9 manager: 1623
pool 9: cpus=0-3 flags=0x5 nice=-20 hung=0s workers=2 manager: 5224 idle: 859

Note that, as far as I can tell, this is due to DIO writes to NFS only;
no reads had happened yet at that point.

Changes:
========
ver #2:
 - Remove the callback param to __swap_writepage() as it's invariant.
 - Allocate the kiocb on the stack in sync mode.
 - Do an async DIO write if WB_SYNC_ALL isn't set (see the sketch below).
 - Try to remove the BIO submission paths.
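
To make the ver #2 write-side behaviour concrete, it does roughly the
following (a sketch only: the function and helper names are made up, and
the real changes are in mm/page_io.c):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/uio.h>
#include <linux/writeback.h>

/* Sketch of the sync/async decision for swap-out writes. */
static int swap_writepage_dio_sketch(struct page *page,
                                     struct writeback_control *wbc,
                                     struct swap_info_struct *sis)
{
        struct address_space *mapping = sis->swap_file->f_mapping;
        struct bio_vec bv = {
                .bv_page        = page,
                .bv_len         = PAGE_SIZE,
        };
        struct iov_iter iter;
        ssize_t ret;

        iov_iter_bvec(&iter, WRITE, &bv, 1, PAGE_SIZE);

        if (wbc->sync_mode == WB_SYNC_ALL) {
                /* Synchronous writeback: the kiocb can live on the stack
                 * as we wait for the write to finish before returning. */
                struct kiocb kiocb;

                init_sync_kiocb(&kiocb, sis->swap_file);
                kiocb.ki_flags |= IOCB_DIRECT;
                kiocb.ki_pos = page_file_offset(page);
                ret = mapping->a_ops->direct_IO(&kiocb, &iter);
                return ret == PAGE_SIZE ? 0 : -EIO;
        }

        /* Otherwise go async: heap-allocate a refcounted kiocb with
         * ->ki_complete set (as sketched further up) and let the
         * completion handler end the page writeback.
         * start_async_swap_dio_write() is a made-up name for that path. */
        return start_async_swap_dio_write(sis->swap_file, page, &iter);
}
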
David
Link: https://lore.kernel.org/r/162876946134.3068428.15475611190876694695.stgit@warthog.procyon.org.uk/ # v1
---
David Howells (5):
nfs: Fix write to swapfile failure due to generic_write_checks()
mm: Remove the callback func argument from __swap_writepage()
mm: Make swap_readpage() for SWP_FS_OPS use ->direct_IO() not ->readpage()
mm: Make __swap_writepage() do async DIO if asked for it
mm: Remove swap BIO paths and only use DIO paths [BROKEN]
fs/direct-io.c | 2 +
include/linux/bio.h | 2 +
include/linux/fs.h | 1 +
include/linux/swap.h | 4 +-
mm/page_io.c | 379 ++++++++++++++++++++++---------------------
mm/zswap.c | 2 +-
6 files changed, 204 insertions(+), 186 deletions(-)