Message-ID: <780211.1719318546@warthog.procyon.org.uk>
Date: Tue, 25 Jun 2024 13:29:06 +0100
From: David Howells <dhowells@...hat.com>
Cc: dhowells@...hat.com, Christian Brauner <christian@...uner.io>,
Matthew Wilcox <willy@...radead.org>,
Jeff Layton <jlayton@...nel.org>, netfs@...ts.linux.dev,
v9fs@...ts.linux.dev, linux-afs@...ts.infradead.org,
linux-cifs@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v2] netfs: Fix netfs_page_mkwrite() to check folio->mapping is valid
Fix netfs_page_mkwrite() to check that folio->mapping is valid once it has
taken the folio lock (as filemap_page_mkwrite() does). Without this,
generic/247 occasionally oopses with something like the following:
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
RIP: 0010:trace_event_raw_event_netfs_folio+0x61/0xc0
...
Call Trace:
<TASK>
? __die_body+0x1a/0x60
? page_fault_oops+0x6e/0xa0
? exc_page_fault+0xc2/0xe0
? asm_exc_page_fault+0x22/0x30
? trace_event_raw_event_netfs_folio+0x61/0xc0
trace_netfs_folio+0x39/0x40
netfs_page_mkwrite+0x14c/0x1d0
do_page_mkwrite+0x50/0x90
do_pte_missing+0x184/0x200
__handle_mm_fault+0x42d/0x500
handle_mm_fault+0x121/0x1f0
do_user_addr_fault+0x23e/0x3c0
exc_page_fault+0xc2/0xe0
asm_exc_page_fault+0x22/0x30
This is due to the invalidate_inode_pages2_range() issued at the end of the
DIO write interfering with the mmap'd writes: the invalidation can strip the
folio out of the pagecache between the fault occurring and
netfs_page_mkwrite() taking the folio lock, leaving folio->mapping NULL for
the tracepoint to dereference.
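Note for reference that this is the same revalidation that
filemap_page_mkwrite() performs once it holds the folio lock.  As a rough,
illustrative sketch only (the handler name below is made up; this is not the
patch itself), the pattern in a minimal ->page_mkwrite() handler looks
something like:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Illustrative only: a skeletal ->page_mkwrite() showing the revalidation
 * step.  The function name is invented for the example.
 */
static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct folio *folio = page_folio(vmf->page);
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;

	if (folio_lock_killable(folio) < 0)
		return VM_FAULT_RETRY;

	/* Revalidate under the folio lock: truncation or invalidation may
	 * have stripped the folio from the mapping between the fault
	 * occurring and the lock being taken.
	 */
	if (folio->mapping != mapping) {
		folio_unlock(folio);
		return VM_FAULT_NOPAGE;
	}

	/* ... wait for writeback, mark the folio dirty, etc. ... */
	folio_mark_dirty(folio);
	return VM_FAULT_LOCKED;
}

Returning VM_FAULT_NOPAGE after unlocking means no writable PTE is installed
and the access will simply be retried.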
Fixes: 102a7e2c598c ("netfs: Allow buffered shared-writeable mmap through netfs_page_mkwrite()")
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Matthew Wilcox <willy@...radead.org>
cc: Jeff Layton <jlayton@...nel.org>
cc: netfs@...ts.linux.dev
cc: v9fs@...ts.linux.dev
cc: linux-afs@...ts.infradead.org
cc: linux-cifs@...r.kernel.org
cc: linux-mm@...ck.org
cc: linux-fsdevel@...r.kernel.org
---
Changes
=======
ver #2)
- Actually unlock the folio rather than returning VM_FAULT_LOCKED with
VM_FAULT_NOPAGE.
fs/netfs/buffered_write.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 07bc1fd43530..270f8ebf8328 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -523,6 +523,7 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	struct netfs_group *group;
 	struct folio *folio = page_folio(vmf->page);
 	struct file *file = vmf->vma->vm_file;
+	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = file_inode(file);
 	struct netfs_inode *ictx = netfs_inode(inode);
 	vm_fault_t ret = VM_FAULT_RETRY;
@@ -534,6 +535,11 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 
 	if (folio_lock_killable(folio) < 0)
 		goto out;
+	if (folio->mapping != mapping) {
+		folio_unlock(folio);
+		ret = VM_FAULT_NOPAGE;
+		goto out;
+	}
 
 	if (folio_wait_writeback_killable(folio)) {
 		ret = VM_FAULT_LOCKED;
@@ -549,7 +555,7 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	group = netfs_folio_group(folio);
 	if (group != netfs_group && group != NETFS_FOLIO_COPY_TO_CACHE) {
 		folio_unlock(folio);
-		err = filemap_fdatawait_range(inode->i_mapping,
+		err = filemap_fdatawait_range(mapping,
 					      folio_pos(folio),
 					      folio_pos(folio) + folio_size(folio));
 		switch (err) {