Message-ID: <20190926134923.wqlkymjdfxd4iymh@box>
Date: Thu, 26 Sep 2019 16:49:23 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Josef Bacik <josef@...icpanda.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: drop mmap_sem before calling balance_dirty_pages()
in write fault
On Tue, Sep 24, 2019 at 05:43:37PM -0400, Johannes Weiner wrote:
> On Tue, Sep 24, 2019 at 01:46:08PM -0700, Matthew Wilcox wrote:
> > On Tue, Sep 24, 2019 at 03:42:38PM -0400, Johannes Weiner wrote:
> > > > I'm not a fan of moving file_update_time() to _before_ the
> > > > balance_dirty_pages call.
> > >
> > > Can you elaborate why? If the filesystem has a page_mkwrite op, it
> > > will have already called file_update_time() before this function is
> > > entered. If anything, this change makes the sequence more consistent.
> >
> > Oh, that makes sense. I thought it should be updated after all the data
> > was written, but it probably doesn't make much difference.
> >
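Side note, since the ordering question keeps coming up: a typical
->page_mkwrite handler bumps the timestamps itself before returning to
the generic fault code. Schematically (hypothetical filesystem, but the
usual pattern looks like this):

	static vm_fault_t foo_page_mkwrite(struct vm_fault *vmf)
	{
		struct file *file = vmf->vma->vm_file;
		struct inode *inode = file_inode(file);
		vm_fault_t ret = VM_FAULT_LOCKED;

		sb_start_pagefault(inode->i_sb);
		/*
		 * mtime/ctime are updated here, long before the core
		 * fault path gets around to balance_dirty_pages().
		 */
		file_update_time(file);

		lock_page(vmf->page);
		if (vmf->page->mapping != inode->i_mapping) {
			/* page was truncated under us */
			unlock_page(vmf->page);
			ret = VM_FAULT_NOPAGE;
			goto out;
		}
		set_page_dirty(vmf->page);
		wait_for_stable_page(vmf->page);
	out:
		sb_end_pagefault(inode->i_sb);
		return ret;
	}

So doing file_update_time() before balance_dirty_pages() in the generic
path just makes the no-page_mkwrite case match what page_mkwrite
filesystems already do.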
> > > > Also, this is now the third place that needs
> > > > maybe_unlock_mmap_for_io, see
> > > > https://lore.kernel.org/linux-mm/20190917120852.x6x3aypwvh573kfa@box/
> > >
> > > Good idea, I moved the helper to internal.h and converted to it.
> > >
> > > I left the shmem site alone, though. It doesn't require the file
> > > pinning, so it shouldn't pointlessly bump the file refcount and
> > > suggest such a dependency - that could cost somebody later quite a bit
> > > of time trying to understand the code.
> >
> > The problem for shmem is this:
> >
> > spin_unlock(&inode->i_lock);
> > schedule();
> >
> > spin_lock(&inode->i_lock);
> > finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
> > spin_unlock(&inode->i_lock);
> >
> > While scheduled, the VMA can go away and the inode be reclaimed, making
> > this a use-after-free. The initial suggestion was an increment on
> > the inode refcount, but since we already have a pattern which involves
> > pinning the file, I thought that was a better way to go.
>
> I completely read over the context of that email you linked - that
> there is a bug in the existing code - and looked at it as mere
> refactoring patch. My apologies.
>
> Switching that shmem site to maybe_unlock_mmap_for_io() to indirectly
> pin the inode (in a separate bug fix patch) indeed makes sense to me.
The patch on top of this one is below. Please post them together if you are
going to resend yours.
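For reference, the helper being reused here (Josef's, static in
mm/filemap.c until your patch moves it to internal.h) looks roughly like
this in my tree -- the point is that it takes a reference on
vma->vm_file *before* dropping mmap_sem, which is exactly the lifetime
guarantee the shmem site is missing:

	static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
						     struct file *fpin)
	{
		int flags = vmf->flags;

		if (fpin)
			return fpin;

		/*
		 * FAULT_FLAG_RETRY_NOWAIT means we don't want to wait on
		 * anything, so only pin the file and drop mmap_sem if
		 * FAULT_FLAG_ALLOW_RETRY alone is set.
		 */
		if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
		    FAULT_FLAG_ALLOW_RETRY) {
			fpin = get_file(vmf->vma->vm_file);
			up_read(&vmf->vma->vm_mm->mmap_sem);
		}
		return fpin;
	}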
>
> > > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> >
> > Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
From bdf96fe9e3c1a319e9fd131efbe0118ea41a41b1 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Date: Thu, 26 Sep 2019 16:34:26 +0300
Subject: [PATCH] shmem: Pin the file in shmem_fault() if mmap_sem is dropped

syzbot found the following crash:

BUG: KASAN: use-after-free in perf_trace_lock_acquire+0x401/0x530
include/trace/events/lock.h:13
Read of size 8 at addr ffff8880a5cf2c50 by task syz-executor.0/26173
CPU: 0 PID: 26173 Comm: syz-executor.0 Not tainted 5.3.0-rc6 #146
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
print_address_description.cold+0xd4/0x306 mm/kasan/report.c:351
__kasan_report.cold+0x1b/0x36 mm/kasan/report.c:482
kasan_report+0x12/0x17 mm/kasan/common.c:618
__asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
perf_trace_lock_acquire+0x401/0x530 include/trace/events/lock.h:13
trace_lock_acquire include/trace/events/lock.h:13 [inline]
lock_acquire+0x2de/0x410 kernel/locking/lockdep.c:4411
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:338 [inline]
shmem_fault+0x5ec/0x7b0 mm/shmem.c:2034
__do_fault+0x111/0x540 mm/memory.c:3083
do_shared_fault mm/memory.c:3535 [inline]
do_fault mm/memory.c:3613 [inline]
handle_pte_fault mm/memory.c:3840 [inline]
__handle_mm_fault+0x2adf/0x3f20 mm/memory.c:3964
handle_mm_fault+0x1b5/0x6b0 mm/memory.c:4001
do_user_addr_fault arch/x86/mm/fault.c:1441 [inline]
__do_page_fault+0x536/0xdd0 arch/x86/mm/fault.c:1506
do_page_fault+0x38/0x590 arch/x86/mm/fault.c:1530
page_fault+0x39/0x40 arch/x86/entry/entry_64.S:1202

It happens if the VMA gets unmapped under us while we have dropped
mmap_sem and the inode is then freed.

Pinning the file whenever we drop mmap_sem fixes the issue.
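The race, schematically:

	faulting task				other task
	-------------				----------
	shmem_fault()
	  up_read(&vma->vm_mm->mmap_sem)
	  prepare_to_wait()
	  schedule()
						munmap()/exit: VMA goes away,
						last fput() frees the inode
	  (woken up)
	  spin_lock(&inode->i_lock)	<- use-after-free

Holding a reference on the file across the wait keeps the inode alive
until we are done with it.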
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Reported-by: syzbot+03ee87124ee05af991bd@...kaller.appspotmail.com
Cc: Hillf Danton <hdanton@...a.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Hugh Dickins <hughd@...gle.com>
---
mm/shmem.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 30ce722c23fa..f672e4145cfd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2022,16 +2022,14 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 		    shmem_falloc->waitq &&
 		    vmf->pgoff >= shmem_falloc->start &&
 		    vmf->pgoff < shmem_falloc->next) {
+			struct file *fpin = NULL;
 			wait_queue_head_t *shmem_falloc_waitq;
 			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
 
 			ret = VM_FAULT_NOPAGE;
-			if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) &&
-			   !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
-				/* It's polite to up mmap_sem if we can */
-				up_read(&vma->vm_mm->mmap_sem);
+			fpin = maybe_unlock_mmap_for_io(vmf, fpin);
+			if (fpin)
 				ret = VM_FAULT_RETRY;
-			}
 
 			shmem_falloc_waitq = shmem_falloc->waitq;
 			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
@@ -2049,6 +2047,9 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 			spin_lock(&inode->i_lock);
 			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
 			spin_unlock(&inode->i_lock);
+
+			if (fpin)
+				fput(fpin);
 			return ret;
 		}
 		spin_unlock(&inode->i_lock);
--
2.21.0
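Note that the fput() is deliberately placed after finish_wait() and the
final spin_unlock(): dropping what may be the last reference to the file
frees the inode, so it has to be the very last access to inode state.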
--
Kirill A. Shutemov