Message-ID: <d74a8b22-bd5e-102f-e896-79e66b09a4a4@kernel.dk>
Date: Mon, 2 Nov 2020 10:38:36 -0700
From: Jens Axboe <axboe@...nel.dk>
To: syzbot <syzbot+b57abf7ee60829090495@...kaller.appspotmail.com>,
io-uring@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...nel.org, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org,
syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk,
will@...nel.org
Subject: Re: KASAN: null-ptr-deref Write in kthread_use_mm
On 11/2/20 4:54 AM, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 4e78c578 Add linux-next specific files for 20201030
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=148969d4500000
> kernel config: https://syzkaller.appspot.com/x/.config?x=83318758268dc331
> dashboard link: https://syzkaller.appspot.com/bug?extid=b57abf7ee60829090495
> compiler: gcc (GCC) 10.1.0-syz 20200507
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=17e1346c500000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1388fbca500000
>
> The issue was bisected to:
>
> commit 4d004099a668c41522242aa146a38cc4eb59cb1e
> Author: Peter Zijlstra <peterz@...radead.org>
> Date: Fri Oct 2 09:04:21 2020 +0000
>
> lockdep: Fix lockdep recursion
>
> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=1354e614500000
> final oops: https://syzkaller.appspot.com/x/report.txt?x=10d4e614500000
> console output: https://syzkaller.appspot.com/x/log.txt?x=1754e614500000
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+b57abf7ee60829090495@...kaller.appspotmail.com
> Fixes: 4d004099a668 ("lockdep: Fix lockdep recursion")
>
> ==================================================================
> BUG: KASAN: null-ptr-deref in instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
> BUG: KASAN: null-ptr-deref in atomic_inc include/asm-generic/atomic-instrumented.h:240 [inline]
> BUG: KASAN: null-ptr-deref in mmgrab include/linux/sched/mm.h:36 [inline]
> BUG: KASAN: null-ptr-deref in kthread_use_mm+0x11c/0x2a0 kernel/kthread.c:1257
> Write of size 4 at addr 0000000000000060 by task io_uring-sq/26191
>
> CPU: 1 PID: 26191 Comm: io_uring-sq Not tainted 5.10.0-rc1-next-20201030-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
> __dump_stack lib/dump_stack.c:77 [inline]
> dump_stack+0x107/0x163 lib/dump_stack.c:118
> __kasan_report mm/kasan/report.c:549 [inline]
> kasan_report.cold+0x5/0x37 mm/kasan/report.c:562
> check_memory_region_inline mm/kasan/generic.c:186 [inline]
> check_memory_region+0x13d/0x180 mm/kasan/generic.c:192
> instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
> atomic_inc include/asm-generic/atomic-instrumented.h:240 [inline]
> mmgrab include/linux/sched/mm.h:36 [inline]
> kthread_use_mm+0x11c/0x2a0 kernel/kthread.c:1257
> __io_sq_thread_acquire_mm fs/io_uring.c:1092 [inline]
> __io_sq_thread_acquire_mm+0x1c4/0x220 fs/io_uring.c:1085
> io_sq_thread_acquire_mm_files.isra.0+0x125/0x180 fs/io_uring.c:1104
> io_init_req fs/io_uring.c:6661 [inline]
> io_submit_sqes+0x89d/0x25f0 fs/io_uring.c:6757
> __io_sq_thread fs/io_uring.c:6904 [inline]
> io_sq_thread+0x462/0x1630 fs/io_uring.c:6971
> kthread+0x3af/0x4a0 kernel/kthread.c:292
> ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
> ==================================================================
> Kernel panic - not syncing: panic_on_warn set ...
> CPU: 1 PID: 26191 Comm: io_uring-sq Tainted: G B 5.10.0-rc1-next-20201030-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
> __dump_stack lib/dump_stack.c:77 [inline]
> dump_stack+0x107/0x163 lib/dump_stack.c:118
> panic+0x306/0x73d kernel/panic.c:231
> end_report+0x58/0x5e mm/kasan/report.c:106
> __kasan_report mm/kasan/report.c:552 [inline]
> kasan_report.cold+0xd/0x37 mm/kasan/report.c:562
> check_memory_region_inline mm/kasan/generic.c:186 [inline]
> check_memory_region+0x13d/0x180 mm/kasan/generic.c:192
> instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
> atomic_inc include/asm-generic/atomic-instrumented.h:240 [inline]
> mmgrab include/linux/sched/mm.h:36 [inline]
> kthread_use_mm+0x11c/0x2a0 kernel/kthread.c:1257
> __io_sq_thread_acquire_mm fs/io_uring.c:1092 [inline]
> __io_sq_thread_acquire_mm+0x1c4/0x220 fs/io_uring.c:1085
> io_sq_thread_acquire_mm_files.isra.0+0x125/0x180 fs/io_uring.c:1104
> io_init_req fs/io_uring.c:6661 [inline]
> io_submit_sqes+0x89d/0x25f0 fs/io_uring.c:6757
> __io_sq_thread fs/io_uring.c:6904 [inline]
> io_sq_thread+0x462/0x1630 fs/io_uring.c:6971
> kthread+0x3af/0x4a0 kernel/kthread.c:292
> ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
> Kernel Offset: disabled
> Rebooting in 86400 seconds..
I think this should fix it - we could _probably_ get by with a
READ_ONCE() of the task mm for this case, but let's play it safe and
lock down the task for a guaranteed consistent view of the current
state.
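
To spell the race out: the old code loads ctx->sqo_task->mm three
separate times, and nothing stops the submitting task from exiting
between any two of them. Here is the pre-patch function again with the
loads annotated (same code as the removed lines in the diff below, only
comments added):

static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)
{
	if (!current->mm) {
		/* loads #1 and #2 of ctx->sqo_task->mm */
		if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
			     !ctx->sqo_task->mm ||
			     !mmget_not_zero(ctx->sqo_task->mm)))
			return -EFAULT;
		/*
		 * Load #3: if the task cleared its ->mm after the
		 * mmget_not_zero() above (exit_mm() does that under
		 * task_lock()), this re-read sees NULL and
		 * kthread_use_mm() ends up doing mmgrab(NULL). That
		 * lines up with the KASAN report: an atomic_inc()
		 * write of size 4 at 0x60, i.e. &mm->mm_count with
		 * mm == NULL.
		 */
		kthread_use_mm(ctx->sqo_task->mm);
	}
	return 0;
}
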
diff --git a/fs/io_uring.c b/fs/io_uring.c
index dd2ee77feec6..610332f443bd 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -995,20 +995,33 @@ static void io_sq_thread_drop_mm(void)
 	if (mm) {
 		kthread_unuse_mm(mm);
 		mmput(mm);
+		current->mm = NULL;
 	}
 }
 
 static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx)
 {
-	if (!current->mm) {
-		if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
-			     !ctx->sqo_task->mm ||
-			     !mmget_not_zero(ctx->sqo_task->mm)))
-			return -EFAULT;
-		kthread_use_mm(ctx->sqo_task->mm);
+	struct mm_struct *mm;
+
+	if (current->mm)
+		return 0;
+
+	/* Should never happen */
+	if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL)))
+		return -EFAULT;
+
+	task_lock(ctx->sqo_task);
+	mm = ctx->sqo_task->mm;
+	if (unlikely(!mm || !mmget_not_zero(mm)))
+		mm = NULL;
+	task_unlock(ctx->sqo_task);
+
+	if (mm) {
+		kthread_use_mm(mm);
+		return 0;
 	}
 
-	return 0;
+	return -EFAULT;
 }
 
 static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
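
The task_lock() also pairs with the exit side: exit_mm() clears a
task's ->mm while holding task_lock(), so any mm we observe and pin
with mmget_not_zero() under the lock stays a valid reference for
kthread_use_mm(). Roughly, the exit path does (simplified sketch, the
real exit_mm() in kernel/exit.c does more than this):

	/* ->mm only transitions to NULL under task_lock() */
	task_lock(current);
	current->mm = NULL;
	task_unlock(current);
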
--
Jens Axboe