Message-ID: <CANpmjNN7GhF0e5gKPpn8mQS1Nry_8out4j_meDM0PqbZ9K5Ang@mail.gmail.com>
Date: Wed, 15 Jul 2020 18:45:33 +0200
From: Marco Elver <elver@...gle.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: syzbot <syzbot+0f1e470df6a4316e0a11@...kaller.appspotmail.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Will Deacon <will@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: KCSAN: data-race in generic_file_buffered_read / generic_file_buffered_read
On Wed, 15 Jul 2020 at 18:33, Eric Biggers <ebiggers@...nel.org> wrote:
>
> [+Cc linux-fsdevel]
>
> On Wed, Jul 15, 2020 at 05:29:12PM +0200, 'Marco Elver' via syzkaller-bugs wrote:
> > On Wed, Jul 15, 2020 at 08:16AM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: e9919e11 Merge branch 'for-linus' of git://git.kernel.org/..
> > > git tree: upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=1217a83b100000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=570eb530a65cd98e
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=0f1e470df6a4316e0a11
> > > compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project.git ca2dcbd030eadbf0aa9b660efe864ff08af6e18b)
> > >
> > > Unfortunately, I don't have any reproducer for this issue yet.
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+0f1e470df6a4316e0a11@...kaller.appspotmail.com
> > >
> > > ==================================================================
> > > BUG: KCSAN: data-race in generic_file_buffered_read / generic_file_buffered_read
> >
> > Our guess is that this is either misuse of an API from userspace, or a
> > bug. Can someone clarify?
> >
> > Below are the snippets of code around these accesses.
>
> Concurrent reads on the same file descriptor are allowed. Not with sys_read(),
> as that implicitly uses the file position. But it's allowed with sys_pread(),
> and also with sys_sendfile() which is the case syzbot is reporting here.
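(For reference, the racing pattern is as simple as two threads doing
positional reads on one shared fd -- a minimal userspace sketch, with
the file name and sizes made up and error handling omitted:)

	#include <fcntl.h>
	#include <pthread.h>
	#include <unistd.h>

	static int fd;

	/* pread() takes an explicit offset and leaves the shared file
	 * position alone, so concurrent readers on one fd are valid use
	 * of the API -- but they still share the fd's readahead state. */
	static void *reader(void *arg)
	{
		char buf[4096];
		off_t off = (long)arg * sizeof(buf);
		int i;

		for (i = 0; i < 1000; i++)
			pread(fd, buf, sizeof(buf), off);
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		fd = open("/tmp/some-file", O_RDONLY);	/* made-up path */
		pthread_create(&t1, NULL, reader, (void *)0L);
		pthread_create(&t2, NULL, reader, (void *)1L);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		close(fd);
		return 0;
	}
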
>
> >
> > > write to 0xffff8880968747b0 of 8 bytes by task 6336 on cpu 0:
> > > generic_file_buffered_read+0x18be/0x19e0 mm/filemap.c:2246
> >
> > ...
> > would_block:
> > error = -EAGAIN;
> > out:
> > ra->prev_pos = prev_index;
> > ra->prev_pos <<= PAGE_SHIFT;
> > 2246) ra->prev_pos |= prev_offset;
> >
> > *ppos = ((loff_t)index << PAGE_SHIFT) + offset;
> > file_accessed(filp);
> > return written ? written : error;
> > }
> > EXPORT_SYMBOL_GPL(generic_file_buffered_read);
> > ...
>
> Well, it's a data race. Each open file descriptor has just one readahead state
> (struct file_ra_state), and concurrent reads of the same file descriptor
> use/change that readahead state without any locking.
>
> Presumably this has traditionally been considered okay, since readahead is
> "only" for performance and doesn't affect correctness. And for performance
> reasons, we want to avoid locking during file reads.
>
> So we may just need to annotate all access to file_ra_state with
> READ_ONCE() and WRITE_ONCE()...
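Taken literally, annotating every access would turn the write side
quoted above into something like this (sketch only, not a proposed
patch):

	WRITE_ONCE(ra->prev_pos, prev_index);
	WRITE_ONCE(ra->prev_pos, READ_ONCE(ra->prev_pos) << PAGE_SHIFT);
	WRITE_ONCE(ra->prev_pos, READ_ONCE(ra->prev_pos) | prev_offset);
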
The thing that stands out here is the multiple accesses on both the
reader and the writer side. If there were only 1 access, where the
race is expected, a simple READ_ONCE()/WRITE_ONCE() might have been OK.
But here we actually have several writes to the same variable
'prev_pos', and the reader also does several reads of the same
variable. Maybe we got lucky because the compiler just turns it into 1
load, keeps the value in a register for the various modifications, and
then does 1 store to write it back. Similarly, on the reader side we
may have gotten lucky in that the compiler does just 1 actual load. If
that behaviour is what makes this safe, it needs to be made explicit,
so that the compiler cannot generate anything else.
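For example, something like the below would make the single load and
single store explicit (only a sketch, untested; 'new_prev_pos' and
'prev_pos' are illustrative local names):

	writer (the out-path quoted above):

		loff_t new_prev_pos = ((loff_t)prev_index << PAGE_SHIFT) |
				      prev_offset;

		WRITE_ONCE(ra->prev_pos, new_prev_pos);

	reader:

		loff_t prev_pos = READ_ONCE(ra->prev_pos);

		prev_index = prev_pos >> PAGE_SHIFT;
		prev_offset = prev_pos & (PAGE_SIZE-1);
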
> > > generic_file_read_iter+0x7d/0x3e0 mm/filemap.c:2326
> > > ext4_file_read_iter+0x2d6/0x420 fs/ext4/file.c:74
> > > call_read_iter include/linux/fs.h:1902 [inline]
> > > generic_file_splice_read+0x22a/0x310 fs/splice.c:312
> > > do_splice_to fs/splice.c:870 [inline]
> > > splice_direct_to_actor+0x2a8/0x660 fs/splice.c:950
> > > do_splice_direct+0xf2/0x170 fs/splice.c:1059
> > > do_sendfile+0x562/0xb10 fs/read_write.c:1540
> > > __do_sys_sendfile64 fs/read_write.c:1601 [inline]
> > > __se_sys_sendfile64 fs/read_write.c:1587 [inline]
> > > __x64_sys_sendfile64+0xf2/0x130 fs/read_write.c:1587
> > > do_syscall_64+0x51/0xb0 arch/x86/entry/common.c:384
> > > entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > >
> > > read to 0xffff8880968747b0 of 8 bytes by task 6334 on cpu 1:
> > > generic_file_buffered_read+0x11e/0x19e0 mm/filemap.c:2011
> >
> > ...
> > index = *ppos >> PAGE_SHIFT;
> > prev_index = ra->prev_pos >> PAGE_SHIFT;
> > 2011) prev_offset = ra->prev_pos & (PAGE_SIZE-1);
> > last_index = (*ppos + iter->count + PAGE_SIZE-1) >> PAGE_SHIFT;
> > offset = *ppos & ~PAGE_MASK;
> > ...
> >
> > > generic_file_read_iter+0x7d/0x3e0 mm/filemap.c:2326
> > > ext4_file_read_iter+0x2d6/0x420 fs/ext4/file.c:74
> > > call_read_iter include/linux/fs.h:1902 [inline]
> > > generic_file_splice_read+0x22a/0x310 fs/splice.c:312
> > > do_splice_to fs/splice.c:870 [inline]
> > > splice_direct_to_actor+0x2a8/0x660 fs/splice.c:950
> > > do_splice_direct+0xf2/0x170 fs/splice.c:1059
> > > do_sendfile+0x562/0xb10 fs/read_write.c:1540
> > > __do_sys_sendfile64 fs/read_write.c:1601 [inline]
> > > __se_sys_sendfile64 fs/read_write.c:1587 [inline]
> > > __x64_sys_sendfile64+0xf2/0x130 fs/read_write.c:1587
> > > do_syscall_64+0x51/0xb0 arch/x86/entry/common.c:384
> > > entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > >
> > > Reported by Kernel Concurrency Sanitizer on:
> > > CPU: 1 PID: 6334 Comm: syz-executor.0 Not tainted 5.8.0-rc5-syzkaller #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> > > ==================================================================