lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <001a113bb44672e783055f8787a3@google.com>
Date:   Mon, 04 Dec 2017 10:03:01 -0800
From:   syzbot 
        <bot+e6aa4df2569624fc2b37ff61b464f38c3440bb04@...kaller.appspotmail.com>
To:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk
Subject: possible deadlock in fifo_open

syzkaller has found a reproducer for the following crash on
ae64f9bd1d3621b5e60d7363bc20afb46aede215
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/master
compiler: gcc (GCC) 7.1.1 20170620
.config is attached.
Raw console output is attached.
C reproducer is attached.
syzkaller reproducer is attached. See https://goo.gl/kgGztJ
for information about syzkaller reproducers.



======================================================
WARNING: possible circular locking dependency detected
4.15.0-rc2+ #206 Not tainted
------------------------------------------------------
syzkaller022699/3086 is trying to acquire lock:
 (&pipe->mutex/1){+.+.}, at: [<00000000698950dd>] __pipe_lock fs/pipe.c:88 [inline]
 (&pipe->mutex/1){+.+.}, at: [<00000000698950dd>] fifo_open+0x15c/0xa40 fs/pipe.c:916

but task is already holding lock:
 (&sig->cred_guard_mutex){+.+.}, at: [<0000000082fd15e8>] prepare_bprm_creds+0x53/0x110 fs/exec.c:1390

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&sig->cred_guard_mutex){+.+.}:
        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4004
        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
        __mutex_lock+0x16f/0x1a80 kernel/locking/mutex.c:893
        mutex_lock_killable_nested+0x16/0x20 kernel/locking/mutex.c:923
        do_io_accounting+0x1c2/0xf50 fs/proc/base.c:2682
        proc_tgid_io_accounting+0x22/0x30 fs/proc/base.c:2731
        proc_single_show+0xf8/0x170 fs/proc/base.c:744
        seq_read+0x385/0x13d0 fs/seq_file.c:234
        __vfs_read+0xef/0xa00 fs/read_write.c:411
        vfs_read+0x124/0x360 fs/read_write.c:447
        SYSC_read fs/read_write.c:573 [inline]
        SyS_read+0xef/0x220 fs/read_write.c:566
        entry_SYSCALL_64_fastpath+0x1f/0x96

-> #1 (&p->lock){+.+.}:
        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4004
        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
        __mutex_lock+0x16f/0x1a80 kernel/locking/mutex.c:893
        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
        seq_read+0xd5/0x13d0 fs/seq_file.c:165
        do_loop_readv_writev fs/read_write.c:673 [inline]
        do_iter_read+0x3db/0x5b0 fs/read_write.c:897
        vfs_readv+0x121/0x1c0 fs/read_write.c:959
        kernel_readv fs/splice.c:361 [inline]
        default_file_splice_read+0x508/0xae0 fs/splice.c:416
        do_splice_to+0x110/0x170 fs/splice.c:880
        do_splice fs/splice.c:1173 [inline]
        SYSC_splice fs/splice.c:1402 [inline]
        SyS_splice+0x11a8/0x1630 fs/splice.c:1382
        entry_SYSCALL_64_fastpath+0x1f/0x96

-> #0 (&pipe->mutex/1){+.+.}:
        check_prevs_add kernel/locking/lockdep.c:2031 [inline]
        validate_chain kernel/locking/lockdep.c:2473 [inline]
        __lock_acquire+0x3498/0x47f0 kernel/locking/lockdep.c:3500
        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4004
        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
        __mutex_lock+0x16f/0x1a80 kernel/locking/mutex.c:893
        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
        __pipe_lock fs/pipe.c:88 [inline]
        fifo_open+0x15c/0xa40 fs/pipe.c:916
        do_dentry_open+0x682/0xd70 fs/open.c:752
        vfs_open+0x107/0x230 fs/open.c:866
        do_last fs/namei.c:3379 [inline]
        path_openat+0x1157/0x3530 fs/namei.c:3519
        do_filp_open+0x25b/0x3b0 fs/namei.c:3554
        do_open_execat+0x1b9/0x5c0 fs/exec.c:849
        do_execveat_common.isra.30+0x90c/0x23c0 fs/exec.c:1741
        do_execve fs/exec.c:1848 [inline]
        SYSC_execve fs/exec.c:1929 [inline]
        SyS_execve+0x39/0x50 fs/exec.c:1924
        do_syscall_64+0x26c/0x920 arch/x86/entry/common.c:285
        return_from_SYSCALL_64+0x0/0x75

other info that might help us debug this:

Chain exists of:
   &pipe->mutex/1 --> &p->lock --> &sig->cred_guard_mutex

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&sig->cred_guard_mutex);
                                lock(&p->lock);
                                lock(&sig->cred_guard_mutex);
   lock(&pipe->mutex/1);

  *** DEADLOCK ***

1 lock held by syzkaller022699/3086:
  #0:  (&sig->cred_guard_mutex){+.+.}, at: [<0000000082fd15e8>] prepare_bprm_creds+0x53/0x110 fs/exec.c:1390

stack backtrace:
CPU: 0 PID: 3086 Comm: syzkaller022699 Not tainted 4.15.0-rc2+ #206
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
  __dump_stack lib/dump_stack.c:17 [inline]
  dump_stack+0x194/0x257 lib/dump_stack.c:53
  print_circular_bug+0x42d/0x610 kernel/locking/lockdep.c:1271
  check_prev_add+0x666/0x15f0 kernel/locking/lockdep.c:1914
  check_prevs_add kernel/locking/lockdep.c:2031 [inline]
  validate_chain kernel/locking/lockdep.c:2473 [inline]
  __lock_acquire+0x3498/0x47f0 kernel/locking/lockdep.c:3500
  lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4004
  __mutex_lock_common kernel/locking/mutex.c:756 [inline]
  __mutex_lock+0x16f/0x1a80 kernel/locking/mutex.c:893
  mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
  __pipe_lock fs/pipe.c:88 [inline]
  fifo_open+0x15c/0xa40 fs/pipe.c:916
  do_dentry_open+0x682/0xd70 fs/open.c:752
  vfs_open+0x107/0x230 fs/open.c:866
  do_last fs/namei.c:3379 [inline]
  path_openat+0x1157/0x3530 fs/namei.c:3519
  do_filp_open+0x25b/0x3b0 fs/namei.c:3554
  do_open_execat+0x1b9/0x5c0 fs/exec.c:849
  do_execveat_common.isra.30+0x90c/0x23c0 fs/exec.c:1741
  do_execve fs/exec.c:1848 [inline]
  SYSC_execve fs/exec.c:1929 [inline]
  SyS_execve+0x39/0x50 fs/exec.c:1924
  do_syscall_64+0x26c/0x920 arch/x86/entry/common.c:285
  entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x440219
RSP: 002b:00007ffd9e4890b8 EFLAGS: 00000217 ORIG_RAX: 000000000000003b
RAX: ffffffffffffffda RBX: 0030656c69662f2e RCX: 0000000000440219
RDX: 0000000020324ff0 RSI: 0000000020a7bfc8 RDI: 0000000020f8aff8
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000001 R11: 0000000000000217 R12: 0000000000401ae0
R13: 0000000000401b70 R14: 0000000000000000 R15: 0000000000000000


View attachment "config.txt" of type "text/plain" (126531 bytes)

Download attachment "raw.log" of type "application/octet-stream" (9783 bytes)

View attachment "repro.txt" of type "text/plain" (770 bytes)

Download attachment "repro.c" of type "application/octet-stream" (1962 bytes)
