Date:   Mon, 09 Oct 2017 13:44:24 +0100
From:   Ben Hutchings <ben@...adent.org.uk>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:     akpm@...ux-foundation.org, "Steve French" <smfrench@...il.com>,
        "Rabin Vincent" <rabinv@...s.com>,
        "Pavel Shilovsky" <pshilov@...rosoft.com>
Subject: [PATCH 3.16 093/192] CIFS: fix circular locking dependency

3.16.49-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Rabin Vincent <rabinv@...s.com>

commit 966681c9f029afd5decee069b7658bb58ad0a863 upstream.

When a CIFS filesystem is mounted with the forcemand option and the
following command is run on it, lockdep warns about a circular locking
dependency between CifsInodeInfo::lock_sem and the inode lock.

 while echo foo > hello; do :; done & while touch -c hello; do :; done

cifs_writev() takes the locks in the wrong order. Note that we can't
simply flip the acquisition order, because the function releases the
inode lock before the call to generic_write_sync() while it holds the
lock_sem across that call.

But, AFAICS, there is no need to hold CifsInodeInfo::lock_sem across
the generic_write_sync() call either, so we can release both locks
before generic_write_sync() and change the acquisition order.

 ======================================================
 WARNING: possible circular locking dependency detected
 4.12.0-rc7+ #9 Not tainted
 ------------------------------------------------------
 touch/487 is trying to acquire lock:
  (&cifsi->lock_sem){++++..}, at: cifsFileInfo_put+0x88f/0x16a0

 but task is already holding lock:
  (&sb->s_type->i_mutex_key#11){+.+.+.}, at: utimes_common+0x3ad/0x870

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (&sb->s_type->i_mutex_key#11){+.+.+.}:
        __lock_acquire+0x1f74/0x38f0
        lock_acquire+0x1cc/0x600
        down_write+0x74/0x110
        cifs_strict_writev+0x3cb/0x8c0
        __vfs_write+0x4c1/0x930
        vfs_write+0x14c/0x2d0
        SyS_write+0xf7/0x240
        entry_SYSCALL_64_fastpath+0x1f/0xbe

 -> #0 (&cifsi->lock_sem){++++..}:
        check_prevs_add+0xfa0/0x1d10
        __lock_acquire+0x1f74/0x38f0
        lock_acquire+0x1cc/0x600
        down_write+0x74/0x110
        cifsFileInfo_put+0x88f/0x16a0
        cifs_setattr+0x992/0x1680
        notify_change+0x61a/0xa80
        utimes_common+0x3d4/0x870
        do_utimes+0x1c1/0x220
        SyS_utimensat+0x84/0x1a0
        entry_SYSCALL_64_fastpath+0x1f/0xbe

 other info that might help us debug this:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&sb->s_type->i_mutex_key#11);
                                lock(&cifsi->lock_sem);
                                lock(&sb->s_type->i_mutex_key#11);
   lock(&cifsi->lock_sem);

  *** DEADLOCK ***

 2 locks held by touch/487:
  #0:  (sb_writers#10){.+.+.+}, at: mnt_want_write+0x41/0xb0
  #1:  (&sb->s_type->i_mutex_key#11){+.+.+.}, at: utimes_common+0x3ad/0x870

 stack backtrace:
 CPU: 0 PID: 487 Comm: touch Not tainted 4.12.0-rc7+ #9
 Call Trace:
  dump_stack+0xdb/0x185
  print_circular_bug+0x45b/0x790
  __lock_acquire+0x1f74/0x38f0
  lock_acquire+0x1cc/0x600
  down_write+0x74/0x110
  cifsFileInfo_put+0x88f/0x16a0
  cifs_setattr+0x992/0x1680
  notify_change+0x61a/0xa80
  utimes_common+0x3d4/0x870
  do_utimes+0x1c1/0x220
  SyS_utimensat+0x84/0x1a0
  entry_SYSCALL_64_fastpath+0x1f/0xbe

Fixes: 19dfc1f5f2ef03a52 ("cifs: fix the race in cifs_writev()")
Signed-off-by: Rabin Vincent <rabinv@...s.com>
Signed-off-by: Steve French <smfrench@...il.com>
Acked-by: Pavel Shilovsky <pshilov@...rosoft.com>
[bwh: Backported to 3.16:
 - Keep using mutex_{,un}lock()
 - Update both branches of if (!cifs_find_lock_conflict(...))]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2593,18 +2593,19 @@ cifs_writev(struct kiocb *iocb, struct i
 	ssize_t rc = -EACCES;
 	loff_t lock_pos = iocb->ki_pos;
 
+	mutex_lock(&inode->i_mutex);
 	/*
 	 * We need to hold the sem to be sure nobody modifies lock list
 	 * with a brlock that prevents writing.
 	 */
 	down_read(&cinode->lock_sem);
-	mutex_lock(&inode->i_mutex);
 	if (file->f_flags & O_APPEND)
 		lock_pos = i_size_read(inode);
 	if (!cifs_find_lock_conflict(cfile, lock_pos, iov_iter_count(from),
 				     server->vals->exclusive_lock_type, NULL,
 				     CIFS_WRITE_OP)) {
 		rc = __generic_file_write_iter(iocb, from);
+		up_read(&cinode->lock_sem);
 		mutex_unlock(&inode->i_mutex);
 
 		if (rc > 0) {
@@ -2615,9 +2616,9 @@ cifs_writev(struct kiocb *iocb, struct i
 				rc = err;
 		}
 	} else {
+		up_read(&cinode->lock_sem);
 		mutex_unlock(&inode->i_mutex);
 	}
-	up_read(&cinode->lock_sem);
 	return rc;
 }
 
