Message-Id: <20200820091607.890615373@linuxfoundation.org>
Date: Thu, 20 Aug 2020 11:19:42 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
syzbot+7a0d9d0b26efefe61780@...kaller.appspotmail.com,
Suren Baghdasaryan <surenb@...gle.com>,
"Joel Fernandes (Google)" <joel@...lfernandes.org>
Subject: [PATCH 4.14 007/228] staging: android: ashmem: Fix lockdep warning for write operation

From: Suren Baghdasaryan <surenb@...gle.com>

commit 3e338d3c95c735dc3265a86016bb4c022ec7cadc upstream.

syzbot report [1] describes a deadlock when a write operation against an
ashmem fd, executed while ashmem is shrinking its cache, results in the
following lock sequence:

Possible unsafe locking scenario:

CPU0                    CPU1
----                    ----
lock(fs_reclaim);
                        lock(&sb->s_type->i_mutex_key#13);
                        lock(fs_reclaim);
lock(&sb->s_type->i_mutex_key#13);

kswapd takes fs_reclaim and then inode_lock, while generic_perform_write
takes inode_lock and then fs_reclaim. However, ashmem does not support
writing into the backing shmem with a write syscall; the only way to change
its content is to mmap it and operate on the mapped memory. Therefore the
race that lockdep is warning about is not valid. Resolve this by introducing
a separate lockdep class for the backing shmem inodes.

[1]: https://lkml.kernel.org/lkml/0000000000000b5f9d059aa2037f@google.com/
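
As an illustrative aside (not part of the upstream commit message), the fix
boils down to the pattern sketched below. The helper name
ashmem_isolate_inode_class() is made up for this example; the static
struct lock_class_key, file_inode() and lockdep_set_class() are the kernel
primitives the patch actually applies to the backing shmem inode's i_rwsem.

#include <linux/fs.h>
#include <linux/lockdep.h>

/* One static key per lockdep class: lockdep identifies classes by the key's address. */
static struct lock_class_key backing_shmem_inode_class;

/* Hypothetical helper: move a backing file's inode lock into its own class. */
static void ashmem_isolate_inode_class(struct file *vmfile)
{
	struct inode *inode = file_inode(vmfile);

	/*
	 * i_rwsem now belongs to backing_shmem_inode_class instead of the
	 * shmem filesystem's default i_mutex_key class, so lockdep no longer
	 * merges it with ordinary shmem inodes that are written via write(2).
	 */
	lockdep_set_class(&inode->i_rwsem, &backing_shmem_inode_class);
}

Because the key is static, every ashmem backing inode shares the same new
class, and, per the rationale above, that class never appears on the
write(2) side of the fs_reclaim ordering lockdep complained about.
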
Reported-by: syzbot+7a0d9d0b26efefe61780@...kaller.appspotmail.com
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
Cc: stable <stable@...r.kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
Link: https://lore.kernel.org/r/20200730192632.3088194-1-surenb@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/staging/android/ashmem.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -100,6 +100,15 @@ static DEFINE_MUTEX(ashmem_mutex);
 static struct kmem_cache *ashmem_area_cachep __read_mostly;
 static struct kmem_cache *ashmem_range_cachep __read_mostly;
 
+/*
+ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
+ * warning about the race between kswapd taking fs_reclaim before inode_lock
+ * and write syscall taking inode_lock and then fs_reclaim.
+ * Note that such race is impossible because ashmem does not support write
+ * syscalls operating on the backing shmem.
+ */
+static struct lock_class_key backing_shmem_inode_class;
+
 static inline unsigned long range_size(struct ashmem_range *range)
 {
 	return range->pgend - range->pgstart + 1;
@@ -406,6 +415,7 @@ static int ashmem_mmap(struct file *file
 	if (!asma->file) {
 		char *name = ASHMEM_NAME_DEF;
 		struct file *vmfile;
+		struct inode *inode;
 
 		if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
 			name = asma->name;
@@ -417,6 +427,8 @@ static int ashmem_mmap(struct file *file
 			goto out;
 		}
 		vmfile->f_mode |= FMODE_LSEEK;
+		inode = file_inode(vmfile);
+		lockdep_set_class(&inode->i_rwsem, &backing_shmem_inode_class);
 		asma->file = vmfile;
 		/*
 		 * override mmap operation of the vmfile so that it can't be