Message-ID: <20230712195652.969194-1-surenb@google.com>
Date: Wed, 12 Jul 2023 12:56:52 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: Liam.Howlett@...cle.com, linux-mm@...ck.org, willy@...radead.org,
ldufour@...ux.ibm.com, michel@...pinasse.org, jglisse@...gle.com,
vbabka@...e.cz, paulmck@...nel.org, brauner@...nel.org,
linux-kernel@...r.kernel.org,
Suren Baghdasaryan <surenb@...gle.com>,
"Liam R . Howlett" <liam.howlett@...cle.com>,
syzbot+339b02f826caafd5f7a8@...kaller.appspotmail.com
Subject: [PATCH 1/1] mm: fix a lockdep issue in vma_assert_write_locked

__is_vma_write_locked() can be used only when mmap_lock is write-locked,
to guarantee vm_lock_seq and mm_lock_seq stability during the check.
Therefore it asserts this condition before performing further checks,
which means it cannot be used unless the caller expects the mmap_lock to
be write-locked. vma_assert_locked() cannot assume this before ensuring
that the VMA is not read-locked.

Change the order of the checks in vma_assert_locked() to check whether
the VMA is read-locked first, and assert that it is write-locked only
when it is not.
Fixes: 50b88b63e3e4 ("mm: handle userfaults under VMA lock")
Reported-by: Liam R. Howlett <liam.howlett@...cle.com>
Closes: https://lore.kernel.org/all/20230712022620.3yytbdh24b7i4zrn@revolver/
Reported-by: syzbot+339b02f826caafd5f7a8@...kaller.appspotmail.com
Closes: https://lore.kernel.org/all/0000000000002db68f05ffb791bc@google.com/
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
---
include/linux/mm.h | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9687b48dfb1b..e3b022a66343 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -668,6 +668,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
 	rcu_read_unlock();
 }
 
+/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
 static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
 {
 	mmap_assert_write_locked(vma->vm_mm);
@@ -707,22 +708,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }
 
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
 
-	if (__is_vma_write_locked(vma, &mm_lock_seq))
-		return;
-
-	lockdep_assert_held(&vma->vm_lock->lock);
-	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
-
-	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
+	if (!rwsem_is_locked(&vma->vm_lock->lock))
+		vma_assert_write_locked(vma);
 }
 
 static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
--
2.41.0.455.g037347b96a-goog