Message-ID: <20230620235726.3873043-1-surenb@google.com>
Date: Tue, 20 Jun 2023 16:57:24 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: willy@...radead.org, torvalds@...uxfoundation.org,
vegard.nossum@...cle.com, mpe@...erman.id.au,
Liam.Howlett@...cle.com, lrh2000@....edu.cn, mgorman@...e.de,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com, surenb@...gle.com
Subject: [PATCH 1/3] mm: change vma_start_read to fail if VMA got detached
from under it

The current implementation of vma_start_read() checks whether the VMA is
locked before taking vma->vm_lock and then checks that again after the
lock is taken. This mechanism fails to detect the case when the VMA gets
write-locked, modified and unlocked after the first check but before
vma->vm_lock is obtained. While this is not strictly a problem
(vma_start_read() would not produce a false unlocked result), it allows
vma_start_read() to successfully lock a VMA which got detached from the
VMA tree while vma_start_read() was locking it.

The new condition checks for any change in vma->vm_lock_seq after we
obtain vma->vm_lock and causes vma_start_read() to fail if the above
race occurs.

Fixes: 5e31275cc997 ("mm: add per-VMA lock and helper functions to control it")
Suggested-by: Matthew Wilcox <willy@...radead.org>
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
---
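
For context, here is a minimal sketch (not part of the patch) of how a
speculative fault path is expected to consume vma_start_read() /
vma_end_read(); the find_vma_rcu() and do_fault_locked() helpers below
are hypothetical placeholders, not actual mm code:

/*
 * Illustrative sketch only; simplified from the per-VMA lock usage
 * pattern. find_vma_rcu() and do_fault_locked() are placeholders.
 */
static vm_fault_t fault_under_vma_lock(struct mm_struct *mm,
				       unsigned long address)
{
	struct vm_area_struct *vma;
	vm_fault_t ret;

	rcu_read_lock();
	vma = find_vma_rcu(mm, address);	/* hypothetical RCU lookup */
	if (!vma || !vma_start_read(vma)) {
		/*
		 * With this change, a failure here also covers the case
		 * where the VMA was write-locked (e.g. while being
		 * detached) between the lookup and the trylock inside
		 * vma_start_read().
		 */
		rcu_read_unlock();
		return VM_FAULT_RETRY;		/* fall back to mmap_lock */
	}
	rcu_read_unlock();

	ret = do_fault_locked(vma, address);	/* hypothetical fault handling */
	vma_end_read(vma);
	return ret;
}

With the stricter check, a VMA that was detached while vma_start_read()
was acquiring vma->vm_lock can no longer be returned as successfully
read-locked to code like the above.
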
include/linux/mm.h | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..8410da79c570 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -639,23 +639,24 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
*/
static inline bool vma_start_read(struct vm_area_struct *vma)
{
- /* Check before locking. A race might cause false locked result. */
- if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
+ int vm_lock_seq = READ_ONCE(vma->vm_lock_seq);
+
+ /*
+ * Check if VMA is locked before taking vma->vm_lock. A race or
+ * mm_lock_seq overflow might cause false locked result.
+ */
+ if (vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
return false;
if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
return false;
- /*
- * Overflow might produce false locked result.
- * False unlocked result is impossible because we modify and check
- * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
- * modification invalidates all existing locks.
- */
- if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
+ /* Fail if VMA was write-locked after we checked it earlier */
+ if (unlikely(vm_lock_seq != READ_ONCE(vma->vm_lock_seq))) {
up_read(&vma->vm_lock->lock);
return false;
}
+
return true;
}
--
2.41.0.162.gfafddb0af9-goog