Message-ID: <20250106003821.3gtfxq33fqj4wm5b@master>
Date: Mon, 6 Jan 2025 00:38:21 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, peterz@...radead.org, willy@...radead.org,
liam.howlett@...cle.com, lorenzo.stoakes@...cle.com,
mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org,
mjguzik@...il.com, oliver.sang@...el.com,
mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com,
oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org,
brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, lokeshgidra@...gle.com, minchan@...gle.com,
jannh@...gle.com, shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v7 12/17] mm: replace vm_lock and detached flag with a
reference count
On Thu, Dec 26, 2024 at 09:07:04AM -0800, Suren Baghdasaryan wrote:
[...]
> /*
> * Try to read-lock a vma. The function is allowed to occasionally yield false
> * locked result to avoid performance overhead, in which case we fall back to
>@@ -710,6 +733,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> */
> static inline bool vma_start_read(struct vm_area_struct *vma)
> {
>+ int oldcnt;
>+
> /*
> * Check before locking. A race might cause false locked result.
> * We can use READ_ONCE() for the mm_lock_seq here, and don't need
>@@ -720,13 +745,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> return false;
>
>- if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
>+
>+ rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
>+ /* Limit at VMA_REF_LIMIT to leave one count for a writer */
>+ if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
>+ VMA_REF_LIMIT))) {
>+ rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> return false;
>+ }
>+ lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
>
> /*
>- * Overflow might produce false locked result.
>+ * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
> * False unlocked result is impossible because we modify and check
>- * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
>+ * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
> * modification invalidates all existing locks.
> *
> * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
>@@ -734,10 +766,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> * after it has been unlocked.
> * This pairs with RELEASE semantics in vma_end_write_all().
> */
>- if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
>- up_read(&vma->vm_lock.lock);
>+ if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
>+ vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
I am not sure this is worth mentioning; if it is too trivial, just ignore it.

If (oldcnt & VMA_LOCK_OFFSET) is set, then oldcnt + 1 > VMA_REF_LIMIT, which
means __refcount_inc_not_zero_limited() above would already have returned
false. If my understanding is correct, we don't need to check the flag here.
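
To make the arithmetic concrete, here is a small userspace model of the
limit test. The constant values and the "old + 1 > limit" semantics of
__refcount_inc_not_zero_limited() below are my reading of this series,
not copied from the kernel headers, so treat them as assumptions:

	#include <assert.h>

	#define VMA_LOCK_OFFSET	0x40000000		/* assumed: writer marker bit */
	#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)	/* assumed: reader limit */

	/* Model of the limit test in __refcount_inc_not_zero_limited(). */
	static int inc_would_succeed(unsigned int oldcnt)
	{
		return oldcnt != 0 && oldcnt + 1 <= VMA_REF_LIMIT;
	}

	int main(void)
	{
		unsigned int samples[] = {
			VMA_LOCK_OFFSET,	/* writer just attached */
			VMA_LOCK_OFFSET | 1,	/* writer plus one reader */
			VMA_LOCK_OFFSET + 123,	/* writer plus many readers */
		};
		unsigned int i;

		/*
		 * Any oldcnt with VMA_LOCK_OFFSET set is >= VMA_LOCK_OFFSET,
		 * i.e. above VMA_REF_LIMIT, so the limited increment must
		 * already have failed and the flag can never be observed in
		 * a successfully returned oldcnt.
		 */
		for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
			assert(!inc_would_succeed(samples[i]));
		return 0;
	}
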
>+ vma_refcount_put(vma);
> return false;
> }
>+
> return true;
> }
>
[...]
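
If the above holds, the second check could be reduced to the seqcount
comparison alone. An untested sketch of what I mean, keeping everything
else in vma_start_read() as is:

	/*
	 * VMA_LOCK_OFFSET in oldcnt is already ruled out by a successful
	 * __refcount_inc_not_zero_limited() above, so only the seqcount
	 * overflow case remains to be checked.
	 */
	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
		vma_refcount_put(vma);
		return false;
	}

Alternatively, keeping the check but adding a comment on why the flag
cannot be set there would also work.
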
--
Wei Yang
Help you, Help me