Message-ID: <ec71eaa7-a5e5-4d83-a405-782d63cf5c53@suse.cz>
Date: Wed, 8 Jan 2025 12:52:50 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org
Cc: peterz@...radead.org, willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, mhocko@...e.com, hannes@...xchg.org,
mjguzik@...il.com, oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com, dave@...olabs.net,
paulmck@...nel.org, brauner@...nel.org, dhowells@...hat.com,
hdanton@...a.com, hughd@...gle.com, lokeshgidra@...gle.com,
minchan@...gle.com, jannh@...gle.com, shakeel.butt@...ux.dev,
souravpanda@...gle.com, pasha.tatashin@...een.com, klarasmodin@...il.com,
corbet@....net, linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v7 12/17] mm: replace vm_lock and detached flag with a
reference count
On 12/26/24 18:07, Suren Baghdasaryan wrote:
> rw_semaphore is a sizable structure of 40 bytes and consumes
> considerable space in each vm_area_struct. However, vma_lock has
> two important properties which allow us to replace rw_semaphore
> with a simpler structure:
> 1. Readers never wait. They try to take the vma_lock and fall back to
> mmap_lock if that fails.
> 2. Only one writer at a time will ever try to write-lock a vma_lock
> because writers first take mmap_lock in write mode.
> Because of these requirements, full rw_semaphore functionality is not
> needed and we can replace rw_semaphore and the vma->detached flag with
> a refcount (vm_refcnt).
> When a vma is in detached state, vm_refcnt is 0 and only a call to
> vma_mark_attached() can take it out of this state. Note that unlike
> before, we now enforce that both vma_mark_attached() and
> vma_mark_detached() are done only after the vma has been write-locked.
> vma_mark_attached() changes vm_refcnt to 1 to indicate that the vma has
> been attached to the vma tree. When a reader takes the read lock, it
> increments vm_refcnt, unless the top usable bit of vm_refcnt (0x40000000)
> is set, indicating the presence of a writer. When a writer takes the
> write lock, it both increments vm_refcnt and sets the top usable bit to
> indicate its presence. If there are readers, the writer will wait on the
> newly introduced mm->vma_writer_wait. Since all writers take mmap_lock
> in write mode first, there can be only one writer at a time. The last
> reader to release the lock will signal the writer to wake up.
> The refcount might overflow if there are many competing readers, in
> which case read-locking will fail. Readers are expected to handle such
> failures.
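
To make sure I'm reading the scheme right, here's the state encoding as I
understand it (a sketch only; VMA_LOCK_OFFSET is from the patch, the two
helpers are hypothetical and just for illustration):

	#define VMA_LOCK_OFFSET	0x40000000	/* the "top usable bit" above */

	/*
	 * My reading of the vm_refcnt encoding:
	 *
	 *  0                    - vma detached
	 *  1                    - attached, not read- or write-locked
	 *  1 + N                - attached, read-locked by N readers
	 *  VMA_LOCK_OFFSET set  - a writer is present; the low bits hold
	 *                         the attach reference plus any readers
	 *                         that still need to drain
	 */
	static inline bool vma_sketch_detached(int refcnt)
	{
		return refcnt == 0;
	}

	static inline bool vma_sketch_writer_present(int refcnt)
	{
		return !!(refcnt & VMA_LOCK_OFFSET);
	}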
>
> Suggested-by: Peter Zijlstra <peterz@...radead.org>
> Suggested-by: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> */
> static inline bool vma_start_read(struct vm_area_struct *vma)
> {
> + int oldcnt;
> +
> /*
> * Check before locking. A race might cause false locked result.
> * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> @@ -720,13 +745,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> return false;
>
> - if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> +
> + rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
I don't know much about lockdep, but I see that down_read() does
rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
down_read_trylock() does
rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);
This is passing the down_read()-like annotation (trylock == 0) even though
the code behaves like a trylock, no?
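
IOW, I'd have expected the trylock flavor of the annotation here,
i.e. something like (untested):

	/* trylock == 1, so lockdep doesn't treat this as a blocking acquire */
	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);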
> + /* Limit at VMA_REF_LIMIT to leave one count for a writer */
It's mainly so the count can't grow to the point where the VMA_LOCK_OFFSET
bit would become falsely set by readers, right? The "leave one count"
sounds like an implementation detail of VMA_REF_LIMIT and will change if
Liam's suggestion is proven feasible?
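
(For reference, I'm assuming the two constants relate roughly like this;
the exact definition of VMA_REF_LIMIT isn't quoted above:)

	#define VMA_LOCK_OFFSET	0x40000000
	/* keep reader increments from ever reaching the writer bit */
	#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 1)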
> + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> + VMA_REF_LIMIT))) {
> + rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> return false;
> + }
> + lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
>
> /*
> - * Overflow might produce false locked result.
> + * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
> * False unlocked result is impossible because we modify and check
> - * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> + * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
> * modification invalidates all existing locks.
> *
> * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> @@ -734,10 +766,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> * after it has been unlocked.
> * This pairs with RELEASE semantics in vma_end_write_all().
> */
> - if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> - up_read(&vma->vm_lock.lock);
> + if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
> + vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> + vma_refcount_put(vma);
> return false;
> }
> +
> return true;
> }
>
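BTW, for anyone following along, point 1 of the changelog translates to a
caller pattern roughly like this (a hypothetical sketch, not code from the
patch):

	/* fault path: try the per-vma read lock, never block on it */
	if (vma_start_read(vma)) {
		/* ... handle the fault under the vma read lock ... */
		vma_end_read(vma);
	} else {
		/* lost a race or hit VMA_REF_LIMIT: fall back to mmap_lock */
		mmap_read_lock(mm);
		/* ... handle the fault under mmap_lock ... */
		mmap_read_unlock(mm);
	}
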
> @@ -749,8 +783,17 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> */
> static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
> {
> + int oldcnt;
> +
> mmap_assert_locked(vma->vm_mm);
> - down_read_nested(&vma->vm_lock.lock, subclass);
> + rwsem_acquire_read(&vma->vmlock_dep_map, subclass, 0, _RET_IP_);
Same as above?
> + /* Limit at VMA_REF_LIMIT to leave one count for a writer */
Also
> + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> + VMA_REF_LIMIT))) {
> + rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> + return false;
> + }
> + lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
> return true;
> }
>
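For my own understanding, the semantics I'm assuming for
__refcount_inc_not_zero_limited() are roughly the following (a simplified
sketch that ignores the saturation handling of the real refcount API):

	static inline bool __refcount_inc_not_zero_limited(refcount_t *r,
							   int *oldp, int limit)
	{
		int old = refcount_read(r);

		do {
			if (!old)		/* 0 means detached, don't resurrect */
				return false;
			if (old + 1 > limit)	/* would exceed the limit */
				return false;
		} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + 1));

		if (oldp)
			*oldp = old;	/* report the pre-increment value */
		return true;
	}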