Message-ID: <20250113022545.56e2qaggdgqzlukz@master>
Date: Mon, 13 Jan 2025 02:25:45 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Wei Yang <richard.weiyang@...il.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
Mateusz Guzik <mjguzik@...il.com>, akpm@...ux-foundation.org,
peterz@...radead.org, willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, david.laight.linux@...il.com,
mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org,
oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com,
dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org,
dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com,
lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a
reference count
On Mon, Jan 13, 2025 at 01:47:29AM +0000, Wei Yang wrote:
>On Sat, Jan 11, 2025 at 12:14:47PM -0800, Suren Baghdasaryan wrote:
>>On Sat, Jan 11, 2025 at 3:24 AM Mateusz Guzik <mjguzik@...il.com> wrote:
>>>
>>> On Fri, Jan 10, 2025 at 08:25:58PM -0800, Suren Baghdasaryan wrote:
>>>
>>> So there were quite a few iterations of the patch and I have not been
>>> reading the majority of the feedback, so I may have missed something;
>>> apologies upfront. :)
>>>
>
>Hi, I am new to memory barriers, so I hope I'm not bothering you.
>
>>> > /*
>>> > * Try to read-lock a vma. The function is allowed to occasionally yield false
>>> > * locked result to avoid performance overhead, in which case we fall back to
>>> > @@ -710,6 +742,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
>>> > */
>>> > static inline bool vma_start_read(struct vm_area_struct *vma)
>>> > {
>>> > + int oldcnt;
>>> > +
>>> > /*
>>> > * Check before locking. A race might cause false locked result.
>>> > * We can use READ_ONCE() for the mm_lock_seq here, and don't need
>>> > @@ -720,13 +754,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>>> > if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
>>> > return false;
>>> >
>>> > - if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
>>> > + /*
>>> > + * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
>>> > + * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
>>> > + */
>>> > + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
>>> > + VMA_REF_LIMIT)))
>>> > return false;
>>> >
>>>
>>> Replacing down_read_trylock() with the new routine loses an acquire
>>> fence. That alone is not a problem, but see below.
>>
>>Hmm. I think this acquire fence is actually necessary. We don't want
>>the later vm_lock_seq check to be reordered to happen before we take
>>the refcount. Otherwise this might happen:
>>
>>reader                                  writer
>>if (vm_lock_seq == mm_lock_seq) // check got reordered
>>    return false;
>>                                        vm_refcnt += VMA_LOCK_OFFSET
>>                                        vm_lock_seq = mm_lock_seq
>>                                        vm_refcnt -= VMA_LOCK_OFFSET
>>if (!__refcount_inc_not_zero_limited())
>>    return false;
>>
>>Both reader's checks will pass and the reader would read-lock a vma
>>that was write-locked.
>>
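To make the ordering requirement concrete, here is a condensed sketch of
the reader path. The later re-check is paraphrased from the patch
context, and vma_refcount_put() is an assumed name for the undo path:

	/* reader side, vma_start_read(), condensed sketch */
	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
						      VMA_REF_LIMIT)))	/* needs acquire */
		return false;

	/*
	 * Without acquire semantics on the increment above, the CPU may
	 * hoist this re-check before the increment, recreating the race
	 * in the diagram above.
	 */
	if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))) {
		vma_refcount_put(vma);	/* assumed undo helper */
		return false;
	}
	return true;
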
>
>So what we plan to do here is define __refcount_inc_not_zero_limited()
>with an acquire fence, e.g. using atomic_try_cmpxchg_acquire(), right?
>
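For reference, a minimal sketch of such a helper built on
atomic_try_cmpxchg_acquire() could look like the following (an
illustration only, not the actual implementation from the patch):

	static inline bool __refcount_inc_not_zero_limited(refcount_t *r,
							   int *oldp, int limit)
	{
		int old = atomic_read(&r->refs);

		do {
			/* Refuse a detached (zero) or over-limit refcount. */
			if (!old || old > limit)
				return false;
		} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + 1));

		if (oldp)
			*oldp = old;
		return true;
	}

The successful cmpxchg here provides exactly the acquire ordering
discussed above.
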
BTW, usually we pair an acquire with a release.
__vma_start_write() provides a release fence when locking, so for this
part we are OK, right?
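
As a toy two-thread example of the pairing (hypothetical variables, not
from the patch):

	/* writer */
	data = 42;
	smp_store_release(&flag, 1);	/* release: data store ordered before */

	/* reader */
	if (smp_load_acquire(&flag))	/* acquire: later loads ordered after */
		BUG_ON(data != 42);	/* guaranteed to see the writer's store */

If I read it right, the release in __vma_start_write() and the acquire
in vma_start_read() pair up on vm_refcnt in the same way.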
--
Wei Yang
Help you, Help me