Message-ID: <20250115120532.mgvjhcrzvmmjasv7@master>
Date: Wed, 15 Jan 2025 12:05:32 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Wei Yang <richard.weiyang@...il.com>, akpm@...ux-foundation.org,
	peterz@...radead.org, willy@...radead.org, liam.howlett@...cle.com,
	lorenzo.stoakes@...cle.com, david.laight.linux@...il.com,
	mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org,
	mjguzik@...il.com, oliver.sang@...el.com,
	mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com,
	oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org,
	brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
	hughd@...gle.com, lokeshgidra@...gle.com, minchan@...gle.com,
	jannh@...gle.com, shakeel.butt@...ux.dev, souravpanda@...gle.com,
	pasha.tatashin@...een.com, klarasmodin@...il.com, corbet@....net,
	linux-doc@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a
 reference count

On Tue, Jan 14, 2025 at 07:12:20PM -0800, Suren Baghdasaryan wrote:
>On Tue, Jan 14, 2025 at 6:58 PM Wei Yang <richard.weiyang@...il.com> wrote:
>>
>> On Fri, Jan 10, 2025 at 08:25:58PM -0800, Suren Baghdasaryan wrote:
>> >@@ -6354,7 +6422,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>> >       struct vm_area_struct *vma;
>> >
>> >       rcu_read_lock();
>> >-retry:
>> >       vma = mas_walk(&mas);
>> >       if (!vma)
>> >               goto inval;
>> >@@ -6362,13 +6429,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>> >       if (!vma_start_read(vma))
>> >               goto inval;
>> >
>> >-      /* Check if the VMA got isolated after we found it */
>> >-      if (is_vma_detached(vma)) {
>> >-              vma_end_read(vma);
>> >-              count_vm_vma_lock_event(VMA_LOCK_MISS);
>> >-              /* The area was replaced with another one */
>> >-              goto retry;
>> >-      }
>>
>> We have a small behavior change here.
>>
>> Originally, if we found a detached vma, we would retry. But now, we go
>> to the slow path directly.
>
>Hmm. Good point. I think the easiest way to keep the same
>functionality is to make vma_start_read() return vm_area_struct* on
>success, NULL on locking failure, and EAGAIN if the vma was detached
>(vm_refcnt == 0). Then the same retry with VMA_LOCK_MISS can be done
>in the case of EAGAIN.
>

Looks good to me.
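
Something like the sketch below, I guess. Just to illustrate the idea;
the refcount helper name and the extra lock/sequence checks here are
placeholders on my side, not the exact names in your series:

	/*
	 * Return the vma on success, NULL if locking failed, or
	 * ERR_PTR(-EAGAIN) if the vma was detached (vm_refcnt == 0),
	 * so the caller can retry the walk.
	 */
	static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
	{
		int oldcnt;

		/* placeholder for the patch's limited inc-not-zero helper */
		if (!__vma_refcount_inc_not_zero(&vma->vm_refcnt, &oldcnt))
			return oldcnt ? NULL : ERR_PTR(-EAGAIN);

		/* ... the existing lock owner / sequence checks go here ... */

		return vma;
	}

And then lock_vma_under_rcu() could keep the old retry:

	retry:
		vma = mas_walk(&mas);
		if (!vma)
			goto inval;

		vma = vma_start_read(vma);
		if (!vma)
			goto inval;
		if (IS_ERR(vma)) {	/* only -EAGAIN: vma got detached */
			count_vm_vma_lock_event(VMA_LOCK_MISS);
			/* The area was replaced with another one */
			goto retry;
		}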

>>
>> Maybe we can compare the events VMA_LOCK_MISS and VMA_LOCK_ABORT
>> to see what percentage of cases this is. If it turns out to be too
>> rare to impact performance, we can ignore it.
>>
>> Also, the recording of the VMA_LOCK_MISS event is removed, but its
>> definition is still there. We could record it in vma_start_read() when
>> oldcnt is 0.
>>
>> BTW, the name VMA_LOCK_SUCCESS confuses me a little. I thought it
>> indicated that lock_vma_under_rcu() successfully got a valid vma, but
>> it seems not. It sounds like we don't have an overall success/failure
>> statistic in vmstat.
>
>Are you referring to the fact that we do not increment
>VMA_LOCK_SUCCESS if we successfully locked a vma but have to retry the

Something like this. I thought we would increment VMA_LOCK_SUCCESS on success.

>page fault (in which we increment VMA_LOCK_RETRY instead)?
>

I don't follow this.

>>
>> >       /*
>> >        * At this point, we have a stable reference to a VMA: The VMA is
>> >        * locked and we know it hasn't already been isolated.
>>
>> --
>> Wei Yang
>> Help you, Help me

-- 
Wei Yang
Help you, Help me
