Message-ID: <c9168877-9d66-4963-b4a7-f0095ba2760f@linux.dev>
Date: Wed, 25 Jun 2025 20:35:37 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: David Hildenbrand <david@...hat.com>
Cc: akpm@...ux-foundation.org, baolin.wang@...ux.alibaba.com,
 chrisl@...nel.org, kasong@...cent.com, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 linux-riscv@...ts.infradead.org, lorenzo.stoakes@...cle.com,
 ryan.roberts@....com, v-songbaohua@...o.com, x86@...nel.org,
 ying.huang@...el.com, zhengtangquan@...o.com,
 Lance Yang <ioworker0@...il.com>, Barry Song <21cnbao@...il.com>
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large
 folios during reclamation



On 2025/6/25 20:25, David Hildenbrand wrote:
> On 25.06.25 14:20, Lance Yang wrote:
>>
>>
>> On 2025/6/25 20:09, David Hildenbrand wrote:
>>>>
>>>> Somehow, I feel we could combine your cleanup code—which handles a batch
>>>> size of "nr" between 1 and nr_pages—with the
>>>> "if (nr_pages == folio_nr_pages(folio)) goto walk_done" check.
>>>
>>> Yeah, that's what I was suggesting. It would have to be part of the
>>> cleanup I think.
>>>
>>> I'm still wondering if there is a case where
>>>
>>> if (nr_pages == folio_nr_pages(folio))
>>>       goto walk_done;
>>>
>>> would be wrong when dealing with small folios.
>>>
>>>> In practice, this would let us skip almost all unnecessary checks,
>>>> except for a few rare corner cases.
>>>>
>>>> For those corner cases where "nr" truly falls between 1 and nr_pages,
>>>> we can just leave them as-is—performing the redundant check inside
>>>> page_vma_mapped_walk().
>>>
>>> I mean, batching mapcount+refcount updates etc. is always a win. If we
>>> end up doing some unnecessary pte_none() checks, that might be
>>> suboptimal but mostly noise in contrast to the other stuff we will
>>> optimize out 🙂
>>>
>>> Agreed that if we can easily avoid these pte_none() checks, we should do
>>> that. Optimizing that for "nr_pages == folio_nr_pages(folio)" makes sense.
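
Right, that's also how I see it. Just to spell out the placement I have in
mind (a rough sketch only, not the actual patch and not compile-tested;
"nr_pages" and the "walk_done" label are the ones from try_to_unmap_one()
we've been discussing):

		/* ... the batched path just cleared nr_pages contiguous PTEs ... */
		if (nr_pages == folio_nr_pages(folio))
			goto walk_done;	/* whole folio unmapped; stop the walk early */
		/*
		 * Partial batch (the rare corner case): let page_vma_mapped_walk()
		 * keep going. It will see pte_none() for the entries we already
		 * cleared, which is redundant but harmless.
		 */

So the common path skips the extra pte_none() checks entirely, and the rare
partial batches stay as they are today.
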
>>
>> Hmm... I have a question about the reference counting here ...
>>
>>         if (vma->vm_flags & VM_LOCKED)
>>             mlock_drain_local();
>>         folio_put(folio);
>>         /* We have already batched the entire folio */
>>
>> Does anyone else still hold a reference to this folio after folio_put()?
> 
> The caller of the unmap operation had better hold a reference :)

Ah, you're right. I should have realized that :(

Thanks,
Lance

> 
> Also, I am not sure why we don't perform a
> 
> folio_put_refs(folio, nr_pages);
> 
> ... :)
> 
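
Ah, good point. Just to check my understanding of the accounting (illustrative
only, and assuming each PTE mapping we tear down in the batch pins one
reference on the folio):

		if (vma->vm_flags & VM_LOCKED)
			mlock_drain_local();
		/*
		 * The batch removed nr_pages PTE mappings, each of which held a
		 * reference on the folio, so drop them all here. A single
		 * folio_put() is folio_put_refs(folio, 1) and would only drop
		 * one of them.
		 */
		folio_put_refs(folio, nr_pages);

If that picture is right, folio_put_refs() does look like the natural fit for
the batched path.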

