Message-ID: <8FF0358E-1ECE-42DA-AE4B-8D5A578450EC@nvidia.com>
Date: Sat, 04 Oct 2025 22:38:44 -0400
From: Zi Yan <ziy@...dia.com>
To: Lance Yang <lance.yang@...ux.dev>
Cc: Wei Yang <richard.weiyang@...il.com>, Dev Jain <dev.jain@....com>,
akpm@...ux-foundation.org, david@...hat.com, lorenzo.stoakes@...cle.com,
baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, baohua@...nel.org, ioworker0@...il.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH mm-new 2/2] mm/khugepaged: merge PTE scanning logic into a
new helper
On 4 Oct 2025, at 22:35, Lance Yang wrote:
> On 2025/10/4 21:11, Dev Jain wrote:
>>
>> On 04/10/25 3:12 pm, Wei Yang wrote:
>>> On Fri, Oct 03, 2025 at 10:35:12PM +0530, Dev Jain wrote:
>>>> On 02/10/25 1:02 pm, Lance Yang wrote:
>>>>> From: Lance Yang <lance.yang@...ux.dev>
>>>>>
>>>>> As David suggested, the PTE scanning logic in hpage_collapse_scan_pmd()
>>>>> and __collapse_huge_page_isolate() was almost duplicated.
>>>>>
>>>>> This patch cleans things up by moving all the common PTE checking logic
>>>>> into a new shared helper, thp_collapse_check_pte().
>>>>>
>>>>> Suggested-by: David Hildenbrand <david@...hat.com>
>>>>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>>>>> ---
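
(For context, a rough sketch of what such a shared per-PTE check could look
like, reusing khugepaged's existing SCAN_* result codes. The signature, the
set of checks, and their order below are illustrative guesses, not the
actual patch:)

	/*
	 * Illustrative sketch only, not the real thp_collapse_check_pte():
	 * one shared helper for the per-PTE checks that both
	 * hpage_collapse_scan_pmd() and __collapse_huge_page_isolate()
	 * currently duplicate, returning the existing SCAN_* codes.
	 */
	static int thp_collapse_check_pte(pte_t pte, struct folio *folio)
	{
		if (pte_uffd_wp(pte))
			return SCAN_PTE_UFFD_WP;	/* uffd-wp armed PTEs cannot be collapsed */
		if (!folio)
			return SCAN_PAGE_NULL;		/* no backing folio for this PTE */
		if (!folio_test_anon(folio))
			return SCAN_PAGE_ANON;		/* the check being discussed below */
		return SCAN_SUCCEED;
	}
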
>>>> In hpage_collapse_scan_pmd(), we enter with mmap lock held, so for
>>> This is true for the first loop, but we will unlock/relock mmap and revalidate
>>> the vma before isolation.
>>>
>>>> an anonymous vma, is it even possible to hit the if (!folio_test_anon(folio))
>>>> check? If not, can we replace it with VM_BUG_ON_FOLIO and abstract everything
>>>> away up to the folio_maybe_mapped_shared() block?
>>> But it still looks valid, since hugepage_vma_revalidate() will check that the
>>> vma is still an anonymous vma after grabbing the mmap lock again.
>>>
>>> My concern is whether VM_BUG_ON_FOLIO() would be too heavy. How about warning
>>> and returning instead?
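
(For reference, the revalidation mentioned above happens roughly like this
once collapse_huge_page() retakes the lock; the excerpt is simplified and
written from memory, so the exact arguments and labels may differ from the
current tree:)

	mmap_read_lock(mm);
	/* Re-check that the range is still a suitable anonymous vma. */
	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
	if (result != SCAN_SUCCEED) {
		mmap_read_unlock(mm);
		goto out_nolock;	/* label name illustrative */
	}
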
>>
>> Frankly I do not have much of an opinion on the BUG_ON/WARN_ON debate since I
>> haven't properly understood it, but this BUG_ON is under CONFIG_DEBUG_VM anyway. But
>
> Yeah, VM_BUG_ON_FOLIO() is under CONFIG_DEBUG_VM, so it won't affect
> production kernels.
Many distros enable it by default. For mm, we are moving away from
using BUG_ON or VM_BUG_ON. No need to crash the system if it is possible
to handle it gracefully.
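
Concretely, the graceful handling would look roughly like the following
(a sketch only; whether this lives in the new helper or jumps to a label in
the caller depends on the final shape of the patch):

	/*
	 * Warn-and-fail alternative to VM_BUG_ON_FOLIO(): report the
	 * "impossible" state once and bail out with an existing error
	 * code instead of crashing the kernel.
	 */
	if (WARN_ON_ONCE(!folio_test_anon(folio)))
		return SCAN_PAGE_ANON;
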
>
>> if you want to change this to WARN then you can do it at both places.
>
> It should flag such an impossible condition there during development.
> So, I'd prefer to stick with VM_BUG_ON_FOLIO().
>
> @Wei please let me know if you feel strongly otherwise :)
--
Best Regards,
Yan, Zi