Message-ID: <1311738e-c9cc-4667-a758-f20d770ea329@arm.com>
Date: Wed, 18 Dec 2024 14:19:37 +0530
From: Dev Jain <dev.jain@....com>
To: Ryan Roberts <ryan.roberts@....com>, akpm@...ux-foundation.org,
david@...hat.com, willy@...radead.org, kirill.shutemov@...ux.intel.com
Cc: anshuman.khandual@....com, catalin.marinas@....com, cl@...two.org,
vbabka@...e.cz, mhocko@...e.com, apopple@...dia.com,
dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org,
jack@...e.cz, srivatsa@...il.mit.edu, haowenchao22@...il.com,
hughd@...gle.com, aneesh.kumar@...nel.org, yang@...amperecomputing.com,
peterx@...hat.com, ioworker0@...il.com, wangkefeng.wang@...wei.com,
ziy@...dia.com, jglisse@...gle.com, surenb@...gle.com,
vishal.moola@...il.com, zokeefe@...gle.com, zhengqi.arch@...edance.com,
jhubbard@...dia.com, 21cnbao@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 06/12] khugepaged: Generalize
__collapse_huge_page_copy_failed()
On 17/12/24 10:52 pm, Ryan Roberts wrote:
> On 16/12/2024 16:50, Dev Jain wrote:
>> Upon failure, we repopulate the PMD in case of PMD-THP collapse. Hence, make
>> this logic specific to the PMD case.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/khugepaged.c | 14 ++++++++------
>> 1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index de044b1f83d4..886c76816963 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -766,7 +766,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>> pmd_t *pmd,
>> pmd_t orig_pmd,
>> struct vm_area_struct *vma,
>> - struct list_head *compound_pagelist)
>> + struct list_head *compound_pagelist, int order)
> nit: suggest putting order on its own line.
>
>> {
>> spinlock_t *pmd_ptl;
>>
>> @@ -776,14 +776,16 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>> * pages. Since pages are still isolated and locked here,
>> * acquiring anon_vma_lock_write is unnecessary.
>> */
>> - pmd_ptl = pmd_lock(vma->vm_mm, pmd);
>> - pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd));
>> - spin_unlock(pmd_ptl);
>> + if (order == HPAGE_PMD_ORDER) {
>> + pmd_ptl = pmd_lock(vma->vm_mm, pmd);
>> + pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd));
>> + spin_unlock(pmd_ptl);
>> + }
>> /*
>> * Release both raw and compound pages isolated
>> * in __collapse_huge_page_isolate.
>> */
>> - release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
>> + release_pte_pages(pte, pte + (1UL << order), compound_pagelist);
>> }
> Given this function is clearly so geared towards re-establishing the pmd, given
> that it takes the *pmd and orig_pmd as params, and given that in the
> non-pmd-order case, we only call through to release_pte_pages(), I wonder if
> it's better to make the decision at a higher level and either call this function
> or release_pte_pages() directly? No strong opinion, just looks a bit weird at
> the moment.
Makes sense; we can probably get rid of this function and let the caller call
reestablish_pmd() or something similar for the PMD case.
>
>>
>> /*
>> @@ -834,7 +836,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>> compound_pagelist);
>> else
>> __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
>> - compound_pagelist);
>> + compound_pagelist, order);
>>
>> return result;
>> }