Message-ID: <e160b45b-7220-47d0-83a3-9403ffb85bbe@arm.com>
Date: Wed, 11 Sep 2024 17:40:20 +0530
From: Dev Jain <dev.jain@....com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
willy@...radead.org, kirill.shutemov@...ux.intel.com
Cc: ryan.roberts@....com, anshuman.khandual@....com, catalin.marinas@....com,
cl@...two.org, vbabka@...e.cz, mhocko@...e.com, apopple@...dia.com,
dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org,
jack@...e.cz, mark.rutland@....com, hughd@...gle.com,
aneesh.kumar@...nel.org, yang@...amperecomputing.com, peterx@...hat.com,
ioworker0@...il.com, jglisse@...gle.com, wangkefeng.wang@...wei.com,
ziy@...dia.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 2/2] mm: Allocate THP on hugezeropage wp-fault

On 9/11/24 15:06, David Hildenbrand wrote:
> On 11.09.24 08:56, Dev Jain wrote:
>> Introduce do_huge_zero_wp_pmd() to handle wp-fault on a hugezeropage and
>> replace it with a PMD-mapped THP. Change the helper introduced in the
>> previous patch to flush the TLB entry corresponding to the hugezeropage.
>> In case of failure, fall back to splitting the PMD.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/huge_memory.c | 52 +++++++++++++++++++++++++++++++++++++++++++++---
>> 1 file changed, 49 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index b96a1ff2bf40..3e28946a805f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -987,16 +987,20 @@ static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
>> static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>> struct vm_area_struct *vma, unsigned long haddr)
>> {
>> - pmd_t entry;
>> + pmd_t entry, old_pmd;
>> + bool is_pmd_none = pmd_none(*vmf->pmd);
>> entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>> entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>> folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>> folio_add_lru_vma(folio, vma);
>> + if (!is_pmd_none)
>> + old_pmd = pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
>
> This should likely be done in the caller.
>
>> set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>> update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>> add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>> - mm_inc_nr_ptes(vma->vm_mm);
>> + if (is_pmd_none)
>> + mm_inc_nr_ptes(vma->vm_mm);
>
> And this as well.
>
> No need to make this function deal with this if the callers exactly
> know what they are doing.
Sure, thanks.
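So map_pmd_thp() would reduce to something like this (untested sketch;
the clear+flush and the nr_ptes accounting move out to the callers):

static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
		struct vm_area_struct *vma, unsigned long haddr)
{
	pmd_t entry;

	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
	folio_add_lru_vma(folio, vma);
	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
}

with __do_huge_pmd_anonymous_page() doing mm_inc_nr_ptes() itself after
the call, and do_huge_zero_wp_pmd() doing the pmdp_huge_clear_flush()
beforehand and skipping mm_inc_nr_ptes(), since the zero page mapping
already accounted for the deposited page table.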
>
>> }
>> static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>> @@ -1576,6 +1580,41 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
>> spin_unlock(vmf->ptl);
>> }
>> +static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf,
>> + unsigned long haddr)
>
> Is there a need to pass in "haddr" if we have the vmf?
I was passing it because it gets used several times. But nowhere in the
codebase do vmf and haddr both get passed together, so I'll drop it for
cleanliness and consistency.
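For v4 I'll just derive it locally, something like:

	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

at the top of do_huge_zero_wp_pmd(), the same way the other PMD fault
handlers compute it from vmf->address.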
>
>> +{
>> + struct vm_area_struct *vma = vmf->vma;
>> + gfp_t gfp = vma_thp_gfp_mask(vma);
>> + struct mmu_notifier_range range;
>> + struct folio *folio;
>> + vm_fault_t ret = 0;
>> +
>> + folio = pmd_thp_fault_alloc(gfp, vma, haddr, vmf->address);
>> + if (unlikely(!folio)) {
>> + ret = VM_FAULT_FALLBACK;
>> + goto out;
>> + }
>> +
>> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm, haddr,
>> + haddr + HPAGE_PMD_SIZE);
>> + mmu_notifier_invalidate_range_start(&range);
>> + vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>> + if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
>> + goto release;
>> + ret = check_stable_address_space(vma->vm_mm);
>> + if (ret)
>> + goto release;
>
> The clear+flush really belongs here.
>
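Okay, will move it here in v4. After the pmd_same() and
check_stable_address_space() checks, under the PMD lock, roughly
(untested):

	pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
	map_pmd_thp(folio, vmf, vma, haddr);
	__pmd_thp_fault_success_stats(vma);

and old_pmd goes away entirely, since its value is never used.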
>> + map_pmd_thp(folio, vmf, vma, haddr);
>> + __pmd_thp_fault_success_stats(vma);
>> + goto unlock;
>> +release:
>> + folio_put(folio);
>> +unlock:
>> + spin_unlock(vmf->ptl);
>> + mmu_notifier_invalidate_range_end(&range);
>> +out:
>> + return ret;
>> +}
>> +
>