Message-ID: <6eb09b5b-b23b-4bae-9629-b86df3570e06@redhat.com>
Date: Wed, 11 Sep 2024 11:29:11 +0200
From: David Hildenbrand <david@...hat.com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org,
 willy@...radead.org, kirill.shutemov@...ux.intel.com
Cc: ryan.roberts@....com, anshuman.khandual@....com, catalin.marinas@....com,
 cl@...two.org, vbabka@...e.cz, mhocko@...e.com, apopple@...dia.com,
 dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org,
 jack@...e.cz, mark.rutland@....com, hughd@...gle.com,
 aneesh.kumar@...nel.org, yang@...amperecomputing.com, peterx@...hat.com,
 ioworker0@...il.com, jglisse@...gle.com, wangkefeng.wang@...wei.com,
 ziy@...dia.com, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 1/2] mm: Abstract THP allocation

On 11.09.24 11:27, David Hildenbrand wrote:
> On 11.09.24 08:55, Dev Jain wrote:
>> In preparation for the second patch, abstract away the THP allocation
>> logic present in the create_huge_pmd() path, which corresponds to the
>> faulting case when no page is present.
>>
>> There should be no functional change as a result of applying
>> this patch.
>>
> 
> Hi,
> 
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>>    mm/huge_memory.c | 110 +++++++++++++++++++++++++++++------------------
>>    1 file changed, 67 insertions(+), 43 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 67c86a5d64a6..b96a1ff2bf40 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -943,47 +943,88 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>>    }
>>    EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>>    
>> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>> -			struct page *page, gfp_t gfp)
>> +static struct folio *pmd_thp_fault_alloc(gfp_t gfp, struct vm_area_struct *vma,
>> +					 unsigned long haddr, unsigned long addr)
> 
> I suggest calling this something like "vma_alloc_anon_folio_pmd()"; then
> it's more consistent with vma_alloc_folio().
> 
> Also, we should likely just pass in "addr" and calculate "haddr"
> ourselves; it's cheap and reduces the number of function parameters.
> 
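> Something like this is what I have in mind (untested; the fallback
> counters and the memcg charging would stay as in your patch):
> 
> static struct folio *vma_alloc_anon_folio_pmd(gfp_t gfp,
> 		struct vm_area_struct *vma, unsigned long addr)
> {
> 	/* Derive the PMD-aligned address internally instead of passing it. */
> 	const unsigned long haddr = addr & HPAGE_PMD_MASK;
> 
> 	return vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> }
> 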
>>    {
>> -	struct vm_area_struct *vma = vmf->vma;
>> -	struct folio *folio = page_folio(page);
>> -	pgtable_t pgtable;
>> -	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>> -	vm_fault_t ret = 0;
>> +	const int order = HPAGE_PMD_ORDER;
>> +	struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>>    
>> -	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>> +	if (unlikely(!folio)) {
>> +		count_vm_event(THP_FAULT_FALLBACK);
>> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>> +		goto out;
>> +	}
>>    
>> +	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>    	if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>    		folio_put(folio);
>>    		count_vm_event(THP_FAULT_FALLBACK);
>>    		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
>> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>> -		return VM_FAULT_FALLBACK;
>> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>> +		goto out;
>>    	}
>>    	folio_throttle_swaprate(folio, gfp);
>>    
>> -	pgtable = pte_alloc_one(vma->vm_mm);
>> -	if (unlikely(!pgtable)) {
>> -		ret = VM_FAULT_OOM;
>> -		goto release;
>> -	}
>> -
>> -	folio_zero_user(folio, vmf->address);
>> +	folio_zero_user(folio, addr);
>>    	/*
>>    	 * The memory barrier inside __folio_mark_uptodate makes sure that
>>    	 * folio_zero_user writes become visible before the set_pmd_at()
>>    	 * write.
>>    	 */
>>    	__folio_mark_uptodate(folio);
>> +out:
>> +	return folio;
>> +}
>> +
>> +static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
>> +{
>> +	count_vm_event(THP_FAULT_ALLOC);
>> +	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>> +	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>> +}
> 
> Why isn't that moved into map_pmd_thp()?
> 
> Note that in this patch you do:
> 
> map_pmd_thp(folio, vmf, vma, haddr);
> spin_unlock(vmf->ptl);
> __pmd_thp_fault_success_stats(vma);
> 
> But in patch #2
> 
> map_pmd_thp(folio, vmf, vma, haddr);
> __pmd_thp_fault_success_stats(vma);
> goto unlock;
> release:
> 	folio_put(folio);
> unlock:
> 	spin_unlock(vmf->ptl);
> 
> Please make that consistent, meaning:
> 
> 1) Inline __pmd_thp_fault_success_stats() into map_pmd_thp(). No need to
> have that separated out.
> 
> 2) Either do the PTL unlocking in map_pmd_thp() or in the caller. In
>      the caller is likely easiest. Adjusting the counters should be
>      cheap; if not, we can revisit this later with real data.
> 
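> With the stats inlined, both patches could then end up with the same
> pattern at the call site (untested sketch, names as in your patch):
> 
> 	map_pmd_thp(folio, vmf, vma, haddr);	/* also adjusts the counters */
> 	spin_unlock(vmf->ptl);
> 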
>> +
>> +static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>> +			struct vm_area_struct *vma, unsigned long haddr)
>> +{
>> +	pmd_t entry;
>> +
>> +	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>> +	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>> +	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>> +	folio_add_lru_vma(folio, vma);
>> +	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>> +	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
> 
> It's quite weird to see a mixture of haddr and vmf->address, and likely
> this mixture is wrong or at least not required.
> 
> Looking at arc's update_mmu_cache_pmd() implementation, I cannot see how
> passing in the unaligned address would do the right thing. But maybe arc
> also doesn't trigger that code path ... who knows :)
> 
> 
> Staring at some other update_mmu_cache_pmd() users, the usage is quite
> inconsistent. Essentially only do_huge_pmd_numa_page() and
> __do_huge_pmd_anonymous_page() use the unaligned address. The others
> seem to use the aligned address ... as one would expect when modifying a
> PMD.
> 
> 
> I suggest changing this function to *not* pass in the vmf, and renaming
> it to something like:
> 
> static void folio_map_anon_pmd(struct folio *folio,
> 		struct vm_area_struct *vma, pmd_t *pmd, unsigned long haddr)

... or better "map_anon_folio_pmd", so it better matches the
vma_alloc_anon_folio_pmd() suggested above.

Could also do

vma_map_anon_folio_pmd() :)
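
Spelled out, roughly the shape I have in mind (untested sketch, with the
stats inlined as discussed above):

static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
		struct vm_area_struct *vma, unsigned long haddr)
{
	pmd_t entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);

	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
	folio_add_lru_vma(folio, vma);
	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
	/* Consistently use the PMD-aligned address when modifying a PMD. */
	update_mmu_cache_pmd(vma, haddr, pmd);

	count_vm_event(THP_FAULT_ALLOC);
	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
}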

-- 
Cheers,

David / dhildenb

