Message-ID: <2f44fda6-c20c-4d90-ae83-e650c43a16ff@linux.alibaba.com>
Date: Thu, 27 Mar 2025 19:54:55 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: akpm@...ux-foundation.org, hughd@...gle.com, willy@...radead.org,
 david@...hat.com, 21cnbao@...il.com, ryan.roberts@....com, ziy@...dia.com,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process
 large folios



On 2025/3/27 18:49, Oscar Salvador wrote:
> On Wed, Mar 26, 2025 at 11:38:11AM +0800, Baolin Wang wrote:
>> @@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>   		walk->action = ACTION_AGAIN;
>>   		return 0;
>>   	}
>> -	for (; addr != end; ptep++, addr += PAGE_SIZE) {
>> +	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>>   		pte_t pte = ptep_get(ptep);
>>   
>> +		step = 1;
>>   		/* We need to do cache lookup too for pte markers */
>>   		if (pte_none_mostly(pte))
>>   			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
>>   						 vma, vec);
>> -		else if (pte_present(pte))
>> -			*vec = 1;
>> -		else { /* pte is a swap entry */
>> +		else if (pte_present(pte)) {
>> +			if (pte_batch_hint(ptep, pte) > 1) {
> 
> AFAIU, you will only batch if the CONT_PTE is set, but that is only true for arm64,
> and so we lose the ability to batch in e.g: x86 when we have contiguous
> entries, right?
> 
> So why not have folio_pte_batch take care of it directly without involving
> pte_batch_hint here?

Good question; this was the first approach I tried.

However, I found an obvious performance regression with small 
folios (where CONT_PTE is not set). I think the overhead introduced by 
vm_normal_folio() and folio_pte_batch() outweighs the gain from 
batch-processing small folios.

For large folios where CONT_PTE is set, ptep_get()--->contpte_ptep_get() 
wastes a significant amount of CPU time, so using folio_pte_batch() there 
noticeably improves performance.
