Date:   Sat, 27 Mar 2021 11:34:05 +0800
From:   Liu Shixin <liushixin2@...wei.com>
To:     Matthew Wilcox <willy@...radead.org>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>,
        Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH -next] mm, page_alloc: avoid page_to_pfn() in
 move_freepages()

Sorry for replying after such a long time, and thanks for your advice. Your proposed change does indeed seem to make the code cleaner and more efficient.
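
For reference, here is roughly how the loop would look with your suggestion folded in (an untested sketch; the !PageBuddy path and the num_movable accounting are abbreviated with a comment):

static int move_freepages(struct zone *zone,
			  unsigned long start_pfn, unsigned long end_pfn,
			  int migratetype, int *num_movable)
{
	struct page *start_page = pfn_to_page(start_pfn);
	unsigned long pfn;
	unsigned int order;
	int pages_moved = 0;

	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
		struct page *page = start_page + pfn - start_pfn;

		if (!pfn_valid_within(pfn))
			continue;

		if (!PageBuddy(page)) {
			/* num_movable accounting elided in this sketch */
			continue;
		}

		/* Make sure we are not inadvertently changing nodes */
		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
		VM_BUG_ON_PAGE(page_zone(page) != zone, page);

		order = buddy_order(page);
		move_to_free_list(page, zone, order, migratetype);
		/* the for loop's pfn++ accounts for the last page of the buddy */
		pfn += (1 << order) - 1;
		pages_moved += 1 << order;
	}

	return pages_moved;
}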

I repeated move_freepages_block() 2,000,000 times on the VM and counted jiffies. The average before and after the change was about 12,000 in both cases. I think that is probably because I am using the sparse memory model, so pfn_to_page() is not time-consuming.
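
In case it is useful, the measurement was roughly of this shape (a simplified sketch, not the exact test code; move_freepages_block() is static in mm/page_alloc.c, so a temporary in-tree hook is assumed here):

/* Hypothetical timing hook, added to mm/page_alloc.c for the test only. */
static void measure_move_freepages_block(struct zone *zone,
					 struct page *page, int migratetype)
{
	unsigned long start = jiffies;
	int i;

	/* Callers of move_freepages_block() normally hold zone->lock. */
	for (i = 0; i < 2000000; i++)
		move_freepages_block(zone, page, migratetype, NULL);

	pr_info("2000000 x move_freepages_block(): %lu jiffies\n",
		jiffies - start);
}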


On 2021/3/23 20:54, Matthew Wilcox wrote:
> On Tue, Mar 23, 2021 at 09:12:15PM +0800, Liu Shixin wrote:
>> From: Kefeng Wang <wangkefeng.wang@...wei.com>
>>
>> The start_pfn and end_pfn are already available in move_freepages_block(),
>> so there is no need to go back and forth between page and pfn in
>> move_freepages() and move_freepages_block(), and pfn_valid_within() should
>> validate the pfn before the page is touched.
> This looks good to me:
>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>
>
>>  static int move_freepages(struct zone *zone,
>> -			  struct page *start_page, struct page *end_page,
>> +			  unsigned long start_pfn, unsigned long end_pfn,
>>  			  int migratetype, int *num_movable)
>>  {
>>  	struct page *page;
>> +	unsigned long pfn;
>>  	unsigned int order;
>>  	int pages_moved = 0;
>>  
>> -	for (page = start_page; page <= end_page;) {
>> -		if (!pfn_valid_within(page_to_pfn(page))) {
>> -			page++;
>> +	for (pfn = start_pfn; pfn <= end_pfn;) {
>> +		if (!pfn_valid_within(pfn)) {
>> +			pfn++;
>>  			continue;
>>  		}
>>  
>> +		page = pfn_to_page(pfn);
> I wonder if this wouldn't be even better if we did:
>
> 	struct page *start_page = pfn_to_page(start_pfn);
>
> 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
> 		struct page *page = start_page + pfn - start_pfn;
>
> 		if (!pfn_valid_within(pfn))
> 			continue;
>
>> -
>> -			page++;
>> +			pfn++;
>>  			continue;
> ... then we can drop the increment of pfn here
>
>>  		}
>>  
>> @@ -2458,7 +2459,7 @@ static int move_freepages(struct zone *zone,
>>  
>>  		order = buddy_order(page);
>>  		move_to_free_list(page, zone, order, migratetype);
>> -		page += 1 << order;
>> +		pfn += 1 << order;
> ... and change this to pfn += (1 << order) - 1;
>
> Do you have any numbers to quantify the benefit of this change?
> .
>
