Message-ID: <5EA10872.3010500@samsung.com>
Date:   Thu, 23 Apr 2020 12:16:02 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     minchan@...nel.org, mgorman@...e.de, m.szyprowski@...sung.com,
        mina86@...a86.com, shli@...com, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        jaewon31.kim@...il.com, ytk.lee@...sung.com
Subject: Re: [PATCH v2] mm/vmscan: count lazyfree pages and fix
 nr_isolated_* mismatch



On Apr 22, 2020 22:07, Johannes Weiner wrote:
> On Wed, Apr 22, 2020 at 05:48:15PM +0900, Jaewon Kim wrote:
>> @@ -1295,11 +1295,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>>  		 */
>>  		if (page_mapped(page)) {
>>  			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
>> +			bool lazyfree = PageAnon(page) && !PageSwapBacked(page);
>>  
>>  			if (unlikely(PageTransHuge(page)))
>>  				flags |= TTU_SPLIT_HUGE_PMD;
>> +
>>  			if (!try_to_unmap(page, flags)) {
>>  				stat->nr_unmap_fail += nr_pages;
>> +				if (lazyfree && PageSwapBacked(page))
> This looks pretty strange, until you remember that try_to_unmap()
> could SetPageSwapBacked again.
>
> This might be more obvious?
>
> 			was_swapbacked = PageSwapBacked(page);
> 			if (!try_to_unmap(page, flags)) {
> 				stat->nr_unmap_fail += nr_pages;
> 				if (!was_swapbacked && PageSwapBacked(page))
Hello Johannes, thank you for your comment.

The name can be changed from lazyfree to was_swapbacked.
By the way, did you also mean removing PageAnon(page)? It seems OK to me, though.
>> +					stat->nr_lazyfree_fail += nr_pages;
>>  				goto activate_locked;
> Or at least was_lazyfree.
Sorry, but I'm confused.
I think you meant this as an additional note to your previous comment,
rather than asking to rename stat->nr_lazyfree_fail to stat->was_lazyfree.
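
If I got that right, the hunk would read roughly like this after the rename, with PageAnon() dropped as well; this is just a sketch, and the comment wording is mine:

		if (page_mapped(page)) {
			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
			bool was_swapbacked = PageSwapBacked(page);

			if (unlikely(PageTransHuge(page)))
				flags |= TTU_SPLIT_HUGE_PMD;

			if (!try_to_unmap(page, flags)) {
				stat->nr_unmap_fail += nr_pages;
				/*
				 * try_to_unmap() sets PageSwapBacked again on a
				 * lazyfree page it failed to discard, so a page
				 * that was !swapbacked before and is swapbacked
				 * now is a failed lazyfree page.
				 */
				if (!was_swapbacked && PageSwapBacked(page))
					stat->nr_lazyfree_fail += nr_pages;
				goto activate_locked;
			}
		}
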
>
>> @@ -1491,8 +1495,8 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
>>  		.priority = DEF_PRIORITY,
>>  		.may_unmap = 1,
>>  	};
>> -	struct reclaim_stat dummy_stat;
>> -	unsigned long ret;
>> +	struct reclaim_stat stat;
>> +	unsigned long reclaimed;
> nr_reclaimed would be better.
I will add the nr_ prefix in the next patch.
>
> I also prefer keeping dummy_stat, since that's still what it is.
This patch uses stat.nr_lazyfree_fail, so I do not understand why it would still be a dummy stat.
I will keep the dummy_stat name if you prefer, though.
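
For reference, the part of the patch that consumes the returned stat compensates the isolated counters roughly as below; this is a sketch rather than the exact hunk, and the comment wording is mine:

	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
					TTU_IGNORE_ACCESS, &stat, true);
	list_splice(&clean_pages, page_list);
	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
			    -(long)nr_reclaimed);
	/*
	 * Lazyfree pages that failed to be discarded were rotated back to
	 * the anonymous LRU, so move their isolated count from the file
	 * counter to the anon counter to keep nr_isolated_* consistent.
	 */
	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_ANON,
			    stat.nr_lazyfree_fail);
	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
			    -(long)stat.nr_lazyfree_fail);
	return nr_reclaimed;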

Thank you
Jaewon Kim
>
>
