Message-ID: <5EE18C38.3090601@samsung.com>
Date:   Thu, 11 Jun 2020 10:43:20 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Baoquan He <bhe@...hat.com>
Cc:     minchan@...nel.org, mgorman@...e.de, hannes@...xchg.org,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, jaewon31.kim@...il.com,
        ytk.lee@...sung.com, cmlaika.kim@...sung.com
Subject: Re: [PATCH] page_alloc: consider highatomic reserve in watermark fast



On 2020-06-10 00:13, Mel Gorman wrote:
> On Tue, Jun 09, 2020 at 10:27:47PM +0800, Baoquan He wrote:
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 13cc653122b7..00869378d387 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -3553,6 +3553,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
>>>  {
>>>  	long free_pages = zone_page_state(z, NR_FREE_PAGES);
>>>  	long cma_pages = 0;
>>> +	long highatomic = 0;
>>> +	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
>>> +
>>> +	if (likely(!alloc_harder))
>>> +		highatomic = z->nr_reserved_highatomic;
>>>  
>>>  #ifdef CONFIG_CMA
>>>  	/* If allocation can't use CMA areas don't use free CMA pages */
>>> @@ -3567,8 +3572,12 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
>>>  	 * the caller is !atomic then it'll uselessly search the free
>>>  	 * list. That corner case is then slower but it is harmless.
>>>  	 */
>>> -	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
>>> -		return true;
>>> +	if (!order) {
>>> +		long fast_free = free_pages - cma_pages - highatomic;
>>> +
>>> +		if (fast_free > mark + z->lowmem_reserve[classzone_idx])
>> This looks reasonable to me. However, this change may not be rebased on
>> top of the latest mainline or mm tree. E.g., in commit 97a225e69a1f8
>> ("mm/page_alloc: integrate classzone_idx and high_zoneidx"), classzone_idx
>> has been changed to highest_zoneidx.

Hello Baoquan,

Thank you for the review.
I will change the code to use highest_zoneidx in the next version.
By the way, let me also consider Minchan's comment regarding sharing code.
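For reference, a rough, untested sketch of how the order-0 fast check might
look once rebased on top of that rename, assuming the zone_watermark_fast()
signature after 97a225e69a1f8:

	if (!order) {
		long fast_free = free_pages - cma_pages - highatomic;

		/* only the lowmem_reserve index name changes after the rebase */
		if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
			return true;
	}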
> That's fine, I simply wanted to illustrate where I thought the check
> should go to minimise the impact to the majority of allocations.
Hello Mel,
Can I take it that you also agree with checking the highatomic reserve here?

Additionally, I've been wondering why the number of free highatomic pages is not
counted accurately, the way free CMA pages are. Is there any concern about counting it?
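To illustrate the question (purely hypothetical, since no such vmstat item
exists in this tree): if a per-zone NR_FREE_HIGHATOMIC counter were maintained
the same way as NR_FREE_CMA_PAGES, the fast path could subtract only the pages
that are actually free in the highatomic reserve rather than the whole
reservation size:

	if (likely(!alloc_harder))
		/* hypothetical counter, analogous to NR_FREE_CMA_PAGES */
		highatomic = zone_page_state(z, NR_FREE_HIGHATOMIC);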

Thank you
Jaewon Kim
