Message-ID: <ZFp1EB7863TAxzbs@P9FQF9L96D.corp.robot.car>
Date:   Tue, 9 May 2023 09:30:08 -0700
From:   Roman Gushchin <roman.gushchin@...ux.dev>
To:     "zhaoyang.huang" <zhaoyang.huang@...soc.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <guro@...com>, Minchan Kim <minchan@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Zhaoyang Huang <huangzhaoyang@...il.com>, ke.wang@...soc.com
Subject: Re: [PATCHv3] mm: optimization on page allocation when CMA enabled

On Sat, May 06, 2023 at 02:45:47PM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> 
> Consider the series of scenarios below with WMARK_LOW=25MB, WMARK_MIN=5MB
> (managed pages 1.9GB). The current 'fixed 1/2 ratio' policy only starts to
> use CMA at scenario C, by which point U&R (UNMOVABLE & RECLAIMABLE) free
> pages have already dropped below WMARK_LOW. This goes against the current
> memory policy, that is, UNMOVABLE & RECLAIMABLE should either stay around
> WMARK_LOW when there is no allocation or do reclaim via entering the
> slowpath.
> 
> -- Free_pages
> |
> |
> -- WMARK_LOW
> |
> -- Free_CMA
> |
> |
> --
> 
> Free_CMA/Free_pages (MB)       A (12/30)    B (12/25)    C (12/20)
> fixed 1/2 ratio uses CMA?          N            N            Y
> this commit uses CMA?              Y            Y            Y
> 
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> ---
> v2: do proportion check when zone_watermark_ok, update commit message
> v3: update coding style and simplify the logic when zone_watermark_ok

We're getting closer, thank you for working on it.

> ---
>  mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 40 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..7aca49d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,41 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  
>  }
>  
> +#ifdef CONFIG_CMA
> +/*
> + * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
> + * with the help of CMA, which can make GFP_KERNEL allocations fail. Check
> + * zone_watermark_ok() again without ALLOC_CMA to decide whether to use CMA
> + * first.
> + */
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)

Can you, please, rename it to use_cma_first()?

> +{
> +	unsigned long watermark;
> +	bool cma_first = false;
> +
> +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +	/* check whether the previous zone_watermark_ok() passed only with the help of CMA */
> +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))

Please, add {} for both "if" and "else" parts. Also, please, invert the order to make it
easier to follow:

if (zone_watermark_ok(...)) {
	...
} else {
	...
}
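
Something like this, as a rough and untested sketch (with the rename applied;
the comment wording is only a placeholder, see below):

static bool use_cma_first(struct zone *zone, unsigned int order,
			  unsigned int alloc_flags)
{
	unsigned long watermark;

	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
	if (zone_watermark_ok(zone, order, watermark, 0,
			      alloc_flags & (~ALLOC_CMA))) {
		/* watermarks are ok even without CMA pages */
		return zone_page_state(zone, NR_FREE_CMA_PAGES) >
		       zone_page_state(zone, NR_FREE_PAGES) / 2;
	} else {
		/* UNMOVABLE & RECLAIMABLE pages are running low */
		return true;
	}
}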

> +		/*
> +		 * The watermark check failed, meaning UNMOVABLE & RECLAIMABLE
> +		 * pages are not enough now; use CMA first to keep them around
> +		 * the corresponding watermark.
> +		 */
> +		cma_first = true;
> +	else
> +		/*
> +		 * keep the previous fixed 1/2 logic when the watermark is ok, as

Nobody knows what was previously here once your change is applied. Please, do not refer
to the previous state, add a comment about the current state. You can probably (partially)
move the comment from __rmqueue().
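
E.g. something along these lines (just a wording suggestion, feel free to
rephrase):

	/*
	 * Watermarks are ok even without CMA pages, so kernel
	 * allocations are not endangered; balance movable
	 * allocations by falling back to CMA only when over
	 * half of the zone's free memory is in the CMA area.
	 */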

> +		 * we have the above protection now
> +		 */
> +		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +				zone_page_state(zone, NR_FREE_PAGES) / 2);
> +	return cma_first;
> +}
> +#else
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +	return false;
> +}
> +#endif
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3084,13 +3119,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	if (IS_ENABLED(CONFIG_CMA)) {
>  		/*
>  		 * Balance movable allocations between regular and CMA areas by
> -		 * allocating from CMA when over half of the zone's free memory
> -		 * is in the CMA area.
> +		 * allocating from CMA based on checking zone_watermark_ok()
> +		 * again, to see whether the latest check only passed with the
> +		 * help of CMA
>  		 */
> -		if (alloc_flags & ALLOC_CMA &&
> -		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -			page = __rmqueue_cma_fallback(zone, order);
> +		if (migratetype == MIGRATE_MOVABLE) {
> +			page = __if_use_cma_first(zone, order, alloc_flags) ?
> +				__rmqueue_cma_fallback(zone, order) : NULL;

Can you put it like
if (migratetype == MIGRATE_MOVABLE && use_cma_first(...)) {
	page = ...
	if (page)
		return page;
}

to avoid using a ternary operator without a good reason?
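
With the actual arguments filled in, that would look roughly like this
(untested):

	if (migratetype == MIGRATE_MOVABLE &&
	    use_cma_first(zone, order, alloc_flags)) {
		page = __rmqueue_cma_fallback(zone, order);
		if (page)
			return page;
	}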

Thanks!
