Date:   Tue, 26 Sep 2017 12:47:52 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Hui Zhu <zhuhui@...omi.com>
Cc:     akpm@...ux-foundation.org, vbabka@...e.cz,
        mgorman@...hsingularity.net, hillf.zj@...baba-inc.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        teawater@...il.com
Subject: Re: [RFC 1/2] Try to use HighAtomic when allocating an unmovable
 page whose order is not 0

On Tue 26-09-17 16:46:43, Hui Zhu wrote:
> The patch adds a new condition to let gfp_to_alloc_flags return
> alloc_flags with ALLOC_HARDER if the order is not 0 and the migratetype
> is MIGRATE_UNMOVABLE.

Apart from what Mel has already said, this changelog is missing the
crucial information. It says what the patch does, but it doesn't explain
why we need it or why it is safe. What kind of workload will benefit
from this change, and by how much? And what about the users who
currently rely on the high-atomic reserves and would now have to share
them with other users?
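
For reference, ALLOC_HARDER is what unlocks the highatomic reserve on
the allocation side. Roughly (a simplified, trimmed sketch of the
order > 0 path in rmqueue() in mm/page_alloc.c around this time, not a
verbatim quote):

	/* rmqueue(): order > 0 slow path, zone->lock held */
	do {
		page = NULL;
		/*
		 * ALLOC_HARDER callers may take pages from the
		 * MIGRATE_HIGHATOMIC reserve before falling back to
		 * the regular free lists of the requested migratetype.
		 */
		if (alloc_flags & ALLOC_HARDER)
			page = __rmqueue_smallest(zone, order,
						  MIGRATE_HIGHATOMIC);
		if (!page)
			page = __rmqueue(zone, order, migratetype);
	} while (page && check_new_pages(page, order));

So with this patch every order > 0 MIGRATE_UNMOVABLE allocation would
compete with atomic contexts for that reserve.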

Without knowing all that background, and from a quick look, this looks
like a very crude hack to me, to be completely honest.

> Unmovable allocations whose order is not 0 will then try to use the
> HighAtomic reserves.
> 
> Signed-off-by: Hui Zhu <zhuhui@...omi.com>
> ---
>  mm/page_alloc.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c841af8..b54e94a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3642,7 +3642,7 @@ static void wake_all_kswapds(unsigned int order, const struct alloc_context *ac)
>  }
>  
>  static inline unsigned int
> -gfp_to_alloc_flags(gfp_t gfp_mask)
> +gfp_to_alloc_flags(gfp_t gfp_mask, int order, int migratetype)
>  {
>  	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
>  
> @@ -3671,6 +3671,8 @@ static void wake_all_kswapds(unsigned int order, const struct alloc_context *ac)
>  		alloc_flags &= ~ALLOC_CPUSET;
>  	} else if (unlikely(rt_task(current)) && !in_interrupt())
>  		alloc_flags |= ALLOC_HARDER;
> +	else if (order > 0 && migratetype == MIGRATE_UNMOVABLE)
> +		alloc_flags |= ALLOC_HARDER;
>  
>  #ifdef CONFIG_CMA
>  	if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> @@ -3903,7 +3905,7 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
>  	 * kswapd needs to be woken up, and to avoid the cost of setting up
>  	 * alloc_flags precisely. So we do that now.
>  	 */
> -	alloc_flags = gfp_to_alloc_flags(gfp_mask);
> +	alloc_flags = gfp_to_alloc_flags(gfp_mask, order, ac->migratetype);
>  
>  	/*
>  	 * We need to recalculate the starting point for the zonelist iterator
> -- 
> 1.9.1
> 
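
Note also that ALLOC_HARDER does more than grant access to the reserve;
it also relaxes the watermark check. Simplified from
__zone_watermark_ok() (again trimmed, so take it as a sketch rather
than the exact code):

	/* __zone_watermark_ok(): ALLOC_HARDER lowers the bar... */
	if (alloc_flags & ALLOC_HARDER)
		min -= min / 4;
	...
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];
		...
		/*
		 * ...and lets highatomic pageblocks satisfy the
		 * order-o free page check.
		 */
		if ((alloc_flags & ALLOC_HARDER) &&
		    !list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
			return true;
	}

So the patch would also let all these allocations go min/4 below the
min watermark, which the changelog should call out explicitly as well.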

-- 
Michal Hocko
SUSE Labs
