Message-Id: <20210611170045.b79a238fa3fc4bc9e4cd1140@linux-foundation.org>
Date:   Fri, 11 Jun 2021 17:00:45 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     chengkaitao <pilgrimtao@...il.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org, smcdef@...il.com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH] mm: delete duplicate order checking, when stealing
 whole pageblock

On Fri, 11 Jun 2021 14:38:34 +0800 chengkaitao <pilgrimtao@...il.com> wrote:

> From: chengkaitao <pilgrimtao@...il.com>
> 
> 1. We already have the (order >= pageblock_order / 2) check here, so we
> don't need (order >= pageblock_order).
> 2. Mark the function can_steal_fallback as inline.
> 
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
>   */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static inline bool can_steal_fallback(unsigned int order, int start_mt)
>  {
> -	/*
> -	 * Leaving this order check is intended, although there is
> -	 * relaxed order check in next check. The reason is that
> -	 * we can actually steal whole pageblock if this condition met,
> -	 * but, below check doesn't guarantee it and that is just heuristic
> -	 * so could be changed anytime.
> -	 */
> -	if (order >= pageblock_order)
> -		return true;
> -
>  	if (order >= pageblock_order / 2 ||
>  		start_mt == MIGRATE_RECLAIMABLE ||
>  		start_mt == MIGRATE_UNMOVABLE ||

Well, that redundant check was put there deliberately, as the comment
explains.

The reasoning is perhaps a little dubious, but it seems that the
compiler has optimized away the redundant check anyway (your patch
doesn't alter code size).
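
For context, the redundancy here is purely arithmetic: for an unsigned
order, (order >= pageblock_order) implies (order >= pageblock_order / 2),
so deleting the first check cannot change what the function returns; the
comment keeps it only to document that stealing a whole pageblock is
always allowed even if the heuristic below it is later changed. A minimal
userspace sketch (the PAGEBLOCK_ORDER constant and migratetype values are
stand-ins, and the trailing page_group_by_mobility_disabled condition from
the full kernel function is omitted for brevity) shows the two variants
behave identically:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in values for illustration only; the kernel derives
 * pageblock_order from the architecture and configuration. */
#define PAGEBLOCK_ORDER 9
enum { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

/* Variant with the documenting check, as in the current source. */
static bool can_steal_with_check(unsigned int order, int start_mt)
{
	/* A whole-pageblock steal is always allowed; the heuristic
	 * below happens to cover this case too, but only incidentally. */
	if (order >= PAGEBLOCK_ORDER)
		return true;

	return order >= PAGEBLOCK_ORDER / 2 ||
	       start_mt == MIGRATE_RECLAIMABLE ||
	       start_mt == MIGRATE_UNMOVABLE;
}

/* Variant after the proposed patch: the explicit check is dropped. */
static bool can_steal_without_check(unsigned int order, int start_mt)
{
	return order >= PAGEBLOCK_ORDER / 2 ||
	       start_mt == MIGRATE_RECLAIMABLE ||
	       start_mt == MIGRATE_UNMOVABLE;
}

int main(void)
{
	/* Exhaustively confirm the two variants agree: the first branch
	 * can only fire when the half-order test would also pass. */
	for (unsigned int order = 0; order <= 2 * PAGEBLOCK_ORDER; order++)
		for (int mt = MIGRATE_UNMOVABLE; mt <= MIGRATE_RECLAIMABLE; mt++)
			assert(can_steal_with_check(order, mt) ==
			       can_steal_without_check(order, mt));

	printf("variants agree for all tested inputs\n");
	return 0;
}

Since both variants compute the same value, the compiler is free to fold
the explicit branch into the heuristic one, which is consistent with the
observation above that the patch does not alter code size.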
