Date:   Tue, 29 Mar 2022 10:05:20 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Wei Yang <richard.weiyang@...il.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, mgorman@...hsingularity.net
Subject: Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone

Wei Yang <richard.weiyang@...il.com> writes:

> On Tue, Mar 29, 2022 at 08:43:23AM +0800, Huang, Ying wrote:
> [...]
>>>>> --- a/mm/migrate.c
>>>>> +++ b/mm/migrate.c
>>>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>>>  			return 0;
>>>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>>>> -			if (populated_zone(pgdat->node_zones + z))
>>>>> +			if (managed_zone(pgdat->node_zones + z))
>>>>
>>>>This looks good to me!  Thanks!  It seems that we can replace
>>>>populated_zone() in migrate_balanced_pgdat() too.  Right?
>>>>
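For concreteness, the replacement suggested above would look roughly
like below -- an untested sketch of the migrate_balanced_pgdat() loop
with managed_zone() swapped in for populated_zone() (the surrounding
function body is recalled from mm/migrate.c and may differ in detail):

	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
		struct zone *zone = pgdat->node_zones + z;

		/* Skip zones with no managed pages, not just unpopulated ones. */
		if (!managed_zone(zone))
			continue;

		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
		if (!zone_watermark_ok(zone, 0,
				       high_wmark_pages(zone) + nr_migrate_pages,
				       ZONE_MOVABLE, 0))
			continue;
		return true;
	}
	return false;
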
>>>
>>> Yes, you are right. I didn't spot this.
>>>
>>> Since this patch came from looking at wakeup_kswapd(), I am not sure
>>> it is appropriate to fold that change into it.
>>>
>>> Which way would you prefer to include it: merged into this patch, or
>>> as a separate one?
>>
>>Either is OK with me.
>>
>
> After reading the code, I would like to do a small simplification. Does
> this look good to you?
>
> From 85c8a5cd708ada3e9f5b0409413407b7be1bc446 Mon Sep 17 00:00:00 2001
> From: Wei Yang <richard.weiyang@...il.com>
> Date: Tue, 29 Mar 2022 09:24:36 +0800
> Subject: [PATCH] mm/migrate.c: return valid zone for wakeup_kswapd from
>  migrate_balanced_pgdat()
>
> To wake up kswapd, we need to iterate over pgdat->node_zones to get the
> proper zone. But this work has already been done in
> migrate_balanced_pgdat().
>
> Let's return the valid zone directly instead of doing the iteration
> again.
>
> Signed-off-by: Wei Yang <richard.weiyang@...il.com>
> ---
>  mm/migrate.c | 21 ++++++++-------------
>  1 file changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5adc55b5347c..b086bd781956 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1973,7 +1973,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
>   * Returns true if this is a safe migration target node for misplaced NUMA
>   * pages. Currently it only checks the watermarks which is crude.
>   */
> -static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
> +static struct zone *migrate_balanced_pgdat(struct pglist_data *pgdat,
>  				   unsigned long nr_migrate_pages)
>  {
>  	int z;
> @@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>  			continue;
>  
>  		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
> -		if (!zone_watermark_ok(zone, 0,
> +		if (zone_watermark_ok(zone, 0,
>  				       high_wmark_pages(zone) +
>  				       nr_migrate_pages,
>  				       ZONE_MOVABLE, 0))
> -			continue;
> -		return true;
> +			return zone;
>  	}
> -	return false;
> +	return NULL;
>  }
>  
>  static struct page *alloc_misplaced_dst_page(struct page *page,
> @@ -2032,6 +2031,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  	int page_lru;
>  	int nr_pages = thp_nr_pages(page);
>  	int order = compound_order(page);
> +	struct zone *zone;
>  
>  	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
> @@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> -		int z;
> -
> +	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {

I think that this reverses the original semantics.  Originally, we give
up and wake up kswapd if there are not enough free pages on the target
node.  But now, you give up and wake it up if there are enough free
pages.
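
That is, the pre-patch code (sketched below from the lines your diff
removes) only wakes kswapd when migrate_balanced_pgdat() reports that
no zone has enough free pages:

	/* Avoid migrating to a node that is nearly full */
	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
		int z;

		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
			return 0;
		/* Find the highest managed zone to wake kswapd on. */
		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
			if (managed_zone(pgdat->node_zones + z))
				break;
		}
		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
		return 0;
	}

And note that in the not-balanced case the new helper returns NULL, so
its return value cannot be passed to wakeup_kswapd() directly anyway; a
zone to wake kswapd on still has to be looked up separately.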

Best Regards,
Huang, Ying

>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>  			return 0;
> -		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> -			if (managed_zone(pgdat->node_zones + z))
> -				break;
> -		}
> -		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> +
> +		wakeup_kswapd(zone, 0, order, ZONE_MOVABLE);
>  		return 0;
>  	}
>  
> -- 
> 2.33.1
