Date:   Wed, 1 Mar 2017 16:21:44 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Jia He <hejianet@...il.com>, Mel Gorman <mgorman@...e.de>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH 5/9] mm: don't avoid high-priority reclaim on
 unreclaimable nodes

On Tue 28-02-17 16:40:03, Johannes Weiner wrote:
> 246e87a93934 ("memcg: fix get_scan_count() for small targets") sought
> to avoid high reclaim priorities for kswapd by forcing it to scan a
> minimum number of pages when lru_pages >> priority yielded nothing.
> 
> b95a2f2d486d ("mm: vmscan: convert global reclaim to per-memcg LRU
> lists"), due to switching global reclaim to a round-robin scheme over
> all cgroups, had to restrict this forceful behavior to unreclaimable
> zones in order to prevent massive overreclaim with many cgroups.
> 
> The latter patch effectively neutered the behavior for all but
> extreme memory pressure. But in those situations we might as well
> drop the reclaimers to lower priority levels. Remove the check.
> 
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>

Acked-by: Michal Hocko <mhocko@...e.com>

> ---
>  mm/vmscan.c | 19 +++++--------------
>  1 file changed, 5 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 911957b66622..46b6223fe7f3 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2129,22 +2129,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
>  	int pass;
>  
>  	/*
> -	 * If the zone or memcg is small, nr[l] can be 0.  This
> -	 * results in no scanning on this priority and a potential
> -	 * priority drop.  Global direct reclaim can go to the next
> -	 * zone and tends to have no problems. Global kswapd is for
> -	 * zone balancing and it needs to scan a minimum amount. When
> +	 * If the zone or memcg is small, nr[l] can be 0. When
>  	 * reclaiming for a memcg, a priority drop can cause high
> -	 * latencies, so it's better to scan a minimum amount there as
> -	 * well.
> +	 * latencies, so it's better to scan a minimum amount. When a
> +	 * cgroup has already been deleted, scrape out the remaining
> +	 * cache forcefully to get rid of the lingering state.
>  	 */
> -	if (current_is_kswapd()) {
> -		if (!pgdat_reclaimable(pgdat))
> -			force_scan = true;
> -		if (!mem_cgroup_online(memcg))
> -			force_scan = true;
> -	}
> -	if (!global_reclaim(sc))
> +	if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
>  		force_scan = true;
>  
>  	/* If we have no swap space, do not bother scanning anon pages. */
> -- 
> 2.11.1
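
For context, a minimal standalone sketch of the scan-target arithmetic
the changelog describes: lru_pages >> priority rounds to zero for a
small LRU, which would otherwise force a priority drop, and force_scan
clamps the target back up to a minimum batch. SWAP_CLUSTER_MAX and
DEF_PRIORITY mirror the kernel's constants; the helper and the numbers
around it are illustrative, not the actual get_scan_count() code.

/*
 * Standalone sketch (not kernel source) of the minimum-scan forcing
 * discussed above. Compile with any C compiler and run to see at
 * which priorities the unforced scan target collapses to zero.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL   /* kernel's minimum reclaim batch size */
#define DEF_PRIORITY     12     /* reclaim starts scanning at this priority */

static unsigned long scan_target(unsigned long lru_pages, int priority,
				 int force_scan)
{
	unsigned long scan = lru_pages >> priority;

	/* A small LRU yields scan == 0; force_scan restores a minimum. */
	if (!scan && force_scan)
		scan = lru_pages < SWAP_CLUSTER_MAX ? lru_pages : SWAP_CLUSTER_MAX;
	return scan;
}

int main(void)
{
	unsigned long lru_pages = 100;	/* a small memcg LRU */

	for (int prio = DEF_PRIORITY; prio >= 0; prio--)
		printf("priority %2d: scan=%4lu (forced: %lu)\n", prio,
		       scan_target(lru_pages, prio, 0),
		       scan_target(lru_pages, prio, 1));
	return 0;
}

With 100 LRU pages, every priority above 6 computes an unforced target
of zero, which is exactly the case where forcing a SWAP_CLUSTER_MAX
batch avoids the latency of dropping to lower priorities.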

-- 
Michal Hocko
SUSE Labs
