Date:   Mon, 4 Nov 2019 14:58:22 -0500
From:   Brian Foster <bfoster@...hat.com>
To:     Dave Chinner <david@...morbit.com>
Cc:     linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 15/28] mm: back off direct reclaim on excessive shrinker
 deferral

On Fri, Nov 01, 2019 at 10:46:05AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@...hat.com>
> 
> When the majority of possible shrinker reclaim work is deferred by
> the shrinkers (e.g. due to GFP_NOFS context), and there is more work
> defered than LRU pages were scanned, back off reclaim if there are

  deferred

> large amounts of IO in progress.
> 
> This tends to occur when there are inode cache heavy workloads that
> have little page cache or application memory pressure on filesystems
> like XFS. Inode cache heavy workloads involve lots of IO, so if we
> are getting device congestion, it is indicative of memory reclaim
> running up against an IO throughput limitation. In this situation
> we need to throttle direct reclaim as we nee dto wait for kswapd to

					   need to

> get some of the deferred work done.
> 
> However, if there is no device congestion, then the system is
> keeping up with both the workload and memory reclaim and so there's
> no need to throttle.
> 
> Hence we should only back off scanning for a bit if we see this
> condition and there is block device congestion present.
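
Condensed, the backoff added below boils down to this (my restatement
of the hunk in shrink_node() further down, where
"sc->nr_scanned - nr_scanned" is the number of LRU pages scanned in
this reclaim pass):

	/*
	 * Throttle only while the device is actually congested;
	 * otherwise wait_iff_congested() just does a cond_resched()
	 * and returns immediately.
	 */
	if (reclaim_state->deferred_objects > sc->nr_scanned - nr_scanned &&
	    reclaim_state->deferred_objects > reclaim_state->scanned_objects)
		wait_iff_congested(BLK_RW_ASYNC, HZ/50);
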
> 
> Signed-off-by: Dave Chinner <dchinner@...hat.com>
> ---
>  include/linux/swap.h |  2 ++
>  mm/vmscan.c          | 30 +++++++++++++++++++++++++++++-
>  2 files changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 72b855fe20b0..da0913e14bb9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -131,6 +131,8 @@ union swap_header {
>   */
>  struct reclaim_state {
>  	unsigned long	reclaimed_pages;	/* pages freed by shrinkers */
> +	unsigned long	scanned_objects;	/* quantity of work done */ 

Trailing whitespace at the end of the above line.

> +	unsigned long	deferred_objects;	/* work that wasn't done */
>  };
>  
>  /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 967e3d3c7748..13c11e10c9c5 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  		deferred_count = min(deferred_count, freeable_objects * 2);
>  
>  	}
> +	if (current->reclaim_state)
> +		current->reclaim_state->scanned_objects += scanned_objects;

Looks like scanned_objects is always zero here.
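
If the intent is to account the work the scan loop below actually
performs, I'd have expected the accounting to sit after that loop,
something like (a sketch, assuming the loop accumulates what it scans
into scanned_objects):

	/* account objects actually scanned, after the scan loop runs */
	if (current->reclaim_state)
		current->reclaim_state->scanned_objects += scanned_objects;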

>  
>  	/*
>  	 * Avoid risking looping forever due to too large nr value:
> @@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
>  	 * defer the work to a context that can scan the cache.
>  	 */
> -	if (shrinkctl->defer_work)
> +	if (shrinkctl->defer_work) {
> +		if (current->reclaim_state)
> +			current->reclaim_state->deferred_objects += scan_count;
>  		goto done;
> +	}
>  
>  	/*
>  	 * Normally, we should not scan less than batch_size objects in one
> @@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  
>  		if (reclaim_state) {
>  			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
> +
> +			/*
> +			 * If we are deferring more work than we are actually
> +			 * doing in the shrinkers, and we are scanning more
> +			 * objects than we are pages, then we have a large amount
> +			 * of slab caches we are deferring work to kswapd for.
> +			 * We better back off here for a while, otherwise
> +			 * we risk priority windup, swap storms and OOM kills
> +			 * once we empty the page lists but still can't make
> +			 * progress on the shrinker memory.
> +			 *
> +			 * kswapd won't ever defer work as it's run under a
> +			 * GFP_KERNEL context and can always do work.
> +			 */
> +			if ((reclaim_state->deferred_objects >
> +					sc->nr_scanned - nr_scanned) &&

Out of curiosity, what's the reasoning behind the direct comparison
between ->deferred_objects and pages? Shouldn't we generally expect more
slab objects to exist than pages by the nature of slab?

Also, the comment says "if we are scanning more objects than we are
pages," yet the code is checking whether we defer more objects than
scanned pages. Which is more accurate?
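
To make that concrete (hypothetical numbers): if this pass scanned 1000
LRU pages while the shrinkers deferred 5000 objects and scanned 2000,
then deferred_objects (5000) exceeds both the page count (1000) and
scanned_objects (2000), so we back off (given congestion), even though
5000 slab objects may pin far less memory than 1000 pages.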

Brian

> +			    (reclaim_state->deferred_objects >
> +					reclaim_state->scanned_objects)) {
> +				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
> +			}
> +
>  			reclaim_state->reclaimed_pages = 0;
> +			reclaim_state->deferred_objects = 0;
> +			reclaim_state->scanned_objects = 0;
>  		}
>  
>  		/* Record the subtree's reclaim efficiency */
> -- 
> 2.24.0.rc0
> 
