Date:	Thu, 29 Aug 2013 21:07:41 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
Cc:	Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>,
	Dave Chinner <dchinner@...hat.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Matthew Wilcox <willy@...ux.intel.com>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Avoid useless inodes and dentries reclamation

On Wed, Aug 28, 2013 at 02:52:12PM -0700, Tim Chen wrote:
> When free inodes and dentries are really low, this patch skips their
> reclamation so that we do not contend on the global sb_lock uselessly
> under memory pressure. Otherwise we create a logjam trying to acquire
> the sb_lock in prune_super(), with little or no freed memory to show
> for the effort.
> 
> The cpu-cycles profile below was taken while a multi-threaded large
> file read exerted memory pressure through page cache usage; it is
> dominated by sb_lock contention. The patch eliminates the sb_lock
> contention in prune_super() almost entirely.
> 
>     43.94%           usemem  [kernel.kallsyms]             [k] _raw_spin_lock
>                      |
>                      --- _raw_spin_lock
>                         |
>                         |--32.44%-- grab_super_passive
>                         |          prune_super
>                         |          shrink_slab
>                         |          do_try_to_free_pages
>                         |          try_to_free_pages
>                         |          __alloc_pages_nodemask
>                         |          alloc_pages_current
>                         |
>                         |--32.18%-- put_super
>                         |          drop_super
>                         |          prune_super
>                         |          shrink_slab
>                         |          do_try_to_free_pages
>                         |          try_to_free_pages
>                         |          __alloc_pages_nodemask
>                         |          alloc_pages_current
> 
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> ---
>  fs/super.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/fs/super.c b/fs/super.c
> index 68307c0..70fa26c 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -53,6 +53,7 @@ static char *sb_writers_name[SB_FREEZE_LEVELS] = {
>   * shrinker path and that leads to deadlock on the shrinker_rwsem. Hence we
>   * take a passive reference to the superblock to avoid this from occurring.
>   */
> +#define SB_CACHE_LOW 5
>  static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
>  {
>  	struct super_block *sb;
> @@ -68,6 +69,13 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
>  	if (sc->nr_to_scan && !(sc->gfp_mask & __GFP_FS))
>  		return -1;
>  
> +	/*
> +	 * Don't prune if we have few cached objects to reclaim to
> +	 * avoid useless sb_lock contention
> +	 */
> +	if ((sb->s_nr_dentry_unused + sb->s_nr_inodes_unused) <= SB_CACHE_LOW)
> +		return -1;

Those counters no longer exist in the current mmotm tree and the
shrinker infrastructure is somewhat different, so this patch isn't
the right way to solve this problem.

Given that superblock LRUs and shrinkers in mmotm are node aware,
there may even be more pressure on the sb_lock in such a workload.  I
think the right way to deal with this is to give the shrinker itself
a "minimum call count" so that we can avoid even attempting to
shrink caches that don't have enough entries in them to be worth
shrinking.
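
Something along these lines, purely as a sketch against the count/scan
shrinker API in mmotm -- the min_objects field and the surrounding
function are made up for illustration, nothing like this exists today:

	/*
	 * Hypothetical: a per-shrinker floor below which we don't even
	 * try to scan, to save the locking overhead on tiny caches.
	 */
	static unsigned long
	shrink_one(struct shrinker *shrinker, struct shrink_control *sc)
	{
		unsigned long nr = shrinker->count_objects(shrinker, sc);

		/* too few cached objects to be worth the lock traffic */
		if (nr == 0 || nr < shrinker->min_objects)
			return 0;

		return shrinker->scan_objects(shrinker, sc);
	}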

That said, the memcg guys have been saying that even small numbers
of items per cache can be meaningful in terms of memory reclaim
(e.g. when there are lots of memcgs), so such a threshold might
only be appropriate for caches that are not memcg controlled. In
that case, handling it in the shrinker infrastructure itself is a
much better idea than hacking thresholds into individual shrinker
callouts.
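
IOW, if the check lives in the generic code, exempting memcg
controlled caches is a one-line tweak to the hypothetical sketch
above (the flag here is equally made up):

	/* hypothetical flag: never skip small scans for memcg controlled caches */
	if (!(shrinker->flags & SHRINKER_MEMCG_CONTROLLED) &&
	    nr < shrinker->min_objects)
		return 0;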

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
