Message-ID: <1336507209.3796.90.camel@schen9-DESK>
Date: Tue, 08 May 2012 13:00:09 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Alexander Viro <viro@...iv.linux.org.uk>
Cc: Matthew Wilcox <matthew@....cx>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [RFC, PATCH] Make memory reclaim from inodes and dentry cache
more scalable
On Wed, 2012-05-02 at 15:06 -0700, Tim Chen wrote:
> The following patch detects when the inode and dentry caches are very
> low on free entries and skips reclaiming memory from them when doing so
> would be futile. We only resume reclaiming memory from the inode and
> dentry caches once a reasonable amount of memory is available there.
> This avoids bottlenecking on sb_lock for useless memory reclamation.
>
> I assume that it is okay to check the super block's count of free
> objects without taking sb_lock, since we are holding the shrinker
> list's read lock. The shrinker is still registered, so the super block
> has not yet been deactivated, which would require un-registering the
> shrinker first. It would be great if Al could comment on whether this
> assumption is okay.
>
> In a test scenario where the page cache puts heavy pressure on memory
> with a large number of processes, we saw very heavy contention on
> sb_lock while trying to free pages, as seen in the following profile.
> The patch reduced the runtime by almost a factor of 4.
>
> 62.81% cp [kernel.kallsyms] [k] _raw_spin_lock
> |
> --- _raw_spin_lock
> |
> |--45.19%-- grab_super_passive
> | prune_super
> | shrink_slab
> | do_try_to_free_pages
> | try_to_free_pages
> | __alloc_pages_nodemask
> | alloc_pages_current
>
>
> Tim
Hi Al,
I'd like to ping you again to hear your thoughts on this patch, which I
sent a week ago.
Thanks.
Tim
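
For illustration, below is a minimal sketch of the approach described in
the quoted mail, written against the 3.4-era prune_super() in fs/super.c.
It is not the actual patch; the SB_CACHE_LOW threshold and the simplified
return values are hypothetical placeholders.

/*
 * Minimal sketch (not the actual patch): bail out of the per-superblock
 * shrinker before calling grab_super_passive() when the dentry and inode
 * LRUs hold too few free objects to make reclaim worthwhile, so sb_lock
 * is never touched in that case.
 */
#define SB_CACHE_LOW	64	/* hypothetical "too few to bother" threshold */

static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
{
	struct super_block *sb =
		container_of(shrink, struct super_block, s_shrink);
	int total_objects = sb->s_nr_dentry_unused + sb->s_nr_inodes_unused;

	/*
	 * The counters are read without sb_lock on the assumption that,
	 * since we hold the shrinker list's read lock, the shrinker is
	 * still registered and the superblock cannot have been
	 * deactivated (deactivation un-registers the shrinker first).
	 */
	if (total_objects <= SB_CACHE_LOW)
		return 0;	/* skip; not worth contending on sb_lock */

	/*
	 * Otherwise proceed as before: grab_super_passive(sb), prune the
	 * dcache and icache, drop_super(sb), and report what remains.
	 */
	return total_objects;
}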