Message-ID: <20140122184836.GE4407@cmpxchg.org>
Date: Wed, 22 Jan 2014 13:48:36 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Dave Chinner <david@...morbit.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Bob Liu <bob.liu@...cle.com>,
Christoph Hellwig <hch@...radead.org>,
Greg Thelen <gthelen@...gle.com>,
Hugh Dickins <hughd@...gle.com>, Jan Kara <jack@...e.cz>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Luigi Semenzato <semenzato@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Metin Doslu <metin@...usdata.com>,
Michel Lespinasse <walken@...gle.com>,
Minchan Kim <minchan.kim@...il.com>,
Ozgun Erdogan <ozgun@...usdata.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Roman Gushchin <klamm@...dex-team.ru>,
Ryan Mallon <rmallon@...il.com>, Tejun Heo <tj@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 9/9] mm: keep page cache radix tree nodes in check
On Wed, Jan 22, 2014 at 01:57:14AM -0500, Johannes Weiner wrote:
> Not at this time, I'll try to look into that. For now, I am updating
> the patch to revert the shrinker back to DEFAULT_SEEKS and change the
> object count to only include objects above a certain threshold, which
> assumes a worst-case population of 4 in 64 slots. It's not perfect,
> but neither was the seeks magic, and it's easier to reason about what
> it's actually doing.
Ah, the quality of 2am submissions... that should read 8 out of 64, of course.
> @@ -266,14 +269,38 @@ struct list_lru workingset_shadow_nodes;
> static unsigned long count_shadow_nodes(struct shrinker *shrinker,
> struct shrink_control *sc)
> {
> - return list_lru_count_node(&workingset_shadow_nodes, sc->nid);
> + unsigned long shadow_nodes;
> + unsigned long max_nodes;
> + unsigned long pages;
> +
> + shadow_nodes = list_lru_count_node(&workingset_shadow_nodes, sc->nid);
> + pages = node_present_pages(sc->nid);
> + /*
> + * Active cache pages are limited to 50% of memory, and shadow
> + * entries that represent a refault distance bigger than that
> + * do not have any effect. Limit the number of shadow nodes
> + * such that shadow entries do not exceed the number of active
> + * cache pages, assuming a worst-case node population density
> + * of 1/16th on average.
That should read 1/8th; the actual code below is consistent with it:
> + * On 64-bit with 7 radix_tree_nodes per page and 64 slots
> + * each, this will reclaim shadow entries when they consume
> + * ~2% of available memory:
> + *
> + * PAGE_SIZE / radix_tree_nodes / node_entries / PAGE_SIZE
> + */
> + max_nodes = pages >> (1 + RADIX_TREE_MAP_SHIFT - 3);
> +
> + if (shadow_nodes <= max_nodes)
> + return 0;
> +
> + return shadow_nodes - max_nodes;
> }
>
> static enum lru_status shadow_lru_isolate(struct list_head *item,
> spinlock_t *lru_lock,
> void *arg)
> {
> - unsigned long *nr_reclaimed = arg;
> struct address_space *mapping;
> struct radix_tree_node *node;
> unsigned int i;