Message-ID: <20130418164306.GJ2018@cmpxchg.org>
Date: Thu, 18 Apr 2013 09:43:06 -0700
From: Johannes Weiner <hannes@...xchg.org>
To: Mel Gorman <mgorman@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Jiri Slaby <jslaby@...e.cz>,
Valdis Kletnieks <Valdis.Kletnieks@...edu>,
Rik van Riel <riel@...hat.com>,
Zlatko Calusic <zcalusic@...sync.net>,
dormando <dormando@...ia.net>, Michal Hocko <mhocko@...e.cz>,
Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 08/10] mm: vmscan: Have kswapd shrink slab only once per priority

On Thu, Apr 11, 2013 at 08:57:56PM +0100, Mel Gorman wrote:
> If kswapd fails to make progress but continues to shrink slab then it'll
> either discard all of slab or consume CPU uselessly scanning shrinkers.
> This patch causes kswapd to only call the shrinkers once per priority.
But the priority level changes _only_ when kswapd is not making
progress, so I don't see how this fixes this case.
On the other hand, what about shrinkable memory like the dentries and
inodes that build up during a streaming IO load such as a backup run?
Kswapd may be cooperating with the page allocator and never change
priority while it reclaims the continuous stream of file pages, but
then it won't apply the same continuous pressure to the stream of slab
memory.
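
To make that concrete, here is a toy userspace model of the loop shape
in question (this is not mm/vmscan.c; all names and numbers are made up
for illustration), comparing a run where LRU reclaim keeps making
progress with one where it does not:

#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY	12
#define PASSES		100

static int simulate(bool lru_makes_progress)
{
	int priority = DEF_PRIORITY;
	int last_slab_priority = -1;
	int slab_calls = 0;
	int pass;

	for (pass = 0; pass < PASSES && priority >= 0; pass++) {
		/* "Only once per priority": skip slab if we already
		 * shrank it at this priority level. */
		if (priority != last_slab_priority) {
			slab_calls++;	/* stands in for shrink_slab() */
			last_slab_priority = priority;
		}

		/* The priority changes _only_ when LRU reclaim made
		 * no progress. */
		if (!lru_makes_progress)
			priority--;
	}
	return slab_calls;
}

int main(void)
{
	/* Streaming IO: progress on every pass, the priority never
	 * moves, so slab is aged exactly once in 100 reclaim passes. */
	printf("streaming, progress every pass: %d shrink_slab calls\n",
	       simulate(true));

	/* No progress: the priority drops every pass, so the shrinkers
	 * still run on every single pass down to priority 0. */
	printf("struggling, no progress:        %d shrink_slab calls\n",
	       simulate(false));
	return 0;
}

Under those toy assumptions the streaming run shrinks slab once in a
hundred passes, while the struggling run still invokes the shrinkers on
every pass.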
So if anything, I would expect us to lay off slab memory while lru
reclaim is struggling, but to keep it under continuous aging and
pushback as long as lru reclaim is comfortably running alongside the
workload.
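
Purely as a sketch of that idea (hypothetical names, not a claim about
how shrink_slab() actually computes its scan target), the proportional
aging could look like scanning slab in the same ratio as the LRU lists
were scanned during the pass:

#include <stdio.h>

/* Hypothetical helper, not a kernel API: scan slab objects in the same
 * proportion as the LRU pages scanned this pass, so a comfortably
 * streaming workload keeps pushing back on dentry/inode growth while
 * an idle system leaves slab alone. */
static unsigned long slab_scan_target(unsigned long slab_objects,
				      unsigned long lru_scanned,
				      unsigned long lru_pages)
{
	if (!lru_pages)
		return 0;
	return slab_objects * lru_scanned / lru_pages;
}

int main(void)
{
	/* 1M slab objects, 4096 LRU pages: light vs heavy LRU scanning */
	printf("LRU scanned   32/4096 -> scan %lu slab objects\n",
	       slab_scan_target(1UL << 20, 32, 4096));
	printf("LRU scanned 1024/4096 -> scan %lu slab objects\n",
	       slab_scan_target(1UL << 20, 1024, 4096));
	return 0;
}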