Message-ID: <52D4DF97.1010409@parallels.com>
Date: Tue, 14 Jan 2014 10:56:23 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<devel@...nvz.org>, Mel Gorman <mgorman@...e.de>,
Michal Hocko <mhocko@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Rik van Riel <riel@...hat.com>,
Dave Chinner <dchinner@...hat.com>,
Glauber Costa <glommer@...il.com>
Subject: Re: [PATCH 3/5] mm: vmscan: respect NUMA policy mask when shrinking
slab on direct reclaim
On 01/14/2014 03:11 AM, Andrew Morton wrote:
> On Sat, 11 Jan 2014 16:36:33 +0400 Vladimir Davydov <vdavydov@...allels.com> wrote:
>
>> When direct reclaim is executed by a process bound to a set of NUMA
>> nodes, we should scan only those nodes when possible, but currently we
>> will scan kmem from all online nodes even if the kmem shrinker is
>> NUMA-aware. In other words, binding a process to a particular NUMA node
>> won't prevent it from shrinking inode/dentry caches from other nodes,
>> which is not good. Fix this.
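
(A minimal sketch of the behaviour described above, for illustration only;
the helper name, the trimmed-down signature and the explicit "allowed"
nodemask argument are assumptions, not the actual patch. shrink_slab_node()
stands in for the real per-node shrinking helper with its extra arguments
dropped.)

static unsigned long shrink_slab_masked(struct shrink_control *sc,
					struct shrinker *shrinker,
					nodemask_t *allowed)
{
	unsigned long freed = 0;
	int nid;

	if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) {
		/* NUMA-unaware shrinkers are only ever asked about node 0 */
		sc->nid = 0;
		return shrink_slab_node(sc, shrinker);
	}

	for_each_node(nid) {
		/* skip nodes the reclaiming task is not allowed to use */
		if (allowed && !node_isset(nid, *allowed))
			continue;

		sc->nid = nid;
		freed += shrink_slab_node(sc, shrinker);
	}
	return freed;
}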
> Seems right. I worry that reducing the amount of shrinking which
> node-bound processes perform might affect workloads in unexpected ways.
Theoretically, it might, especially for NUMA-unaware shrinkers. But
that's how it already works for cpusets: we do not count pages from
nodes that are not allowed for the current process. Besides, when
counting lru pages for kswapd_shrink_zones(), we consider only the
node this kswapd runs on, so NUMA-unaware shrinkers are already scanned
more aggressively on NUMA-enabled setups than NUMA-aware ones. So, in
fact, this patch makes policy mask handling consistent with the rest of
the vmscan code.
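
For reference, the zone walk on the page reclaim side already looks roughly
like this (a simplified sketch, not a verbatim copy of shrink_zones()); zones
on nodes outside sc->nodemask are simply never visited, which is the
behaviour the slab shrinking path should match:

static void shrink_zones_sketch(struct zonelist *zonelist,
				struct scan_control *sc)
{
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist_nodemask(zone, z, zonelist,
					gfp_zone(sc->gfp_mask),
					sc->nodemask) {
		if (!populated_zone(zone))
			continue;
		/* only zones on allowed nodes get their LRUs scanned */
		shrink_zone(zone, sc);
	}
}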
> I think I'll save this one for 3.15-rc1, OK?
OK, thanks.