Message-ID: <alpine.DEB.2.00.1210171237130.28214@chino.kir.corp.google.com>
Date: Wed, 17 Oct 2012 12:38:55 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Dave Jones <davej@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
bhutchings@...arflare.com,
Konstantin Khlebnikov <khlebnikov@...nvz.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Hugh Dickins <hughd@...gle.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch for-3.7] mm, mempolicy: fix printing stack contents in
numa_maps
On Wed, 17 Oct 2012, Dave Jones wrote:
> > Sounds good. Is it possible to verify that policy_cache isn't getting
> > larger than normal in /proc/slabinfo, i.e. when all processes with a
> > task mempolicy or shared vma policy have exited, are there still a
> > significant number of active objects?
>
> Killing the fuzzer caused it to drop dramatically.
>
> Before:
> (15:29:59:davej@...crush:trinity[master])$ sudo cat /proc/slabinfo | grep policy
> shared_policy_node 2931 2967 376 43 4 : tunables 0 0 0 : slabdata 69 69 0
> numa_policy 2971 6545 464 35 4 : tunables 0 0 0 : slabdata 187 187 0
>
> After:
> (15:30:16:davej@...crush:trinity[master])$ sudo cat /proc/slabinfo | grep policy
> shared_policy_node 0 215 376 43 4 : tunables 0 0 0 : slabdata 5 5 0
> numa_policy 15 175 464 35 4 : tunables 0 0 0 : slabdata 5 5 0
>
Excellent, thanks. This shows the refcounting is working properly: the
change isn't leaking references, which would have left the mempolicies
pinned and never freed. ("numa_policy" turns out to be policy_cache in
the code, so thanks for checking both of them.)
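For anyone wanting to repeat this check, here is a minimal sketch. The
awk one-liner and the /tmp/slabinfo.snap path are just for illustration
(on a live system you would read /proc/slabinfo directly, as root); the
snapshot below reuses the "after" figures from your output:

```shell
# Sum the active-object counts (2nd slabinfo column) for the two
# mempolicy caches. After every task with a mempolicy or shared vma
# policy has exited, a large residual count would suggest a leak.
cat > /tmp/slabinfo.snap <<'EOF'
shared_policy_node      0    215    376   43    4 : tunables 0 0 0 : slabdata 5 5 0
numa_policy            15    175    464   35    4 : tunables 0 0 0 : slabdata 5 5 0
EOF
awk '/^(shared_policy_node|numa_policy) / { total += $2 } END { print total }' \
	/tmp/slabinfo.snap
# prints 15
```

The small non-zero numa_policy count is expected, since the default and
preferred policies of live tasks also come from policy_cache.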
Could I add your tested-by?