Message-ID: <CAHGf_=ojnEA6FG2eE_P6oAO-WtybAbTF8xNUMWb4geF4PXQbhA@mail.gmail.com>
Date: Wed, 17 Oct 2012 17:05:37 -0400
From: KOSAKI Motohiro <kosaki.motohiro@...il.com>
To: David Rientjes <rientjes@...gle.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Dave Jones <davej@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
bhutchings@...arflare.com,
Konstantin Khlebnikov <khlebnikov@...nvz.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Hugh Dickins <hughd@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch for-3.7] mm, mempolicy: fix printing stack contents in numa_maps
On Wed, Oct 17, 2012 at 3:50 PM, David Rientjes <rientjes@...gle.com> wrote:
> On Wed, 17 Oct 2012, KOSAKI Motohiro wrote:
>
>> > I think this refcounting is better than using task_lock().
>>
>> I don't think so. get_vma_policy() is used from the fast path. In other
>> words, the number of atomic ops matters for allocation performance.
>
> There are enhancements that we can make with refcounting: for instance, we
> may want to avoid doing it in the super-fast path when the policy is
> default_policy and then just do
>
> 	if (mpol != &default_policy)
> 		mpol_put(mpol);
>
>> Instead, I'd like to use a spinlock for shared mempolicy instead of a
>> mutex.
>>
>
> Um, this was just changed to a mutex last week in commit b22d127a39dd
> ("mempolicy: fix a race in shared_policy_replace()") so that sp_alloc()
> can be done with GFP_KERNEL, and I hadn't considered reverting that behavior.
> Are you nacking that patch, which you acked, now?
Yes, sadly. /proc usage is a corner-case issue; it's not worth penalizing
the main path.
See commit 52cd3b0740 and the surrounding patches. They explain why we
avoided your approach.
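
As an aside, here is a minimal user-space model of the default_policy
pattern quoted above. The struct layout and the mpol_get()/mpol_put()
helpers are simplifications for illustration, not the kernel's actual
code; the point is only that a statically allocated default policy is
never reference counted, so the common case pays no atomic ops and only
dynamically allocated policies go through get/put.

/* model of "skip refcounting for the static default policy" */
#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

struct mempolicy {
	atomic_int refcnt;
	int mode;
};

/* Shared fallback policy; callers never free or refcount it. */
static struct mempolicy default_policy = { .mode = 0 };

static struct mempolicy *mpol_get(struct mempolicy *pol)
{
	if (pol != &default_policy)	/* no atomic op in the common case */
		atomic_fetch_add(&pol->refcnt, 1);
	return pol;
}

static void mpol_put(struct mempolicy *pol)
{
	if (pol == &default_policy)
		return;
	if (atomic_fetch_sub(&pol->refcnt, 1) == 1)
		free(pol);
}

int main(void)
{
	struct mempolicy *vma_pol = malloc(sizeof(*vma_pol));

	atomic_init(&vma_pol->refcnt, 1);
	vma_pol->mode = 1;

	/* A reader (e.g. a /proc walker) pins the policy while using it. */
	struct mempolicy *pol = mpol_get(vma_pol);
	printf("mode = %d\n", pol->mode);
	if (pol != &default_policy)	/* the pattern quoted above */
		mpol_put(pol);

	mpol_put(vma_pol);		/* drop the creator's reference */

	/* The default policy needs no pin and no put. */
	pol = mpol_get(&default_policy);
	printf("default mode = %d\n", pol->mode);
	return 0;
}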
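
And on the spinlock idea: below is a greatly simplified user-space model
of "allocate first, lock second". It ignores the range splitting and
retry logic that the real shared_policy_replace()/sp_alloc() path needs;
it only shows that when the sleeping allocation (GFP_KERNEL in the
kernel, malloc() here) happens before the lock is taken, the critical
section never sleeps and a spinlock is sufficient. Names and layout are
illustrative, not the kernel's.

/* model of "preallocate outside the lock, insert under a spinlock" */
/* build with: cc -pthread model.c */
#include <pthread.h>
#include <stdlib.h>

struct sp_node {
	unsigned long start, end;
	struct sp_node *next;
};

struct shared_policy {
	pthread_spinlock_t lock;
	struct sp_node *list;
};

/* Insert the range [start, end); nothing sleeps while the lock is held. */
static int shared_policy_replace(struct shared_policy *sp,
				 unsigned long start, unsigned long end)
{
	/* The allocation that may sleep is done before taking the lock. */
	struct sp_node *new = malloc(sizeof(*new));

	if (!new)
		return -1;
	new->start = start;
	new->end = end;

	pthread_spin_lock(&sp->lock);
	new->next = sp->list;		/* short, non-sleeping critical section */
	sp->list = new;
	pthread_spin_unlock(&sp->lock);
	return 0;
}

int main(void)
{
	struct shared_policy sp = { .list = NULL };

	pthread_spin_init(&sp.lock, PTHREAD_PROCESS_PRIVATE);
	shared_policy_replace(&sp, 0, 4096);
	pthread_spin_destroy(&sp.lock);
	free(sp.list);
	return 0;
}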