Message-Id: <20081024144138.9C4C.KOSAKI.MOTOHIRO@jp.fujitsu.com>
Date: Fri, 24 Oct 2008 14:51:32 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Nick Piggin <npiggin@...e.de>
Cc: kosaki.motohiro@...fujitsu.com,
Heiko Carstens <heiko.carstens@...ibm.com>,
linux-kernel@...r.kernel.org, Hugh Dickins <hugh@...itas.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Lee Schermerhorn <lee.schermerhorn@...com>, linux-mm@...ck.org,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [RFC][PATCH] lru_add_drain_all() don't use schedule_on_each_cpu()
> On Fri, Oct 24, 2008 at 02:29:18PM +0900, KOSAKI Motohiro wrote:
> > > > > I don't see a better way to solve it, other than avoiding lru_add_drain_all
> > > >
> > > > Well,
> > > >
> > > > Unfortunately, lru_add_drain_all() is also used in some other VM
> > > > places (page migration and memory hotplug), and page migration's
> > > > usage is the same as this mlock usage
> > > > (1. grab mmap_sem, 2. call lru_add_drain_all()).
> > > >
> > > > So changing only the mlock usage isn't a solution ;-)
> > >
> > > No, not mlock alone.
> >
> > Ah, I see.
> > It seems difficult but valuable. I'll think about this approach for a while.
>
> Well, I think it would be nice if we could reduce lru_add_drain_all;
> however, your patch might be the least intrusive and best short-term
> solution.
Yup, thanks.
I also think my approach is the best solution for the 2.6.28 timeframe,
and I should work on your better solution for the long term.
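
For reference, a minimal sketch of the call pattern being discussed
(illustrative only, not the real kernel code: example_mlock_path() is a
made-up name, and the real callers, mlock and the migration path, are
simplified away):

#include <linux/mm_types.h>
#include <linux/rwsem.h>
#include <linux/swap.h>

/*
 * Illustrative sketch: both the mlock path and page migration hold
 * mmap_sem while calling lru_add_drain_all(), which currently drains
 * the pagevecs on every CPU via schedule_on_each_cpu() and therefore
 * has to wait for keventd on every CPU.
 */
static void example_mlock_path(struct mm_struct *mm)
{
	down_write(&mm->mmap_sem);	/* 1. grab mmap_sem */
	lru_add_drain_all();		/* 2. call lru_add_drain_all() */

	/* ... mark the VMA VM_LOCKED and fault its pages in ... */

	up_write(&mm->mmap_sem);
}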