Message-Id: <20090910083340.9CB7.A69D9226@jp.fujitsu.com>
Date: Thu, 10 Sep 2009 08:43:56 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: kosaki.motohiro@...fujitsu.com,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>, Ingo Molnar <mingo@...e.hu>,
linux-mm <linux-mm@...ck.org>,
Oleg Nesterov <onestero@...hat.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [rfc] lru_add_drain_all() vs isolation
> On Wed, 9 Sep 2009, KOSAKI Motohiro wrote:
>
> > Christoph, I'd like to discuss a somewhat related (and almost unrelated) thing.
> > I think page migration doesn't need lru_add_drain_all() to be synchronous, because
> > page migration retries up to 10 times.
>
> True, this is only an optimization that increases the chance of isolation
> being successful. You don't need draining at all.
>
> > Then an asynchronous lru_add_drain_all() causes:
> >
> > - if the system isn't under heavy pressure, the retry is successful.
> > - if the system is under heavy pressure, or an RT thread runs a busy loop, the retry fails.
> >
> > I don't think this is problematic behavior. Also, mlock can use an asynchronous lru drain.
> >
> > What do you think?
>
> The retries can be very fast if the list of pages to migrate is small. The
> migration attempts may be finished before the IPI can be processed by the
> other CPUs.
Ah, I see. Yes, my last proposal is not good; a small migration might fail.
How about this?
- passes 1-2: lru_add_drain_all_async()
- passes 3-10: lru_add_drain_all()
This scheme should save the RT-thread case and never cause a regression. (I think)
The last remaining problem: if the pagevec on the CPU an RT thread is bound to
holds a page targeted for migration, migration still faces the same issue.
But we can't solve that here...
The RT thread must use /proc/sys/vm/drop_caches properly.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/