Message-ID: <20110712094841.GC10552@tiehlicka.suse.cz>
Date: Tue, 12 Jul 2011 11:48:41 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Hiroyuki Kamezawa <kamezawa.hiroyuki@...il.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Christoph Hellwig <hch@...radead.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...hat.com>,
Michel Lespinasse <walken@...gle.com>,
Mel Gorman <mgorman@...e.de>, Lutz Vieweg <lvml@....de>
Subject: Re: [PATCH] mm: preallocate page before lock_page at filemap COW.
 (Was: Re: [PATCH V2] mm: Do not keep page locked during page fault while
 charging it for memcg)

On Fri 24-06-11 20:46:29, Hiroyuki Kamezawa wrote:
> 2011/6/24 Michal Hocko <mhocko@...e.cz>:
> > Sorry, forgot to send my
> > Reviewed-by: Michal Hocko <mhocko@...e>
> >
>
> Thanks.
>
> > I still have concerns about this way to handle the issue. See the follow
> > up discussion in other thread (https://lkml.org/lkml/2011/6/23/135).
> >
> > Anyway I think that we do not have many other options to handle this.
> > Either we unlock, charge, lock & retest, or we preallocate and fault in.
> >
> I agree.
>
> > Or am I missing some other way to do it? What do others think about
> > these approaches?
> >
>
> Yes, I'd like to hear other mm specialists' suggestions, and I'll keep
> thinking about other approaches.
> Anyway, memory reclaim while holding lock_page() can cause big latency
> or starvation, especially when memcg is used. It's better to avoid it.
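
(Just to make the trade-off quoted above concrete for anyone catching up on
the thread: the "unlock, charge, lock & retest" option would move the memcg
charge, and any reclaim it triggers, out from under the page lock. The C
sketch below only models that ordering with made-up stub helpers; it is not
the actual kernel code path.)

/* Hypothetical sketch of the "unlock, charge, lock & retest" ordering.
 * All helpers below are stand-ins, not the real kernel API. */
#include <stdbool.h>

struct page { bool locked; bool charged; bool still_valid; };

static void lock_page_stub(struct page *p)   { p->locked = true;  }
static void unlock_page_stub(struct page *p) { p->locked = false; }

/* Stand-in for the memcg charge; in the kernel this may sleep and may
 * trigger (potentially slow) reclaim inside the cgroup. */
static bool charge_page_stub(struct page *p) { p->charged = true; return true; }

static bool cow_charge_then_relock(struct page *p)
{
	/* Drop the page lock first so that reclaim done on behalf of the
	 * charge never runs while other tasks wait in lock_page(). */
	unlock_page_stub(p);

	if (!charge_page_stub(p))
		return false;            /* charge failed: abort the fault */

	/* Re-take the lock and retest that the page is still the one we
	 * expect; otherwise the fault has to be retried from the top. */
	lock_page_stub(p);
	return p->still_valid;
}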
Is there any interest in discussing this, or did the email just get lost?
Just for reference, the preallocation patch from Kamezawa is already in
Andrew's tree.
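
(For completeness, the merged approach goes the other way around: do the
allocation and the memcg charge before taking the page lock at all. Again,
the helper names below are invented for illustration only; this is not
Kamezawa's actual patch.)

/* Hypothetical sketch of "preallocate and charge before lock_page()".
 * All helpers are stand-ins, not the real kernel API or the real patch. */
#include <stdbool.h>
#include <stdlib.h>

struct page { bool locked; bool charged; };

static struct page *alloc_page_stub(void)    { return calloc(1, sizeof(struct page)); }
static void free_page_stub(struct page *p)   { free(p); }
static bool charge_page_stub(struct page *p) { p->charged = true; return true; }
static void lock_page_stub(struct page *p)   { p->locked = true; }

static struct page *prepare_cow_page(struct page *fault_page)
{
	/* Allocate and charge up front, while no page lock is held, so any
	 * reclaim latency cannot stall other tasks waiting on lock_page(). */
	struct page *new_page = alloc_page_stub();

	if (!new_page)
		return NULL;
	if (!charge_page_stub(new_page)) {
		free_page_stub(new_page);
		return NULL;
	}

	/* Only now take the page lock, just for the copy itself. */
	lock_page_stub(fault_page);
	return new_page;
}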
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/