Message-ID: <20120529084848.GC10469@localhost>
Date: Tue, 29 May 2012 16:48:48 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>,
Ying Han <yinghan@...gle.com>,
Greg Thelen <gthelen@...gle.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC -mm] memcg: prevent from OOM with too many dirty pages
On Tue, May 29, 2012 at 09:28:53AM +0200, Johannes Weiner wrote:
> On Tue, May 29, 2012 at 11:08:57AM +0800, Fengguang Wu wrote:
> > Hi Michal,
> >
> > On Mon, May 28, 2012 at 05:38:55PM +0200, Michal Hocko wrote:
> > > The current implementation of dirty page throttling is not memcg aware, which
> > > makes it easy to end up with LRUs full of dirty pages. This can lead to a
> > > memcg OOM when the hard limit is small, because the lists are then scanned
> > > faster than the pages can be written back.
> > >
> > > This patch fixes the problem by throttling the allocating process (possibly
> > > a writer) during hard limit reclaim, by waiting on PageReclaim pages.
> > > We wait only for PageReclaim pages because those are the pages that have
> > > already made one full round over the LRU, which means that writeback is much
> > > slower than scanning.
> > > The solution is far from ideal - the long-term solution is memcg-aware
> > > dirty throttling - but it is meant to be a band-aid until we have a real
> > > fix.
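For readers without the patch in front of them, the change boils down to
roughly the following in shrink_page_list() (a simplified sketch, not the
exact hunk; global_reclaim(), may_enter_fs, nr_writeback and the keep label
are assumed from the surrounding vmscan code of that era):

	if (PageWriteback(page)) {
		/*
		 * Memcg has no dirty-page throttling of its own, so a small
		 * hard limit can fill the LRU with pages under writeback
		 * faster than the disk cleans them.  If this page has
		 * already made a full LRU round (PageReclaim) and this is
		 * memcg (not global) reclaim, stall on the writeback
		 * instead of scanning on and eventually declaring OOM.
		 */
		if (!global_reclaim(sc) && PageReclaim(page) && may_enter_fs) {
			wait_on_page_writeback(page);
		} else {
			/* behave as before: count it and move on */
			nr_writeback++;
			unlock_page(page);
			goto keep;
		}
	}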
> >
> > IMHO it's still an important "band aid" -- perhaps worth sending to
> > Greg's stable trees, because it fixes a really important use case: it
> > enables users to put backups into a small memcg.
> >
> > The user-visible changes are:
> >
> > the backup program gets OOM killed
> > =>
> > it now runs, although a bit slow and bumpy
>
> The problem is workloads that /don't/ have excessive dirty pages, but
> instantiate clean page cache at a much faster rate than writeback can
> clean the few dirties. The dirty/writeback pages reach the end of the
> lru several times while there are always easily reclaimable pages
> around.
Good point!
> This was the rationale for introducing the backoff function that
> considers the dirty page percentage of all pages looked at (bottom of
> shrink_active_list), and for removing all the other sleeps that didn't
> look at the bigger picture and caused problems. I'd hate for them to
> come back.
>
> On the other hand, is there a chance to make this backoff function
> work for memcgs? Right now it only applies to the global case to not
> mark a whole zone congested because of some dirty pages on a single
> memcg LRU. But maybe it can work by considering congestion on a
> per-lruvec basis rather than per-zone?
Johannes, would you paste the backoff code? Sorry, I'm not sure about
the exact logic you are referring to.
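If the idea is to track the congestion state per lruvec instead of per zone,
I imagine something along these lines (purely illustrative; LRUVEC_CONGESTED
and lruvec->flags are made-up names, and nr_dirty/nr_congested are assumed to
be the counters shrink_page_list() already keeps):

	/* where the zone is tagged congested today */
	if (nr_dirty && nr_dirty == nr_congested) {
		if (global_reclaim(sc))
			zone_set_flag(zone, ZONE_CONGESTED);
		else
			set_bit(LRUVEC_CONGESTED, &lruvec->flags);  /* made up */
	}

	/* memcg reclaim could then back off on its own lruvec */
	if (!global_reclaim(sc) && test_bit(LRUVEC_CONGESTED, &lruvec->flags))
		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);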
As for this patch, could it be improved by adding a test like
(priority < DEF_PRIORITY/2)? That should reasonably filter out the
"fast reads rotating dirty pages quickly" situation while still
avoiding OOM in the "heavy writes inside a small memcg" case.
Thanks,
Fengguang