Date:	Tue, 29 May 2012 09:28:53 +0200
From:	Johannes Weiner <hannes@...xchg.org>
To:	Fengguang Wu <fengguang.wu@...el.com>
Cc:	Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
	Rik van Riel <riel@...hat.com>,
	Ying Han <yinghan@...gle.com>,
	Greg Thelen <gthelen@...gle.com>,
	Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC -mm] memcg: prevent from OOM with too many dirty pages

On Tue, May 29, 2012 at 11:08:57AM +0800, Fengguang Wu wrote:
> Hi Michal,
> 
> On Mon, May 28, 2012 at 05:38:55PM +0200, Michal Hocko wrote:
> > The current implementation of dirty page throttling is not memcg aware, which
> > makes it easy to fill the LRUs with dirty pages. This can lead to a memcg OOM
> > if the hard limit is small and the lists are therefore scanned faster than
> > pages can be written back.
> > 
> > This patch fixes the problem by throttling the allocating process (possibly
> > a writer) during hard limit reclaim, by waiting on PageReclaim pages.
> > We wait only for PageReclaim pages because those are the pages that have
> > made one full round over the LRU, which means that writeback is much
> > slower than scanning.
> > The solution is far from ideal - the long-term solution is memcg-aware
> > dirty throttling - but it is meant as a band aid until we have a real
> > fix.
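
(For context, the mechanism boils down to something like the sketch
below, using the global_reclaim() helper from the mm/vmscan.c of that
era; treat it as an illustration of the idea, not the actual diff:)

	/*
	 * Sketch: during memcg (non-global) hard limit reclaim, stall
	 * on pages that are both under writeback and tagged
	 * PageReclaim, i.e. pages that made a full LRU round while
	 * still not cleaned.
	 */
	if (!global_reclaim(sc) && PageReclaim(page) && PageWriteback(page))
		wait_on_page_writeback(page);
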
> 
> IMHO it's still an important "band aid" -- perhaps worth sending to
> Greg's stable trees, because it fixes a really important use case: it
> enables users to put backups into a small memcg.
> 
> The user-visible change is:
> 
>         the backup program gets OOM killed
> =>
>         it now runs, although a bit slowly and bumpily

The problem is workloads that /don't/ have excessive dirty pages, but
instantiate clean page cache at a much faster rate than writeback can
clean the few dirty pages.  The dirty/writeback pages reach the end of
the LRU several times while there are always easily reclaimable pages
around.

This was the rationale for introducing the backoff function that
considers the dirty page percentage of all pages looked at (bottom of
shrink_active_list), and for removing all the other sleeps that didn't
look at the bigger picture and caused problems.  I'd hate for those to
come back.
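
(Reconstructed from memory of the vmscan code around that time, so a
sketch rather than the exact lines:)

	/*
	 * Sketch: only back off when the scan results suggest the
	 * whole zone is congested - every dirty page encountered was
	 * already under writeback on a congested device - and only
	 * for global reclaim.
	 */
	if (nr_dirty && nr_dirty == nr_congested && global_reclaim(sc))
		zone_set_flag(zone, ZONE_CONGESTED);

	/* ...and the wait itself is bounded and congestion-aware: */
	wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
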

On the other hand, is there a chance to make this backoff function
work for memcgs?  Right now it applies only to the global case, so as
not to mark a whole zone congested because of some dirty pages on a
single memcg LRU.  But maybe it could work by considering congestion
on a per-lruvec basis rather than per-zone?
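
(Purely hypothetical, with a made-up flag and a made-up flags field on
struct lruvec, but something along these lines:)

	/*
	 * Hypothetical sketch: track congestion per lruvec instead of
	 * per zone, so one memcg's dirty LRU doesn't mark the whole
	 * zone congested for everybody else.
	 */
	if (nr_dirty && nr_dirty == nr_congested)
		set_bit(LRUVEC_CONGESTED, &lruvec->flags);	/* made-up flag */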