Message-ID: <20131122001859.GA9510@logfs.org>
Date: Thu, 21 Nov 2013 19:19:00 -0500
From: Jörn Engel <joern@...fs.org>
To: Michal Hocko <mhocko@...e.cz>
Cc: linux-mm@...ck.org, Greg Thelen <gthelen@...gle.com>,
Glauber Costa <glommer@...il.com>,
Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
David Rientjes <rientjes@...gle.com>,
Rik van Riel <riel@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: user defined OOM policies

On Tue, 19 November 2013 14:14:00 +0100, Michal Hocko wrote:
>
> We have basically ended up with 3 options AFAIR:
> 1) allow memcg approach (memcg.oom_control) on the root level
> for both OOM notification and blocking OOM killer and handle
> the situation from the userspace same as we can for other
> memcgs.
> 2) allow modules to hook into OOM killer path and take the
> appropriate action.
> 3) create a generic filtering mechanism which could be
> controlled from the userspace by a set of rules (e.g.
> something analogous to packet filtering).
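
For context, option 1 builds on what non-root memcgs already expose
through memory.oom_control and cgroup.event_control. Below is a minimal
userspace sketch of that existing interface (the group path
/sys/fs/cgroup/memory/mygroup is made up for illustration): it disables
the in-kernel OOM killer for the group and then blocks until an OOM
notification arrives, which is where a userspace policy would run.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	const char *grp = "/sys/fs/cgroup/memory/mygroup"; /* illustrative */
	char buf[256];
	uint64_t cnt;
	int oc_fd, ctl_fd, efd;

	/* Disable the in-kernel OOM killer for this group. */
	snprintf(buf, sizeof(buf), "%s/memory.oom_control", grp);
	oc_fd = open(buf, O_RDWR);
	if (oc_fd < 0 || write(oc_fd, "1", 1) != 1)
		exit(1);

	/* Register an eventfd to get notified when the group hits OOM. */
	efd = eventfd(0, 0);
	snprintf(buf, sizeof(buf), "%s/cgroup.event_control", grp);
	ctl_fd = open(buf, O_WRONLY);
	if (efd < 0 || ctl_fd < 0)
		exit(1);
	snprintf(buf, sizeof(buf), "%d %d", efd, oc_fd);
	if (write(ctl_fd, buf, strlen(buf)) < 0)
		exit(1);

	/*
	 * With the killer disabled, tasks that hit OOM in the group wait
	 * until userspace resolves the situation (raise the limit, kill
	 * something, ...).
	 */
	if (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt))
		printf("OOM in %s, apply policy here\n", grp);
	return 0;
}
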
One ancient option I sometimes miss was this:
- Kill the biggest process.

It doesn't always make the optimal choice, but neither did any of the
refinements. It did have the nice advantage that even I could predict
which bad choice it would make and why. Every bit of sophistication
means that you still get it wrong sometimes, but in less obvious and
more annoying ways.
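
To make "biggest" concrete, here is a rough userspace approximation of
that old policy, nothing more than a sketch: walk /proc, pick the task
with the largest resident set, and SIGKILL it. The in-kernel heuristic
of the day had more inputs and exclusions, but this is the basic shape.

#include <ctype.h>
#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	long victim = -1, max_rss = -1;

	/*
	 * Purely illustrative: no locking, no oom_score_adj, nothing is
	 * exempt, so it would happily pick init or itself.
	 */
	while (proc && (de = readdir(proc))) {
		char path[64], line[256];
		long pid, rss = -1;
		FILE *f;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;
		pid = atol(de->d_name);

		snprintf(path, sizeof(path), "/proc/%ld/status", pid);
		f = fopen(path, "r");
		if (!f)
			continue;
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "VmRSS: %ld kB", &rss) == 1)
				break;
		fclose(f);

		if (rss > max_rss) {
			max_rss = rss;
			victim = pid;
		}
	}
	if (proc)
		closedir(proc);

	if (victim > 0) {
		printf("killing pid %ld (%ld kB resident)\n", victim, max_rss);
		kill((pid_t)victim, SIGKILL);
	}
	return 0;
}
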
Then again, an alternative I actually use in production is to reboot
the machine on OOM. Again, very simple, very blunt and very
predictable.
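
For completeness, a stock kernel can already be configured to behave
that way without extra code (whether or not that is how the setup above
does it): vm.panic_on_oom makes the OOM path panic instead of picking a
victim, and kernel.panic reboots the box a few seconds after any panic.
The snippet below flips both knobs at runtime; persistently one would
normally just set the two sysctls in /etc/sysctl.conf.

#include <stdio.h>
#include <stdlib.h>

/* Write a single value to a /proc/sys knob. */
static void set_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fputs(val, f) == EOF)
		exit(1);
	fclose(f);
}

int main(void)
{
	set_knob("/proc/sys/vm/panic_on_oom", "1"); /* panic instead of OOM-killing */
	set_knob("/proc/sys/kernel/panic", "10");   /* reboot 10s after a panic */
	return 0;
}
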
Jörn
--
No art, however minor, demands less than total dedication if you want
to excel in it.
-- Leon Battista Alberti