Message-ID: <4B0C60A1.6030509@nortel.com>
Date: Tue, 24 Nov 2009 16:39:29 -0600
From: "Chris Friesen" <cfriesen@...tel.com>
To: Andi Kleen <andi@...stfloor.org>
CC: Eyal Lotem <eyal.lotem@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Using "page credits" as a solution for common thrashing scenarios
On 11/24/2009 03:15 PM, Andi Kleen wrote:
> Eyal Lotem <eyal.lotem@...il.com> writes:
>
> Replying to an old email.
>
>> * I think it is wrong for the kernel to evict the 15 pages of the bash,
>> xterm, X server's working set, as an example, in order for a
>> misbehaving process to have 1000015 instead of 1000000 pages in its
>> working set. EVEN if that misbehaving process is accessing its working
>> set far more aggressively.
>
> One problem in practice tends to be that it's hard to reliably detect
> that a process is misbehaving. The 1000000 page process might be your
> critical database, while the 15 page process is something very
> unimportant.
Quite a while ago now I proposed the ability for an app (with suitable
privileges) to register with the system the amount of memory it expected
to use. As long as it was under that amount it would be immune to the
oom-killer.
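The proposed mechanism can be illustrated from userspace. The snippet below is only a hypothetical sketch of the policy, not the actual patch: it compares a process's resident set (VmRSS from /proc) against a pre-registered budget, and the BUDGET_KB value and the messages are illustrative assumptions.

```shell
# Hypothetical illustration of the idea: a process is only "fair game"
# for the oom-killer once its resident set exceeds the amount of memory
# it registered in advance. BUDGET_KB is an assumed example value.
BUDGET_KB=1000000

# VmRSS in /proc/<pid>/status is reported in kB; field 2 is the number.
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/self/status)

if [ "$rss_kb" -le "$BUDGET_KB" ]; then
    echo "within budget: immune under the proposal"
else
    echo "over budget: eligible for oom-kill"
fi
```

In the real proposal the registration and the check would of course live in the kernel's OOM path; this just makes the "under your declared amount means immune" rule concrete.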
We've been using this in production for some time now as we have several
very large memory footprint apps that otherwise become quite attractive
to the oom killer.
The oom_adj proc entry made this less of an issue so I never bothered
pushing it to mainline.
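For reference, the oom_adj knob mentioned above was a per-process file under /proc: writing -17 (OOM_DISABLE) exempted a process from the oom-killer entirely. That legacy file was later deprecated and removed in favour of /proc/<pid>/oom_score_adj, which ranges from -1000 (never kill) to 1000 (kill first). The sketch below uses a positive value so it runs unprivileged; lowering the score requires CAP_SYS_RESOURCE.

```shell
# Era-appropriate (now removed) interface:
#   echo -17 > /proc/<pid>/oom_adj      # OOM_DISABLE: exempt from oom-killer
#
# Current interface: oom_score_adj, range -1000 (never kill) .. 1000.
# Raising our own score needs no privilege, so it is safe to demo here;
# lowering it (e.g. to -1000) requires CAP_SYS_RESOURCE.
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj   # -> 500 (the child inherits the setting)
```

The value is inherited across fork/exec, which is why the cat child prints the score set by the shell.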
Chris
--