Message-ID: <alpine.DEB.2.02.1311181719390.4292@chino.kir.corp.google.com>
Date: Mon, 18 Nov 2013 17:22:16 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Michal Hocko <mhocko@...e.cz>
cc: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org
Subject: Re: [patch 1/2] mm, memcg: avoid oom notification when current needs
access to memory reserves
On Mon, 18 Nov 2013, Michal Hocko wrote:
> > Even though the situation may not require a kill, the user still wants
> > to know that the memory hard limit was breached and the isolation
> > broken in order to prevent a kill. We just came really close and the
>
> You can observe that you are getting into trouble from the fail counter
> already. The usability without more reclaim statistics is a bit
> questionable, but you at least get a rough impression that something is
> wrong.
>
Agreed, but it seems like the appropriate mechanism for this is the
memory.{,memsw.}usage_in_bytes notifiers, which already exist.
> > fact that current is exiting is coincidental. Not everybody is having
> > OOM situations on a frequent basis and they might want to know when
> > they are redlining the system and that the same workload might blow up
> > the next time it's run.
>
> I am just concerned that signaling transient OOM conditions which do not
> require any OOM killer action (user or kernel space) might be confusing.
> Userspace would have a harder time telling whether any action is required
> or not.
>
Completely agreed; in fact, there is no reliable and non-racy way in
userspace to determine "is this a real oom condition that I must act upon,
or can the kernel handle it?"