Date:	Wed, 15 Jan 2014 20:00:15 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
	David Rientjes <rientjes@...gle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC 1/3] memcg: notify userspace about OOM only when an action
 is due

On Wed 15-01-14 12:56:55, Johannes Weiner wrote:
> On Wed, Jan 15, 2014 at 04:01:06PM +0100, Michal Hocko wrote:
> > Userspace is currently notified about an OOM condition after reclaim
> > fails to uncharge any memory after MEM_CGROUP_RECLAIM_RETRIES rounds.
> > This usually means that the memcg is really in trouble and an
> > OOM action (either done by userspace or the kernel) has to be taken.
> > The kernel OOM killer, however, bails out and doesn't kill anything
> > if it sees an already dying/exiting task, in the hope that memory
> > will be released and the OOM situation will be resolved.
> > 
> > Therefore it makes sense to notify userspace only after all other
> > measures have been taken and either a userspace action is required
> > or the kernel kills a task.
> > 
> > This patch is based on an idea by David Rientjes to not notify
> > userspace when the current task is killed or in a late stage of
> > exiting. The original patch, however, didn't handle in-kernel OOM
> > killer back-offs, which this patch implements.
> > 
> > Signed-off-by: Michal Hocko <mhocko@...e.cz>
> 
> OOM is a temporary state because any task can exit at a time that is
> not under our control and outside our knowledge.  That's why the OOM
> situation is defined by failing an allocation after a certain number
> of reclaim and charge attempts.
> 
> As of right now, the OOM sampling window is MEM_CGROUP_RECLAIM_RETRIES
> loops of charge attempts and reclaim.  If a racing task is exiting and
> releasing memory during that window, the charge will succeed fine.  If
> the sampling window is too short in practice, it will have to be
> extended, preferably through increasing MEM_CGROUP_RECLAIM_RETRIES.

The patch doesn't try to address the above race because that one is
unfixable. I hope that is clear.
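
For context, the flow under discussion looks roughly like this (a
simplified sketch, not the exact mm/memcontrol.c code;
charge_succeeds() and reclaim_some_memory() are stand-ins for the real
helpers):

	/*
	 * Sketch of the charge path: reclaim is retried a fixed number
	 * of times before we declare OOM and notify userspace.
	 */
	static int try_charge_sketch(struct mem_cgroup *memcg, gfp_t gfp_mask)
	{
		int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;	/* 5 */

		while (nr_retries--) {
			if (charge_succeeds(memcg))
				return 0;
			reclaim_some_memory(memcg, gfp_mask);
		}
		/*
		 * Status quo: the notification fires here, before the
		 * kernel OOM killer decides whether it will actually
		 * kill anything or back off.
		 */
		mem_cgroup_oom_notify(memcg);
		mem_cgroup_out_of_memory(memcg, gfp_mask);
		return -ENOMEM;
	}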

It just tries to reduce the burden on userspace OOM notification
consumers and give them simple semantics: a notification arrives only
if an action will be necessary (either the kernel kills something or
userspace is expected to act).
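
In pseudo-code the intended ordering is something like this (a
hypothetical sketch of the idea, not the patch itself):

	/*
	 * Proposed: decide whether the kernel will back off first;
	 * notify userspace only when an action actually happens.
	 */
	static void oom_sketch(struct mem_cgroup *memcg, gfp_t gfp_mask)
	{
		if (fatal_signal_pending(current) ||
		    (current->flags & PF_EXITING)) {
			/*
			 * Back off in the hope that the dying task
			 * releases memory; no notification, because no
			 * action is required from anybody.
			 */
			return;
		}
		mem_cgroup_oom_notify(memcg);	/* an action is due */
		mem_cgroup_out_of_memory(memcg, gfp_mask);
	}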

E.g. consider a handler which tries to clean up after the kernel has
handled an OOM and killed something. If the kernel backed off and
refrained from killing anything after the notification had already
fired, then userspace would have no practical way to detect that
(except for searching the kernel log for OOM messages, which might get
suppressed due to rate limiting etc. Nothing I would call optimal.)
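
For completeness, such a handler typically registers through the
eventfd based memory.oom_control interface. A minimal sketch, with the
cgroup path ("mygroup") illustrative and error handling trimmed:

	#include <sys/eventfd.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* "mygroup" is an example; point these at your memcg. */
		int efd = eventfd(0, 0);
		int ofd = open("/sys/fs/cgroup/memory/mygroup/memory.oom_control",
			       O_RDONLY);
		int cfd = open("/sys/fs/cgroup/memory/mygroup/cgroup.event_control",
			       O_WRONLY);
		char buf[32];
		uint64_t cnt;

		/* Register: "<eventfd> <fd of memory.oom_control>" */
		snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
		write(cfd, buf, strlen(buf));

		for (;;) {
			/* Blocks until an OOM notification fires. */
			read(efd, &cnt, sizeof(cnt));
			/* ... clean up after the presumed kill ... */
		}
	}

With the current behaviour this handler can wake up even though the
kernel ended up killing nothing.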
Or do you think that such a use case doesn't make much sense and is
an abuse of the notification interface?

> But a random task exiting a split second after the sampling window has
> closed will always be a possibility, regardless of how long it is.

Agreed, and this is not what the patch is about. If the kernel OOM
killer couldn't back off, then I would completely agree with you here.

> There is nothing to be gained from this layering violation and it's
> mind-boggling that you two still think this is a meaningful change.
> 
> Nacked-by: Johannes Weiner <hannes@...xchg.org>

-- 
Michal Hocko
SUSE Labs
