Date:	Wed, 27 Nov 2013 11:34:36 -0500
From:	Johannes Weiner <hannes@...xchg.org>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Michal Hocko <mhocko@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	cgroups@...r.kernel.org
Subject: Re: [patch 1/2] mm, memcg: avoid oom notification when current needs
 access to memory reserves

On Tue, Nov 26, 2013 at 04:53:47PM -0800, David Rientjes wrote:
> On Fri, 22 Nov 2013, Johannes Weiner wrote:
> 
> > But userspace in all likeliness DOES need to take action.
> > 
> > Reclaim is a really long process.  If 5 times doing 12 priority cycles
> > and scanning thousands of pages is not enough to reclaim a single
> > page, what does that say about the health of the memcg?
> > 
> > But more importantly, OOM handling is just inherently racy.  A task
> > might receive the kill signal a split second *after* userspace was
> > notified.  Or a task may exit voluntarily a split second after a
> > victim was chosen and killed.
> > 
> 
> That's not true even today without the userspace oom handling proposal 
> currently being discussed if you have a memcg oom handler attached to a 
> parent memcg with access to more memory than an oom child memcg.  The oom 
> handler can disable the child memcg's oom killer with memory.oom_control 
> and implement its own policy to deal with any notification of oom.

I was never implying the kernel handler.  All the races exist with
userspace handling as well.
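
For reference, the kind of handler you describe would be wired up
roughly like this (a minimal sketch against the cgroup v1
memory.oom_control / cgroup.event_control interface; the "child" group
name, paths and the lack of error handling are purely illustrative):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/eventfd.h>

int main(void)
{
	int efd = eventfd(0, 0);
	int ofd = open("/sys/fs/cgroup/memory/child/memory.oom_control", O_RDWR);
	int cfd = open("/sys/fs/cgroup/memory/child/cgroup.event_control", O_WRONLY);
	char buf[64];
	uint64_t count;

	/* Register the eventfd for OOM notifications on the child group. */
	snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
	write(cfd, buf, strlen(buf));

	/* Disable the kernel OOM killer for the child, so the handler in
	 * the parent is the only thing that resolves the OOM. */
	write(ofd, "1", 1);

	for (;;) {
		/* Blocks until the kernel signals an OOM in the child. */
		read(efd, &count, sizeof(count));
		/* handler policy: kill a task, raise the limit, ... */
	}
}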

> This patch is required to ensure that in such a scenario that the oom 
> handler sitting in the parent memcg only wakes up when it's required to 
> intervene.

A task could receive an unrelated kill between the OOM notification
and going to sleep to wait for userspace OOM handling.  Or another
task could exit voluntarily between the notification and waitqueue
entry, which would again be short-cut by the oom_recover of the exit
uncharges.

oom:                           other tasks:
check signal/exiting
                               could exit or get killed here
mem_cgroup_oom_trylock()
                               could exit or get killed here
mem_cgroup_oom_notify()
                               could exit or get killed here
if (userspace_handler)
  sleep()                      could exit or get killed here
else
  oom_kill()
                               could exit or get killed here

It does not matter where your signal/exiting check is: OOM
notification can never be race free, because OOM is just an arbitrary
line we draw.  We have no idea what all the tasks are up to and how
close they are to releasing memory.  Even if we freeze the whole group
while handling the OOM, that does not change the fact that the
userspace OOM handler might kill one task and, after the unfreeze,
another task immediately exits voluntarily or acts on a kill signal it
got a split second after it was frozen.
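
To put the same window in handler terms (a rough sketch;
choose_victim() is a stand-in for whatever policy the handler
implements, not a real interface):

#include <stdint.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

/* Stand-in for the handler's victim-selection policy. */
extern pid_t choose_victim(void);

void handle_oom_event(int efd)
{
	uint64_t count;
	pid_t victim;

	/* Woken by the memcg OOM notification. */
	read(efd, &count, sizeof(count));

	victim = choose_victim();
	/*
	 * By the time we get here, some task in the group may already
	 * have exited voluntarily or caught an unrelated SIGKILL,
	 * making this kill unnecessary -- the same window the kernel
	 * has between mem_cgroup_oom_notify() and oom_kill().
	 */
	kill(victim, SIGKILL);
}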

You can't fix this.  We just have to draw the line somewhere and
accept that in rare situations the OOM kill was unnecessary.  So
again, I don't see this patch doing anything but blurring the current
line and making notification less predictable.  And, as someone else
in this thread already said, it's a user-visible change in behavior
and would break known tuning use cases.
