Message-ID: <20160218122012.GE18149@dhcp22.suse.cz>
Date:	Thu, 18 Feb 2016 13:20:13 +0100
From:	Michal Hocko <mhocko@...nel.org>
To:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:	akpm@...ux-foundation.org, rientjes@...gle.com, mgorman@...e.de,
	oleg@...hat.com, torvalds@...ux-foundation.org, hughd@...gle.com,
	andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] mm,oom: wait for OOM victims when using
 oom_kill_allocating_task == 1

On Thu 18-02-16 19:45:45, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 17-02-16 19:36:36, Tetsuo Handa wrote:
> > > From 0b36864d4100ecbdcaa2fc2d1927c9e270f1b629 Mon Sep 17 00:00:00 2001
> > > From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> > > Date: Wed, 17 Feb 2016 16:37:59 +0900
> > > Subject: [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1
> > >
> > > Currently, out_of_memory() does not wait for existing TIF_MEMDIE threads
> > > if /proc/sys/vm/oom_kill_allocating_task is set to 1. This can result in
> > > killing more OOM victims than needed. We can wait for the OOM reaper to
> > > reap memory used by existing TIF_MEMDIE threads if possible. If the OOM
> > > reaper is not available, the system remains stalled in the OOM state
> > > until an OOM-unkillable thread makes a GFP_FS allocation request and
> > > takes the oom_kill_allocating_task == 0 path.
> > >
> > > This patch changes the oom_kill_allocating_task == 1 case to call
> > > select_bad_process() in order to wait for existing TIF_MEMDIE threads.
> >
> > The primary motivation for oom_kill_allocating_task was to reduce the
> > overhead of select_bad_process. See fe071d7e8aae ("oom: add
> > oom_kill_allocating_task sysctl"). So this basically defeats the whole
> > purpose of the feature.
> >
> 
> I didn't know that. But I think that printk()ing all the candidates degrades
> performance much more significantly than scanning the tasklist does.

I assume those who care do set oom_dump_tasks = 0.
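
For reference, writing "0" to /proc/sys/vm/oom_dump_tasks is all it takes;
here is a minimal userspace sketch of doing that from a program rather than
via sysctl(8) (needs root; error handling kept deliberately short):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/oom_dump_tasks", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* "0" disables the per-task dump on OOM, i.e.
		 * vm.oom_dump_tasks = 0. */
		if (fputs("0\n", f) == EOF)
			perror("fputs");
		fclose(f);
		return 0;
	}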

> It would be
> nice if setting /proc/sys/vm/oom_dump_tasks = N (N > 1) showed only the top
> N memory-hog processes.

You would still need to scan all the tasks anyway, plus sorting etc. Not
worth bothering with, IMO.
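
To illustrate the point: even a "top N" dump has to visit every candidate
once and keep a sorted window of size N. A self-contained sketch (plain
userspace C, with an array standing in for the tasklist; all names made up):

	#include <stdio.h>
	#include <string.h>

	struct cand { int pid; long score; };

	/* Insert into a descending top-N window, shifting lower entries down. */
	static void top_n_insert(struct cand *top, int n, struct cand c)
	{
		for (int i = 0; i < n; i++) {
			if (c.score > top[i].score) {
				memmove(&top[i + 1], &top[i],
					(n - i - 1) * sizeof(*top));
				top[i] = c;
				return;
			}
		}
	}

	int main(void)
	{
		struct cand tasks[] = {
			{ 100, 40 }, { 101, 900 }, { 102, 5 },
			{ 103, 300 }, { 104, 120 },
		};
		enum { N = 3 };
		struct cand top[N] = { { 0, 0 } };

		/* The full scan is unavoidable: any task might be a memory hog. */
		for (size_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
			top_n_insert(top, N, tasks[i]);

		for (int i = 0; i < N; i++)
			printf("pid %d score %ld\n", top[i].pid, top[i].score);
		return 0;
	}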
 
[...]
> We have
> 
>   "Out of memory (oom_kill_allocating_task)"
>   "Out of memory"
>   "Memory cgroup out of memory"
> 
> but we don't have
> 
>   "Memory cgroup out of memory (oom_kill_allocating_task)"
> 
> I don't know whether we should use this condition for memcg OOM case.

The memcg OOM killer ignores oom_kill_allocating_task.
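
A self-contained model of that dispatch (not kernel code; the helpers are
illustrative stand-ins): the sysctl shortcut sits only on the global OOM
path, so the memcg path never consults it:

	#include <stdio.h>

	static int sysctl_oom_kill_allocating_task = 1;

	static void kill_current(void)       { puts("kill the allocating task"); }
	static void select_bad_process(void) { puts("scan tasklist, pick worst"); }

	static void global_oom(void)
	{
		if (sysctl_oom_kill_allocating_task) {
			kill_current();	/* shortcut: no tasklist scan */
			return;
		}
		select_bad_process();
	}

	static void memcg_oom(void)
	{
		/* The sysctl is never consulted on this path. */
		select_bad_process();
	}

	int main(void)
	{
		global_oom();	/* honours the sysctl */
		memcg_oom();	/* ignores it */
		return 0;
	}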
 
> >  	/*
> >  	 * If any of p's children has a different mm and is eligible for kill,
> >  	 * the one with the highest oom_badness() score is sacrificed for its
> > @@ -734,6 +737,7 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
> >  	}
> >  	read_unlock(&tasklist_lock);
> >
> > +kill:
> >  	p = find_lock_task_mm(victim);
> >  	if (!p) {
> >  		put_task_struct(victim);
> > @@ -888,6 +892,9 @@ bool out_of_memory(struct oom_control *oc)
> >  	if (sysctl_oom_kill_allocating_task && current->mm &&
> >  	    !oom_unkillable_task(current, NULL, oc->nodemask) &&
> >  	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
> > +		if (test_thread_flag(TIF_MEMDIE))
> > +			panic("Out of memory (oom_kill_allocating_task) not able to make a forward progress");
> > +
> 
> If the current thread got TIF_MEMDIE, it will not call out_of_memory()
> again, because it will exit the allocation path (unless __GFP_NOFAIL is
> set), thanks to the use of ALLOC_NO_WATERMARKS.

Exactly, __GFP_NOFAIL has to be handled properly.
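
A self-contained model of the retry logic in question (not kernel code; the
flag value and helpers are made up): a TIF_MEMDIE task leaves the allocator
even on failure, except when __GFP_NOFAIL forbids failing:

	#include <stdbool.h>
	#include <stdio.h>

	#define __GFP_NOFAIL 0x1u

	static bool tif_memdie = true;	/* current was chosen as OOM victim */

	/* Stand-in for the reserve-backed attempt; assume reserves are empty. */
	static bool try_alloc_no_watermarks(void) { return false; }

	static void *alloc_slowpath(unsigned int gfp)
	{
		for (;;) {
			if (tif_memdie) {
				if (try_alloc_no_watermarks())
					return (void *)1;	/* got a page */
				if (!(gfp & __GFP_NOFAIL))
					return NULL;	/* fail and leave */
				/* __GFP_NOFAIL: not allowed to fail -- the
				 * case that has to be handled before any
				 * panic in this path. */
			}
			puts("looping: __GFP_NOFAIL must not fail");
			break;	/* break only so this model terminates */
		}
		return NULL;
	}

	int main(void)
	{
		/* Plain allocation: the victim exits the allocator on failure. */
		printf("plain:  %p\n", alloc_slowpath(0));
		/* __GFP_NOFAIL: the victim would spin inside the allocator. */
		printf("nofail: %p\n", alloc_slowpath(__GFP_NOFAIL));
		return 0;
	}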

-- 
Michal Hocko
SUSE Labs
