Message-Id: <201602070033.GFC13307.MOJQtFHOFOVLFS@I-love.SAKURA.ne.jp>
Date:	Sun, 7 Feb 2016 00:33:38 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...nel.org
Cc:	akpm@...ux-foundation.org, rientjes@...gle.com, mgorman@...e.de,
	oleg@...hat.com, torvalds@...ux-foundation.org, hughd@...gle.com,
	andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] mm, oom_reaper: implement OOM victims queuing

Michal Hocko wrote:
> On Sat 06-02-16 14:54:24, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > > But if we consider non-system-wide OOM events, it is quite possible to hit
> > > > this race. This queue is useful for situations where memcg1 and memcg2 hit
> > > > memcg OOM at the same time and victim1 in memcg1 cannot terminate immediately.
> > > 
> > > This can happen, of course, but the likelihood is _much_ smaller without
> > > the global OOM because the memcg OOM killer is invoked from a lockless
> > > context, so the OOM context cannot block the victim from proceeding.
> > 
> > Suppose mem_cgroup_out_of_memory() is called from a lockless context via
> > mem_cgroup_oom_synchronize(), which is called from pagefault_out_of_memory();
> > that "lockless" refers only to the current thread, doesn't it?
> 
> Yes, and you need the OOM context to sit on the same lock as the victim
> to form a deadlock. So while the victim might be blocked somewhere, it is
> much less likely that it would be deadlocked.
> 
> > Since oom_kill_process() sets TIF_MEMDIE on the first mm!=NULL thread of a
> > victim process, it is possible that a non-first mm!=NULL thread triggers
> > pagefault_out_of_memory() while the first mm!=NULL thread gets TIF_MEMDIE,
> > isn't it?
> 
> I got lost here completely. Maybe it is your usage of thread terminology
> again.

I'm using "process" to mean "thread group", which contains at least one
"thread", and "thread" to mean "struct task_struct".
My assumptions are:

   (1) app1 process has two threads named app1t1 and app1t2
   (2) app2 process has two threads named app2t1 and app2t2
   (3) app1t1->mm == app1t2->mm != NULL and app2t1->mm == app2t2->mm != NULL
   (4) app1 is in memcg1 and app2 is in memcg2

and the sequence is:

   (1) app1t2 triggers pagefault_out_of_memory()
   (2) app1t2 calls mem_cgroup_out_of_memory() via mem_cgroup_oom_synchronize()
   (3) oom_scan_process_thread() selects app1 as an OOM victim process
   (4) find_lock_task_mm() selects app1t1 as the OOM victim thread (see the
       sketch after this list)
   (5) app1t1 gets TIF_MEMDIE
   (6) app2t2 triggers pagefault_out_of_memory()
   (7) app2t2 calls mem_cgroup_out_of_memory() via mem_cgroup_oom_synchronize()
   (8) oom_scan_process_thread() selects app2 as an OOM victim process
   (9) find_lock_task_mm() selects app2t1 as an OOM victim thread
   (10) app2t1 gets TIF_MEMDIE

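For reference, steps (4) and (9) correspond to something like
find_lock_task_mm() in mm/oom_kill.c of that era; a sketch (the exact
tree may differ slightly):

struct task_struct *find_lock_task_mm(struct task_struct *p)
{
	struct task_struct *t;

	rcu_read_lock();
	for_each_thread(p, t) {
		task_lock(t);
		if (likely(t->mm))
			goto found;	/* first mm != NULL thread, e.g. app1t1 */
		task_unlock(t);
	}
	t = NULL;
found:
	rcu_read_unlock();
	return t;	/* NULL if every thread already released its mm */
}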

I'm talking about the situation where app1t1 is blocked at down_write(&app1t1->mm->mmap_sem)
because somebody else is either already waiting at down_read(&app1t1->mm->mmap_sem) or is
doing a memory allocation between down_read(&app1t1->mm->mmap_sem) and
up_read(&app1t1->mm->mmap_sem). In this case, [PATCH 5/5] helps the OOM reaper
reap app2t1->mm after it gives up waiting for down_read(&app1t1->mm->mmap_sem) to succeed.
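
To make "gives up and moves on" concrete, a rough sketch of the queued
reaper loop as I understand this series (MAX_OOM_REAP_RETRIES and
__oom_reap_task() follow the patch's naming; the body here is my
approximation, not the exact patch):

#define MAX_OOM_REAP_RETRIES	10

static void oom_reap_task(struct task_struct *tsk)
{
	int attempts = 0;

	/*
	 * __oom_reap_task() fails as long as down_read_trylock() on the
	 * victim's mmap_sem fails, e.g. while app1t1's down_write() sits
	 * queued behind an allocating reader.  After a bounded number of
	 * retries the reaper gives up on this victim so that the next
	 * queued one (app2t1->mm here) can be reaped.
	 */
	while (attempts++ < MAX_OOM_REAP_RETRIES && !__oom_reap_task(tsk))
		schedule_timeout_idle(HZ/10);

	/* drop the reference taken when tsk was queued */
	put_task_struct(tsk);
}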
